SIGGRAPH Asia 2012 Abstracts

by Mike Gleicher on December 30, 2012

ACM Transactions on Graphics (TOG) – Proceedings of ACM SIGGRAPH Asia 2012 Volume 31 Issue 6, November 2012 http://dl.acm.org/citation.cfm?id=2366145

Motion-guided mechanical toy modeling

http://dl.acm.org/citation.cfm?id=2366146
Lifeng Zhu, Weiwei Xu, John Snyder, Yang Liu, Guoping Wang, Baining Guo

We introduce a new method to synthesize mechanical toys solely from the motion of their features. The designer specifies the geometry and a time-varying rotation and translation of each rigid feature component. Our algorithm automatically generates a mechanism assembly located in a box below the feature base that produces the specified motion. Parts in the assembly are selected from a parameterized set including belt-pulleys, gears, crank-sliders, quick-returns, and various cams (snail, ellipse, and double-ellipse). Positions and parameters for these parts are optimized to generate the specified motion, minimize a simple measure of complexity, and yield a well-distributed layout of parts over the driving axes.

Recursive interlocking puzzle

http://dl.acm.org/citation.cfm?id=2366147

Peng Song, Chi-Wing Fu, Daniel Cohen-Or

Interlocking puzzles are very challenging geometric problems with the fascinating property that once the puzzle pieces are put together, they interlock with one another, preventing the assembly from falling apart. Though interlocking puzzles have been known for hundreds of years, very little is known about the governing mechanics. Thus, designing new interlocking geometries is typically accomplished through extensive manual effort or expensive exhaustive computer search. In this paper, we revisit the notion of interlocking in greater depth and devise a formal model of the interlocking mechanics. From this, we develop a constructive approach for devising new interlocking geometries that guarantees the validity of the interlocking by construction instead of testing it exhaustively.

Chopper: partitioning models into 3D-printable parts

http://dl.acm.org/citation.cfm?id=2366148
Linjie Luo, Ilya Baran, Szymon Rusinkiewicz, Wojciech Matusik

3D printing technology is rapidly maturing and becoming ubiquitous. One of the remaining obstacles to wide-scale adoption is that the object to be printed must fit into the working volume of the 3D printer. We propose a framework, called Chopper, to decompose a large 3D object into smaller parts so that each part fits into the printing volume. These parts can then be assembled to form the original object. We formulate a number of desirable criteria for the partition, including assemblability, having few components, unobtrusiveness of the seams, and structural soundness. Chopper optimizes these criteria and generates a partition either automatically or with user guidance.

3D-printing of non-assembly, articulated models

http://dl.acm.org/citation.cfm?id=2366149
Jacques Calì, Dan A. Calian, Cristina Amati, Rebecca Kleinberger, Anthony Steed, Jan Kautz, Tim Weyrich

Additive manufacturing (3D printing) is commonly used to produce physical models for a wide variety of applications, from archaeology to design. While static models are directly supported, it is desirable to also be able to print models with functional articulations, such as a hand with joints and knuckles, without the need for manual assembly of joint components. Apart from having to address limitations inherent to the printing process, this poses a particular challenge for articulated models that should be posable: to allow the model to hold a pose, joints need to exhibit internal friction to withstand gravity, without their parts fusing during 3D printing.

Quality prediction for image completion

http://dl.acm.org/citation.cfm?id=2366150
Johannes Kopf, Wolf Kienzle, Steven Drucker, Sing Bing Kang

We present a data-driven method to predict the quality of image completions. Our completion method is based on the state-of-the-art non-parametric framework of Wexler et al. [2007] and uses automatically derived search-space constraints for patch source regions, which lead to improved texture synthesis and semantically more plausible results. These constraints also facilitate performance prediction by allowing us to correlate output quality against features of the possible regions used for synthesis. We use our algorithm to first crop and then complete stitched panoramas. Our predictive ability is used to find an optimal crop shape before the completion is computed, potentially saving significant amounts of computation.

Manifold preserving edit propagation

http://dl.acm.org/citation.cfm?id=2366151
Xiaowu Chen, Dongqing Zou, Qinping Zhao, Ping Tan

We propose a novel edit propagation algorithm for interactive image and video manipulations. Our approach uses the locally linear embedding (LLE) to represent each pixel as a linear combination of its neighbors in a feature space. While previous methods require similar pixels to have similar results, we seek to maintain the manifold structure formed by all pixels in the feature space. Specifically, we require each pixel to be the same linear combination of its neighbors in the result. Compared with previous methods, our proposed algorithm is more robust to color blending in the input data. Furthermore, since every pixel is only related to a few nearest neighbors, our algorithm easily achieves good runtime efficiency.
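
To make the locally-linear-embedding step concrete, here is a minimal Python sketch (an illustration, not the authors' implementation): each pixel's feature vector is written as an affine combination of its k nearest neighbors in feature space, and the same combination is then enforced on the propagated edit values. The feature construction, neighbor count k, and weight lam are illustrative assumptions.

```python
# Sketch of manifold-preserving edit propagation (illustrative, not the paper's code).
# Assumes: features is an (n, d) array (e.g. RGB + xy per pixel), constrained_idx is an
# integer array of user-edited pixels, user_vals holds their edit values.
import numpy as np
from scipy.sparse import lil_matrix, identity
from scipy.sparse.linalg import spsolve
from scipy.spatial import cKDTree

def lle_weights(features, k=10, reg=1e-3):
    n = features.shape[0]
    tree = cKDTree(features)
    _, nbrs = tree.query(features, k=k + 1)      # first neighbor is the point itself
    W = lil_matrix((n, n))
    for i in range(n):
        idx = nbrs[i, 1:]
        Z = features[idx] - features[i]          # center neighbors on the pixel
        C = Z @ Z.T
        C += np.eye(k) * reg * np.trace(C)       # regularize for numerical stability
        w = np.linalg.solve(C, np.ones(k))
        W[i, idx] = w / w.sum()                  # affine (sum-to-one) weights
    return W.tocsr()

def propagate_edits(features, constrained_idx, user_vals, k=10, lam=10.0):
    n = features.shape[0]
    W = lle_weights(features, k)
    M = identity(n) - W                          # manifold-preservation term (I - W)
    S = lil_matrix((n, n))
    b = np.zeros(n)
    S[constrained_idx, constrained_idx] = 1.0    # data term on user-edited pixels
    b[constrained_idx] = user_vals
    A = S.tocsr() + lam * (M.T @ M)
    return spsolve(A, b)                         # edit value for every pixel
```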

Sparse PDF maps for non-linear multi-resolution image operations

http://dl.acm.org/citation.cfm?id=2366152
Markus Hadwiger, Ronell Sicat, Johanna Beyer, Jens Krüger, Torsten Möller

We introduce a new type of multi-resolution image pyramid for high-resolution images called sparse pdf maps (sPDF-maps). Each pyramid level consists of a sparse encoding of continuous probability density functions (pdfs) of pixel neighborhoods in the original image. The encoded pdfs enable the accurate computation of non-linear image operations directly in any pyramid level with proper pre-filtering for anti-aliasing, without accessing higher or lower resolutions. The sparsity of sPDF-maps makes them feasible for gigapixel images, while enabling direct evaluation of a variety of non-linear operators from the same representation.
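
The key property is that a pointwise non-linear operator f satisfies E[f(X)] != f(E[X]) in general, so a correct coarse-level result needs the pdf of the footprint, not just its average. A rough sketch of that idea follows (dense per-block histograms rather than the paper's sparse encoding; block size and bin count are illustrative):

```python
# Illustrative sketch of the pdf-pyramid idea: each coarse pixel stores a pdf of its
# footprint in the full-resolution image, and a non-linear operator f is evaluated as
# an expectation over that pdf (anti-aliased), instead of on the pre-averaged value.
import numpy as np

def pdf_level(img, block=4, bins=64):
    h, w = img.shape
    blocks = img[:h - h % block, :w - w % block].reshape(h // block, block, w // block, block)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(h // block, w // block, block * block)
    edges = np.linspace(0.0, 1.0, bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    pdfs = np.stack([[np.histogram(px, bins=edges, density=True)[0]
                      for px in row] for row in blocks])
    return pdfs, centers

def apply_nonlinear(pdfs, centers, f):
    # E[f(X)] per coarse pixel.
    weights = pdfs / pdfs.sum(axis=-1, keepdims=True)
    return (weights * f(centers)).sum(axis=-1)

# Example with a gamma curve, where f(mean(X)) != mean(f(X)):
# coarse = apply_nonlinear(*pdf_level(gray_image), f=lambda v: v ** 0.5)
```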

DressUp!: outfit synthesis through automatic optimization

http://dl.acm.org/citation.cfm?id=2366153
Lap-Fai Yu, Sai-Kit Yeung, Demetri Terzopoulos, Tony F. Chan

We present an automatic optimization approach to outfit synthesis. Given the hair color, eye color, and skin color of the input body, plus a wardrobe of clothing items, our outfit synthesis system suggests a set of outfits subject to a particular dress code. We introduce a probabilistic framework for modeling and applying dress codes that exploits a Bayesian network trained on example images of real-world outfits. Suitable outfits are then obtained by optimizing a cost function that guides the selection of clothing items to maximize the color compatibility and dress code suitability. We demonstrate our approach on the four most common dress codes: Casual, Sportswear, Business-Casual, and Business.

Example-based synthesis of 3D object arrangements

http://dl.acm.org/citation.cfm?id=2366154
Matthew Fisher, Daniel Ritchie, Manolis Savva, Thomas Funkhouser, Pat Hanrahan

We present a method for synthesizing 3D object arrangements from examples. Given a few user-provided examples, our system can synthesize a diverse set of plausible new scenes by learning from a larger scene database. We rely on three novel contributions. First, we introduce a probabilistic model for scenes based on Bayesian networks and Gaussian mixtures that can be trained from a small number of input examples. Second, we develop a clustering algorithm that groups objects occurring in a database of scenes according to their local scene neighborhoods. These contextual categories allow the synthesis process to treat a wider variety of objects as interchangeable.

An interactive approach to semantic modeling of indoor scenes with an RGBD camera

http://dl.acm.org/citation.cfm?id=2366155
Tianjia Shao, Weiwei Xu, Kun Zhou, Jingdong Wang, Dongping Li, Baining Guo

We present an interactive approach to semantic modeling of indoor scenes with a consumer-level RGBD camera. Using our approach, the user first takes an RGBD image of an indoor scene, which is automatically segmented into a set of regions with semantic labels. If the segmentation is not satisfactory, the user can draw some strokes to guide the algorithm to achieve better results. After the segmentation is finished, the depth data of each semantic region is used to retrieve a matching 3D model from a database. Each model is then transformed according to the image depth to yield the scene. For large scenes where a single image can only cover one part of the scene, the user can take multiple images to construct other parts of the scene.

A search-classify approach for cluttered indoor scene understanding

http://dl.acm.org/citation.cfm?id=2366156
Liangliang Nan, Ke Xie, Andrei Sharf

We present an algorithm for recognition and reconstruction of scanned 3D indoor scenes. 3D indoor reconstruction is particularly challenging due to object interferences, occlusions and overlapping which yield incomplete yet very complex scene arrangements. Since it is hard to assemble scanned segments into complete models, traditional methods for object recognition and reconstruction would be inefficient. We present a search-classify approach which interleaves segmentation and classification in an iterative manner. Using a robust classifier we traverse the scene and gradually propagate classification information. We reinforce classification by a template fitting step which yields a scene reconstruction. We deform-to-fit templates to classified objects to resolve classification ambiguities.

Acquiring 3D indoor environments with variability and repetition

http://dl.acm.org/citation.cfm?id=2366157
Young Min Kim, Niloy J. Mitra, Dong-Ming Yan, Leonidas Guibas

Large-scale acquisition of exterior urban environments is by now a well-established technology, supporting many applications in search, navigation, and commerce. The same is, however, not the case for indoor environments, where access is often restricted and the spaces are cluttered. Further, such environments typically contain a high density of repeated objects (e.g., tables, chairs, monitors, etc.) in regular or non-regular arrangements with significant pose variations and articulations. In this paper, we exploit the special structure of indoor environments to accelerate their 3D acquisition and recognition with a low-end handheld scanner. Our approach runs in two phases: (i) a learning phase wherein we acquire 3D models of frequently occurring objects and capture their variability modes from only a few scans, and (ii) a recognition phase wherein from a single scan of a new area, we identify previously seen objects but in different poses and locations at an average recognition time of …

Structure extraction from texture via relative total variation

http://dl.acm.org/citation.cfm?id=2366158
Li Xu, Qiong Yan, Yang Xia, Jiaya Jia

Meaningful structures are ubiquitously formed by, or appear over, textured surfaces. Extracting them under the complication of texture patterns, which could be regular, near-regular, or irregular, is very challenging but of great practical importance. We propose new inherent variation and relative total variation measures, which capture the essential difference between these two types of visual forms, and develop an efficient optimization system to extract main structures. The new variation measures are validated on millions of sample patches.
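
A rough sketch of the two measures as described (my reading, with a Gaussian window; the smoothing sigma and epsilon are illustrative): the windowed total variation D accumulates absolute gradients, while the inherent variation L is the magnitude of the summed signed gradients, which stays small inside oscillating texture but remains large across true structure edges.

```python
# Sketch of the windowed variation measures (illustrative parameters, not the paper's code).
import numpy as np
from scipy.ndimage import gaussian_filter

def relative_total_variation(img, sigma=3.0, eps=1e-3):
    gx = np.diff(img, axis=1, append=img[:, -1:])   # forward differences
    gy = np.diff(img, axis=0, append=img[-1:, :])
    Dx = gaussian_filter(np.abs(gx), sigma)         # windowed total variation
    Dy = gaussian_filter(np.abs(gy), sigma)
    Lx = np.abs(gaussian_filter(gx, sigma))         # windowed inherent variation
    Ly = np.abs(gaussian_filter(gy, sigma))
    # Large in textured regions (D big, L small), closer to 1 on genuine structure edges,
    # so it can serve as a per-pixel penalty that smooths texture away while keeping structure.
    return Dx / (Lx + eps) + Dy / (Ly + eps)
```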

Digital reconstruction of halftoned color comics

http://dl.acm.org/citation.cfm?id=2366159
Johannes Kopf, Dani Lischinski

We introduce a method for automated conversion of scanned color comic books and graphical novels into a new high-fidelity rescalable digital representation. Since crisp black line artwork and lettering are the most important structural and stylistic elements in this important genre of color illustrations, our digitization process is geared towards faithful reconstruction of these elements. This is a challenging task, because commercial presses perform halftoning (screening) to approximate continuous tones and colors with overlapping grids of dots. Although a large number of inverse halftoning (descreening) methods exist, they typically blur the intricate black artwork. Our approach is specifically designed to descreen color comics, which typically reproduce color using screened CMY inks, but print the black artwork using non-screened solid black ink.

Automatic stylistic manga layout

http://dl.acm.org/citation.cfm?id=2366160
Ying Cao, Antoni B. Chan, Rynson W. H. Lau

Manga layout is a core component in manga production, characterized by its unique styles. However, stylistic manga layouts are difficult for novices to produce, as they require hands-on experience and domain knowledge. In this paper, we propose an approach to automatically generate a stylistic manga layout from a set of input artworks with user-specified semantics, thus allowing less-experienced users to create high-quality manga layouts with minimal effort. We first introduce three parametric style models that encode the unique stylistic aspects of manga layouts, including layout structure, panel importance, and panel shape. Next, we propose a two-stage approach to generate a manga layout: 1) an initial layout is created that best fits the input artworks and layout structure model, according to a generative probabilistic framework; 2) the layout and artwork geometries are jointly refined using an efficient optimization procedure, resulting in a professional-looking manga layout.

Lazy selection: a scribble-based tool for smart shape elements selection

http://dl.acm.org/citation.cfm?id=2366161
Pengfei Xu, Hongbo Fu, Oscar Kin-Chung Au, Chiew-Lan Tai

This paper presents Lazy Selection, a scribble-based tool for quick selection of one or more desired shape elements by roughly stroking through the elements. Our algorithm automatically refines the selection and reveals the user’s intention. To give the user maximum flexibility but least ambiguity, our technique first extracts selection candidates from the scribble-covered elements by examining the underlying patterns and then ranks them based on their location and shape with respect to the user-sketched scribble. Such a design makes our tool tolerant to imprecise input systems and applicable to touch systems without suffering from the fat finger problem.

Material memex: automatic material suggestions for 3D objects

http://dl.acm.org/citation.cfm?id=2366162
Arjun Jain, Thorsten Thormählen, Tobias Ritschel, Hans-Peter Seidel

The material found on 3D objects and their parts in our everyday surroundings is highly correlated with the geometric shape of the parts and their relation to other parts of the same object. This work proposes to model this context-dependent correlation by learning it from a database containing several hundred objects and their materials. Given a part-based 3D object without materials, the learned model can be used to fully automatically assign plausible material parameters, including diffuse color, specularity, gloss, and transparency. Further, we propose a user interface that provides material suggestions. This user interface can be used, for example, to refine the automatic suggestion.

Interactive bi-scale editing of highly glossy materials

http://dl.acm.org/citation.cfm?id=2366163
Kei Iwasaki, Yoshinori Dobashi, Tomoyuki Nishita

We present a new technique for bi-scale material editing using Spherical Gaussians (SGs). To represent large-scale appearances, an effective BRDF that is the average reflectance of small-scale details is used. The effective BRDF is calculated from the integral of the product of the Bidirectional Visible Normal Distribution (BVNDF) and BRDFs of small-scale geometry. Our method represents the BVNDF with a sum of SGs, which can be calculated on-the-fly, enabling interactive editing of small-scale geometry. By representing small-scale BRDFs with a sum of SGs, effective BRDFs can be calculated analytically by convolving the SGs for BVNDF and BRDF. We propose a new SG representation based on convolution of two SGs, which allows real-time rendering of effective BRDFs under all-frequency environment lighting and real-time editing of small-scale BRDFs.
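
A spherical Gaussian G(v) = mu * exp(lam * (dot(v, p) - 1)) is closed under products and has a closed-form spherical integral, which is what makes the on-the-fly products and integrals above cheap. Here is a minimal sketch of those standard SG identities (an illustration of the representation, not the paper's code):

```python
# Standard spherical-Gaussian identities (illustrative sketch).
import numpy as np

def sg_eval(p, lam, mu, v):
    return mu * np.exp(lam * (np.dot(p, v) - 1.0))

def sg_product(p1, lam1, mu1, p2, lam2, mu2):
    # The product of two SGs is again an SG.
    um = lam1 * np.asarray(p1) + lam2 * np.asarray(p2)
    lam_m = np.linalg.norm(um)
    p_m = um / lam_m
    mu_m = mu1 * mu2 * np.exp(lam_m - lam1 - lam2)
    return p_m, lam_m, mu_m

def sg_integral(lam, mu):
    # Closed-form integral of an SG over the sphere.
    return 2.0 * np.pi * mu / lam * (1.0 - np.exp(-2.0 * lam))

def sg_inner_product(p1, lam1, mu1, p2, lam2, mu2):
    # Spherical integral of the product of two SGs -- the building block for combining
    # a BVNDF lobe with a BRDF lobe represented as sums of SGs.
    p_m, lam_m, mu_m = sg_product(p1, lam1, mu1, p2, lam2, mu2)
    return sg_integral(lam_m, mu_m)
```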

An inverse problem approach for automatically adjusting the parameters for rendering clouds using photographs

http://dl.acm.org/citation.cfm?id=2366164
Yoshinori Dobashi, Wataru Iwasaki, Ayumi Ono, Tsuyoshi Yamamoto, Yonghao Yue, Tomoyuki Nishita

Clouds play an important role in creating realistic images of outdoor scenes. Many methods have therefore been proposed for displaying realistic clouds. However, the realism of the resulting images depends on many parameters used to render them and it is often difficult to adjust those parameters manually. This paper proposes a method for addressing this problem by solving an inverse rendering problem: given a non-uniform synthetic cloud density distribution, the parameters for rendering the synthetic clouds are estimated using photographs of real clouds. The objective function is defined as the difference between the color histograms of the photograph and the synthetic image.
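
The objective in the last sentence is simple to state concretely: render the synthetic clouds with a candidate parameter vector, build color histograms of the rendered image and of the photograph, and measure their difference. A hedged sketch follows; the bin count, distance norm, and the black-box render(params) function are illustrative assumptions.

```python
# Sketch of the histogram-difference objective (binning and norm are illustrative choices).
import numpy as np

def color_histogram(img, bins=32):
    # img: (H, W, 3) float array in [0, 1]; one normalized histogram per channel.
    hists = [np.histogram(img[..., c], bins=bins, range=(0.0, 1.0), density=True)[0]
             for c in range(3)]
    return np.concatenate(hists)

def objective(params, render, photo):
    # render(params) -> synthetic cloud image; smaller objective = better parameter fit.
    synthetic = render(params)
    return np.sum((color_histogram(synthetic) - color_histogram(photo)) ** 2)

# Since the renderer is a black box, a derivative-free optimizer is a natural fit:
# from scipy.optimize import minimize
# best = minimize(objective, x0, args=(render, photo), method="Nelder-Mead")
```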

Lighting hair from the inside: a thermal approach to hair reconstruction

http://dl.acm.org/citation.cfm?id=2366165
Tomas Lay Herrera, Arno Zinke, Andreas Weber

Generating plausible hairstyles is a very challenging problem. Despite recent efforts, no definitive solution has been presented so far. Many of the current limitations are related to the optical complexity of hair. In this paper we present a technique for hair reconstruction based on thermal imaging. By using this technique several issues of conventional image-based techniques, such as shadowing and anisotropy in reflectance, can be avoided. Moreover, hair-skin segmentation becomes a trivial problem, and no special care about lighting has to be taken, as the hair is "lit from inside" with the head as light source. The capture process is fast and requires a single hand-held device only.

New measurements reveal weaknesses of image quality metrics in evaluating graphics artifacts

http://dl.acm.org/citation.cfm?id=2366166
Martin Čadík, Robert Herzog, Rafał Mantiuk, Karol Myszkowski, Hans-Peter Seidel

Reliable detection of global illumination and rendering artifacts in the form of localized distortion maps is important for many graphics applications. Although many quality metrics have been developed for this task, they are often tuned for compression/transmission artifacts and have not been evaluated in the context of synthetic CG-images. In this work, we run two experiments where observers use a brush-painting interface to directly mark image regions with noticeable/objectionable distortions in the presence/absence of a high-quality reference image, respectively. The collected data shows a relatively high correlation between the with-reference and no-reference observer markings. Also, our demanding per-pixel image-quality datasets reveal weaknesses of both simple (PSNR, MSE, sCIE-Lab) and advanced (SSIM, MS-SSIM, HDR-VDP-2) quality metrics.

Large-scale fluid simulation using velocity-vorticity domain decomposition

http://dl.acm.org/citation.cfm?id=2366167
Abhinav Golas, Rahul Narain, Jason Sewall, Pavel Krajcevski, Pradeep Dubey, Ming Lin

Simulating fluids in large-scale scenes with appreciable quality using state-of-the-art methods can lead to high memory and compute requirements. Since memory requirements are proportional to the product of domain dimensions, simulation performance is limited by memory access, as solvers for elliptic problems are not compute-bound on modern systems. This is a significant concern for large-scale scenes. To reduce the memory footprint and memory/compute ratio, vortex singularity bases can be used. Though they form a compact basis for incompressible vector fields, robust and efficient modeling of nonrigid obstacles and free-surfaces can be challenging with these methods. We propose a hybrid domain decomposition approach that couples Eulerian velocity-based simulations with vortex singularity simulations.

Staggered meshless solid-fluid coupling

http://dl.acm.org/citation.cfm?id=2366168
Xiaowei He, Ning Liu, Guoping Wang, Fengjun Zhang, Sheng Li, Songdong Shao, Hongan Wang

Simulating solid-fluid coupling with classical meshless methods is a difficult issue due to the lack of the Kronecker delta property of the shape functions when enforcing the essential boundary conditions. In this work, we present a novel staggered meshless method to overcome this problem. We create a set of staggered particles from the original particles in each time step by mapping the mass and momentum onto these staggered particles, aiming to stagger the velocity field from the pressure field. Based on this arrangement, a new approximate projection method is proposed to enforce a divergence-free condition on the fluid velocity with compatible boundary conditions.

Automated constraint placement to maintain pile shape

http://dl.acm.org/citation.cfm?id=2366169
Shu-Wei Hsu, John Keyser

We present a simulation control to support art-directable stacking designs by automatically adding constraints to stabilize the stacking structure. We begin by adapting equilibrium analysis in a local scheme to find "stable" objects of the stacking structure. Next, for stabilizing the structure, we pick suitable objects from those passing the equilibrium analysis and then restrict their DOFs by managing the insertion of constraints on them. The method is suitable for controlling large-scale stacking behavior. Results show that our control method can be used in varied ways for creating plausible animation.

Speculative parallel asynchronous contact mechanics

http://dl.acm.org/citation.cfm?id=2366170
Samantha Ainsley, Etienne Vouga, Eitan Grinspun, Rasmus Tamstorf

We extend the Asynchronous Contact Mechanics algorithm [Harmon et al. 2009] and improve its performance by two orders of magnitude, using only optimizations that do not compromise ACM’s three guarantees of safety, progress, and correctness. The key to this speedup is replacing ACM’s timid, forward-looking mechanism for detecting collisions—locating and rescheduling separating plane kinetic data structures—with an optimistic speculative method inspired by Mirtich’s rigid body Time Warp algorithm [2000]. Time warp allows us to perform collision detection over a window of time containing many of ACM’s asynchronous trajectory changes; in this way we cull away large intervals as being collision free.

Adaptive anisotropic remeshing for cloth simulation

http://dl.acm.org/citation.cfm?id=2366171
Rahul Narain, Armin Samii, James F. O’Brien

We present a technique for cloth simulation that dynamically refines and coarsens triangle meshes so that they automatically conform to the geometric and dynamic detail of the simulated cloth. Our technique produces anisotropic meshes that adapt to surface curvature and velocity gradients, allowing efficient modeling of wrinkles and waves. By anticipating buckling and wrinkle formation, our technique preserves fine-scale dynamic behavior. Our algorithm for adaptive anisotropic remeshing is simple to implement, takes up only a small fraction of the total simulation time, and provides substantial computational speedup without compromising the fidelity of the simulation. We also introduce a novel technique for strain limiting by posing it as a nonlinear optimization problem.

Motion graphs++: a compact generative model for semantic motion analysis and synthesis

http://dl.acm.org/citation.cfm?id=2366172
Jianyuan Min, Jinxiang Chai

This paper introduces a new generative statistical model that allows for human motion analysis and synthesis at both semantic and kinematic levels. Our key idea is to decouple complex variations of human movements into finite structural variations and continuous style variations and encode them with a concatenation of morphable functional models. This allows us to model not only a rich repertoire of behaviors but also an infinite number of style variations within the same action. Our models are appealing for motion analysis and synthesis because they are highly structured, contact aware, and embed semantics. We have constructed a compact generative motion model from a huge and heterogeneous motion database (about two hours of mocap data and more than 15 different actions).

Terrain runner: control, parameterization, composition, and planning for highly dynamic motions

http://dl.acm.org/citation.cfm?id=2366173
Libin Liu, KangKang Yin, Michiel van de Panne, Baining Guo

In this paper we learn the skills required by real-time physics-based avatars to perform parkour-style fast terrain crossing using a mix of running, jumping, speed-vaulting, and drop-rolling. We begin with a single motion capture example of each skill and then learn reduced-order linear feedback control laws that provide robust execution of the motions during forward dynamic simulation. We then parameterize each skill with respect to the environment, such as the height of obstacles, or with respect to the task parameters, such as running speed and direction. We employ a continuation process to achieve the required parameterization of the motions and their affine feedback laws.

Falling and landing motion control for character animation

http://dl.acm.org/citation.cfm?id=2366174
Sehoon Ha, Yuting Ye, C. Karen Liu

We introduce a new method to generate agile and natural human landing motions in real-time via physical simulation without using any mocap or pre-scripted sequences. We develop a general controller that allows the character to fall from a wide range of heights and initial speeds, continuously roll on the ground, and get back on its feet, without inducing large stress on joints at any moment. The character’s motion is generated through a forward simulator and a control algorithm that consists of an airborne phase and a landing phase. During the airborne phase, the character optimizes its moment of inertia to meet the ideal relation between the landing velocity and the angle of attack, under the laws of conservation of momentum.

Synthesis of concurrent object manipulation tasks

http://dl.acm.org/citation.cfm?id=2366175
Yunfei Bai, Kristin Siu, C. Karen Liu

We introduce a physics-based method to synthesize concurrent object manipulation using a variety of manipulation strategies provided by different body parts, such as grasping objects with the hands, carrying objects on the shoulders, or pushing objects with the elbows or the torso. We design dynamic controllers to physically simulate upper-body manipulation and integrate it with procedurally generated locomotion and hand grasping motion. The output of the algorithm is a continuous animation of the character manipulating multiple objects and environment features concurrently at various locations in a constrained environment. To capture how humans deftly exploit different properties of body parts and objects for multitasking, we need to solve challenging planning and execution problems.

Sculpting by numbers

http://dl.acm.org/citation.cfm?id=2366176
Alec Rivers, Andrew Adams, Frédo Durand

We propose a method that allows an unskilled user to create an accurate physical replica of a digital 3D model. We use a projector/camera pair to scan a work in progress, and project multiple forms of guidance onto the object itself that indicate which areas need more material, which need less, and where any ridges, valleys or depth discontinuities are. The user adjusts the model using the guidance and iterates, making the shape of the physical object approach that of the target 3D model over time. We show how this approach can be used to create a duplicate of an existing object, by scanning the object and using that scan as the target shape.

Stackabilization

http://dl.acm.org/citation.cfm?id=2366177
Honghua Li, Ibraheem Alhashim, Hao Zhang, Ariel Shamir, Daniel Cohen-Or

We introduce the geometric problem of stackabilization: how to geometrically modify a 3D object so that it is more amenable to stacking. Given a 3D object and a stacking direction, we define a measure of stackability, which is derived from the gap between the lower and upper envelopes of the object in a stacking configuration along the stacking direction. The main challenge in stackabilization lies in the desire to modify the object’s geometry only subtly so that the intended functionality and aesthetic appearance of the original object are not significantly affected. We present an automatic algorithm to deform a 3D object to meet a target stackability score using energy minimization.
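
One plausible reading of the envelope-gap measure (my interpretation for illustration, not the paper's exact definition): represent the object by upper and lower envelope heightfields along the stacking direction; the next copy in a stack must be lifted by the largest remaining gap between the previous copy's upper envelope and its own lower envelope, and stackability compares that per-copy increment with the object's total height.

```python
# Hypothetical sketch of a stacking-gap style measure (an interpretation, not the paper's).
import numpy as np

def stacking_offset(U, L, mask):
    # U, L: (H, W) upper/lower envelope heightfields; mask: the object's footprint.
    # Minimal vertical lift so the next copy's lower envelope clears this copy's upper envelope.
    return np.max((U - L)[mask])

def stackability(U, L, mask):
    height = U[mask].max() - L[mask].min()   # overall object height
    d = stacking_offset(U, L, mask)          # per-copy increment in the stack
    return height / d                        # 1 = no nesting at all; larger = stacks better
```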

Structural optimization of 3D masonry buildings

http://dl.acm.org/citation.cfm?id=2366178
Emily Whiting, Hijung Shin, Robert Wang, John Ochsendorf, Frédo Durand

In the design of buildings, structural analysis is traditionally performed after the aesthetic design has been determined and has little influence on the overall form. In contrast, this paper presents an approach to guide the form towards a shape that is more structurally sound. Our work is centered on the study of how variations of the geometry might improve structural stability. We define a new measure of structural soundness for masonry buildings as well as cables, and derive its closed-form derivative with respect to the displacement of all the vertices describing the geometry. We start with a gradient descent tool which displaces each vertex along the gradient.

Depth-presorted triangle lists

http://dl.acm.org/citation.cfm?id=2366179
Ge Chen, Pedro V. Sander, Diego Nehab, Lei Yang, Liang Hu

We present a novel approach for real-time rendering of static 3D models front-to-back or back-to-front relative to any viewpoint outside its bounding volume. The approach renders depth-sorted triangles using a single draw-call. At run-time, we replace the traditional sorting strategy of existing algorithms with a faster triangle selection strategy. The selection process operates on an extended sequence of triangles annotated by test planes, created by our off-line preprocessing stage. Based on these test planes, a simple run-time procedure uses the given viewpoint to select a subsequence of triangles for rasterization. Selected subsequences are statically presorted by depth and contain each input triangle exactly once.

Softshell: dynamic scheduling on GPUs

http://dl.acm.org/citation.cfm?id=2366180
Markus Steinberger, Bernhard Kainz, Bernhard Kerbl, Stefan Hauswiesner, Michael Kenzel, Dieter Schmalstieg

In this paper we present Softshell, a novel execution model for devices composed of multiple processing cores operating in a single instruction, multiple data fashion, such as graphics processing units (GPUs). The Softshell model is intuitive and more flexible than the kernel-based adaption of the stream processing model, which is currently the dominant model for general purpose GPU computation. Using the Softshell model, algorithms with a relatively low local degree of parallelism can execute efficiently on massively parallel architectures. Softshell has the following distinct advantages: (1) work can be dynamically issued directly on the device, eliminating the need for synchronization with an external source, i.e., the CPU; (2) its three-tier dynamic scheduler supports arbitrary scheduling strategies, including dynamic priorities and real-time scheduling; and (3) the user can influence, pause, and cancel work already submitted for parallel execution.

High-quality curve rendering using line sampled visibility

http://dl.acm.org/citation.cfm?id=2366181
Rasmus Barringer, Carl Johan Gribel, Tomas Akenine-Möller

Computing accurate visibility for thin primitives, such as hair strands, fur, and grass, at all scales remains difficult or expensive. To address this, we present an efficient visibility algorithm based on spatial line sampling, and a novel intersection algorithm between line sample planes and Bézier splines with varying thickness. Our algorithm produces accurate visibility both when the projected width of the curve is a tiny fraction of a pixel, and when the projected width is tens of pixels. In addition, we present a rapid resolve procedure that computes final visibility.

Axis-aligned filtering for interactive sampled soft shadows

http://dl.acm.org/citation.cfm?id=2366182
Soham Uday Mehta, Brandon Wang, Ravi Ramamoorthi

We develop a simple and efficient method for soft shadows from planar area light sources, based on explicit occlusion calculation by raytracing, followed by adaptive image-space filtering. Since the method is based on Monte Carlo sampling, it is accurate. Since the filtering is in image-space, it adds minimal overhead and can be performed at real-time frame rates. We obtain interactive speeds, using the OptiX GPU raytracing framework. Our technical approach derives from recent work on frequency analysis and sheared pixel-light filtering for offline soft shadows. While sample counts can be reduced dramatically, the sheared filtering step is slow, adding minutes of overhead.

Foveated 3D graphics

http://dl.acm.org/citation.cfm?id=2366183
Brian Guenter, Mark Finch, Steven Drucker, Desney Tan, John Snyder

We exploit the falloff of acuity in the visual periphery to accelerate graphics computation by a factor of 5-6 on a desktop HD display (1920×1080). Our method tracks the user’s gaze point and renders three image layers around it at progressively higher angular size but lower sampling rate. The three layers are then magnified to display resolution and smoothly composited. We develop a general and efficient antialiasing algorithm easily retrofitted into existing graphics code to minimize "twinkling" artifacts in the lower-resolution layers. A standard psychophysical model for acuity falloff assumes that minimum detectable angular size increases linearly as a function of eccentricity.
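
The linear acuity model mentioned at the end directly bounds how coarsely each eccentricity layer may be sampled. A small sketch of that reasoning (the slope and foveal minimum-angle-of-resolution below are illustrative placeholders, not the paper's fitted values):

```python
# Linear acuity (minimum-angle-of-resolution) model and the sampling rate it implies.
import numpy as np

def mar_degrees(ecc_deg, m=0.03, w0=1.0 / 48.0):
    # Minimum detectable angular size (degrees) grows linearly with eccentricity.
    return w0 + m * ecc_deg

def min_pixels_per_degree(ecc_deg, m=0.03, w0=1.0 / 48.0):
    # Nyquist: at least two samples per smallest detectable feature at this eccentricity.
    return 2.0 / mar_degrees(ecc_deg, m, w0)

# A layer is rendered at the rate required at its innermost eccentricity, which is why
# the outer layers can use a much lower resolution than the foveal layer:
for e in (0.0, 5.0, 15.0, 40.0):
    print(f"eccentricity {e:>4.1f} deg -> {min_pixels_per_degree(e):5.1f} px/deg")
```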

Active co-analysis of a set of shapes

http://dl.acm.org/citation.cfm?id=2366184
Yunhai Wang, Shmulik Asafi, Oliver van Kaick, Hao Zhang, Daniel Cohen-Or, Baoquan Chen

Unsupervised co-analysis of a set of shapes is a difficult problem since the geometry of the shapes alone cannot always fully describe the semantics of the shape parts. In this paper, we propose a semi-supervised learning method where the user actively assists in the co-analysis by iteratively providing inputs that progressively constrain the system. We introduce a novel constrained clustering method based on a spring system which embeds elements to better respect their inter-distances in feature space together with the user-given set of constraints. We also present an active learning method that suggests to the user where his input is likely to be the most effective in refining the results.

Co-abstraction of shape collections

http://dl.acm.org/citation.cfm?id=2366185
Mehmet Ersin Yumer, Levent Burak Kara

We present a co-abstraction method that takes as input a collection of 3D objects, and produces a mutually consistent and individually identity-preserving abstraction of each object. In general, an abstraction is a simpler version of a shape that preserves its main characteristics. We hypothesize, however, that there is no single abstraction of an object. Instead, there is a variety of possible abstractions, and an admissible one can only be chosen conjointly with other objects’ abstractions. To this end, we introduce a new approach that hierarchically generates a spectrum of abstractions for each model in a shape collection.

An optimization approach for extracting and encoding consistent maps in a shape collection

http://dl.acm.org/citation.cfm?id=2366186
Qi-Xing Huang, Guo-Xin Zhang, Lin Gao, Shi-Min Hu, Adrian Butscher, Leonidas Guibas

We introduce a novel approach for computing high quality point-to-point maps among a collection of related shapes. The proposed approach takes as input a sparse set of imperfect initial maps between pairs of shapes and builds a compact data structure which implicitly encodes an improved set of maps between all pairs of shapes. These maps align well with point correspondences selected from initial maps; they map neighboring points to neighboring points; and they provide cycle-consistency, so that map compositions along cycles approximate the identity map. The proposed approach is motivated by the fact that a complete set of maps between all pairs of shapes that admits nearly perfect cycle-consistency is highly redundant and can be represented by compositions of maps through a single base shape.

Inverse design of urban procedural models

http://dl.acm.org/citation.cfm?id=2366187
Carlos A. Vanegas, Ignacio Garcia-Dorado, Daniel G. Aliaga, Bedrich Benes, Paul Waddell

We propose a framework that enables adding intuitive high level control to an existing urban procedural model. In particular, we provide a mechanism to interactively edit urban models, a task which is important to stakeholders in gaming, urban planning, mapping, and navigation services. Procedural modeling allows a quick creation of large complex 3D models, but controlling the output is a well-known open problem. Thus, while forward procedural modeling has thrived, in this paper we add to the arsenal an inverse modeling tool. Users, unaware of the rules of the underlying urban procedural model, can alternatively specify arbitrary target indicators to control the modeling process.

Capturing and animating the morphogenesis of polygonal tree models

http://dl.acm.org/citation.cfm?id=2366188
Sören Pirk, Till Niese, Oliver Deussen, Boris Neubert

Given a static tree model, we present a method to compute developmental stages that approximate the tree's natural growth. The tree model is analyzed and a graph-based description of its skeleton is determined. Based on structural similarity, branches are added where pruning has been applied or branches have died off over time. Botanic growth models and allometric rules enable us to produce convincing animations from a young tree that converge to the given model. Furthermore, the user can explore all intermediate stages. By selectively applying the process to parts of the tree even complex models can be edited easily.

Analysis and synthesis of point distributions based on pair correlation

http://dl.acm.org/citation.cfm?id=2366189
A. Cengiz Öztireli, Markus Gross

Analyzing and synthesizing point distributions are of central importance for a wide range of problems in computer graphics. Existing synthesis algorithms can only generate white or blue-noise distributions with characteristics dictated by the underlying processes used, and analysis tools have not been focused on exploring relations among distributions. We propose a unified analysis and general synthesis algorithms for point distributions. We employ the pair correlation function as the basis of our methods and design synthesis algorithms that can generate distributions with given target characteristics, possibly extracted from an example point set, and introduce a unified characterization of distributions by mapping them to a space implied by pair correlations.
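
The pair correlation function itself is easy to estimate for a 2D point set with a smoothing kernel over pairwise distances. A minimal sketch follows (kernel bandwidth and normalization are illustrative; edge effects are ignored), useful for seeing the characteristic dip near zero distance that distinguishes blue noise from white noise:

```python
# Kernel-density PCF estimator for points in the unit square (illustrative sketch).
import numpy as np
from scipy.spatial.distance import pdist

def pair_correlation(points, radii, sigma=0.01):
    n = len(points)
    density = n / 1.0                              # unit-area domain
    d = pdist(points)                              # all pairwise distances
    pcf = np.empty_like(radii)
    for k, r in enumerate(radii):
        kernel = np.exp(-((d - r) ** 2) / sigma ** 2) / (np.sqrt(np.pi) * sigma)
        # 2x the unordered-pair sum, normalized by the expected count of a Poisson
        # process -> g(r) = 1 for white noise, g(r) -> 0 near r = 0 for blue noise.
        pcf[k] = 2.0 * kernel.sum() / (n * density * 2.0 * np.pi * r)
    return pcf

# radii = np.linspace(0.005, 0.2, 100)
# g = pair_correlation(np.random.rand(1024, 2), radii)
```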

Blue noise through optimal transport

http://dl.acm.org/citation.cfm?id=2366190
Fernando de Goes, Katherine Breeden, Victor Ostromoukhov, Mathieu Desbrun

We present a fast, scalable algorithm to generate high-quality blue noise point distributions of arbitrary density functions. At its core is a novel formulation of the recently-introduced concept of capacity-constrained Voronoi tessellation as an optimal transport problem. This insight leads to a continuous formulation able to enforce the capacity constraints exactly, unlike previous work. We exploit the variational nature of this formulation to design an efficient optimization technique of point distributions via constrained minimization in the space of power diagrams. Our mathematical, algorithmic, and practical contributions lead to high-quality blue noise point sets with improved spectral and spatial properties.

GPU-accelerated path rendering

http://dl.acm.org/citation.cfm?id=2366191
Mark J. Kilgard, Jeff Bolz

For thirty years, resolution-independent 2D standards (e.g. PostScript, SVG) have depended on CPU-based algorithms for the filling and stroking of paths. Advances in graphics hardware have largely ignored accelerating resolution-independent 2D graphics rendered from paths. We introduce a two-step "Stencil, then Cover" (StC) programming interface. Our GPU-based approach builds upon existing techniques for curve rendering using the stencil buffer, but we explicitly decouple in our programming interface the stencil step to determine a path’s filled or stroked coverage from the subsequent cover step to rasterize conservative geometry intended to test and reset the coverage determinations of the first step while shading color samples within the path.

A vectorial solver for free-form vector gradients

http://dl.acm.org/citation.cfm?id=2366192
Simon Boyé, Pascal Barla, Gaël Guennebaud

The creation of free-form vector drawings has been greatly improved in recent years with techniques based on (bi)-harmonic interpolation. Such methods offer the best trade-off between sparsity (keeping the number of control points small) and expressivity (achieving complex shapes and gradients). In this paper, we introduce a vectorial solver for the computation of free-form vector gradients. Based on Finite Element Methods (FEM), its key feature is to output a low-level vector representation suitable for very fast GPU-accelerated rasterization and closed-form evaluation. This intermediate representation is hidden from the user: it is dynamically updated using FEM during drawing when control points are edited.

Gaze correction for home video conferencing

http://dl.acm.org/citation.cfm?id=2366193
Claudia Kuster, Tiberiu Popa, Jean-Charles Bazin, Craig Gotsman, Markus Gross

Effective communication using current video conferencing systems is severely hindered by the lack of eye contact caused by the disparity between the locations of the subject and the camera. While this problem has been partially solved for high-end expensive video conferencing systems, it has not been convincingly solved for consumer-level setups. We present a gaze correction approach based on a single Kinect sensor that preserves both the integrity and expressiveness of the face as well as the fidelity of the scene as a whole, producing nearly artifact-free imagery. Our method is suitable for mainstream home video conferencing: it uses inexpensive consumer hardware, achieves real-time performance and requires just a simple and short setup.

Discontinuity-aware video object cutout

http://dl.acm.org/citation.cfm?id=2366194
Fan Zhong, Xueying Qin, Qunsheng Peng, Xiangxu Meng

Existing video object cutout systems can only deal with limited cases. They usually require detailed user interactions to segment real-life videos, which often suffer from both inseparable statistics (similar appearance between foreground and background) and temporal discontinuities (e.g. large movements, newly-exposed regions following disocclusion or topology change). In this paper, we present an efficient video cutout system to meet this challenge. A novel directional classifier is proposed to handle temporal discontinuities robustly, and then multiple classifiers are incorporated to cover a variety of cases. The outputs of these classifiers are integrated via another classifier, which is learnt from real examples. The foreground matte is solved by a coherent matting procedure, and remaining errors can be removed easily by additive spatio-temporal local editing.

Transfusive image manipulation

http://dl.acm.org/citation.cfm?id=2366195
Kaan Yücer, Alec Jacobson, Alexander Hornung, Olga Sorkine

We present a method for consistent automatic transfer of edits applied to one image to many other images of the same object or scene. By introducing novel, content-adaptive weight functions we enhance the non-rigid alignment framework of Lucas-Kanade to robustly handle changes of view point, illumination and non-rigid deformations of the subjects. Our weight functions are content-aware and possess high-order smoothness, making it possible to define high-quality image warps with a small number of parameters using spatially-varying weighted combinations of affine deformations. Optimizing the warp parameters leads to subpixel-accurate alignment while maintaining computational efficiency.

All-hex meshing using singularity-restricted field

http://dl.acm.org/citation.cfm?id=2366196
Yufei Li, Yang Liu, Weiwei Xu, Wenping Wang, Baining Guo

Decomposing a volume into high-quality hexahedral cells is a challenging task in geometric modeling and computational geometry. Inspired by the use of cross fields in quad meshing and the CubeCover approach in hex meshing, we present a complete all-hex meshing framework based on a singularity-restricted field, which is essential to induce a valid all-hex structure. Given a volume represented by a tetrahedral mesh, we first compute a boundary-aligned 3D frame field inside it, then make the frame field singularity-restricted through effective topological operations. In our all-hex meshing framework, we apply the CubeCover method to achieve the volume parametrization. To reduce degenerate elements appearing in the volume parametrization, we also propose novel tetrahedral split operations to preprocess singularity-restricted frame fields.

Design-driven quadrangulation of closed 3D curves

http://dl.acm.org/citation.cfm?id=2366197
Mikhail Bessmeltsev, Caoyu Wang, Alla Sheffer, Karan Singh

We propose a novel, design-driven, approach to quadrangulation of closed 3D curves created by sketch-based or other curve modeling systems. Unlike the multitude of approaches for quad-remeshing of existing surfaces, we rely solely on the input curves to both conceive and construct the quad-mesh of an artist imagined surface bounded by them. We observe that viewers complete the intended shape by envisioning a dense network of smooth, gradually changing, flow-lines that interpolates the input curves. Components of the network bridge pairs of input curve segments with similar orientation and shape. Our algorithm mimics this behavior. It first segments the input closed curves into pairs of matching segments, defining dominant flow line sequences across the surface.

Field-guided registration for feature-conforming shape composition

http://dl.acm.org/citation.cfm?id=2366198
Hui Huang, Minglun Gong, Daniel Cohen-Or, Yaobin Ouyang, Fuwen Tan, Hao Zhang

We present an automatic shape composition method to fuse two shape parts which may not overlap and possibly contain sharp features, a scenario often encountered when modeling man-made objects. At the core of our method is a novel field-guided approach to automatically align two input parts in a feature-conforming manner. The key to our field-guided shape registration is a natural continuation of one part into the ambient field as a means to introduce an overlap with the distant part, which then allows a surface-to-field registration. The ambient vector field we compute is feature-conforming; it characterizes a piecewise smooth field which respects and naturally extrapolates the surface features.

Structure recovery by part assembly

http://dl.acm.org/citation.cfm?id=2366199
Chao-Hui Shen, Hongbo Fu, Kang Chen, Shi-Min Hu

This paper presents a technique for quickly converting low-quality data acquired from consumer-level scanning devices into high-quality 3D models with labeled semantic parts, whose assembly remains reasonably close to the underlying geometry. This is achieved by a novel structure recovery approach that proceeds from local to global in a bottom-up manner, enabling the creation of new structures by assembling existing labeled parts with respect to the acquired data.

Multi-scale partial intrinsic symmetry detection

http://dl.acm.org/citation.cfm?id=2366200
Kai Xu, Hao Zhang, Wei Jiang, Ramsay Dyer, Zhiquan Cheng, Ligang Liu, Baoquan Chen

We present an algorithm for multi-scale partial intrinsic symmetry detection over 2D and 3D shapes, where the scale of a symmetric region is defined by intrinsic distances between symmetric points over the region. To identify prominent symmetric regions which overlap and vary in form and scale, we decouple scale extraction and symmetry extraction by performing two levels of clustering. First, significant symmetry scales are identified by clustering sample point pairs from an input shape. Since different point pairs can share a common point, shape regions covered by points in different scale clusters can overlap. We introduce the symmetry scale matrix (SSM), where each entry estimates the likelihood that two point pairs belong to symmetries at the same scale.

Perspective-aware warping for seamless stereoscopic image cloning

http://dl.acm.org/citation.cfm?id=2366201
Sheng-Jie Luo, I-Chao Shen, Bing-Yu Chen, Wen-Huang Cheng, Yung-Yu Chuang

This paper presents a novel technique for seamless stereoscopic image cloning, which performs both shape adjustment and color blending such that the stereoscopic composite is seamless in both the perceived depth and color appearance. The core of the proposed method is an iterative disparity adaptation process which alternates between two steps: disparity estimation, which re-estimates the disparities in the gradient domain so that the disparities are continuous across the boundary of the cloned region; and perspective-aware warping, which locally re-adjusts the shape and size of the cloned region according to the estimated disparities. This process guarantees not only depth continuity across the boundary but also models local perspective projection in accordance with the disparities, leading to more natural stereoscopic composites.
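
The disparity-estimation step re-estimates values in the gradient domain so they become continuous across the cloned boundary. A minimal illustration of that kind of solve is classic Poisson (seamless-cloning style) blending, sketched below; this is a generic gradient-domain blend, not the authors' exact formulation, and it assumes the mask does not touch the image border.

```python
# Minimal gradient-domain blend of a disparity map (illustrative, not the paper's method).
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def poisson_blend(src, dst, mask):
    # src, dst: (H, W) disparity maps; mask: boolean region cloned from src into dst.
    ys, xs = np.nonzero(mask)
    idx = -np.ones(dst.shape, dtype=int)
    idx[ys, xs] = np.arange(len(ys))
    A = lil_matrix((len(ys), len(ys)))
    b = np.zeros(len(ys))
    for n, (y, x) in enumerate(zip(ys, xs)):
        A[n, n] = 4.0
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            yy, xx = y + dy, x + dx
            b[n] += src[y, x] - src[yy, xx]      # keep the source-region gradients
            if mask[yy, xx]:
                A[n, idx[yy, xx]] = -1.0
            else:
                b[n] += dst[yy, xx]              # boundary values from the target map
    out = dst.copy()
    out[ys, xs] = spsolve(A.tocsr(), b)
    return out
```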

Enabling warping on stereoscopic images

http://dl.acm.org/citation.cfm?id=2366202
Yuzhen Niu, Wu-Chi Feng, Feng Liu

Warping is one of the basic image processing techniques. Directly applying existing monocular image warping techniques to stereoscopic images is problematic as it often introduces vertical disparities and damages the original disparity distribution. In this paper, we show that these problems can be solved by appropriately warping both the disparity map and the two images of a stereoscopic image. We accordingly develop a technique for extending existing image warping algorithms to stereoscopic images. This technique divides stereoscopic image warping into three steps. Our method first applies the user-specified warping to one of the two images. Our method then computes the target disparity map according to the user specified warping.

A luminance-contrast-aware disparity model and applications

http://dl.acm.org/citation.cfm?id=2366203
Piotr Didyk, Tobias Ritschel, Elmar Eisemann, Karol Myszkowski, Hans-Peter Seidel, Wojciech Matusik

Binocular disparity is one of the most important depth cues used by the human visual system. Recently developed stereo-perception models allow us to successfully manipulate disparity in order to improve viewing comfort, depth discrimination as well as stereo content compression and display. Nonetheless, all existing models neglect the substantial influence of luminance on stereo perception. Our work is the first to account for the interplay of luminance contrast (magnitude/frequency) and disparity and our model predicts the human response to complex stereo-luminance images. Besides improving existing disparity-model applications (e.g., difference metrics or compression), our approach offers new possibilities, such as joint luminance contrast and disparity manipulation or the optimization of auto-stereoscopic content.

Correcting for optical aberrations using multilayer displays

http://dl.acm.org/citation.cfm?id=2366204
Fu-Chung Huang, Douglas Lanman, Brian A. Barsky, Ramesh Raskar

Optical aberrations of the human eye are currently corrected using eyeglasses, contact lenses, or surgery. We describe a fourth option: modifying the composition of displayed content such that the perceived image appears in focus, after passing through an eye with known optical defects. Prior approaches synthesize pre-filtered images by deconvolving the content by the point spread function of the aberrated eye. Such methods have not led to practical applications, due to severely reduced contrast and ringing artifacts. We address these limitations by introducing multilayer pre-filtering, implemented using stacks of semi-transparent, light-emitting layers. By optimizing the layer positions and the partition of spatial frequencies between layers, contrast is improved and ringing artifacts are eliminated.
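
The single-layer baseline mentioned above (deconvolving the content by the eye's point spread function) can be sketched with a frequency-domain Wiener filter; the zeros in the PSF's spectrum are exactly what cause the contrast loss and ringing that the multilayer approach targets. The noise-to-signal constant below is an illustrative regularizer, not a value from the paper.

```python
# Sketch of single-layer pre-filtering via Wiener deconvolution (illustrative baseline).
import numpy as np

def wiener_prefilter(image, psf, nsr=1e-2):
    # image: (H, W) target; psf: same-size, centered point spread function of the eye.
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)              # regularized inverse filter
    pre = np.real(np.fft.ifft2(np.fft.fft2(image) * G))
    return np.clip(pre, 0.0, 1.0)                         # displays cannot emit negative light

# Blurring the pre-filtered image by the same PSF approximates the original target, i.e.
# the image appears (roughly) in focus to the aberrated eye:
# perceived = np.fft.ifft2(np.fft.fft2(wiener_prefilter(img, psf)) * H).real
```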

The magic lens: refractive steganography

http://dl.acm.org/citation.cfm?id=2366205
Marios Papas, Thomas Houit, Derek Nowrouzezahrai, Markus Gross, Wojciech Jarosz

We present an automatic approach to design and manufacture passive display devices based on optical hidden image decoding. Motivated by classical steganography techniques we construct Magic Lenses, composed of refractive lenslet arrays, to reveal hidden images when placed over potentially unstructured printed or displayed source images. We determine the refractive geometry of these surfaces by formulating and efficiently solving an inverse light transport problem, taking into account additional constraints imposed by the physical manufacturing processes. We fabricate several variants on the basic magic lens idea including using a single source image to encode several hidden images which are only revealed when the lens is placed at prescribed orientations on the source image or viewed from different angles.

Lightweight binocular facial performance capture under uncontrolled lighting

http://dl.acm.org/citation.cfm?id=2366206
Levi Valgaerts, Chenglei Wu, Andrés Bruhn, Hans-Peter Seidel, Christian Theobalt

Recent progress in passive facial performance capture has shown impressively detailed results on highly articulated motion. However, most methods rely on complex multi-camera set-ups, controlled lighting or fiducial markers. This prevents them from being used in general environments, outdoor scenes, during live action on a film set, or by freelance animators and everyday users who want to capture their digital selves. In this paper, we therefore propose a lightweight passive facial performance capture approach that is able to reconstruct high-quality dynamic facial geometry from only a single pair of stereo cameras. Our method succeeds under uncontrolled and time-varying lighting, and also in outdoor scenes.

Accurate realtime full-body motion capture using a single depth camera

http://dl.acm.org/citation.cfm?id=2366207
Xiaolin Wei, Peizhao Zhang, Jinxiang Chai

We present a fast, automatic method for accurately capturing full-body motion data using a single depth camera. At the core of our system lies a realtime registration process that accurately reconstructs 3D human poses from single monocular depth images, even in the case of significant occlusions. The idea is to formulate the registration problem in a Maximum A Posteriori (MAP) framework and iteratively register a 3D articulated human body model with monocular depth cues via linear system solvers. We integrate depth data, silhouette information, full-body geometry, temporal pose priors, and occlusion reasoning into a unified MAP estimation framework. Our 3D tracking process, however, requires manual initialization and recovery from failures.

Data-driven finger motion synthesis for gesturing characters

http://dl.acm.org/citation.cfm?id=2366208
Sophie Jörg, Jessica Hodgins, Alla Safonova

Capturing the body movements of actors to create animations for movies, games, and VR applications has become standard practice, but finger motions are usually added manually as a tedious post-processing step. In this paper, we present a surprisingly simple method to automate this step for gesturing and conversing characters. In a controlled environment, we carefully captured and post-processed finger and body motions from multiple actors. To augment the body motions of virtual characters with plausible and detailed finger movements, our method selects finger motion segments from the resulting database taking into account the similarity of the arm motions and the smoothness of consecutive finger motions.
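
The selection step can be written as minimizing a cost that trades off arm-motion similarity against the smoothness of consecutive finger clips. Below is a greedy per-segment sketch; the cost weights and distance functions are hypothetical stand-ins, not the paper's, and the paper may well optimize over whole sequences rather than greedily.

```python
# Greedy sketch of database segment selection (hypothetical costs, illustrative only).
import numpy as np

def arm_distance(query_arm, candidate_arm):
    # e.g. mean joint-angle difference between the query arm segment and a database clip
    return float(np.mean(np.abs(query_arm - candidate_arm)))

def transition_cost(prev_fingers, candidate_fingers):
    # discontinuity between the end of the previous finger clip and the start of this one
    if prev_fingers is None:
        return 0.0
    return float(np.linalg.norm(prev_fingers[-1] - candidate_fingers[0]))

def select_finger_motion(arm_segments, database, w_sim=1.0, w_smooth=0.5):
    # database: list of (arm_clip, finger_clip) pairs; returns one finger clip per segment.
    chosen, prev = [], None
    for arm in arm_segments:
        costs = [w_sim * arm_distance(arm, a) + w_smooth * transition_cost(prev, f)
                 for a, f in database]
        _, best_fingers = database[int(np.argmin(costs))]
        chosen.append(best_fingers)
        prev = best_fingers
    return chosen
```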

A statistical similarity measure for aggregate crowd dynamics

http://dl.acm.org/citation.cfm?id=2366209
Stephen J. Guy, Jur van den Berg, Wenxi Liu, Rynson Lau, Ming C. Lin, Dinesh Manocha

We present an information-theoretic method to measure the similarity between a given set of observed, real-world data and a visual simulation technique for aggregate crowd motions of a complex system consisting of many individual agents. This metric uses a two-step process to quantify a simulator’s ability to reproduce the collective behaviors of the whole system, as observed in the recorded real-world data. First, Bayesian inference is used to estimate the simulation states which best correspond to the observed data; then a maximum likelihood estimator is used to approximate the prediction errors. This process is iterated using the EM algorithm to produce a robust, statistical estimate of the magnitude of the prediction error as measured by its entropy (smaller is better).
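
If the per-step prediction error is modeled as a zero-mean Gaussian with covariance \Sigma estimated by the EM procedure, its entropy (the reported score, smaller is better) takes the standard closed form

H = \tfrac{1}{2} \ln\!\big( (2\pi e)^{d} \, |\Sigma| \big),

with d the dimension of the agent state. This is offered as context for the "entropy" score, not as the paper's exact derivation.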

A path space extension for robust light transport simulation

http://dl.acm.org/citation.cfm?id=2366210
Toshiya Hachisuka, Jacopo Pantaleoni, Henrik Wann Jensen

We present a new sampling space for light transport paths that makes it possible to describe Monte Carlo path integration and photon density estimation in the same framework. A key contribution of our paper is the introduction of vertex perturbations, which extends the space of paths with loosely coupled connections. The new framework enables the computation of path probabilities in the same space under the same measure, which allows us to use multiple importance sampling to combine Monte Carlo path integration and photon density estimation.
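
Once both estimators live in one space under one measure, they can be weighted per path with standard multiple importance sampling; for example, the balance heuristic assigns

w_s(\bar{x}) = \frac{p_s(\bar{x})}{\sum_{t} p_t(\bar{x})},

where p_s and p_t are the path densities of the individual sampling techniques (here, path integration and photon density estimation). This is the textbook form, not necessarily the paper's specific choice of heuristic.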

Light transport simulation with vertex connection and merging

http://dl.acm.org/citation.cfm?id=2366211
Iliyan Georgiev, Jaroslav Křivánek, Tomáš Davidovič, Philipp Slusallek

Developing robust light transport simulation algorithms that are capable of dealing with arbitrary input scenes remains an elusive challenge. Although efficient global illumination algorithms exist, an acceptable approximation error in a reasonable amount of time is usually only achieved for specific types of input scenes. To address this problem, we present a reformulation of photon mapping as a bidirectional path sampling technique for Monte Carlo light transport simulation. The benefit of our new formulation is twofold. First, it makes it possible, for the first time, to explain in a formal manner the relative efficiency of photon mapping and bidirectional path tracing, which have so far been considered conceptually incompatible solutions to the light transport problem.

Practical Hessian-based error control for irradiance caching

http://dl.acm.org/citation.cfm?id=2366212
Jorge Schwarzhaupt, Henrik Wann Jensen, Wojciech Jarosz

This paper introduces a new error metric for irradiance caching that significantly outperforms the classic Split-Sphere heuristic. Our new error metric builds on recent work using second order gradients (Hessians) as a principled error bound for the irradiance. We add occlusion information to the Hessian computation, which greatly improves the accuracy of the Hessian in complex scenes and makes it possible, for the first time, to use a radiometric error metric for irradiance caching. We enhance the metric so that it is based on the relative error in the irradiance and remains robust in the presence of black occluders. The resulting error metric is efficient to compute, numerically robust, and supports elliptical error bounds and arbitrary hemispherical sample distributions; unlike the Split-Sphere heuristic, it does not require arbitrary clamping of the computed error thresholds.
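
The underlying idea of a Hessian-based bound, written in generic notation rather than the paper's, is a second-order model of how the irradiance changes away from a cache record at x_0:

E(\mathbf{x}_0 + \mathbf{d}) \approx E(\mathbf{x}_0) + \nabla E^{\mathsf T}\mathbf{d} + \tfrac{1}{2}\,\mathbf{d}^{\mathsf T}\mathbf{H}\,\mathbf{d},

so requiring the relative second-order term \tfrac{1}{2}\,|\mathbf{d}^{\mathsf T}\mathbf{H}\,\mathbf{d}| / E(\mathbf{x}_0) to stay below a threshold yields an elliptical region of validity around the record.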

SURE-based optimization for adaptive sampling and reconstruction

http://dl.acm.org/citation.cfm?id=2366213
Tzu-Mao Li, Yu-Ting Wu, Yung-Yu Chuang

We apply Stein’s Unbiased Risk Estimator (SURE) to adaptive sampling and reconstruction to reduce noise in Monte Carlo rendering. SURE is a general unbiased estimator for mean squared error (MSE) in statistics. With SURE, we are able to estimate error for an arbitrary reconstruction kernel, enabling us to use more effective kernels rather than being restricted to the symmetric ones used in previous work. It also allows us to allocate more samples to areas with higher estimated MSE. Adaptive sampling and reconstruction can therefore be processed within an optimization framework. We also propose an efficient and memory-friendly approach to reduce the impact of noisy geometry features where there is depth of field or motion blur.
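
For reference, the standard form of SURE for an estimator F(y) of a signal observed with i.i.d. Gaussian noise of variance \sigma^2 is

\mathrm{SURE}(F) = \|F(y) - y\|^2 - n\sigma^2 + 2\sigma^2 \sum_{i=1}^{n} \frac{\partial F_i(y)}{\partial y_i},

which estimates the MSE without access to the ground truth; the divergence term is what makes arbitrary (including asymmetric) reconstruction kernels admissible.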

Adaptive rendering with non-local means filtering

http://dl.acm.org/citation.cfm?id=2366214
Fabrice Rousselle, Claude Knaus, Matthias Zwicker

We propose a novel approach for image space adaptive sampling and filtering in Monte Carlo rendering. We use an iterative scheme composed of three steps. First, we adaptively distribute samples in the image plane. Second, we denoise the image using a non-linear filter. Third, we estimate the residual per-pixel error of the filtered rendering, and the error estimate guides the sample distribution in the next iteration. The effectiveness of our approach hinges on the use of a state of the art image denoising technique, which we extend to an adaptive rendering framework. A key idea is to split the Monte Carlo samples into two buffers.
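
A schematic of the two-buffer idea (a stand-in Gaussian filter replaces the actual NL-means denoiser; this is an illustration, not the authors' implementation):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def filter_and_estimate_error(samples):
        # samples: (n, H, W) per-pixel Monte Carlo samples, split into two half-buffers
        buf_a = samples[0::2].mean(axis=0)
        buf_b = samples[1::2].mean(axis=0)
        flt_a = gaussian_filter(buf_a, sigma=1.5)   # stand-in for the NL-means filter
        flt_b = gaussian_filter(buf_b, sigma=1.5)
        # The two buffers carry independent noise, so the squared difference of the
        # filtered buffers gives a rough per-pixel estimate of the residual error,
        # which can drive the sample distribution in the next iteration.
        residual = 0.25 * (flt_a - flt_b) ** 2
        return 0.5 * (flt_a + flt_b), residual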

Elasticity-inspired deformers for character articulation

http://dl.acm.org/citation.cfm?id=2366215
Ladislav Kavan, Olga Sorkine

Current approaches to skeletally-controlled character articulation range from real-time, closed-form skinning methods to offline, physically-based simulation. In this paper, we seek a closed-form skinning method that approximates nonlinear elastic deformations well while remaining very fast. Our contribution is two-fold: (1) we optimize skinning weights for the standard linear and dual quaternion skinning techniques so that the resulting deformations minimize an elastic energy function. We observe that this is not sufficient to match the visual quality of the original elastic deformations and therefore, we develop (2) a new skinning method based on the concept of joint-based deformers. We propose a specific deformer which is visually similar to nonlinear variational deformation methods.
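
For context, linear blend skinning deforms each rest-pose vertex v_i as a convex combination of bone transforms,

\mathbf{v}_i' = \sum_{j} w_{ij}\, T_j\, \mathbf{v}_i, \qquad w_{ij} \ge 0,\; \sum_j w_{ij} = 1,

and the weight-optimization part of the paper can be read as choosing the w_{ij} so that these skinned positions come as close as possible, over a set of training poses, to the positions produced by an elastic simulation. The exact energy is the paper's; the above is only the general shape of the fit.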

Simulation of complex nonlinear elastic bodies using lattice deformers

http://dl.acm.org/citation.cfm?id=2366216
Taylor Patterson, Nathan Mitchell, Eftychios Sifakis

Lattice deformers are a popular option for modeling the behavior of elastic bodies as they avoid the need for conforming mesh generation, and their regular structure offers significant opportunities for performance optimizations. Our work expands the scope of current lattice-based elastic deformers, adding support for a number of important simulation features. We accommodate complex nonlinear, optionally anisotropic materials while using an economical one-point quadrature scheme. Our formulation fully accommodates near-incompressibility by enforcing accurate nonlinear constraints, supports implicit integration for large time steps, and is not susceptible to locking or poor conditioning of the discrete equations. Additionally, we increase the accuracy of our solver by employing a novel high-order quadrature scheme on lattice cells overlapping with the model boundary, which are treated at sub-cell precision.

RigMesh: automatic rigging for part-based shape modeling and deformation

http://dl.acm.org/citation.cfm?id=2366217
Péter Borosán, Ming Jin, Doug DeCarlo, Yotam Gingold, Andrew Nealen

The creation of a 3D model is only the first stage of the 3D character animation pipeline. Once a model has been created, and before it can be animated, it must be rigged. Manual rigging is laborious, and automatic rigging approaches are far from real-time and do not allow for incremental updates. This is a hindrance in the real world, where the shape of a model is often revised after rigging has been performed. In this paper, we introduce algorithms and a user-interface for sketch-based 3D modeling that unify the modeling and rigging stages of the 3D character animation pipeline. Our algorithms create a rig for each sketched part in real-time, and update the rig as parts are merged or cut.

Smooth skinning decomposition with rigid bones

http://dl.acm.org/citation.cfm?id=2366218
Binh Huy Le, Zhigang Deng

This paper introduces the Smooth Skinning Decomposition with Rigid Bones (SSDR), an automated algorithm to extract a linear blend skinning (LBS) model from a set of example poses. The SSDR model can effectively approximate the skin deformation of nearly articulated models as well as highly deformable models by a low number of rigid bones and a sparse, convex bone-vertex weight map. Formulated as a constrained optimization problem that minimizes the least-squares error of the vertices reconstructed by LBS, the SSDR model can be solved by a block coordinate descent-based algorithm that iteratively updates the weight map and the bone transformations.
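
In generic notation (not necessarily the paper's), the optimization over example poses p has the shape

\min_{w,\,R,\,\mathbf{t}} \; \sum_{p}\sum_{i} \Big\| \mathbf{v}_{p,i} - \sum_{j} w_{ij}\big( R_{p,j}\,\mathbf{v}_{0,i} + \mathbf{t}_{p,j} \big) \Big\|^2 \quad \text{s.t. } w_{ij} \ge 0,\;\; \sum_j w_{ij} = 1,\;\; R_{p,j}^{\mathsf T} R_{p,j} = I,

with an additional cap on the number of non-zero weights per vertex for sparseness. Block coordinate descent then alternates between solving for the weights (a small constrained least-squares problem per vertex) and for the rigid bone transforms (a Procrustes-style update per bone and pose).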

User-guided white balance for mixed lighting conditions

http://dl.acm.org/citation.cfm?id=2366219
Ivaylo Boyadzhiev, Kavita Bala, Sylvain Paris, Frédo Durand

Proper white balance is essential in photographs to eliminate color casts due to illumination. The single-light case is hard to solve automatically but relatively easy for humans. Unfortunately, many scenes contain multiple light sources such as an indoor scene with a window, or when a flash is used in a tungsten-lit room. The light color can then vary on a per-pixel basis and the problem becomes challenging at best, even with advanced image editing tools. We propose a solution to the ill-posed mixed light white balance problem, based on user guidance. Users scribble on a few regions that should have the same color, indicate one or more regions of neutral color, and select regions where the current color looks correct.
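
Only the final correction step is easy to write down; the hard part, which the user scribbles help resolve, is estimating the per-pixel illuminant in the first place. A minimal sketch of that last step, with hypothetical names:

    import numpy as np

    def apply_white_balance(image, illuminant):
        # image, illuminant: (H, W, 3) arrays; illuminant holds the estimated
        # per-pixel light color. Dividing it out maps neutral surfaces to gray.
        neutral = illuminant / illuminant.mean(axis=(0, 1), keepdims=True)
        return np.clip(image / np.maximum(neutral, 1e-6), 0.0, 1.0)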

Calibrated image appearance reproduction

http://dl.acm.org/citation.cfm?id=2366220
Erik Reinhard, Tania Pouli, Timo Kunkel, Ben Long, Anders Ballestad, Gerwin Damberg

Managing the appearance of images across different display environments is a difficult problem, exacerbated by the proliferation of high dynamic range imaging technologies. Tone reproduction is often limited to luminance adjustment and is rarely calibrated against psychophysical data, while color appearance modeling addresses color reproduction in a calibrated manner, albeit over a limited luminance range. Only a few image appearance models bridge the gap, borrowing ideas from both areas. Our take on scene reproduction reduces computational complexity with respect to the state-of-the-art, and adds a spatially varying model of lightness perception.

Coherent intrinsic images from photo collections

http://dl.acm.org/citation.cfm?id=2366221
Pierre-Yves Laffont, Adrien Bousseau, Sylvain Paris, Frédo Durand, George Drettakis

An intrinsic image is a decomposition of a photo into an illumination layer and a reflectance layer, which enables powerful editing such as the alteration of an object’s material independently of its illumination. However, decomposing a single photo is highly under-constrained and existing methods require user assistance or handle only simple scenes. In this paper, we compute intrinsic decompositions using several images of the same scene under different viewpoints and lighting conditions. We use multi-view stereo to automatically reconstruct 3D points and normals from which we derive relationships between reflectance values at different locations, across multiple views and consequently different lighting conditions.
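
The standard intrinsic-image model behind this, stated for several lighting conditions k, is

I_k(\mathbf{p}) = R(\mathbf{p})\, S_k(\mathbf{p}) \;\;\Longleftrightarrow\;\; \log I_k(\mathbf{p}) = \log R(\mathbf{p}) + \log S_k(\mathbf{p}),

so the reflectance R is shared across all photos of the scene while the shading S_k varies with lighting; the multi-view reflectance relationships the paper derives act as constraints that tie the per-image decompositions together.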

Robust patch-based HDR reconstruction of dynamic scenes

http://dl.acm.org/citation.cfm?id=2366222
Pradeep Sen, Nima Khademi Kalantari, Maziar Yaesoubi, Soheil Darabi, Dan B. Goldman, Eli Shechtman

High dynamic range (HDR) imaging from a set of sequential exposures is an easy way to capture high-quality images of static scenes, but suffers from artifacts for scenes with significant motion. In this paper, we propose a new approach to HDR reconstruction that draws information from all the exposures but is more robust to camera/scene motion than previous techniques. Our algorithm is based on a novel patch-based energy-minimization formulation that integrates alignment and reconstruction in a joint optimization through an equation we call the HDR image synthesis equation. This allows us to produce an HDR result that is aligned to one of the exposures yet contains information from all of them.
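
As background only (this is the classic static-scene merge, not the paper's patch-based synthesis equation), a set of aligned linear exposures can be combined as:

    import numpy as np

    def merge_exposures(images, times):
        # images: list of (H, W, 3) linear images in [0, 1]; times: exposure times.
        # Mid-tone pixels are trusted most; clipped or very dark pixels get low weight.
        weight = lambda z: 1.0 - np.abs(2.0 * z - 1.0)
        num = np.zeros_like(images[0])
        den = np.zeros_like(images[0])
        for img, t in zip(images, times):
            w = weight(img)
            num += w * img / t          # bring each exposure to a common radiance scale
            den += w
        return num / np.maximum(den, 1e-8)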
