[talks] Tianqiang Liu's preFPO Feb. 10 at 10am in CS rm 301.

Nicki Gotsis ngotsis at CS.Princeton.EDU
Fri Feb 6 14:01:56 EST 2015


Tianqiang Liu will present his preFPO on Feb. 10 at 10am in CS rm 301.

The members of his committee are below:
Advisor: Thomas Funkhouser
Readers: Szymon Rusinkiewicz, Wilmot Li (Adobe Systems)
Non-readers: Adam Finkelstein, Jianxiong Xiao


Everyone is invited to attend his talk. The talk title and abstract follow:


Title: Analyzing, Optimizing and Synthesizing Scenes by Reasoning About Relationships Between Objects

Abstract:

Virtual 3D scenes are widely used in applications such as video games, movies, and product images for furniture catalogs, which creates enormous demand for scene modeling. On the one hand, manually modeling 3D scenes is extremely tedious: even a single scene can require selecting and placing hundreds of objects. On the other hand, a data-driven approach to 3D scene modeling is also challenging, due to the large variability in 3D scenes (e.g., object classes and object placement) and the difficulty of maintaining the plausibility of 3D scenes (e.g., relative positions and style compatibility between objects). In this thesis, we use relationships between objects as cues to address three problems in scene understanding and scene synthesis.

First, we develop an algorithm to consistently segment and annotate 3D scenes in online repositories using a probabilistic grammar learned from examples. Our grammar encodes geometric priors for object classes and spatial relationships between objects within a semantic group. In experiments, our algorithm outperforms alternative approaches that consider only shape similarities and/or spatial relationships without considering semantic groups.
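To make the grammar idea concrete, here is a minimal sketch of scoring and choosing a semantic-group label for a set of objects with a probabilistic grammar. The grammar rules, class names, and probabilities below are invented toy values for illustration, not the grammar from the thesis, which is learned from examples and also incorporates geometric priors.

```python
import math

# Hypothetical toy grammar: each semantic group expands to a multiset of
# child object classes with a rule probability. All names and numbers here
# are illustrative stand-ins.
GRAMMAR = {
    "sleeping_area": ([("bed", 1), ("nightstand", 2)], 0.6),
    "work_area":     ([("desk", 1), ("chair", 1)], 0.4),
}

def group_log_prob(group_label, objects):
    """Log-probability of labeling `objects` as one semantic group.

    Scores a candidate parse by its rule probability; a full system would
    also multiply in geometry likelihoods for each object.
    """
    children, rule_prob = GRAMMAR[group_label]
    required = {cls: count for cls, count in children}
    observed = {}
    for cls in objects:
        observed[cls] = observed.get(cls, 0) + 1
    if observed != required:
        return float("-inf")  # objects inconsistent with this rule
    return math.log(rule_prob)

def best_group(objects):
    """Pick the semantic-group label with the highest score."""
    return max(GRAMMAR, key=lambda g: group_log_prob(g, objects))
```

For example, `best_group(["bed", "nightstand", "nightstand"])` returns `"sleeping_area"`, since only that rule's children match the observed object classes.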

Second, we develop a tool that optimizes a 3D scene to create an aesthetically appealing composition. Companies increasingly create product advertisements and catalog images from computer renderings of 3D scenes. Doing so is challenging, not only because of the need to balance trade-offs among aesthetic principles and design constraints, but also because of the huge search space induced by possible camera parameters, object placements, material choices, and so on. In this work, we identify a set of composition rules that constrain the relationships between objects in scene space as well as in image space, and we develop algorithms to optimize the 2D composition by changing the camera view, object placement, and materials. We demonstrate the value of this tool in a variety of applications motivated by product catalogs.
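The flavor of this optimization can be sketched as minimizing a composition penalty over a camera parameter. The projection model and the rule-of-thirds cost below are toy stand-ins I invented for illustration; the actual tool uses a richer set of composition rules and search variables (camera view, object placement, materials).

```python
import random

def project_x(camera_pan, subject_x):
    """Toy projection: image x-coordinate in [0, 1] of the subject
    given a single pan parameter (illustrative, not a real camera model)."""
    return min(1.0, max(0.0, subject_x - camera_pan))

def composition_cost(camera_pan, subject_x):
    """Penalty: squared distance of the subject's image position from
    the nearest one-third line (a simple rule-of-thirds term)."""
    x = project_x(camera_pan, subject_x)
    return min((x - 1/3) ** 2, (x - 2/3) ** 2)

def optimize_pan(subject_x, iters=2000, seed=0):
    """Random search over the pan parameter, standing in for the
    optimization over the full space of camera/placement/materials."""
    rng = random.Random(seed)
    return min((rng.uniform(-1.0, 1.0) for _ in range(iters)),
               key=lambda p: composition_cost(p, subject_x))
```

Running `optimize_pan(0.9)` returns a pan that places the subject near a one-third line of the image; in a real system this scalar search would be replaced by optimization over many coupled scene and camera variables.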

Third, we learn a style-compatibility metric for furniture models and develop style-aware suggestion systems. Existing tools for selecting models when synthesizing 3D scenes generally ignore style compatibility between objects, and the presence of incompatible styles diminishes the plausibility of the resulting 3D scenes. In this work, we develop a mathematical representation of style compatibility between objects that can guide 3D scene modeling tools. In experiments, our method effectively predicts style compatibility judgments agreed upon by people, and user studies show that the learned compatibility metric is useful for novel interactive tools.
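One common way to represent such a compatibility metric is as a distance in a feature space, which a suggestion tool can then rank against. The embedding below is a hand-made 2-D toy with invented model names and values, purely to illustrate the interface; the thesis learns its representation from human compatibility judgments.

```python
# Hypothetical "style features" per furniture model (illustrative values,
# not learned from data).
STYLE_EMBEDDING = {
    "modern_chair":  (0.90, 0.10),
    "modern_table":  (0.85, 0.15),
    "antique_chair": (0.10, 0.90),
}

def compatibility(a, b):
    """Style-compatibility score as Euclidean distance in the feature
    space: lower distance means more compatible styles."""
    (xa, ya), (xb, yb) = STYLE_EMBEDDING[a], STYLE_EMBEDDING[b]
    return ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5

def suggest(query, candidates):
    """Style-aware suggestion: rank candidate models by compatibility
    with the query model."""
    return sorted(candidates, key=lambda c: compatibility(query, c))
```

With these toy features, `suggest("modern_chair", ["antique_chair", "modern_table"])` ranks `"modern_table"` first, the behavior a style-aware suggestion system aims for.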


