•   When: Thursday, July 27, 2017 from 02:00 PM to 04:00 PM
  •   Speaker: Guilin Liu
  •   Location: ENGR 3802

Vast amounts of 2D images, 3D meshes, and point sets are created every day. Many applications require extracting semantic information that goes beyond the raw discrete representations of pixels, facets, and points in these data. In this dissertation, I will focus on extracting several types of shape and physical properties from such data. Estimating these properties, however, faces several difficulties, including under-constrained problem settings, a lack of accurate ground-truth data, and high computational cost. I will develop methods for three estimation tasks, each representing one of these difficulties.

 

These tasks are appearance synthesis, shape synthesis, and motion synthesis. For appearance synthesis, I will develop an end-to-end deep learning framework that estimates material (reflectance) properties from 2D images, an inherently under-constrained problem. The main ingredient of the framework is a rendering layer, and I will show the framework's effectiveness for editing materials in 2D images.

For shape synthesis, I will discuss how to combine inaccurate, noisy ground-truth normal data with the image itself to predict fine-scale normals from 2D images within a deep learning framework. Results will show that even though the ground-truth normals are inaccurate and lack detail, the trained model can still produce detailed normal predictions.

The motion synthesis part concerns approximating the medial axis of a robot's configuration space, which is ordinarily very expensive to compute. I will show how the support vector machine formulation can be adapted to solve this task efficiently. Detailed explanations of how each difficulty is resolved, along with extensive experimental results, will be provided.

These methods also require sufficient, valid training data. I use synthetic datasets to compensate for the lack of corresponding real-image datasets for appearance synthesis and shape synthesis; semantically meaningful segmentations of 3D shapes are used to generate plausible synthetic data for these two tasks. To train a model that approximates the medial axis of the robot's configuration space well, we need segmentations of the 3D models in the environment that satisfy bounded geometric constraints. To generate segmentations that are both semantically meaningful and geometrically bounded, I will propose a new part-aware shape feature and two nearly convex decomposition methods. Comparisons with human segmentations and other alternatives will validate the effectiveness of the proposed feature and methods. I hope this dissertation will stimulate future research on resolving other difficulties in property estimation and on estimating other types of properties from 2D and 3D data.
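The abstract describes the rendering layer only at a high level. As a rough illustration of the idea, the sketch below shows a minimal differentiable Lambertian shading layer in PyTorch: upstream network layers predict per-pixel albedo and normals, the layer re-renders the image under a known light, and an image-space loss then backpropagates into the material predictions. The class name, the Lambertian model, and the tensor shapes are assumptions for illustration, not the dissertation's actual layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RenderingLayer(nn.Module):
    """Minimal differentiable Lambertian shading layer (illustrative sketch,
    not the dissertation's rendering layer)."""

    def forward(self, albedo, normals, light_dir, light_color):
        # albedo, normals: (B, 3, H, W); light_dir, light_color: (3,)
        n = F.normalize(normals, dim=1)                   # unit surface normals
        l = F.normalize(light_dir, dim=0).view(1, 3, 1, 1)
        # Lambertian shading: albedo * max(n . l, 0) * light color
        n_dot_l = (n * l).sum(dim=1, keepdim=True).clamp(min=0.0)
        return albedo * n_dot_l * light_color.view(1, 3, 1, 1)

# Stand-ins for network outputs; an image-space loss reaches them
# through the renderer, which is what makes end-to-end training possible.
albedo = torch.rand(1, 3, 64, 64, requires_grad=True)
normals = torch.randn(1, 3, 64, 64, requires_grad=True)
layer = RenderingLayer()
rendered = layer(albedo, normals, torch.tensor([0.0, 0.0, 1.0]), torch.ones(3))
loss = (rendered - torch.rand(1, 3, 64, 64)).pow(2).mean()
loss.backward()  # gradients flow into albedo and normals
```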
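Similarly, the SVM adaptation for medial-axis approximation is only named in the abstract. One common intuition is that a maximum-margin separator between samples of two obstacles stays roughly equidistant from both, so its zero level set traces the medial-axis sheet between them. The 2D scikit-learn sketch below illustrates that intuition on hypothetical data; it is not the dissertation's formulation.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical 2D workspace: sample points on two circular obstacle boundaries.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
obstacle_a = np.c_[np.cos(theta), np.sin(theta)] * 1.0 + [-3.0, 0.0]
obstacle_b = np.c_[np.cos(theta), np.sin(theta)] * 1.5 + [3.0, 1.0]

X = np.vstack([obstacle_a, obstacle_b])
y = np.hstack([np.zeros(len(obstacle_a)), np.ones(len(obstacle_b))])

# A max-margin separator stays as far as possible from both obstacles,
# so its decision boundary approximates the medial-axis sheet between them.
svm = SVC(kernel="rbf", gamma=0.5, C=10.0).fit(X, y)

# Evaluate the decision function on a grid; cells where it is near zero
# lie close to the approximate medial axis.
xs, ys = np.meshgrid(np.linspace(-6, 6, 300), np.linspace(-4, 5, 300))
grid = np.c_[xs.ravel(), ys.ravel()]
scores = svm.decision_function(grid).reshape(xs.shape)
medial_mask = np.abs(scores) < 0.05
```

The appeal of this framing is efficiency: fitting a separator to boundary samples avoids the exhaustive distance computations that make exact medial-axis extraction in configuration space so expensive.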
