- When: Friday, April 07, 2017 11:00 AM
- Speakers: Jia-Bin Huang, Bradley Department of Electrical and Computer Engineering, Virginia Tech
- Location: ENGR 2901
Abstract:
The recent success of deep learning in computer vision relies on massive amounts of manually labeled images. Natural videos contain significantly more information than images and are readily available. The additional time dimension, however, introduces new challenges such as increased space complexity and spatiotemporal appearance variations. In this talk, I will present how we can extract supervisory signals from unlabeled videos and show examples that leverage scene dynamics for temporally coherent video synthesis and unsupervised representation learning. I will end the talk by highlighting challenges and recent trends.
Bio:
Jia-Bin Huang is an Assistant Professor in the Bradley Department of Electrical and Computer Engineering at Virginia Tech. He received his Ph.D. degree from the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign in 2016. His research interests span computer vision, computer graphics, and machine learning. Huang's research received the best student paper award at the IAPR International Conference on Pattern Recognition in 2012 and the best paper award at the ACM Symposium on Eye Tracking Research & Applications in 2014. Personal website: https://filebox.ece.vt.edu/~jbhuang