Deep Retrieval: Learning Feature Representations for Image Search

GRAND Seminar Friday, Nov 17, 11 am, Room: ENGR 2901

Huei-Fang Yang
Researcher
Janelia Research Campus, Howard Hughes Medical Institute

Abstract:

Learning effective feature representations plays a key role in content-based image retrieval (CBIR). Recently, deep convolutional neural networks (CNNs) have gained much attention due to their impressive performance on image classification. Deep features learned from a large-scale dataset (e.g., ImageNet) can be transferred to other tasks, including image retrieval. However, when applied to retrieval, such floating-valued features are inefficient to store and match, and they do not perform as well as they do for classification, which restricts their applicability to image search. In this talk, I will show how we overcome these two limitations. Specifically, I will present (1) a simple yet effective supervised deep hashing approach that constructs binary hash codes from labeled data for fast image search, and (2) cross-batch reference (CBR) learning, a new idea for learning a feature space that improves retrieval performance. Our approaches achieve superior performance on several benchmark and large-scale datasets.
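
For context on the efficiency point, binary hash codes allow retrieval by Hamming distance, which is far cheaper than comparing floating-valued CNN features. The following is a minimal sketch (not the speaker's implementation) of Hamming-distance search, assuming each image's hash code is packed into a uint8 array; all names and sizes are illustrative:

    import numpy as np

    def hamming_distances(query_code: np.ndarray, db_codes: np.ndarray) -> np.ndarray:
        """Hamming distance between one packed binary code and a database of codes.

        query_code: uint8 array of shape (n_bytes,)   -- one packed hash code
        db_codes:   uint8 array of shape (N, n_bytes) -- N packed database codes
        """
        # XOR marks the differing bits; unpacking to bits and summing counts them.
        xor = np.bitwise_xor(db_codes, query_code)
        return np.unpackbits(xor, axis=1).sum(axis=1)

    # Toy usage: 48-bit codes (6 bytes) for a database of one million images.
    rng = np.random.default_rng(0)
    db = rng.integers(0, 256, size=(1_000_000, 6), dtype=np.uint8)
    q = rng.integers(0, 256, size=6, dtype=np.uint8)
    top10 = np.argsort(hamming_distances(q, db))[:10]  # indices of the 10 closest codes

Because the comparison reduces to XOR and bit counting over a few bytes per image, such codes can be scanned or indexed at scales where exhaustive floating-point comparison would be impractical.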

Short bio:

Dr. Huei-Fang Yang is currently a bioinformatics specialist at Janelia Research Campus, Howard Hughes Medical Institute. Her research interests are in computer vision and deep learning and their applications to large-scale visual search, face image understanding, and biomedical image analysis. Previously, she was a postdoctoral fellow at Academia Sinica, Taiwan, working with Chu-Song Chen (2014-2017), and a postdoctoral fellow in the MORPHEME group of INRIA in Sophia Antipolis, France, led by Xavier Descombes (2011-2013). She received her Ph.D. in computer science from Texas A&M University in 2011, advised by Yoonsuck Choe.