Title: Large-scale Visual Search
Time: Wednesday, May 13, 2015, 2:30–3:30 PM
Venue: Conference Room, 3rd Floor, School of Computer Science
Qi Tian is currently a Full Professor in the Department of Computer Science at the University of Texas at San Antonio (UTSA). During 2008-2009, he took a one-year faculty leave at Microsoft Research Asia (MSRA) in the Media Computing Group. He received his Ph.D. in ECE from the University of Illinois at Urbana-Champaign (UIUC) in 2002, and his B.E. and M.S. degrees from Tsinghua University and Drexel University in 1992 and 1996, respectively, all in electronic engineering. Dr. Tian's research interests focus on multimedia information retrieval and computer vision, and he has published over 290 refereed journal and conference papers. He received Best Paper Awards at PCM 2013, ACM ICIMCS 2012, and MMM 2013, a Top 10% Paper Award at MMSP 2011, and the Best Student Paper Award at ICASSP 2006, and was a co-author of a Best Paper Candidate at PCM 2007. His research projects are funded by NSF, ARO, DHS, Google, FXPAL, NEC, SALSI, CIAS, Akiira Media Systems, HP, and UTSA. He received the 2010 ACM Service Award. He has served as a Guest Editor of IEEE Transactions on Multimedia and the Journal of Computer Vision and Image Understanding, among others, is an Associate Editor of IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) and IEEE Transactions on Multimedia, and serves on the Editorial Boards of the Journal of Multimedia (JMM) and the Journal of Machine Vision and Applications (MVA).
Driven by massive social multimedia data and mobile visual search applications, techniques for large-scale visual search and recognition are emerging. With the introduction of local invariant visual features, the past decade has witnessed rapid advances in large-scale image search. Current state-of-the-art image search algorithms and systems are motivated by the classic bag-of-visual-words model and scalable index structures. Generally, an image search system involves several key modules, including feature representation, visual codebook construction, feature quantization, indexing strategy, and scoring. In addition, post-processing techniques such as geometric verification, query expansion, and multi-modal fusion can be plugged in to boost retrieval performance.
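To make the pipeline above concrete, here is a minimal, hedged sketch of bag-of-visual-words retrieval: local features are hard-quantized to the nearest visual word, an inverted index maps each word to the images containing it, and candidates are scored with TF-IDF. All names and the toy 2-D "features" are illustrative; real systems use SIFT-like descriptors and codebooks with up to millions of visual words.

```python
import math
from collections import defaultdict

def quantize(feature, codebook):
    """Assign a local feature to its nearest visual word (hard quantization)."""
    return min(range(len(codebook)),
               key=lambda w: sum((f - c) ** 2 for f, c in zip(feature, codebook[w])))

def build_index(database, codebook):
    """Build an inverted index: visual word -> {image_id: term frequency}."""
    index = defaultdict(lambda: defaultdict(int))
    for img_id, features in database.items():
        for feat in features:
            index[quantize(feat, codebook)][img_id] += 1
    return index

def search(query_features, index, codebook, n_images):
    """Score database images by accumulating TF-IDF over the query's words."""
    scores = defaultdict(float)
    for feat in query_features:
        postings = index.get(quantize(feat, codebook), {})
        if not postings:
            continue
        idf = math.log(n_images / len(postings))  # rarer words weigh more
        for img_id, tf in postings.items():
            scores[img_id] += tf * idf
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy usage: two database images, a 2-word codebook, one query feature.
codebook = [[0.0, 0.0], [10.0, 10.0]]
database = {"a": [[0.0, 1.0], [1.0, 0.0]], "b": [[9.0, 9.0]]}
index = build_index(database, codebook)
results = search([[0.0, 0.0]], index, codebook, len(database))
```

The inverted index is what makes this scalable: a query only touches the posting lists of the visual words it contains, rather than comparing against every database image.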
In the first part of the talk, I will introduce related work in each of the modules mentioned above and discuss the key research problems. In the second part, I will introduce our research on large-scale image search. We have done comprehensive work on feature representation, feature quantization, scalable indexing, and spatial verification, among others. Several representative works will be discussed and the related demos will be shown. On feature representation, I will introduce two of our recent works, Binary SIFT and Edge-SIFT, which are both binary local features derived from or inspired by SIFT. On feature quantization, I will introduce our recent research based on a codebook-training-free strategy for large-scale image search. On image indexing, a novel co-indexing scheme will be discussed, which is designed to couple the distance metrics of multiple visual features. Finally, I will introduce an efficient geometric verification scheme that has proven effective in boosting the performance of large-scale partial-duplicate image search. In the third part, I will discuss potential research directions and promising applications of large-scale image search.
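As background for the binary local features mentioned above: one common way to derive a binary code from a SIFT-like real-valued descriptor is to threshold each dimension (e.g., against the descriptor's median) and then compare codes by Hamming distance, which is far cheaper than Euclidean comparison. The sketch below illustrates only this general idea; it is not the exact Binary SIFT or Edge-SIFT construction from the talk.

```python
def binarize(descriptor):
    """Map a real-valued descriptor to a binary code by median thresholding.

    Hypothetical scheme for illustration: each dimension becomes 1 if it
    exceeds the descriptor's median value, else 0.
    """
    median = sorted(descriptor)[len(descriptor) // 2]
    return tuple(1 if v > median else 0 for v in descriptor)

def hamming(code_a, code_b):
    """Hamming distance between two equal-length binary codes."""
    return sum(a != b for a, b in zip(code_a, code_b))

# Toy 4-D descriptors (real SIFT descriptors are 128-D).
code_q = binarize([0.1, 0.9, 0.2, 0.8])
code_d = binarize([0.2, 0.8, 0.1, 0.9])
```

Binary codes of this kind can be matched with fast bitwise operations and stored compactly, which is what makes them attractive for large-scale search.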