Transfer learning aims to mimic the human cognitive process of adapting previously well-learned knowledge to facilitate new, challenging learning tasks. In this presentation, I will briefly summarize transfer learning in visual recognition tasks and focus on two specific face recognition topics, i.e., missing modality and one-shot learning. First, we often confront the situation where no face samples of the target modality are available at the training stage, which arises when the face data are multi-modal. To overcome this, we borrow an auxiliary database with complete modalities and propose a two-directional knowledge transfer scheme to solve the missing-modality issue. Second, a common challenge is that only one sample per person is accessible during training for some subjects. Existing learning approaches find this very difficult to handle, since such limited data cannot well represent the intra-class variance. Thus, we develop a novel generative model to synthesize meaningful data for one-shot persons by adapting the data variance from other, data-rich persons.
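To give a rough feel for the variance-transfer idea behind the one-shot topic, the sketch below synthesizes extra samples for a one-shot class by borrowing the pooled intra-class covariance of data-rich classes. The function name, the feature-space setup, and the simple Gaussian sampling are illustrative assumptions, not the speaker's actual generative model.

```python
import numpy as np

def synthesize_one_shot_samples(one_shot_feat, rich_class_feats, n_new=10, seed=None):
    """Hypothetical sketch: augment a one-shot class by transferring the
    pooled intra-class variance of data-rich ("normal") classes.

    one_shot_feat    : (d,) feature vector of the single available sample
    rich_class_feats : list of (n_i, d) arrays, one per data-rich class
    n_new            : number of synthetic samples to generate
    """
    rng = np.random.default_rng(seed)
    # Center each data-rich class on its own mean, then pool the residuals
    # to estimate a shared intra-class covariance.
    centered = [f - f.mean(axis=0) for f in rich_class_feats]
    pooled = np.vstack(centered)
    shared_cov = pooled.T @ pooled / max(len(pooled) - 1, 1)
    # Sample new points around the one-shot example under that covariance
    # (a simple Gaussian stand-in for the generative model in the talk).
    return rng.multivariate_normal(one_shot_feat, shared_cov, size=n_new)

# Toy usage with random 16-dimensional features.
rng = np.random.default_rng(0)
rich = [rng.normal(size=(50, 16)) for _ in range(5)]  # 5 data-rich classes
one_shot = rng.normal(size=16)                        # single available sample
fake = synthesize_one_shot_samples(one_shot, rich, n_new=8, seed=1)
print(fake.shape)  # (8, 16)
```

The synthetic samples can then supplement the one-shot class when training a standard classifier, which is the general spirit of the approach the abstract describes.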
Zhengming Ding received the B.Eng. degree in information security and the M.Eng. degree in computer software and theory from the University of Electronic Science and Technology of China (UESTC), China, in 2010 and 2013, respectively. He received the Ph.D. degree from the Department of Electrical and Computer Engineering, Northeastern University, USA, in 2018. He has been a faculty member in the Department of Computer, Information and Technology at Indiana University-Purdue University Indianapolis since 2018. His research interests include transfer learning, multi-view learning, and deep learning. He received the National Institute of Justice Fellowship during 2016-2018. He was the recipient of the Best Paper Award (SPIE 2016) and a Best Paper Candidate (ACM MM 2017). He is currently an Associate Editor of the Journal of Electronic Imaging (JEI).