Bertram E. Shi

Biography

Bert received the B.S. and M.S. degrees in Electrical Engineering from Stanford University, and the Ph.D. degree in Electrical Engineering from the University of California at Berkeley in 1994. Currently, he is Professor and Head of the Department of Electronic and Computer Engineering at the Hong Kong University of Science and Technology. In 2009, he was a Visiting Professor at the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology. In 2002, he was a Visiting Associate Professor in the Department of Bioengineering at the University of Pennsylvania.

His research interests lie in the areas of neuromorphic engineering, robotics, human-machine interfaces, and computational neuroscience, with a particular focus on the use of machine learning in visual information processing and visually guided control. He was named an IEEE Fellow in 2001 “for contributions to the analysis, implementation and application of cellular neural networks.” He was a Distinguished Lecturer for the IEEE Circuits and Systems Society for 2001-2002 and 2007-2008. His research group won the 2017 Facial Expression Recognition and Analysis (FERA) AU Intensity Estimation Challenge and has received top paper prizes at international conferences. He is or has been an Associate Editor for the IEEE Transactions on Circuits and Systems I and II, the IEEE Transactions on Biomedical Circuits and Systems, and Frontiers in Neuromorphic Engineering. He was Chair of the IEEE Circuits and Systems Society Technical Committee on Cellular Neural Networks and Array Computing from 2003 to 2005, Technical Program Chair of the 2004 IEEE International Workshop on Cellular Neural Networks and their Applications, and General Chair of the 2005 edition of the same workshop.

Abstract

Robot Self-Calibration via Active Efficient Coding

Sensorimotor contingencies can be used to establish mappings between actions and their perceptual consequences. We describe how robots can exploit these sensorimotor contingencies to self-calibrate via the Active Efficient Coding (AEC) framework. AEC is an extension of Barlow’s Efficient Coding (EC) hypothesis to include action. EC supposes that sensory neurons develop so that their responses efficiently encode the sensory input. One consequence of EC is that the processing performed by neurons depends upon the statistics of their input. AEC extends this by additionally hypothesizing that organisms learn to behave so as to structure their sensory input so that it can be efficiently encoded. This is a circularly coupled process, since behavior determines the statistics of the sensory input, which determine the properties of sensory neurons, which in turn drive behavior. Robots incorporating AEC actively structure their behavior as they seek to build models of their sensory environment. Because coding efficiency is a generic property, AEC can enable the spontaneous emergence and self-calibration of a wide range of sensorimotor behaviors. We describe the use of AEC in a number of visually guided behaviors, including binocular vergence control, optokinetic nystagmus, the vestibulo-ocular reflex, and saccadic eye movements. One of the most attractive properties of AEC is its ability to seamlessly integrate information from multiple sources: across space, across time, across scale, and across sensory modalities.
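To make the circular coupling concrete, the following is a minimal toy sketch of an AEC-style loop in Python. It is an illustration only, not the models used in the work described above: the patch dimensions, the greedy sparse coder, the softmax policy, and the toy environment (in which only one action yields structured, easily encoded input) are assumptions chosen for brevity. The sensory coder adapts to the statistics of its input, and the negative reconstruction error serves as an intrinsic reward that pushes behavior toward sensory input that can be encoded efficiently.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (hypothetical, not from the talk).
N_INPUT = 16      # dimensionality of a sensory patch
N_BASIS = 32      # number of learned dictionary atoms
N_ACTIONS = 3     # e.g., decrease / hold / increase a motor command
ETA_CODER = 0.01  # learning rate for the sensory coder
ETA_POLICY = 0.1  # learning rate for the behavior policy


def encode(D, x, k=4):
    """Greedy sparse code: keep the k atoms most correlated with x."""
    scores = D.T @ x
    idx = np.argsort(-np.abs(scores))[:k]
    a = np.zeros(N_BASIS)
    a[idx] = scores[idx]
    return a


def sense(action, t):
    """Toy environment: the chosen action shapes the input statistics.
    Only action 2 yields structured (well-encodable) input; the others
    return unstructured noise."""
    if action == 2:
        phase = 0.1 * t
        return np.sin(phase + np.arange(N_INPUT)) + 0.05 * rng.standard_normal(N_INPUT)
    return rng.standard_normal(N_INPUT)


D = rng.standard_normal((N_INPUT, N_BASIS))
D /= np.linalg.norm(D, axis=0)      # unit-norm dictionary atoms
prefs = np.zeros(N_ACTIONS)         # action preferences (softmax policy)
baseline = 0.0                      # running average of reward

for t in range(2000):
    # Sample an action from the current policy.
    p = np.exp(prefs - prefs.max())
    p /= p.sum()
    action = rng.choice(N_ACTIONS, p=p)

    x = sense(action, t)            # behavior determines the sensory input
    a = encode(D, x)                # coder adapted to the input statistics
    x_hat = D @ a
    error = np.sum((x - x_hat) ** 2)
    reward = -error                 # coding efficiency as intrinsic reward

    # Update the coder: gradient step on the reconstruction error.
    D += ETA_CODER * np.outer(x - x_hat, a)
    D /= np.linalg.norm(D, axis=0)

    # Update the policy: REINFORCE-style step with a running baseline.
    baseline += 0.05 * (reward - baseline)
    grad = -p
    grad[action] += 1.0
    prefs += ETA_POLICY * (reward - baseline) * grad

print("action preferences:", np.round(prefs, 2))

In this toy setting the policy should come to prefer the action whose sensory consequences the coder can reconstruct well, while the coder simultaneously adapts to the input that the policy produces: a small-scale analogue of the circular coupling between coding and behavior that AEC exploits for self-calibration.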
