Speaker: Professor Song-Chun Zhu
Affiliation: UCLA, Departments of Statistics and Computer Science
ABSTRACT: In this talk, I will present an ongoing effort to develop autonomous robots that can collaborate with humans in real-world scenes and tasks. One objective of our project is to explore a cognitive architecture that can embrace modern progress in vision, cognition, learning, NLP, and cognitive robotics using a unified knowledge representation — the spatial, temporal, and causal and-or graph (STC-AOG). The STC-AOG is a probabilistic, graphical, and compositional model that represents stochastic context-sensitive grammars for the hierarchical structures in scenes and objects (spatial), in events and actions (temporal), and in the effects of actions on the scenes (causal). The representation supports physical and causal reasoning. In addition, for collaboration, the representation must also account for theory of mind, i.e., the beliefs and intentions of others. This model must be learned in a natural way, such as through situated dialogue between the robot and humans. I will show a few examples of human-robot collaboration.

BIO: Song-Chun Zhu received a Ph.D. from Harvard University in 1996. He is currently a professor of Statistics and Computer Science, and director of the Center for Vision, Cognition, Learning, and Autonomy at UCLA. His work in computer vision has received a number of honors, including the Marr Prize in 2003 for image parsing with Z. Tu et al., and Marr Prize honorary nominations in 1999 for texture modeling and in 2007 for object modeling with Y. Wu et al. In 2008 he received the Aggarwal Prize from the International Association for Pattern Recognition for "contributions to a unified foundation for visual pattern conceptualization, modeling, learning, and inference". He received the Helmholtz Test-of-Time Prize in 2013. As a junior faculty member, he received a Sloan Fellowship in Computer Science, an NSF CAREER Award, and an ONR Young Investigator Award in 2001. He has been a fellow of the IEEE Computer Society since 2011.
He is PI of two consecutive ONR MURI projects, on Scene/Event Understanding and on Commonsense Reasoning, respectively. In recent years, he has also been interested in situated dialogue and cognitive robots, with the support of DARPA projects.
REFRESHMENTS at 3:45 pm, SPEAKER at 4:15 pm
Date(s) - Jan 12, 2016
4:15 pm - 5:45 pm