[Colloq] Talk - Matthew Walter - Learning Cognitive Models from Machine Vision and Natural Language - April 10, 11:00am, 166 WVH

Jessica Biron bironje at ccs.neu.edu
Wed Apr 9 08:28:55 EDT 2014



Thursday, April 10th, 2014, 11:00am - 12:00pm 



166 WVH 



Matthew Walter 

Research Scientist 

Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology 



Title: Learning Cognitive Models from Machine Vision and Natural Language 



Abstract: 



Whether they are exploring the deepest depths of the ocean or the surface of Mars, or responding to a disaster, robots have proven tremendously effective as our surrogates, performing tasks that are too difficult, dangerous, or dull for humans. The next generation of intelligent systems will cooperate with people in our homes and workplaces, providing personalized care, assisting the disabled, and carrying out advanced manufacturing. To be effective partners, robots must reason about their environment and their actions in the same way that humans do. However, robots currently use representations that are either hard-coded or require significant supervision by a domain expert. I seek to enable robots to efficiently learn shared cognitive models of their surroundings and available actions through their interaction with humans. 



This talk highlights my recent advances in semantic perception that enable robots to acquire shared cognitive models of objects and of their environment from limited supervision provided by human partners. First, I will describe a visual appearance-based algorithm that efficiently learns a robust representation of objects from a single, user-provided segmentation cue. Second, I will present a probabilistic framework that allows robots to formulate human-centric models of their environment from natural language descriptions. I will then demonstrate how these learned representations allow people to command and interact with robots using free-form speech. Finally, I will close with my vision for how robots will formulate hierarchical cognitive models of their environments, the objects they contain, and the rich space of actions available to the robot. 

