The University of Arizona

Events & News

Cog Sci Brown Bag Seminar

Date: Friday, March 5, 2010
Time: 12:00 pm
Location: GS 906
Speaker: Ian Fasel
Title: Assistant Research Professor
Affiliation: Department of Computer Science, University of Arizona

Perceptive, Information Gathering Autonomous Agents

ABSTRACT: How do we go from raw sensors to higher-level concepts? In this talk, I will describe several research projects in the Arizona Robotics Research Group (ARRG) in which we propose and test possible answers to this question by building autonomous agents that actively explore and learn about the physical world. I will organize the talk around the topic of active information gathering. Robots, like organisms, are equipped with limited-range sensors that return different kinds of information (such as color, contrast, or edge features; distance estimates from sonar or stereo vision; or properties requiring direct physical contact, such as texture or weight). Choosing which sensing actions to take is a critical question, akin to the problem of choosing scientific experiments or appropriate medical tests, and is a key element of making inferences about possible unseen "causes" of the observable world.
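To make the analogy with experimental design concrete, here is a minimal sketch (not from the talk; the sensor models are invented for illustration) of how an agent can score candidate sensing actions by their expected information gain, i.e., the expected reduction in entropy about a hidden state, and pick the most informative one:

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def expected_info_gain(prior, likelihoods):
    """Expected entropy reduction about a hidden state from one sensing
    action.  likelihoods[s][o] = P(observation o | hidden state s)."""
    h_prior = entropy(prior)
    n_states, n_obs = len(prior), len(likelihoods[0])
    eig = 0.0
    for o in range(n_obs):
        # Marginal probability of this observation: P(o) = sum_s P(o|s) P(s)
        p_o = sum(prior[s] * likelihoods[s][o] for s in range(n_states))
        if p_o == 0:
            continue
        # Posterior over states given o (Bayes' rule)
        post = [prior[s] * likelihoods[s][o] / p_o for s in range(n_states)]
        eig += p_o * (h_prior - entropy(post))
    return eig

# Two hypothetical sensors over a binary hidden state, uniform prior:
prior = [0.5, 0.5]
informative = [[0.9, 0.1], [0.1, 0.9]]    # readings strongly depend on the state
uninformative = [[0.5, 0.5], [0.5, 0.5]]  # readings independent of the state

best = max([informative, uninformative],
           key=lambda lk: expected_info_gain(prior, lk))
```

Here the uninformative sensor has zero expected information gain (its posterior equals the prior for every observation), so the agent selects the informative one, just as an experimenter would choose the test that best discriminates between hypotheses.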

Inspired by Bayesian optimal experimental design (OED), we treat information gathering behavior as a stochastic optimal control problem, in which the aim is to minimize total uncertainty over an *extended sequence* of sensing actions in a partially observable Markov decision process (POMDP).
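The sequential aspect can be illustrated with a minimal belief-update loop (a sketch with an invented two-state sensor model, not code from the talk): the agent maintains a belief over hidden states and updates it by Bayes' rule after each sensing action, so that uncertainty shrinks over the sequence of observations.

```python
import math
import random

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def bayes_update(belief, likelihood):
    """Posterior over hidden states after one observation;
    likelihood[s] = P(that observation | state s)."""
    post = [b * l for b, l in zip(belief, likelihood)]
    z = sum(post)
    return [p / z for p in post]

random.seed(0)
true_state = 1
# Hypothetical noisy sensor: P(obs = 1 | state s) for two hidden states.
p_obs1 = [0.2, 0.8]

belief = [0.5, 0.5]  # uniform prior: maximal uncertainty (1 bit)
for step in range(10):
    # Simulate one sensing action in the true (hidden) state
    obs = 1 if random.random() < p_obs1[true_state] else 0
    likelihood = [p if obs == 1 else 1 - p for p in p_obs1]
    belief = bayes_update(belief, likelihood)
```

After ten observations the belief concentrates on the true state and its entropy falls well below the 1 bit of the initial uniform prior; a full POMDP controller additionally plans *which* action to take at each step so that this decrease is as fast as possible.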

We refer to this as information-maximizing (InfoMax) control. We then test this approach in a number of real-world agents, including a robot baby that uses vocalizations to discover whether people are present and then learns to detect their faces, a robot head that learns how to optimally look around the room to find friends or foes, and a robot explorer that learns to traverse large physical spaces to determine as quickly as possible what the objects in the room are. At the end of the talk, I will describe some projects currently underway in which our goal is to use InfoMax control as the driver for agents that, after exploring objects in multiple environments, will autonomously form, or "baptize", new hidden concepts that allow them to better predict what their future sensory actions will reveal.