

Ph.D. Theses

Learning Controllers for Human-Robot Interaction

By Eric Meisner
Advisors: Volkan Isler and Jeffrey Trinkle
February 6, 2009

In order for robots to assist and interact with humans, they must be socially intelligent. Social intelligence is the ability to communicate and understand meaning through social interaction. Artificial intelligence can be broadly described as an effort to model and simulate the properties of human intelligence within a computational framework. In most cases, this simulation happens in a vacuum: an agent, such as a robot, maintains a computational model that includes all the information required to make decisions. This internal computational representation is what we consider its intellect. Information may enter this model through perception and be expressed in the form of action. This separation of knowing and doing can be quite effective for representing certain types of intelligence; however, it does not lend itself to simulating social cognition.

In order to communicate socially, an agent must be able to effect change in the mental representations of other agents, as well as in the physical world. For this reason, when building artificial social intelligence, we need to pay attention to prevailing theories of how humans learn. Many popular theories from cognitive science, social psychology, and language development suggest that action and perception are not subordinate to mental representations. Instead, mental representations arise from the action and perception that result from an agent's interaction with its environment and with other agents. In particular, social learning theory holds that the process which allows agents to understand one another happens from the ground up, starting with action and perception and resulting in shared mental representations and an understanding of how to effect change in the representations of others.

This thesis addresses the problem of building social intelligence into robotic systems using existing formalizations for computational learning and adaptive control. We focus on how to use decision-theoretic planning to learn to interact with humans from the bottom up. We first examine the use of affect recognition in designing human-friendly control strategies. Next, we address the problem of defining subjective measures of interactivity by leveraging human expertise. Finally, we define and evaluate a method for participating in the process of socially situated cognition. We emphasize learning to predict and modulate the observable responses of the human rather than attempting to directly infer their mental or emotional states. The effectiveness of this method is demonstrated experimentally using customized robotic systems.
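To make the last point concrete, the sketch below is a minimal illustration (not the thesis implementation) of a decision-theoretic learning controller whose reward depends only on an observable human response rather than on an inferred mental state. The discretized "engagement" levels, the robot action set, the simulated human model, and the reward definition are all illustrative assumptions introduced here for the example.

```python
# Minimal sketch: tabular Q-learning controller that learns to modulate an
# *observable* human response (a discretized engagement level) toward a
# target level. All state, action, and reward definitions are assumptions
# made for illustration; they are not taken from the thesis.

import random
from collections import defaultdict

N_LEVELS = 5                           # assumed engagement levels 0..4
ACTIONS = ["wave", "speak", "pause"]   # hypothetical robot behaviors
TARGET = 4                             # desired engagement level
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def simulated_human_response(engagement, action):
    """Toy stand-in for the sensed human response to a robot action."""
    if action == "wave":
        delta = random.choice([0, 1])
    elif action == "speak":
        delta = random.choice([-1, 0, 1])
    else:  # pause
        delta = random.choice([-1, 0])
    return max(0, min(N_LEVELS - 1, engagement + delta))

Q = defaultdict(float)  # Q[(engagement_level, action)]

def choose_action(state):
    """Epsilon-greedy selection over observable states only."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(500):
    state = random.randrange(N_LEVELS)
    for step in range(20):
        action = choose_action(state)
        next_state = simulated_human_response(state, action)
        # Reward is defined purely on the observed response: distance of the
        # human's engagement from the target level.
        reward = -abs(TARGET - next_state)
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# Inspect the learned policy for each observed engagement level.
for s in range(N_LEVELS):
    print(s, max(ACTIONS, key=lambda a: Q[(s, a)]))
```

The point of the sketch is the reward signal: it is computed from what the robot can actually observe about its human partner, so the controller learns to predict and shape that response without any model of the partner's internal state.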



