Invited Speakers -- DARS 2006

Wednesday Jeffrey S. Rosenschein
9:10-10:10 The Hebrew University of Jerusalem
Title: Voting in Multiagent Systems: Manipulation, Juntas, and Average-Case Complexity
Abstract: Social choice theory can serve as a useful foundation for multiagent applications. There is a rich literature on the subject of voting, and builders of automated agents can benefit from this work as they engineer systems that need to reach group decisions.
This talk will review various voting techniques and give an overview of some of the seminal results in social choice theory. We will then consider the application of these techniques (in scenarios such as multiagent planning), and examine nuances in their use. In particular, we'll consider the issue of preference extraction in these systems, with an emphasis on the complexity of manipulating group outcomes. We show that a family of important voting protocols is susceptible to manipulation by coalitions in the average case, when the number of candidates is constant (even though their worst-case manipulations are NP-hard).
(joint work with Ariel D. Procaccia)
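The flavor of this result can be illustrated with a small sketch (an invented example, not code from the talk): a standard greedy heuristic for coalitional manipulation of the Borda rule, in which each manipulator ranks the preferred candidate first and the remaining candidates in increasing order of their running scores. The heuristic carries no worst-case guarantee (the decision problem is NP-hard), but it frequently succeeds in practice, which is the intuition behind the average-case susceptibility. The candidate names and preference profile below are hypothetical.

```python
# Sketch: greedy coalitional manipulation of the Borda rule.
# Hypothetical example; names and profile are invented for illustration.

def borda_scores(profile, candidates):
    """Borda: a candidate ranked i-th (0-based) among m candidates earns m-1-i points."""
    m = len(candidates)
    scores = {c: 0 for c in candidates}
    for ranking in profile:
        for i, c in enumerate(ranking):
            scores[c] += m - 1 - i
    return scores

def greedy_manipulation(honest_profile, candidates, p, coalition_size):
    """Each manipulator ranks p first, then the other candidates in
    increasing order of their running Borda totals (greedy heuristic).
    Returns (did p win, final scores)."""
    scores = borda_scores(honest_profile, candidates)
    m = len(candidates)
    for _ in range(coalition_size):
        rest = sorted((c for c in candidates if c != p), key=lambda c: scores[c])
        ballot = [p] + rest
        for i, c in enumerate(ballot):
            scores[c] += m - 1 - i
    winner = max(scores, key=lambda c: scores[c])
    return winner == p, scores

# Three honest voters mostly prefer a; a coalition of three manipulators
# all truly prefer p and vote strategically.
honest = [["a", "b", "p"], ["b", "a", "p"], ["a", "p", "b"]]
ok, final = greedy_manipulation(honest, ["a", "b", "p"], "p", 3)
print(ok, final)  # p overtakes a: 7 points to a's 6 and b's 5
```

With only the honest votes, a wins 5-3-1; the three greedy ballots lift p to victory. The same greedy idea underlies heuristics that succeed with high probability on typical instances, even though constructing a winning coalition vote is NP-hard in the worst case.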

Thursday Hajime Asama
1:15-2:15 RACE (Research into Artifacts, Center for Engineering), The University of Tokyo, Kashiwanoha 5-1-5, Kashiwa-shi, Chiba 277-8568, Japan
http://www.race.u-tokyo.ac.jp/~asama/
Title: Mobiligence: Adaptiveness of Distributed Autonomous Systems
Abstract: Adaptiveness is one of the target functions of research on distributed autonomous robotic systems. Since the first DARS symposium was held in 1992, design principles for realizing the adaptiveness of robotic systems have been sought and proposed in the context of self-organizing or emergent behaviors of distributed robotic agents. However, the adaptiveness achieved in DARS research so far is quite limited and specific to sample problems and system configurations. On the other hand, all animals, from primitive organisms to insects and mammals, commonly have the adaptiveness to behave in unexpected environments. Such adaptive behaviors are intelligent sensory-motor functions, and are the most essential and indispensable functions for animals to survive.
Consulting biological systems should be an effective way to find general design principles for realizing adaptiveness in artificial systems, including robotic systems. However, the mechanism that realizes adaptiveness in animals has not yet been thoroughly revealed, even in biology, brain science, and neurophysiology. Such an adaptive function is considered to emerge from the interaction of body, brain, and environment, which is brought about by a subject acting or moving. We call this intelligence for generating adaptive motor function "mobiligence."
The Mobiligence project started in 2005 [1], accepted as a five-year program of Scientific Research on Priority Areas under the Grant-in-Aid for Scientific Research from the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT). The project is designed to investigate the mechanisms of mobiligence through collaborative research in biology and engineering, using a systematic and synthetic (constructive) approach. In this talk, an overview of the project is presented and contrasted with the history and trends of DARS research.
References: [1] http://www.arai.pe.u-tokyo.ac.jp/mobiligence/index_e.html

Friday Holly Yanco
9:00-10:00 University of Massachusetts at Lowell
Title: Designing Human-Robot Interaction for Assistive Robotics
Abstract: Over the past several years, we have conducted studies of human-robot interaction (HRI) in two different assistive robotics applications: urban search and rescue (USAR) robots and robot wheelchairs. In these studies, we have observed several problems arising from current interface and robot designs, including the following:
  • Users do not switch modes effectively.
  • Users are unable to intervene after a long period of autonomy.
  • Users lack situation awareness.
  • Information is presented ineffectively.
  • Bystanders are confused when a robot acts in an unexpected fashion.
In this talk, I will discuss how we are addressing these issues in our lab's research, starting with the design guidelines that we have developed for HRI. I will present our USAR interface that fuses multiple sensor modalities into a single display and suggests appropriate actions to its user. I will also present our HRI architecture, currently under development. The talk will also address the evaluation methodologies that we have developed for HRI systems.
Bio: Holly Yanco is an Assistant Professor in the Computer Science Department at UMass Lowell, where she heads the Robotics Lab. Her research interests include human-robot interaction and assistive technology. She received a CAREER Award from NSF in 2006. She was the PI of the NSF-funded Pyro Project, which was awarded the NEEDS Premier Award for Courseware in 2005. She has a PhD and MS from MIT and a BA from Wellesley College, all in Computer Science.
Copyright: © 2005-2006 by the Regents of the University of Minnesota. All rights reserved.
Comments to: Maria Gini