Making a robot that makes up its mind

Published: May 08, 2008
Author: Shannon Roddel

For some 20 years, University of Notre Dame psycholinguist Kathleen Eberhard has studied spoken language production and comprehension, including how people consider one another's perspectives during conversation. But it wasn't until last year that she began applying her research to conversations of the non-human variety.

An associate professor of psychology, Eberhard is part of a team of researchers from Notre Dame and Indiana, Arizona State and Stanford Universities that is working to improve robotic technology by studying "Effective Human-Robot Interaction under Time Pressure through Robust Natural Language Dialogue." The project is funded by a five-year, $2.5 million Multidisciplinary University Research Initiative (MURI) grant from the U.S. Office of Naval Research. Each institution is concentrating on a different component of the program. Eberhard's focus is natural language.

"Our goals are quite lofty," Eberhard admitted. "We're trying to create more autonomous, decision-making robots that can think, plan and prioritize without being completely controlled by humans."

The Department of Defense is interested in the development of "smarter" robots for military missions, including surveillance, location of casualties, and detection and defusing of explosives. Current state-of-the-art robots are remotely controlled by humans who get visual and other information from the robots' cameras, then manually direct their actions.

Eberhard is working to help equip the machines to process language in human-like ways, making it easier for them to take directions from humans, as well as to learn from and generalize those directions. To do this, she is conducting exercises designed to identify potential human-robot language barriers.

"There are a lot of them," Eberhard said. "Speech is full of disfluencies in the form of hesitations, pauses filled with 'uh' or 'um,' and speech errors, which may or may not be corrected, particularly when the speaker is under time pressure and stress. Speech also is rife with ambiguity, including words such as 'right' that have multiple meanings. Another source is phrase modification, especially with regard to spatial descriptions. For example, the sentence 'Put the block in the box on the chair' could be a command either to place the block that's in the box onto the chair or to put the block into the box that's on the chair."

Eberhard's research goals are enabling robots to ignore disfluencies and resolve ambiguity correctly, and equipping them with the ability to provide evidence of correct understanding or to request clarification in a timely fashion.

Since recruiting undergraduate and graduate students to complete a recorded search-and-rescue experiment last summer in the basement of Flanner Hall, Eberhard has compiled a list of speech disfluencies that could prove problematic for human-robot conversations and that will help construct a natural language processing architecture for robots.
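To make the language barriers concrete, here is a minimal sketch in Python, not part of the project's actual system, showing how a dialogue front end might strip filled pauses such as "uh" and "um" and flag the two competing readings of the "block in the box on the chair" command so it can ask the director to clarify. All names and the hard-coded readings are illustrative assumptions.

    import re

    # Illustrative only: drop common filled pauses ("uh", "um", "er") from an utterance.
    FILLERS = re.compile(r"\b(?:uh|um|er)\b,?\s*", flags=re.IGNORECASE)

    def strip_disfluencies(utterance: str) -> str:
        """Remove filled pauses so downstream interpretation sees cleaner input."""
        cleaned = FILLERS.sub("", utterance)
        return re.sub(r"\s{2,}", " ", cleaned).strip()

    # "Put the block in the box on the chair" supports two readings; a robot that
    # detects more than one must either commit to one or ask for clarification.
    READINGS = [
        "Move the block that is already in the box onto the chair.",
        "Move the block into the box that is on the chair.",
    ]

    if __name__ == "__main__":
        raw = "Um, put the block in the box on the chair"
        print("Cleaned:", strip_disfluencies(raw))
        if len(READINGS) > 1:
            print("Ambiguous command; asking the director to clarify:")
            for i, reading in enumerate(READINGS, 1):
                print(f"  {i}. {reading}")

Run as written, the sketch prints the cleaned command followed by the two candidate interpretations, which is one simple way a system could "provide evidence of correct understanding or request clarification."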
Working under an unspecified deadline, pairs of students were told to locate hidden boxes and communicate their findings, as well as ask for guidance via headset from an off-site director with a map of the locations. In the middle of the task, the students' directions were changed and a clock began ticking to increase their urgency and stress levels. The exchanges indeed reflected an array of disfluencies associated with urgency.

"We are analyzing the dialogues for factors that led to both effective and ineffective communication and coordination," Eberhard said. "Correctly interpreting spatial descriptions requires knowledge of the task goals, as well as the speaker's perspective. By studying these types of interactions, we hope to better gauge what natural language capabilities a robot should be programmed with for it to work effectively with a remote human director."

Eberhard and Matthias Scheutz, associate professor of cognitive science, computer science and informatics at Indiana University (and formerly a Notre Dame faculty member), are co-authors of a report of preliminary findings currently under review for presentation June 12 and 13 at the fifth International Workshop on Natural Language Processing and Cognitive Science in Barcelona, Spain.

The next phase of Eberhard's testing will involve a Notre Dame robot (named Rudy, of course) that, unbeknownst to its director, will be completely controlled by a human. The new exercise is designed to determine whether a person will talk differently to a robot simply because it is a robot, even if it behaves exactly as a human would.

"One thing is for certain," Eberhard said. "Robotic technology is about to explode, which raises the philosophical questions: What are the ethical implications of creating 'thinking' robots, and how do we guard against technology that is intended to benefit the human condition being used to do the exact opposite?"

Maybe we should ask a robot.

Contact: Kathleen Eberhard, 574-631-7627, keberhar@nd.edu