Most people are familiar with virtual assistants such as Siri or Alexa. They answer questions, make recommendations, and simplify common tasks. One simply asks for help and options are provided: everything from “calling home” to the operating hours of the closest grocery store, so that a person’s time is spent more productively. No such assistance exists for software engineers … at least not yet. This is what Collin McMillan, assistant professor in the Department of Computer Science and Engineering at the University of Notre Dame, and his team are exploring.
Since joining the Notre Dame faculty in 2012, McMillan has focused on source code summarization: automatically generating English descriptions of source code behavior. That capability is exactly what a virtual assistant for software engineers would require.
If assistants like Siri and Alexa already exist, why do software engineers need one of their own? The answer is language. Consumer virtual assistants work because they combine a natural language interface, one based on common words, phrases, and meanings, with advanced machine learning technologies. Siri and Alexa have been programmed to understand and answer the basic questions an individual might ask about everyday life.
But programmers often use different, more technical words and descriptions than other people. When programmers need help, they either ask a fellow programmer (someone who speaks their “language”) or stop what they are doing to find the answer themselves, documenting the process as they go so that they, or others, can reuse it later. Both options take valuable time and reduce productivity. And while documentation provides very specific information, it is only as valuable as the information the individual programmer chooses to record.
Because of significant achievements in artificial intelligence and natural language processing (NLP), McMillan believes it may now be possible to create a virtual assistant for software engineering. Two issues, however, would need to be addressed first. One is conversation analysis and modeling: identifying how programmers talk to one another and what types of questions they most frequently ask. The other is referring expression generation: how do programmers describe functions and other software artifacts?
The three-year project that McMillan and his team have begun will first create a model of the conversations between programmers. From there, the team will generate expressions that refer to software components in a human-like manner, then design algorithms to extract such references into a knowledge base so the new virtual assistant can respond as quickly and accurately as a Siri or Alexa. Finally, they will test their techniques in the lab and in real-life settings to determine their effectiveness.
Not only will this work provide scientific knowledge of how programmers ask and answer questions, but it could also yield new models for representing data in software projects, generate descriptions of software artifacts, and, hopefully, extend that impact to a better understanding of how people with visual disabilities interact with software and its development, making assistive technologies more accessible.
To follow this project throughout its three-year course, visit www.cse.nd.edu/~cmc/
Originally published by conductorshare.nd.edu on August 10, 2017.