Researchers develop novel computer learning method for faster AI

Author: Nina Welding

Training computers

Until recently, teaching an old dog new tricks was easier than teaching a computer how to learn in the same way a human does.

One approach to creating artificial intelligence (AI) has been to augment a computer’s neural network so it can access and apply already-learned information to new tasks. However, this approach takes significant time and energy because the data must be transferred from memory to the processing unit.

Researchers at the University of Notre Dame have demonstrated a novel one-shot learning method that allows computers to draw upon already-learned patterns more quickly, more efficiently and with less energy than currently possible, while adapting to new tasks and previously unseen data. Their work, recently published in Nature Electronics, was conducted using ferroelectric field-effect transistor (FeFET) technology from GlobalFoundries of Dresden, Germany.

Suman Datta

Led by Suman Datta, the Stinson Professor of Nanotechnology and director of both the Applications and Systems-driven Center for Energy-Efficient Integrated Nano Technologies and the Center for Extremely Energy Efficient Collective Electronics, the interdisciplinary team produced a prototype ferroelectric ternary content addressable memory (TCAM) array for one- and few-shot learning applications, in which each memory cell is based on two ferroelectric field-effect transistors. Compared with more conventional processing platforms, the Notre Dame prototype provides a 60-fold reduction in energy consumption and a 2,700-fold improvement in data processing time when accessing computational memory and applying prior information.
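
In software terms, a content addressable memory answers "which stored entry best matches this query?" in a single parallel lookup rather than by scanning memory word by word. The short Python sketch below is purely an illustration of that ternary search-and-match idea, assuming a simple bit-pattern representation; the function name and data layout are invented here, and the actual prototype performs the comparison in parallel inside the FeFET array rather than in a software loop.

```python
# Illustrative sketch only (not the Notre Dame hardware): a ternary content
# addressable memory stores patterns of 0, 1, or "X" (don't care). A query is
# compared against every stored entry, and the closest entry wins.

def tcam_match(query, entries):
    """Return (index, mismatches) of the stored entry closest to the query.

    query   -- list of 0/1 bits
    entries -- list of stored patterns, each a list of 0, 1, or 'X' (don't care)
    """
    best_index, best_mismatches = None, None
    for i, pattern in enumerate(entries):
        # A cell counts as a mismatch only if it stores a definite bit that
        # differs from the query bit; 'X' cells match anything.
        mismatches = sum(1 for q, p in zip(query, pattern) if p != 'X' and p != q)
        if best_mismatches is None or mismatches < best_mismatches:
            best_index, best_mismatches = i, mismatches
    return best_index, best_mismatches


# Example: three stored patterns, one query.
stored = [
    [1, 0, 'X', 1],
    [0, 0, 1, 1],
    [1, 1, 1, 'X'],
]
print(tcam_match([1, 0, 1, 1], stored))  # -> (0, 0): the first entry matches exactly
```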

Applying “previously stored” information to different situations is what allows humans to adapt to new but similar tasks. Even highly programmed deep neural networks currently used in AI applications, such as computer vision and speech recognition, struggle to adapt to new classes of data and are largely unable to use information stored in memory banks to process new situations. This means that large amounts of labeled data must be available almost constantly to train new networks. Biological brains, on the other hand, need just a few examples, maybe even one, to learn and understand the framework of a new situation, efficiently generalize from old input and react appropriately.

“In this work, we showed how to use emerging devices like ferroelectric field-effect transistors to create a very compact attentional memory that stores previously learned features,” Datta says. “The fast and efficient vector search-and-match operation that resulted highlights the benefit of such attentional memory for one-shot learning applications.”
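
As a rough software analogy of that idea, an attentional memory can be thought of as a table of feature vectors written once per example and read back by similarity search. The sketch below is an assumption-laden illustration: the class and method names are invented here, cosine similarity stands in for the hardware match operation, and the published system performs the search inside the memory array rather than in a processor loop.

```python
# Illustrative sketch, not the published implementation: one-shot classification
# with an "attentional memory" of previously learned feature vectors. A new
# example is classified by finding the closest stored feature -- the vector
# search-and-match operation the FeFET array accelerates in hardware.
import numpy as np

class AttentionalMemory:
    def __init__(self):
        self.keys = []    # stored feature vectors (one or a few per class)
        self.labels = []  # class label for each stored vector

    def write(self, feature, label):
        """Store a single example's feature vector -- one-shot learning."""
        self.keys.append(np.asarray(feature, dtype=float))
        self.labels.append(label)

    def read(self, query):
        """Return the label of the stored feature that best matches the query."""
        query = np.asarray(query, dtype=float)
        # Cosine similarity plays the role of the hardware match operation.
        sims = [
            key @ query / (np.linalg.norm(key) * np.linalg.norm(query) + 1e-9)
            for key in self.keys
        ]
        return self.labels[int(np.argmax(sims))]


# Example: store one feature vector per class, then classify a new example.
memory = AttentionalMemory()
memory.write([0.9, 0.1, 0.0], "cat")
memory.write([0.0, 0.2, 0.8], "dog")
print(memory.read([0.8, 0.2, 0.1]))  # -> "cat"
```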

The team, made up of device physicists, circuit designers, computer architects and machine learning algorithm experts, includes Datta and postdoctoral research associate Kai Ni, both from the Department of Electrical Engineering. From the Department of Computer Science and Engineering are co-principal investigators Xiaobo Sharon Hu, professor and associate dean for professional development in the Graduate School, and Michael Niemier, associate professor, along with Siddharth Joshi, assistant professor, and graduate students Ann F. Laguna and Xunzhao Yin.

Originally published by Nina Welding at conductorshare.nd.edu on Nov. 21.