System sidesteps computing bottleneck in tuning artificial intelligence algorithms.
This kludgy network of electrical resistors can learn to recognize flowers, among other artificial intelligence tasks.
CHICAGO—A simple electrical circuit has learned to recognize flowers based on their petal size. That may seem trivial compared with artificial intelligence (AI) systems that recognize faces in a crowd, transcribe spoken words into text, and perform other astounding feats. However, the tiny circuit outshines conventional machine learning systems in one key way: It teaches itself without any help from a computer—akin to a living brain. The result demonstrates one way to avoid the massive amount of computation typically required to tune an AI system, an issue that could become more of a roadblock as such programs grow increasingly complex.
“It’s a proof of principle,” says Samuel Dillavou, a physicist at the University of Pennsylvania who presented the work here this week at the annual March meeting of the American Physical Society. “We are learning something about learning.”
Currently, the standard tool for machine learning is the artificial neural network. Such networks typically exist only in a computer’s memory—although some researchers have found ways to embody them in everyday objects. A neural network consists of points, or nodes, connected by lines, or edges, with each node taking a value ranging from 0 to 1. Each edge is weighted depending on how correlated or anticorrelated the two nodes it connects are.
The nodes are arranged in layers, with the first layer taking the inputs and the last layer producing the outputs. For example, the first layer might take as inputs the brightness of the pixels in black-and-white photos. The output layer might consist of a single node that yields a 0 if the picture is of a cat and a 1 if it is of a dog.
To teach the system, developers typically expose it to a set of training pictures and adjust the weights of the edges to get the right output. It’s a daunting optimization problem that grows dramatically more complex with the size of the network, and it requires substantial computer processing distinct from the neural network itself. Making matters more difficult, all of the edges across the entire network must be tuned simultaneously rather than one after another. To get around this problem, physicists have been looking for physical systems that can efficiently tune themselves without the external computation.
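The structure and training described above can be sketched in a few lines of Python. This is a minimal illustration, not any production system: the network here is a single output node, the four-pixel “photos” and all the function names are invented for the example, and the training rule is ordinary gradient descent—the kind of external number-crunching a self-tuning circuit would avoid.

```python
import math

def sigmoid(x):
    # Squash any sum into the 0-to-1 range a node can take.
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, weights, bias):
    # One input layer feeding a single output node; each weight plays
    # the role of an edge, positive for correlated nodes and negative
    # for anticorrelated ones.
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

def train_step(samples, weights, bias, lr=1.0):
    # Gradient descent: gradients for every edge are computed from the
    # training pictures, then all the weights are updated simultaneously.
    grads, gbias = [0.0] * len(weights), 0.0
    for inputs, target in samples:
        out = forward(inputs, weights, bias)
        err = (out - target) * out * (1.0 - out)  # chain rule through sigmoid
        for i, x in enumerate(inputs):
            grads[i] += err * x
        gbias += err
    new_weights = [w - lr * g for w, g in zip(weights, grads)]
    return new_weights, bias - lr * gbias

# Hypothetical four-pixel "photos": target 1 for dog, 0 for cat.
samples = [([0.1, 0.9, 0.8, 0.2], 1.0), ([0.9, 0.1, 0.2, 0.8], 0.0)]
weights, bias = [0.0] * 4, 0.0
for _ in range(200):
    weights, bias = train_step(samples, weights, bias)
```

Even in this toy, note that every weight is recomputed and rewritten on each pass—exactly the centralized bookkeeping that balloons as networks grow.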
Now, Dillavou and colleagues have developed a system that can do just that. They assembled a small network by randomly wiring together 16 common electrical components called adjustable resistors, like so many pipe cleaners. Each resistor serves as an edge in the network, and the nodes are the junctions where the resistors’ leads meet. To use the network, the researchers set voltages at certain input nodes and read out the voltages at output nodes. By adjusting its own resistors, the network learned to produce the desired outputs for a given set of inputs.
To train the system with a minimal amount of computing and memory, the researchers actually built two identical networks on top of each other. In the “clamped” network, they fed in the input voltages and fixed the output voltage to the value they wanted. In the “free” network, they fixed just the input voltages and then let all the other voltages float to whatever values they would, which generally gave the wrong voltage at the output.
The system then adjusted resistances in the two networks according to a simple rule that depended on whether the voltage difference across a resistor in the clamped network was bigger or smaller than the voltage difference across the corresponding resistor in the free network. After several iterations, those adjustments brought all voltages at all the nodes in the two networks into agreement and trained both networks to give the right output for a given input.
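The clamped-versus-free comparison can be sketched with a toy simulation. This is an illustrative reconstruction, not the authors’ actual circuit or exact rule: a two-resistor voltage divider stands in for the 16-resistor network, and the update step—raise or lower a conductance depending on which twin network has the bigger voltage drop across that resistor—is an assumed, simplified version of the comparator rule described above.

```python
def free_output(g1, g2, v_in):
    # Kirchhoff's current law at the floating output node:
    # g1 * (v_in - v) = g2 * (v - 0)  =>  v = v_in * g1 / (g1 + g2)
    return v_in * g1 / (g1 + g2)

def train(v_in, v_target, g1=1.0, g2=1.0, step=0.01, iters=2000):
    for _ in range(iters):
        v_free = free_output(g1, g2, v_in)
        # Voltage drop across each resistor in the twin networks.
        free_drops = (v_in - v_free, v_free)
        clamped_drops = (v_in - v_target, v_target)
        # Comparator-style rule: if the drop in the free network is the
        # bigger one, raise that resistor's conductance; otherwise lower it.
        g1 += step if abs(free_drops[0]) > abs(clamped_drops[0]) else -step
        g2 += step if abs(free_drops[1]) > abs(clamped_drops[1]) else -step
        g1, g2 = max(g1, 0.05), max(g2, 0.05)  # conductances stay positive
    return g1, g2

# Train the divider so its free output settles near 0.75 V for a 1 V input.
g1, g2 = train(v_in=1.0, v_target=0.75)
```

The key feature survives even in this cartoon: each resistor updates itself using only two locally measurable voltage drops, with no global optimization anywhere.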
Crucially, that tuning requires very little computation. The system only needs to compare the voltage drop across corresponding resistors in the clamped and free networks, using a relatively simple electrical widget called a comparator, Dillavou says.
The network was tuned to perform a variety of simple AI tasks, Dillavou reported at the meeting. For example, it could distinguish with greater than 95% accuracy among three species of iris based on four physical measurements of a flower: the lengths and widths of its petals and sepals—the leaves just below the blossom. That’s a canonical AI test that uses a standard data set of 150 flowers, 30 of which were used to train the network, Dillavou says.
It seems unlikely that the resistor network will ever replace standard neural networks, however. For one thing, its response to different inputs likely has to vary more dramatically if the resistor network is to match an artificial neural network’s ability to make fine distinctions, Dillavou says.
But Jason Rocks, a physicist at Boston University, says it’s not out of the question that the idea might have some technological utility. “If it’s made out of electrical components then you should be able to scale it down to a microchip,” he says. “I think that’s where they’re going with this.”