Thursday, July 6, 2017

IBM Builds A Scalable Computer Chip Inspired By The Human Brain


By Alex Knapp


"I’m holding in my hand a chip with one million neurons, 256 million synapses, and 4096 cores. With 5.4 billion transistors, it's the largest chip IBM has built."

Dr. Dharmendra S. Modha sounds positively giddy as he talks to me on the phone. This is the third time I've talked to him about his long-term project: SyNAPSE, an IBM effort to create an entirely new type of computer chip whose architecture is inspired by the human brain. This new chip is a major success in that project.

"Inspired" is the key word, though. The chip's architecture is based on the structure of our brains, but very simplified. Still, within that architecture lies some amazing advantages over computers today. For one thing, despite this being IBM's largest chip, it draws only a tiny amount of electricity - about 63 mW - a fraction of the power being drawn by the chip in your laptop.

What's more, the new chip is also scalable - making possible larger neural networks of several chips connected together. The details behind their research have been published today in Science.

"In 2011, we had a chip with one core," Modha told me. "We have now scaled that to 4096 cores, while shrinking each core 15x by area and 100x by power."

Each core of the chip is modeled on a simplified version of the brain's neural architecture. The core contains 256 “neurons” (processors), 256 “axons” (memory) and 65,536 “synapses” (connections between neurons and axons). This structure is a radical departure from the von Neumann architecture that's the basis of virtually every computer today (including the one you're reading this on).
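A quick back-of-the-envelope check (plain Python, not IBM code) shows how the chip-level figures quoted at the top follow from the per-core ones: a full 256×256 crossbar gives 65,536 synapses per core, and multiplying by the 4,096 cores recovers the headline totals.

```python
# Deriving TrueNorth's chip-level figures from the per-core numbers
# reported in the article (illustrative arithmetic only).
CORES = 4096
NEURONS_PER_CORE = 256
AXONS_PER_CORE = 256
SYNAPSES_PER_CORE = NEURONS_PER_CORE * AXONS_PER_CORE  # full 256x256 crossbar

print(CORES * NEURONS_PER_CORE)   # 1,048,576 -> the "one million neurons"
print(CORES * SYNAPSES_PER_CORE)  # 268,435,456 -> the "256 million synapses"
```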

Work on this project began in 2008 as a collaboration between IBM and several universities. The project has received $53 million in funding from the Defense Advanced Research Projects Agency (DARPA). The first prototype chip was developed in 2011, and a programming language and development kit were released in 2013.

"This new chip will provide a powerful tool to researchers who are studying algorithms that use spiking neurons," Dr. Terrence J. Sejnowski told me. Sejnowski heads Computational Neurobiology Laboratory at the Salk Institute. He's unaffiliated with IBM's project but is familiar with the technology. "We know that such algorithms exist because the brain uses spiking neurons and can outperform all existing approaches, with a power budget of 20 watts, less than your laptop."

It's important to note, though, that the SyNAPSE system won't replace the computers of today - rather, it's intended to supplement them. Modha likened it to the co-processors used in high performance computers to help them crunch data faster. Or, in a more poetic turn as he continued talking to me, he called SyNAPSE a "right-brained" computer compared to the "left-brained" architecture used in computers today.

"Current von Neumann machines are fast, symbolic, number-crunchers," he said. "SyNAPSE is slow, multi-sensory, and better at recognizing sensor data in real-time."

So to crunch big numbers and do heavy computational lifting, we'll still need conventional computers. Where these "cognitive" computers come in is in analyzing and discerning patterns in that data. Key applications include visual recognition of patterns - something that Dr. Modha notes would be very useful in areas such as driverless cars.

As Sejnowski told me, "The future is finding a path to low power computing that solves problems in sensing and moving -- what we do so well and digital computers do so awkwardly."

And that's what IBM is looking to do with SyNAPSE - finding the patterns that normal computers can't. As Modha put it, "Google Maps can plot your route, but SyNAPSE can see if there's a pothole."

What gives the SyNAPSE an advantage in pattern recognition is that, unlike a traditional computer, which crunches data sequentially, its brain-inspired architecture allows for more parallel processing. For example, in a facial recognition app, one core of the chip might be focused on nose shape, one on hair texture and color, one on eye color, etc. Each individual core is slower than a traditional processor, but since they run simultaneously in parallel, the chip as a whole can perform this type of operation much more quickly and accurately.
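The divide-and-vote idea described above can be sketched in a few lines. This is a hypothetical illustration, not real TrueNorth code: real cores run spiking-neuron models in hardware, not Python functions, and the feature names and scores here are invented for the example.

```python
# Hypothetical sketch of parallel pattern matching: each "core" scores
# one facial feature independently, and the votes are combined at the end.
from concurrent.futures import ThreadPoolExecutor

def score_nose(face): return 0.9 if face.get("nose") == "narrow" else 0.1
def score_hair(face): return 0.8 if face.get("hair") == "brown" else 0.2
def score_eyes(face): return 0.7 if face.get("eyes") == "green" else 0.3

def recognize(face):
    scorers = [score_nose, score_hair, score_eyes]
    with ThreadPoolExecutor() as pool:        # scorers run simultaneously
        scores = list(pool.map(lambda f: f(face), scorers))
    return round(sum(scores) / len(scores), 3)  # combine independent votes

print(recognize({"nose": "narrow", "hair": "brown", "eyes": "green"}))  # 0.8
```

Each scorer on its own is trivial and slow, but because they run side by side, adding more features doesn't add more wall-clock time - the same intuition behind the chip's many slow-but-parallel cores.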

Other potential applications for the chip include use in cameras to automatically identify interesting items in cluttered environments. Modha's team also believes that the chip could be quite useful in natural language processing - being able to parse out and obey commands from people. Kind of like the computers on Star Trek that understood when they were in use and when people were just talking among themselves.

It probably won't be long before we see more of these applications in action. The scalable chip that IBM developed was built using conventional chip fabrication techniques - it just requires a somewhat different workflow.

Already over 200 programs have been developed for the chip, thanks to a simulation of the architecture running on supercomputers at the Lawrence Livermore and Lawrence Berkeley National Laboratories. Those simulations allowed IBM to develop a programming language for the chip even before it existed.

"We've been working with IBM for the last 18 months and are extremely impressed with their achievement," Prof. Tobi Delbruck of the Institute of Neuroinformatics at UZH-ETH Zurich told me. "Applications like real time speech and vision that run continuously on battery power may finally be within reach."

"It's too soon to say who will win the race to implement practical realizations of brain-like computing in silicon," Delbruck added. "but IBM's solution is a serious contender."

Now that this new chip architecture has been developed and a fabrication technique set up, Modha said that the technology now is "like the 4 minute mile. Now that someone's done it, a lot of people can do it."

To help facilitate the development of the chip, both on the hardware and software side, IBM has developed a teaching curriculum for universities, its customers, its employees, and more.

On the hardware end, Modha's next goal is the development of what he calls a "neurosynaptic supercomputer." This would be a traditional supercomputer that uses both traditional and SyNAPSE chips - a computer with both a left and right brain, as it were - enabling it both to crunch numbers and quickly analyze real-time patterns as the data's crunched.

One question that Modha couldn't answer, though, was what the new chip means for video games - nobody's programmed one for SyNAPSE yet.
"That's an interesting question," he laughed. "But we're too busy for games!"

Wednesday, July 5, 2017

IBM Develops a New Chip That Functions Like a Brain


By John Markoff


Inspired by the architecture of the brain, scientists have developed a new kind of computer chip that uses no more power than a hearing aid and may eventually excel at calculations that stump today’s supercomputers.

The chip, or processor, is named TrueNorth and was developed by researchers at IBM and detailed in an article published on Thursday in the journal Science. It tries to mimic the way brains recognize patterns, relying on densely interconnected webs of transistors similar to the brain’s neural networks.

The chip’s electronic “neurons” are able to signal others when a type of data — light, for example — passes a certain threshold. Working in parallel, the neurons begin to organize the data into patterns suggesting the light is growing brighter, or changing color or shape.
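The "signal when a threshold is crossed" behavior can be sketched with a simplified leaky integrate-and-fire neuron. This is an illustrative model, not IBM's actual neuron circuit; the threshold and leak values are made up for the example.

```python
# Minimal sketch of a spiking neuron: it accumulates input, slowly
# leaks charge, and emits a "spike" whenever its potential crosses
# the threshold (simplified; not TrueNorth's real neuron model).
def run_neuron(inputs, threshold=1.0, leak=0.1):
    potential, spikes = 0.0, []
    for t, x in enumerate(inputs):
        potential = max(0.0, potential - leak) + x  # leak, then integrate
        if potential >= threshold:                  # fire and reset
            spikes.append(t)
            potential = 0.0
    return spikes

print(run_neuron([0.4, 0.4, 0.4, 0.0, 0.6, 0.6]))  # spikes at steps 2 and 5
```

Because neurons only communicate when they spike, most of the chip sits idle most of the time - a large part of why its power draw is so low.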

The processor may thus be able to recognize that a woman in a video is picking up a purse, or control a robot that is reaching into a pocket and pulling out a quarter. Humans are able to recognize these acts without conscious thought, yet today’s computers and robots struggle to interpret them.

The chip contains 5.4 billion transistors, yet draws just 70 milliwatts of power. By contrast, modern Intel processors in today’s personal computers and data centers may have 1.4 billion transistors and consume far more power — 35 to 140 watts.

Today’s conventional microprocessors and graphics processors are capable of performing billions of mathematical operations a second, yet the new chip’s system clock makes its calculations barely a thousand times a second. But because of the vast number of circuits working in parallel, it is still capable of performing 46 billion operations a second per watt of energy consumed, according to IBM researchers.
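Putting the two figures above together gives a rough sense of the chip's total throughput. This is back-of-the-envelope arithmetic based on the numbers quoted in the article, not an IBM benchmark:

```python
# Rough total throughput implied by the reported figures:
# 46 billion operations/second per watt, at roughly 70 mW of draw.
ops_per_watt = 46e9    # operations per second, per watt
power_watts = 0.070    # 70 milliwatts
total_ops = ops_per_watt * power_watts
print(f"{total_ops:.2e} operations/second")  # about 3.2 billion ops/s
```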

The TrueNorth has one million “neurons,” about as complex as the brain of a bee.

“It is a remarkable achievement in terms of scalability and low power consumption,” said Horst Simon, deputy director of the Lawrence Berkeley National Laboratory.

He compared the new design to the advent of parallel supercomputers in the 1980s, which he recalled was like moving from a two-lane road to a superhighway.

The new approach to design, referred to variously as neuromorphic or cognitive computing, is still in its infancy, and the IBM chips are not yet commercially available. Yet the design has touched off a vigorous debate over the best approach to speeding up the neural networks increasingly used in computing.

[Photo: A silicon chip relies on webs of transistors similar to the brain’s neural networks. Credit: I.B.M.]

The idea that neural networks might be useful in processing information occurred to engineers in the 1940s, before the invention of modern computers. Only recently, as computing has grown enormously in memory capacity and processing speed, have they proved to be powerful computing tools.

In recent years, companies including Google, Microsoft and Apple have turned to pattern recognition driven by neural networks to vastly improve the quality of services like speech recognition and photo classification.

But Yann LeCun, director of artificial intelligence research at Facebook and a pioneering expert in neural networks, said he was skeptical that IBM’s approach would ever outpace today’s fastest commercial processors.

“The chip appears to be very limited in many ways, and the performance is not what it seems,” Mr. LeCun wrote in an email sent to journalists. In particular, he criticized as inadequate the testing of the chip’s ability to detect moving pedestrians and cars.

“This particular task,” he wrote, “won’t impress anyone in computer vision or machine learning.” Mr. LeCun said that while special-purpose chips running neural networks might be useful for a range of applications, he remained skeptical about the design IBM has chosen.

Several neuroscience researchers and computer scientists disputed his critique.

“The TrueNorth chip is like the first transistor,” said Terrence J. Sejnowski, director of the Salk Institute’s Computational Neurobiology Laboratory. “It will take many generations before it can compete, but when it does, it will be a scalable architecture that can be delivered to cellphones, something that Yann’s G.P.U.s will never be able to do.”

G.P.U. refers to graphics processing unit, the type of chip being used today to deliver graphics and video to computer screens and for special processing tasks in supercomputers.

IBM’s research was funded by the Defense Advanced Research Projects Agency, a research arm of the Pentagon, under a program called Systems of Neuromorphic Adaptive Plastic Scalable Electronics, or SyNapse. According to Gill Pratt, the program manager, the agency is pursuing twin goals in its effort to design ultralow-power biological processors.

The first, Dr. Pratt said, is to automate some of the surveillance done by military drones. “We have lots of data and not enough people to look at them,” he said.

The second is to create a new kind of laboratory instrument to allow neuroscientists to quickly test new theories about how brains function.

Correction: August 7, 2014

Because of an editing error, an earlier version of this article misstated the day on which the report of a new computer chip was published. It was Thursday, not Wednesday.

Correction: August 11, 2014

An article on Friday about a new IBM computer chip that is said to mimic the way a human brain works omitted the last word in the name of a program known by the acronym SyNapse, which funded IBM’s research. It is Systems of Neuromorphic Adaptive Plastic Scalable Electronics.