Scientists are using artificial intelligence (AI) networks to enhance their understanding of one of the most elusive intelligence systems: the human brain. The researchers are learning much about the role of contextual cues in human image recognition. By using artificial neurons – essentially units implemented in software – within neural network models, they can parse out the various elements that go into recognising a specific place or object.
“The fundamental questions cognitive neuroscientists and computer scientists seek to answer are similar,” said Aude Oliva from the Massachusetts Institute of Technology (MIT) in the US. “They have a complex system made of components – for one, it is called neurons and for the other, it is called units – and we are doing experiments to try to determine what those components calculate,” said Oliva, who presented the research at the annual meeting of the Cognitive Neuroscience Society (CNS).
In one study of over 10 million images, Oliva and colleagues taught an artificial network to recognise 350 different places, such as a kitchen, bedroom, park, or living room. They expected the network to learn objects such as a bed associated with a bedroom. What they did not expect was that the network would also learn to recognise people and animals, for example dogs in parks and cats in living rooms.
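The contextual associations the network picked up can be illustrated with a toy sketch: objects detected in an image lend evidence to the scene categories they tend to co-occur with. The object–scene counts below are invented for illustration; in the actual study such associations were learned from over 10 million labelled images rather than hand-coded.

```python
# Hypothetical sketch: scoring scene categories from detected objects,
# illustrating the kind of contextual association the network learned
# (e.g. dogs co-occur with parks, beds with bedrooms).

# Invented object -> scene co-occurrence counts (not real data).
CO_OCCURRENCE = {
    "bed":   {"bedroom": 90, "living room": 5},
    "dog":   {"park": 70, "living room": 20},
    "cat":   {"living room": 60, "bedroom": 25},
    "bench": {"park": 80},
}

def score_scenes(detected_objects):
    """Sum co-occurrence evidence from each detected object."""
    scores = {}
    for obj in detected_objects:
        for scene, count in CO_OCCURRENCE.get(obj, {}).items():
            scores[scene] = scores.get(scene, 0) + count
    # Rank scenes by accumulated contextual evidence.
    return sorted(scores.items(), key=lambda kv: -kv[1])

if __name__ == "__main__":
    print(score_scenes(["dog", "bench"]))  # "park" ranks first
```

The point of the sketch is only the direction of inference: contextual objects (a dog, a bench) carry information about the place, which is exactly the signal the network exploited.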
The machine intelligence programmes learn very quickly when given lots of data, which is what enables them to pick up contextual cues at such a fine level, Oliva said. While it is not possible to dissect human neurons at that level, a computer model performing a similar task is entirely transparent.
“The artificial neural networks serve as mini-brains that can be studied, changed, evaluated, and compared against responses given by human neural networks, so the cognitive neuroscientists have some sort of sketch of how a real brain may function,” the researchers said. “Neural network models are brain-inspired models that are now state-of-the-art in many artificial intelligence applications, such as computer vision,” said Nikolaus Kriegeskorte of Columbia University in the US, who is chairing the CNS symposium.
Kriegeskorte said that these models have helped neuroscientists understand how people can recognise the objects around them in the blink of an eye. “This involves millions of signals emanating from the retina that sweep through a sequence of layers of neurons, extracting semantic information, for example that we are looking at a street scene with several people and a dog,” he said.
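The layered sweep Kriegeskorte describes can be sketched as a feedforward pass: each layer applies weighted sums and a nonlinearity to the previous layer's output, gradually transforming raw input values into more abstract ones. The weights and layer sizes below are invented toy values, a minimal sketch rather than a trained model.

```python
# Minimal feedforward pass: a sketch of signals sweeping through layers.
# All weights and sizes are invented toy values, not a trained network.

def relu(x):
    """Rectified-linear nonlinearity, loosely analogous to a firing threshold."""
    return x if x > 0 else 0.0

def layer(inputs, weights, biases):
    """One layer: weighted sum of inputs plus bias per unit, then ReLU."""
    return [relu(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(pixels, layers):
    """Sweep an input through a sequence of layers, as in a feedforward net."""
    activation = pixels
    for weights, biases in layers:
        activation = layer(activation, weights, biases)
    return activation

# Toy 2-layer network: 3 "pixel" inputs -> 2 hidden units -> 1 output unit.
toy_layers = [
    ([[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]], [0.0, 0.1]),
    ([[1.0, 1.0]], [-0.2]),
]
```

Real vision networks differ in scale, not in kind: millions of inputs from the retina (or an image) and many more layers, but the same layer-by-layer transformation.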
“Current neural network models can perform this kind of task using only computations that biological neurons can perform. Moreover, these neural network models can predict to some extent how a neuron deep in the brain will respond to any image,” said Kriegeskorte.
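The prediction Kriegeskorte mentions is often made with a simple encoding model: a regression from a network's internal activations to a neuron's measured responses. The sketch below fits a one-predictor least-squares regression on invented numbers; real studies use many predictors and actual recordings, so this is only the shape of the idea.

```python
# Sketch of an encoding model: predict a (hypothetical) neuron's measured
# response from one unit's activation in a network, via ordinary least squares.
# All numbers below are invented for illustration.

def fit_linear(xs, ys):
    """Closed-form least squares for y = a*x + b with a single predictor."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

# Invented data: one unit's activation for 5 images, and the neuron's responses.
activations = [0.0, 1.0, 2.0, 3.0, 4.0]
responses   = [0.1, 2.1, 3.9, 6.1, 7.8]  # roughly 2*x, with noise

a, b = fit_linear(activations, responses)
predicted = [a * x + b for x in activations]
```

If the fit generalises to held-out images, the network unit is said to predict the neuron's response "to some extent", which is the claim being tested.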