- University of Sheffield researchers say artificial intelligence systems are unlikely to gain human-like cognition unless they’re connected to the real world through robots and designed using principles from evolution
- Current AI systems, such as ChatGPT, mimic some of the processes in the human brain and use large datasets to solve difficult problems, but Sheffield researchers say this form of disembodied AI is unlikely to resemble the complexities of real brain processing no matter how big those datasets become
- Biological intelligence - such as that of the human brain - is achieved through a specific architecture that learns and improves using its connections to the real world, but this is rarely used in the design of AI
- Embodying AI in robots so they can interact with the world around them and evolve like the human brain does is the most likely way AI will develop human-like cognition
Connecting artificial intelligence systems to the real world through robots and designing them using principles from evolution is the most likely way AI will gain human-like cognition, according to research from the University of Sheffield.
In a paper published in Science Robotics, Professor Tony Prescott and Dr Stuart Wilson from the University’s Department of Computer Science say that, if they remain disembodied, AI systems are unlikely to resemble real brain processing no matter how large their neural networks or training datasets become.
Current AI systems, such as ChatGPT, use large neural networks to solve difficult problems, such as generating intelligible written text. These networks process data in a way that is inspired by the human brain and learn from their mistakes in order to improve and become more accurate.
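As a rough illustration of this learn-from-mistakes idea (a toy sketch of ours, not one of the models discussed in the study), the Python fragment below trains a tiny neural network by repeatedly comparing its predictions with target values and adjusting its weights to shrink the error. The dataset, network size and learning rate are arbitrary choices for the example; real systems such as ChatGPT use vastly larger networks and datasets, but the underlying error-driven loop is analogous.

```python
# Minimal sketch: a tiny two-layer network learns the XOR pattern by
# gradient descent, i.e. by reducing the error in its own predictions.
# All sizes and values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: inputs x and target outputs y (the XOR pattern).
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights for a 2-8-1 network.
w1 = rng.normal(0, 1, (2, 8))
w2 = rng.normal(0, 1, (8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(10000):
    # Forward pass: the network's current prediction.
    h = sigmoid(x @ w1)
    pred = sigmoid(h @ w2)

    # The "mistake": difference between prediction and target.
    error = pred - y

    # Backward pass: nudge the weights to reduce that error.
    grad_pred = error * pred * (1 - pred)
    grad_w2 = h.T @ grad_pred
    grad_h = grad_pred @ w2.T * h * (1 - h)
    grad_w1 = x.T @ grad_h

    w2 -= learning_rate * grad_w2
    w1 -= learning_rate * grad_w1

# Predictions should approach [0, 1, 1, 0] as the error shrinks.
print(np.round(sigmoid(sigmoid(x @ w1) @ w2), 2))
```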
Although these models have similarities to the human brain, the Sheffield researchers say there are also important differences that prevent them from achieving anything like biological intelligence.
Firstly, real brains are embodied in a physical system - the human body - that directly senses and acts in the world. Being embodied makes brain processes meaningful in a way that is not possible for disembodied AIs, which can learn to recognise and generate complex patterns in data but lack a direct connection to the physical world. Therefore such AIs have no understanding or awareness of the world around them.
Secondly, human brains are made up of multiple subsystems organised in a specific configuration - known as an architecture - that is similar in all vertebrate animals, from fish to humans, but is not found in AI.
The Sheffield study suggests that biological intelligence - such as that of the human brain - has developed because of this specific architecture and its connections to the real world, which it has used to overcome challenges, learn and improve throughout evolution. This interaction between evolution and development is rarely factored into the design of AI, according to the study.
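To give a flavour of what combining evolution with development might look like in code, here is a toy sketch of ours (not a method from the study): a population of "genomes" is evolved, but each one is scored only after it has also adapted during a simulated lifetime. The task, fitness function and parameters are all invented for illustration.

```python
# Toy sketch: evolution selects genomes, but fitness is measured after
# lifetime learning ("development"). Everything here is a made-up example.
import numpy as np

rng = np.random.default_rng(2)

TARGET = np.array([0.8, -0.3, 0.5])   # hypothetical "environment" to match

def lifetime_learning(genome, steps=20, lr=0.05):
    """Development: an agent starts from its genome and adapts over its lifetime."""
    weights = genome.copy()
    for _ in range(steps):
        weights -= lr * (weights - TARGET)   # simple error-driven learning
    return weights

def fitness(weights):
    return -np.sum((weights - TARGET) ** 2)  # higher is better

# Evolution: mutate and select genomes based on performance after learning.
population = [rng.normal(0, 1, 3) for _ in range(20)]
for generation in range(30):
    scored = sorted(population,
                    key=lambda g: fitness(lifetime_learning(g)),
                    reverse=True)
    parents = scored[:5]                         # keep the fittest genomes
    population = [p + rng.normal(0, 0.1, 3)      # mutated offspring
                  for p in parents for _ in range(4)]

best = max(population, key=lambda g: fitness(lifetime_learning(g)))
print("best genome after evolution plus learning:", np.round(best, 2))
```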
Professor Tony Prescott, Professor of Cognitive Robotics at the University of Sheffield and Director of Sheffield Robotics, said: “ChatGPT, and other large neural network models, are exciting developments in AI which show that really hard challenges like learning the structure of human language can be solved. However, these types of AI systems are unlikely to advance to the point where they can fully think like a human brain if they continue to be designed using the same methods.
“It is much more likely that AI systems will develop human-like cognition if they are built with architectures that learn and improve in similar ways to how the human brain does, using its connections to the real world. Robotics can provide AI systems with these connections - for example, via sensors such as cameras and microphones and actuators such as wheels and grippers. AI systems would then be able to sense the world around them and learn like the human brain.”
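As a loose illustration of such a sense-act loop, the following sketch (ours, not a system from the study) simulates an agent that can only improve its behaviour by acting in a simulated world and sensing the consequences. The robot class, sensor model and learning rule are all invented for the example and stand in for real hardware such as cameras, microphones, wheels and grippers.

```python
# Hypothetical sense-act loop: a simulated robot on a 1-D track learns to
# settle near a light source purely from the consequences of its own actions.
import random

class SimulatedRobot:
    """Stand-in for a physical robot; the light source sits at x = 10."""
    def __init__(self):
        self.position = 0.0
        self.light_source = 10.0

    def sense(self):
        # Sensor reading: brightness falls off with distance from the source.
        return 1.0 / (1.0 + abs(self.light_source - self.position))

    def act(self, velocity):
        # Actuator: move the (slightly noisy) wheels.
        self.position += velocity + random.gauss(0.0, 0.05)

robot = SimulatedRobot()
velocity = 0.5
for step in range(100):
    before = robot.sense()
    robot.act(velocity)
    after = robot.sense()
    # Embodied learning signal: did the action make the sensed world better or worse?
    if after < before:
        velocity = -velocity * 0.8  # the world "pushed back", so change behaviour

print(f"final position ≈ {robot.position:.2f} (light source at 10.0)")
```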
The Sheffield academics say some recent progress has been made in developing AIs for controlling robots. For example, a powerful approach is the use of recurrent neural network models - models composed of multiple feedback loops - that are trained to make better predictions about what might happen next.
These models are making important progress in making robots more adaptable. However, robot AIs are still a long way from resembling real brains in terms of capturing how different brain subsystems work together as part of a broader cognitive architecture, the study suggests.
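The sketch below is a rough illustration of the feedback-loop idea mentioned above (our simplification, not the architecture from the paper): a recurrent hidden state feeds back into itself, and a readout is trained from its own prediction errors to guess the next value of a simple sensory signal. Only the readout weights are trained here, an echo-state-style shortcut chosen to keep the example short; the signal, network size and learning rule are assumptions for illustration.

```python
# Minimal next-step prediction with a recurrent (feedback) network.
import numpy as np

rng = np.random.default_rng(1)

# A simple sensory stream: a sine wave sampled over time.
t = np.arange(0, 60, 0.1)
signal = np.sin(t)

n_hidden = 100
w_in = rng.normal(0, 0.5, (n_hidden, 1))           # input -> hidden
w_rec = rng.normal(0, 1.0, (n_hidden, n_hidden))   # hidden -> hidden (feedback loop)
w_rec *= 0.9 / np.max(np.abs(np.linalg.eigvals(w_rec)))  # keep the feedback stable
w_out = np.zeros((1, n_hidden))                    # hidden -> prediction (trained)

h = np.zeros((n_hidden, 1))
lr = 0.5
for step in range(len(signal) - 1):
    x = signal[step]
    target = signal[step + 1]               # "what might happen next"
    h = np.tanh(w_in * x + w_rec @ h)       # recurrent update: the state feeds back
    pred = (w_out @ h).item()
    error = pred - target                   # prediction error drives learning
    norm = (h * h).sum() + 1e-6
    w_out -= lr * (error / norm) * h.T      # normalised delta-rule update of the readout

# The error should shrink as the readout adapts to the recurrent dynamics.
print(f"prediction error on the last step ≈ {abs(error):.3f}")
```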
Dr Stuart Wilson, Senior Lecturer in Computational Neuroscience at the University of Sheffield, said: “Efforts to understand how real brains control bodies, by building artificial brains for robots, have led to exciting developments in robotics and neuroscience in recent decades. After reviewing some of these efforts, which have mainly focussed on how artificial brains can learn, we think the next breakthroughs in AI will come from mimicking more closely how real brains develop and evolve.”
The paper, Understanding brain functional architecture through robotics, is published in Science Robotics.