Growing human brain cells onto silicon chips to transform machine learning
Jul. 23, 2023.
A team led by the Monash University Turner Institute for Brain and Mental Health has been awarded almost $600,000 (AUD) by the Australian National Intelligence and Security Discovery Research Grants Program for research on growing human brain cells onto silicon chips. The research goal is to “develop new continual learning capabilities to transform machine learning,” according to an announcement.
The new research program, led by Associate Professor Adeel Razi from the Turner Institute for Brain and Mental Health in collaboration with Melbourne start-up Cortical Labs, involves growing about 800,000 brain cells living in a dish (the “DishBrain system”).
These cells will be “taught” to perform goal-directed tasks and to help researchers understand the biological mechanisms that underlie lifelong continual learning.
Merging AI and synthetic biology
According to Razi, the research program’s work (using lab-grown brain cells embedded onto silicon chips) “merges the fields of artificial intelligence and synthetic biology to create programmable biological computing platforms.”
He added that “this new technology capability in future may eventually surpass the performance of existing, purely silicon-based hardware … with significant implications across multiple fields such as planning, robotics, advanced automation, brain-machine interfaces, and drug discovery, giving Australia a significant strategic advantage.”
This “continual lifelong learning” means machines can acquire new skills without compromising old ones, adapt to changes, and apply previously learned knowledge to new tasks, all while conserving limited resources such as computing power, memory and energy. Current AI cannot do this and suffers from “catastrophic forgetting,” Razi said (not to mention “hallucinating”).
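Catastrophic forgetting is easy to demonstrate in miniature. The toy sketch below (not from the research program, purely illustrative) trains a one-parameter linear model by gradient descent on task A, then sequentially on task B; retraining on B overwrites the weight and the model's task-A error explodes.

```python
# Toy illustration of catastrophic forgetting: a single-weight linear
# model y = w * x, trained sequentially on two tasks with plain
# gradient descent. Task B training overwrites what task A taught.

def loss(w, data):
    # mean squared error of y = w * x on (x, y) pairs
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(w, data, steps=200, lr=0.1):
    # full-batch gradient descent on the mean squared error
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

task_a = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]   # target: y = 2x
task_b = [(x, -1.0 * x) for x in (1.0, 2.0, 3.0)]  # target: y = -x

w = 0.0
w = train(w, task_a)
loss_a_before = loss(w, task_a)   # near zero: task A is learned

w = train(w, task_b)              # sequential training on task B...
loss_a_after = loss(w, task_a)    # ...destroys performance on task A

print(f"task-A loss before task B: {loss_a_before:.4f}")
print(f"task-A loss after  task B: {loss_a_after:.4f}")
```

A continual-learning system would need to learn task B while keeping the task-A loss low; this single shared weight cannot hold both, which is the essence of the problem Razi's program targets.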
The goal is to “develop better AI machines that replicate the learning capacity of these biological neural networks. This will help us scale up the hardware and methods capacity to the point where they become a viable replacement for in silico computing,” Razi said.
A dystopian vision?
“Just like AGI in general, a BCI [brain-computer interface] can be a boon or a harmful disaster, depending on how it is done,” said Dr. Paul Werbos in an email. Werbos is best known for his 1974 dissertation, which first described training artificial neural networks through backpropagation of errors, and for serving as a program director at the National Science Foundation.
“Bio connections could create huge leapfrogs in neural network hardware, but the brute-force approach is not likely to be anywhere near as powerful as the best current electronic/photonic advances now in process, or as bio-inspired projects better grounded in new information about how brains actually work.
“This project reminds me a lot of Stapledon’s dystopian vision in Last and First Men. Those approaches in the BCI slides are so ill-grounded that they do pose threats to our very existence. But in most cases, it just wastes money that could have been better spent elsewhere.
“In my view, the NSF/EFRI COPN activity we funded described how to do it right, to benefit both science and technology. If Australia had learned from what we learned, after huge effort and wide-ranging planning conversations, they would be much safer, less wasteful, and far more useful.”