Neural Echoes: The Dawn of Biocomputing in Speech Recognition

A recent article from New Scientist discusses a groundbreaking experiment in which researchers at Indiana University Bloomington developed a biocomputing system using living human brain cells that could perform simple speech recognition tasks. These clusters of brain cells, known as brain organoids, are grown from stem cells into small structures that mimic the neuronal networks of the human brain. The organoids were trained over two days to recognize an individual's voice from audio clips of people pronouncing Japanese vowel sounds. Initially, recognition accuracy was between 30 and 40 percent, but it improved to 70 to 80 percent with training, showcasing what is known as adaptive learning.
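The article does not detail the training protocol, but the task itself is a standard speaker-identification setup: short vowel clips are mapped to feature vectors, and a model learns to predict which speaker produced each clip, with accuracy measured on held-out clips. As a rough point of comparison only, here is a minimal sketch of how a conventional silicon-based baseline for this task format might look; the synthetic features, speaker count, and logistic-regression classifier are illustrative assumptions, not the researchers' organoid method.

```python
# Minimal sketch of a conventional speaker-identification baseline for the
# same task format (assumed setup, not the organoid system from the article).
# The synthetic "features" stand in for descriptors extracted from vowel clips.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical data: 8 speakers, 40 clips each, 20-dimensional feature vectors.
n_speakers, clips_per_speaker, n_features = 8, 40, 20
X = np.vstack([
    rng.normal(loc=speaker, scale=2.0, size=(clips_per_speaker, n_features))
    for speaker in range(n_speakers)
])
y = np.repeat(np.arange(n_speakers), clips_per_speaker)

# Hold out a quarter of the clips to measure recognition accuracy after training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print(f"speaker-ID accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```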

Despite the success, the researchers acknowledge the limitations of this proof-of-concept. The current application is very basic, merely identifying the speaker without understanding the content of the speech. Additionally, the brain organoids can only be maintained for one to two months, posing a challenge for long-term use. However, the low energy consumption of this biocomputing system compared with traditional silicon-based AI systems is a significant advantage, potentially leading to more sustainable and efficient AI technologies in the future.

Source: AI made from living human brain cells performs speech recognition
