
26 August 2023

AI's Next Frontier: Are Brain-Computer Interfaces The Future Of Communication?

Bernard Marr

The human brain is the most complex and powerful computer in the world - and, as far as we know, the universe.

Today’s most sophisticated artificial intelligence (AI) algorithms are only just beginning to offer a partial simulation of a very limited number of the brain’s functions. AI is, however, much faster when it comes to certain operations like mathematics and language.

This means it comes as no surprise that a great deal of thought and research has gone into combining the two. The idea is to use AI to better understand the workings of the brain and eventually create more accurate simulations of it. One day, it may also help us to create systems with the complexity and diversity of capabilities of the human brain combined with the speed and accuracy of digital computers.

Sounds like something straight out of science fiction? Well, of course it is. Movies like The Matrix as well as books including Ready Player One and Neuromancer have based fantastic stories around the concept of connecting human brains to computers.

But increasingly, it's also becoming a serious possibility in the real world. Companies such as Elon Musk's Neuralink and Paradromics, along with government-backed projects in the US and Europe, are testing the possibilities, and working real-world applications are said to be on the horizon.

So, here’s an overview of what’s been done so far in the mission to create the ultimate merger between humans and machines – and some ideas about where these breakthroughs might take us in the future.

Early History

Going back as far as the late 1960s, early attempts were made to control simple electrical devices such as lightbulbs using electrodes that could measure and react to signals, first from monkey brains and then from humans.

Some of the first experiments were carried out in an attempt to allow amputees to control synthetic limbs – which continues to be a focus of activity in brain-computer interfaces to this day. One widely cited early milestone came in 1988, when Farwell and Donchin demonstrated the P300 "speller," which allowed users to select letters on a screen using brain signals alone.

In the eighties, the neurons that controlled motor functions in rhesus macaque monkeys were identified and isolated, and during the late nineties, it became possible to reproduce images seen by cats by decoding the firing patterns of neurons in their brains.

Over the years, surgical methods evolved to the point where it became ethically sound to experiment with invasive methods of implanting sensors directly into the human brain, which allowed brain signals to be harnessed and interpreted with far greater accuracy and reliability.

This rapidly led to big advances in our understanding of how brain signals can be interpreted and used to control machinery or computers.

Today

Brain-computer interfaces have progressed a long way since then. Today, one of the best-known pioneers is Neuralink, founded by Elon Musk. It develops implantable brain-machine interface (BMI) devices, such as its N1 chip, which can interface directly with more than 1,000 electrode channels of brain activity. The company aims to enable people with paralysis to control machines and prosthetic limbs and so recover their mobility, and it is also studying applications of its technology in treatments for Alzheimer's and Parkinson's diseases.

Bitbrain has developed wearable brain-sensing devices that monitor EEG signals with the help of AI. They provide applications for carrying out medical brain scans, as well as a variety of laboratory tools that are used in research into human behavior, health and neuroscience.

Another company bringing products to market in this space is NextMind, recently acquired by Snap Inc, the parent company of Snapchat. It has developed a device that translates signals from the visual cortex into digital commands. As well as creating tools that allow computers to be controlled with brain signals, it hopes to create a device that can translate visual imagination into digital signals; in other words, whatever image you think of will be recreated on a computer screen.

In academia, boundaries are being pushed even further. For example, researchers working on BCI technology have used machine learning to extract features from frontal lobe EEG signals and classify mental states (such as a person's level of relaxation or stress) with a high degree of accuracy.
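To illustrate the general idea (not any particular study's method), here is a minimal sketch of that pipeline: extract spectral band-power features from EEG and classify them, here with a simple nearest-centroid rule. The synthetic data, band definitions, and classifier are all illustrative assumptions, a toy in which a "relaxed" state is dominated by alpha rhythms (8-12 Hz) and a "stressed" state by beta rhythms (13-30 Hz):

```python
import numpy as np

FS = 256          # sampling rate in Hz (assumed)
DUR = 2.0         # trial length in seconds
rng = np.random.default_rng(0)

def synth_trial(state):
    """Toy single-channel EEG trial: 'relaxed' carries a strong 10 Hz
    alpha rhythm, 'stressed' a strong 20 Hz beta rhythm, plus noise."""
    t = np.arange(0, DUR, 1 / FS)
    freq = 10.0 if state == "relaxed" else 20.0
    return np.sin(2 * np.pi * freq * t) + 0.5 * rng.standard_normal(t.size)

def band_power(x, lo, hi):
    """Mean spectral power of x in the [lo, hi] Hz band via the FFT."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / FS)
    mask = (freqs >= lo) & (freqs <= hi)
    return spec[mask].mean()

def features(x):
    # Log alpha (8-12 Hz) and beta (13-30 Hz) band power for one trial.
    return np.log([band_power(x, 8, 12), band_power(x, 13, 30)])

# "Train" by averaging the feature vectors of 20 labelled trials per state.
centroids = {s: np.mean([features(synth_trial(s)) for _ in range(20)], axis=0)
             for s in ("relaxed", "stressed")}

def classify(x):
    """Assign the trial to the nearest class centroid in feature space."""
    f = features(x)
    return min(centroids, key=lambda s: np.linalg.norm(f - centroids[s]))

# Evaluate on fresh, unseen trials.
trials = [("relaxed", synth_trial("relaxed")) for _ in range(10)] + \
         [("stressed", synth_trial("stressed")) for _ in range(10)]
acc = np.mean([classify(x) == label for label, x in trials])
print(f"accuracy: {acc:.2f}")
```

Real systems use multi-channel recordings, artifact removal, and far stronger classifiers, but the overall shape, raw signal to spectral features to classifier, is the same.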

And a diffusion-based neural network – the image generation model used by AI applications including DALL-E and Midjourney – has been used to reproduce images that people have seen based on their EEG activity, as well as music that someone has listened to.

Where Next?

Obviously, this is a very advanced technology that we are only just starting to get to grips with. Eventually, it may open up possibilities that seem completely fantastical now – such as being able to digitally "record" all of a person's life experiences, create a digital representation of any person or object simply by thinking about it, or even allow us to "mind control" another person (leaving aside for a moment the question of whether or not this would actually be a good thing).

In the nearer future, we can expect less invasive methods of capturing electrical brain activity, meaning that the technology will have a wider range of applications without users having to undergo implant surgery. This is likely to include advancements in the use of near-infrared spectroscopy, which detects changes in blood flow in the brain using light.

It will also become possible to more accurately understand the significance of particular EEG signals by isolating them from the brain’s accompanying background “noise” more effectively.

We can also expect to see the emergence of brain-to-brain interfaces – effectively allowing us to send and receive telepathic messages, thanks to an electronic "middleman" device that will record messages decoded from one person's EEG activity and transmit them directly to another person. This could even extend to control of other people's bodies - researchers at the University of Washington have demonstrated a method that lets one person control another person's hand movements using their own brain signals.

It's clear that this technology has the potential to be highly transformative in any number of fields, from making it easier for us to precision-control machines to restoring mobility for those who have lost it to creating new ways that we can communicate and share information.

Of course, there are huge ethical implications for all of this – we’ve completely skipped over the question of what it would mean for society if technology makes it possible for a person’s most personal and private thoughts to be decoded and effectively watched like a movie. How far back will it be possible to “rewind” these movies? We all know that it’s common for the human brain to suddenly recall information about people, locations or experiences from our distant past, even if we haven’t thought about something for a long time. Psychologists also tell us that the brain has the ability to block us from thinking about or remembering particular experiences or incidents if doing so would be traumatic or distressing. What will the evolution of this technology teach us about how memory works, and do we have an ethical responsibility to create safeguards to stop our extraction of information from having dangerous consequences?

These are questions that will undoubtedly have to be addressed before development progresses much further than it already has. At the same time, the field offers plenty of exciting potential and could have countless positive uses.
