New AI System Can Read Your Mind and Thoughts, and Convert Them to Sentences
Researchers at the University of California, San Francisco predict that such a system may one day help treat individuals with speech impairments.

A new artificial intelligence system developed by researchers has proved capable of reading thoughts in people's minds and converting them into sentences. Although the prototype AI system was developed under controlled conditions, the researchers behind the project believe that, with ample training of the machine learning algorithms, such a tool could help individuals with speech impairments and other physical constraints express themselves, which would be a landmark achievement in healthcare technology research. The project was undertaken by Dr. Joseph Makin and his team at the University of California, San Francisco.

To create the ML model, the researchers had four participants read aloud a set of 50 sentences while connected to brain-signal-reading electrodes, and tracked and recorded the participants' neural activity as they did so. To make this data usable, the algorithms broke the information down into strings of numbers representing specific words, which helped later stages of the system learn the order in which words occur in grammatically correct sentences. It took a while, but over time the algorithms improved their general contextual awareness of which sequences of words make sense and which do not.
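To give a rough idea of what "breaking sentences down into strings of numbers" can look like, here is a minimal Python sketch. It is not taken from the study itself; the sentences and variable names are illustrative assumptions, and the real system learned far richer representations directly from the neural recordings.

```python
# Minimal sketch (not the researchers' actual code) of turning sentences from a
# small, closed vocabulary into strings of numbers for a sequence-learning model.
# All names and example sentences here are illustrative assumptions.

sentences = [
    "those musicians harmonise marvellously",
    "a roll of wire lay near the wall",
]

# Assign an integer index to every distinct word in the training sentences.
vocab = {word: idx
         for idx, word in enumerate(sorted({w for s in sentences for w in s.split()}))}

def encode(sentence):
    """Convert a sentence into the string of numbers a sequence model would consume."""
    return [vocab[word] for word in sentence.split()]

for s in sentences:
    print(s, "->", encode(s))
```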

Along the way, the system made quite a few rather amusing mistakes. Among them was a sentence that read, "Those musicians harmonise marvellously", which was misinterpreted as "The spinach was a famous singer". A second sentence, which actually read "A roll of wire lay near the wall", was interpreted as "Will robin wear a yellow lily". As The Guardian reports, these examples show that such a brain-to-speech system has quite some way to go before it can be deployed professionally, but the mistakes notwithstanding, the results show immense promise for professional applications.

In order to create a prototype of a futuristic, implementable technology, the researchers also fed the system the actual audio recordings of the sentences in order to create a proper, brain-mapped model. This would help future systems like this one emulate the brain more closely: while a sentence is being spoken, the sequence of words is also contextually easier to understand from its auditory properties, and the researchers used this to help the system grasp context better. The end result was a mere 3 percent error rate for one of the participants, which even beats the 5 percent error margin that professional human translators and transcribers exhibit.
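The 3 and 5 percent figures refer to the kind of word-level error rate commonly used to score speech and brain-to-text decoders. The short Python sketch below illustrates how such a word error rate is typically computed; it is an illustration of the standard metric, not code from the study, and the example sentences are taken from the mistakes quoted above.

```python
# Illustrative sketch of the word error rate (WER) metric commonly used to score
# speech and brain-to-text decoders; the 3% and 5% figures above are of this kind.
# This describes the standard metric, not the study's own evaluation code.

def word_error_rate(reference, hypothesis):
    """Word-level edit distance (substitutions + insertions + deletions)
    divided by the number of words in the reference sentence."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Prints 1.0: no aligned word in the decoded sentence matches the original.
print(word_error_rate("a roll of wire lay near the wall",
                      "will robin wear a yellow lily"))
```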

Reports on the achievement have experts hailing the system, noting that the data set and training time the algorithm required were quite small compared with similar scientific tools. In time, it is this aspect that might make such a tool highly proficient in professional applications. For now, the research has been published in the journal Nature Neuroscience.
