I just experienced a new AI mind reading tool — here’s what happened

MindPortal (Image credit: Future)

It isn’t every day you walk into the offices of a cutting-edge tech startup and the first words you hear from the CEO are “I came up with the idea while tripping on hallucinogenics”. MindPortal isn’t any normal startup and co-founder Ekram Alam isn’t your ordinary CEO. 

After dropping out of medical school, Ekram realized entrepreneurship was his future. With his friend and co-founder Jack Baber, he built their first product, a VR app that simulated the prehistoric world. Eventually, the pair drew on their shared medical background and founded MindPortal.

The goal of the company isn’t to create a physical mind-reading product with the help of artificial intelligence; rather, it is to create AI models that simulate and interact with the brain in different ways. They are close to releasing their first general-purpose thought-to-text model.

How does the technology work?

Video: Mind-Reading AI is Here, and We Tested It (YouTube)

During a demo at their unassuming office in London, I watched a researcher send a rehearsed sentence into their current brain AI model and then on to ChatGPT. “The premise of Mind Portal is we want to explore the nature of human-AI interaction,” Ekram declared.

Using a functional near-infrared spectroscopy (fNIRS) system, MindPortal runs a software classifier that measures optical brain data, seen through blood flow in the brain, and compares it against training data to link a thought to a phrase.

The system I saw requires pre-defined and trained phrases and is specifically linked to a single brain. Ed, the researcher demoing the model, had to spend hours thinking each phrase over and over again as part of the training process.

“When Ed imagines language that activates different parts of the brain, it's that activation that's being picked up in real-time,” explained Ekram. “The AI model will then be able to make a classification guess as to which of a handful of sentences Ed is thinking about.”
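
The article only describes the system at a high level, but the setup Ekram outlines (a per-user classifier that maps optical brain readings to one of a handful of trained phrases) can be sketched in a few lines of Python. Everything below, from the feature size to the choice of logistic regression and the synthetic data, is an assumption made purely for illustration, not MindPortal's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical setup: each training trial is an fNIRS recording already reduced
# to a fixed-length feature vector (e.g. average blood-flow change per channel).
PHRASES = ["how are you today", "what is the weather", "play some music"]
N_CHANNELS = 16          # assumed number of optical channels
TRIALS_PER_PHRASE = 50   # stand-in for the hours Ed spent thinking each phrase

rng = np.random.default_rng(0)

# Synthetic data: each phrase gets its own (noisy) activation pattern.
patterns = rng.normal(size=(len(PHRASES), N_CHANNELS))
X = np.vstack([
    patterns[i] + rng.normal(scale=2.0, size=(TRIALS_PER_PHRASE, N_CHANNELS))
    for i in range(len(PHRASES))
])
y = np.repeat(np.arange(len(PHRASES)), TRIALS_PER_PHRASE)

# Train a per-user classifier, then make a classification guess for a new recording.
clf = LogisticRegression(max_iter=1000).fit(X, y)
new_trial = patterns[0] + rng.normal(scale=2.0, size=N_CHANNELS)
guess = clf.predict(new_trial.reshape(1, -1))[0]
print("Model's guess:", PHRASES[guess])  # the guessed phrase could then be passed on to ChatGPT
```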

I tried it three times and it got the wrong sentence each time. However, this was expected: their success rate for MindGPT is 42%, a statistically significant figure that shows the model has real capability and is not just getting it right by chance. But it is still mostly wrong.
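
Whether a 42% hit rate is meaningful depends on the chance level and on how many test runs were made, neither of which is given here. The snippet below is only a worked illustration of that kind of check, assuming three candidate phrases (so roughly 33% by chance) and a made-up count of 200 trials.

```python
from math import comb

# Illustrative significance check for a classifier that is right 42% of the time.
# The phrase count (3, so chance of ~1/3) and trial count (200) are assumptions
# for the sake of the example; MindPortal's actual figures aren't given here.
n_trials = 200
n_correct = round(0.42 * n_trials)
chance = 1 / 3

# One-sided binomial test: probability of doing at least this well by pure guessing.
p_value = sum(
    comb(n_trials, k) * chance**k * (1 - chance) ** (n_trials - k)
    for k in range(n_correct, n_trials + 1)
)
print(f"{n_correct}/{n_trials} correct, p = {p_value:.4f}")  # a small p suggests better than chance
```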

A 5-year road to general-purpose MindGPT

MindPortal describes itself as a human-AI interaction company. It has been going for five years, focused on building cutting-edge AI models. Baber told me each new model was a fresh start, taking a different approach to mind-to-AI interactions.

During my tour of MindPortal’s small office, which resembles more of a lab than a traditional workspace, I was shown two generations of mind-reading models.

The first was more general and would work for anyone, apart from me: it required a VR headset, and because I’m blind in my left eye most VR systems don’t work for me. It looks for a specific brain signal common to all people and uses this to trigger an “on/off” gate.

Video: MindPortal venus test (YouTube)

They demonstrated a VR game where you could trigger a color in a grid just by thinking about it and then using eye-tracking to change position. This uses a simple EEG sensor mounted to the back of the VR headset. It allows you to click by concentrating instead of pinching your fingers.
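
MindPortal doesn’t say how the concentrate-to-click gate is detected. A common way to build that kind of gate from a single EEG channel is to track power in one frequency band and fire when it crosses a calibrated threshold; the band, sampling rate and threshold below are assumptions for illustration, not MindPortal’s method.

```python
import numpy as np

FS = 256             # assumed EEG sampling rate in Hz
BAND = (13.0, 30.0)  # beta band, often associated with focused concentration (assumption)

def band_power(window: np.ndarray, fs: int, band: tuple[float, float]) -> float:
    """Average spectral power of a 1-D EEG window inside the given frequency band."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    power = np.abs(np.fft.rfft(window)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(power[mask].mean())

def should_click(window: np.ndarray, threshold: float) -> bool:
    """Fire a 'mental click' when concentration-band power exceeds a calibrated threshold."""
    return band_power(window, FS, BAND) > threshold

# Usage: feed one-second windows from the headset sensor; eye tracking supplies the
# grid position, and this gate supplies the click.
fake_window = np.random.default_rng(1).normal(size=FS)
print(should_click(fake_window, threshold=50.0))
```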

I was then able to see Ed demonstrate MindGPT. This is partly a second-generation model and partly a completely fresh approach. As explained earlier, it uses a more complex optical brain sensor to detect the specific patterns of brain activity linked to pre-defined phrases.

The technology is currently at the mind-to-phrase stage, with a clear path to scaling it up in a way that could be widely useful for interacting with chatbots, or even for restoring a voice to people who have lost the ability to speak. But it does have limitations, including being restricted to set phrases.

What comes next?

Midjourney image showing a warehouse of volunteers having their brains scanned (Image credit: Midjourney/Future AI image)

Ekram says that to create something more than a statistically significant but largely inaccurate brain-reading model, they would first need to gather more brain data, and it can’t just be any data from an fMRI scan or sensor reading: it has to be structured data from people thinking about sentences.

He says that it would cost about $50 million and require a warehouse full of people wearing brain-sensing headsets to think about sentences for hours per day.

This would scale the current MindGPT model to cover far more sentences than the three sample phrases it handles today. Ekram says you could reach a point where there are enough phrases for it to be widely useful in most situations. The extra training data would also make the model general purpose: right now it has to be trained per user, but with enough data, anyone could use it.

The real solution, though, is a new type of model that is general purpose by default. They are working on something called MindSpeech, which would be thought-to-text rather than thought-to-phrase.

This is an AI model that will allow the user to continuously imagine language in their mind and have it continuously decoded into text. No specific or set phrases, just mind-to-text.
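
MindSpeech has not been demonstrated publicly, so the sketch below only captures the shape of the idea: rather than classifying whole recordings into fixed phrases, a decoder would run over a rolling stream of brain-data windows and emit text as it goes. The window format and the decoder itself are placeholders, not anything MindPortal has released.

```python
from typing import Callable, Iterable, Iterator

def stream_decode(
    brain_windows: Iterable[list[float]],
    decode_window: Callable[[list[float]], str],
) -> Iterator[str]:
    """Continuously turn rolling windows of brain data into text fragments.

    `decode_window` stands in for a MindSpeech-style model, which does not exist
    publicly; here it is just any function mapping one window of sensor readings
    to a snippet of text.
    """
    for window in brain_windows:
        fragment = decode_window(window)
        if fragment:  # the model may emit nothing for "silent" windows
            yield fragment

# Hypothetical usage: in a real system the windows would stream from the headset.
if __name__ == "__main__":
    fake_windows = [[0.1, 0.2], [0.0, 0.0], [0.3, 0.1]]
    fake_decoder = lambda w: "hello" if sum(w) > 0.25 else ""
    print(" ".join(stream_decode(fake_windows, fake_decoder)))
```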

According to Ekram, it is entirely possible that by the end of this decade we could have wearable devices that convert our thoughts into text and send them straight to the AI. He goes even further, predicting devices that will allow us to communicate mind-to-mind with AI’s help.

“I could wear a headgear, think of a sentence, such as, how are you today? That could be then sent through an AI model that takes the text and translates it into a voice. It then puts that into your ear using AirPods and you can respond with your mind,” he explained.
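
Taken literally, the scenario Ekram describes is a four-stage loop: decode the imagined sentence to text, pass it to an AI model, turn the reply into speech, and play it into the wearer’s ear. None of these stages exist as a MindPortal product today; every function in the sketch below is a hypothetical stub that only marks where a real component would slot in.

```python
def decode_thought_to_text(brain_data: bytes) -> str:
    """Stage 1 (hypothetical): a MindSpeech-style decoder, brain data in, text out."""
    raise NotImplementedError

def ask_ai(prompt: str) -> str:
    """Stage 2 (hypothetical): send the decoded text to a chatbot and get a reply."""
    raise NotImplementedError

def text_to_speech(reply: str) -> bytes:
    """Stage 3 (hypothetical): synthesize the reply as audio."""
    raise NotImplementedError

def play_in_ear(audio: bytes) -> None:
    """Stage 4 (hypothetical): stream the audio to earbuds such as AirPods."""
    raise NotImplementedError

def mind_conversation_turn(brain_data: bytes) -> None:
    """One turn of the hands-free, voice-free conversation Ekram describes."""
    prompt = decode_thought_to_text(brain_data)   # e.g. "how are you today?"
    reply = ask_ai(prompt)
    play_in_ear(text_to_speech(reply))
```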

Final thoughts

Midjourney AI image showing a man wearing a fictional MindPortal cap in a supermarket (Image credit: Midjourney/Future AI image)

This isn't a product they're going to be putting on the market tomorrow. They are a research company building a very early prototype. However, they do have plans for what they could achieve in the future and potential ways to get to that point — with enough money.

What MindPortal has done is give us a really interesting insight into what we might be using in the coming few years and how we might be interacting with AI and each other. With the next-generation, general-purpose mind model on the horizon, that might come even sooner.

I really hope it works because I do not want to be standing in the supermarket talking to myself when I'm just having a conversation with my AI. I’d rather just send it my thoughts instead.

Ryan Morrison
AI Editor
