Reading Your Mind: What If the Tyrants’ Dream Came True?

novaMAG : Futurology

When people worry about the potential dangers of technology, AI tends to be the go-to concern. Which makes perfect sense, given that it's already in some very wrong hands and almost entirely out of control. Robotics is another, and it has made spectacular strides in recent years. But sorry to add to the gloom: we think something even worse is lurking in technologies currently under development, technologies whose goal is to read your thoughts. All of your thoughts, including the most private ones.

Please don't take this lightly, because the question is no longer whether it works, but how long before it does. Personally, I don't know if we can still stop this madness, because we don't carry much weight against the billions of dollars being poured into neural decoding. But we can at least start by getting informed about what this technology actually involves, so we can push effectively for it to be, at the bare minimum, tightly regulated. So let's take a quick tour of the landscape. You'll see that it's literally mind-bending.

Understanding neural decoding through brain-computer interfaces

Your brain never stops working. Every thought, every emotion, every intention produces measurable electrical activity. We call these brain waves. Building on that principle, BCI (Brain-Computer Interface) technology captures these signals and translates them into usable data. And that’s where AI comes in. Because without machine learning algorithms capable of processing billions of micro-signals in real time, those waves are nothing but incomprehensible noise. It’s the combination of increasingly sensitive sensors and increasingly powerful AI that will push this technology from the medical field into the surveillance domain.
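To make that pipeline concrete, here is a deliberately toy sketch of the idea, not any real system's code: a simulated one-channel "brain wave," classic frequency-band features (alpha and beta power), and the simplest possible decoder. Every name, sampling rate, and threshold here is invented for illustration.

```python
import math
import random

FS = 256  # sampling rate in Hz (hypothetical)

def band_power(signal, fs, low, high):
    """Average power of `signal` in the [low, high) Hz band, via a direct DFT."""
    n = len(signal)
    total, count = 0.0, 0
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if low <= freq < high:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            total += re * re + im * im
            count += 1
    return total / count

def features(signal):
    # Classic EEG features: power in the alpha (8-13 Hz) and beta (13-30 Hz) bands.
    return (band_power(signal, FS, 8, 13), band_power(signal, FS, 13, 30))

# Simulate one second of two "mental states": alpha-dominated vs beta-dominated.
random.seed(0)
t = [i / FS for i in range(FS)]
relaxed = [math.sin(2 * math.pi * 10 * ti) + 0.1 * random.gauss(0, 1) for ti in t]
focused = [math.sin(2 * math.pi * 20 * ti) + 0.1 * random.gauss(0, 1) for ti in t]

# Nearest-centroid "decoder": the simplest stand-in for the machine-learning
# models the article refers to.
centroids = {"relaxed": features(relaxed), "focused": features(focused)}

def decode(signal):
    f = features(signal)
    return min(centroids, key=lambda k: math.dist(f, centroids[k]))

probe = [math.sin(2 * math.pi * 10 * ti) + 0.1 * random.gauss(0, 1) for ti in t]
print(decode(probe))  # → "relaxed" (the probe is alpha-dominated)
```

Real decoders work on dozens of channels, millions of training examples, and deep networks rather than two band powers and a distance comparison. But the structure is the same: raw electrical noise becomes "usable data" only once an algorithm has learned which patterns correlate with which states.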

Brain implants: when precision requires surgery

The principle relies on electrodes placed directly on or inside the cortex. This delivers a clean signal and maximum precision. It’s the approach chosen by Neuralink and BrainGate, and the direction US military research is heading through DARPA. For now it’s confined to extreme medical cases, but the patents being filed go well beyond medicine. Far beyond! The upside for anyone wanting to spy on your brain is that the signal is sharp, interpretable, and actionable. The major downside, though, is that it requires surgery. Which isn’t exactly practical for monitoring an entire population, unless electrode implantation becomes mandatory for part of the population, or even all of it.

Electroencephalography: the holy grail of techno-fascists

This time, no surgery at all. Just a headset or a headband pressing electrodes against the skull. That's enough for EEG to capture the brain's electrical activity right through the cranium. It's currently far less precise than an implant, but it's improving fast. And above all, it can be deployed very quickly at a very large scale. Worth noting: brain fingerprinting is already being used in judicial proceedings in India. And in 2023, researchers at the University of Texas at Austin successfully reconstructed complete sentences from non-invasive brain recordings (fMRI in that study, rather than EEG). These are not speculations. They are peer-reviewed scientific findings with concrete applications happening right now.

Who is pushing brain-machine interfaces, and with whose money?

The global brain-computer interface market has crossed an impressive threshold. Over two years, venture capital investment in the sector multiplied by 3.5, going from $662 million in 2022 to $2.3 billion in 2024. Five of the world’s largest fortunes have committed personal funds. Unsurprisingly, the CIA’s financial arm is also in the mix. And since September 2025, Meta has already been selling the first neural wristband to the general public.

This is the first concrete move by a Big Tech company. Behind this acceleration lies a network of official actors, tech giants operating in the shadows, Gulf sovereign wealth funds, military agencies, and ideologically motivated billionaires. Here is who is pushing this technology, how, and with what financial firepower:

Neuralink dominates the sector by capitalization and visibility. Founded in 2016 by Elon Musk, the company has raised around $1.3 billion across six funding rounds, reaching a valuation of $9 billion in June 2025. Musk personally injected $100 million at the initial funding stage. The latest round, closed in May 2025 at $650 million, attracted ARK Invest, Sequoia Capital, Peter Thiel’s Founders Fund, as well as G42, which is linked to the Abu Dhabi sovereign wealth fund, and the Qatar Investment Authority. By late 2025, twelve patients in four countries carry the N1 Link implant. The first among them, Noland Arbaugh, who is quadriplegic, uses his interface for around ten hours a day to browse the internet and play chess.

Synchron holds the second strategic position, with $345 million raised and a valuation close to one billion. Its approach differs from Neuralink’s: the Stentrode is deployed through the jugular vein without opening the skull. This less invasive profile appeals to a particularly telling range of investors. Among them: Jeff Bezos via Bezos Expeditions, Bill Gates via Gates Frontier, Khosla Ventures, the Qatar Investment Authority, and most notably In-Q-Tel, the CIA’s investment arm, which confirmed its $200 million participation in November 2025. Synchron now collaborates with Apple to enable iPhone and iPad control by thought, and with Nvidia to reduce interface latency through artificial intelligence.

But there are other equally troubling players rounding out this rapidly expanding ecosystem.

Blackrock Neurotech, manufacturer of the Utah Array implanted in more than 40 patients since 2004, was acquired by Tether, which took a majority stake for $200 million in April 2024. Peter Thiel had invested in it as early as 2021.

Precision Neuroscience, founded by Benjamin Rapoport, who had co-founded Neuralink before leaving over safety concerns, has raised between $155 million and $180 million, including a $102 million round in December 2024 with participation from Stanley Druckenmiller's family office. Its non-penetrating Layer 7 electrode received FDA clearance in March 2025.

Paradromics, based in Austin, performed its first human implant in May 2025 after receiving $18 million in direct DARPA funding, and has just received an investment from NEOM, Saudi Arabia's futuristic megacity, which wants to establish a BCI center of excellence there.

Kernel, founded by Bryan Johnson with a personal commitment of $100 million from the sale of Braintree to PayPal, has pivoted to non-invasive approaches but has remained relatively quiet since 2020.

Merge Labs, co-founded in 2025 by Sam Altman with OpenAI as the main investor, has closed a $250 million funding round to develop a non-implantable ultrasound interface.

The big tech companies, for their part, are advancing under the radar. Their strategy looks more like silent infiltration than official announcements. Because patents, discreet acquisitions, and strategic partnerships reveal a far deeper footprint than press releases would suggest.

Currently, Meta is the most advanced and the only one to have crossed the commercial threshold. In 2019, Facebook acquired CTRL-labs for an estimated $500 million to $1 billion. That startup had previously attracted Google Ventures, Amazon’s Alexa Fund, and Founders Fund. Six years later, on September 30, 2025, Meta launched the Neural Band, a wristband with 16 surface electromyography electrodes. This Trojan horse is sold for $799 alongside Meta’s Ray-Ban glasses. The system detects neuromuscular signals to control interfaces without physical contact. To get there, it was trained on data from nearly 200,000 participants. A July 2025 article in Nature documents its scientific foundations.

Apple is playing the patents and accessibility angle with calculated discretion. A first patent filed in January 2023 describes AirPods-style earbuds equipped with EEG, EMG, and EOG electrodes capable of capturing brain activity with machine-learning-driven dynamic sensor selection. A second patent from March 2024 integrates neural sensors into the headband of a future Apple Vision Pro. In November 2025, Apple published a study on self-supervised learning for EEG from ear-mounted sensor data. And in May 2025, the Apple-Synchron collaboration extended the Switch Control accessibility framework to enable control of iPhone, iPad, and Vision Pro directly by thought.

Google Ventures participated in Neuralink’s Series C in 2021 and had invested in CTRL-labs before its acquisition by Meta. DeepMind, co-founded by neuroscientist Demis Hassabis, maintains a computational neuroscience research program that lays the groundwork for future BCI applications. Microsoft holds a 2018 patent describing application control via an EEG headband integrated into HoloLens, and its research group is actively pursuing work on interactive BCIs.

Samsung is developing, in partnership with Hanyang University, an in-ear EEG device reaching 92.86% accuracy in identifying video preferences, with a direct application in neuromarketing. And Gabe Newell, founder of Valve, co-founded with OpenBCI the Galea headset, integrating EEG, EMG, and eye-tracking into a VR headset strap, publicly declaring that the real world will eventually seem flat and dull compared to the experiences that will be created directly inside people’s brains.

DARPA is the historical architect of this entire industry. Oddly, that fact is rarely highlighted in mainstream media coverage. Yet practically every major advance in the BCI sector traces back to US military funding. The agency has launched at least 40 neurotechnology programs over 24 years, with budgets ranging between $50 and $100 million each, for a cumulative investment exceeding one billion dollars since the 2000s. Half of all American invasive BCI companies have direct or indirect roots in DARPA programs.

The N3 program, launched in 2018 with a budget of $104 to $125 million, aims to develop non-surgical bidirectional interfaces for able-bodied soldiers, not patients. Its stated goal is to let a soldier control drone swarms and cyberdefense systems by thought. The NESD program, funded at $65 million, directly supported Paradromics and Brown University in developing interfaces capable of interacting with one million neurons simultaneously. SUBNETS, with $70 million over five years, produced the first prototypes of closed-loop brain implants for treating post-traumatic stress and depression in veterans. And the Silent Talk program, conducted jointly with the Army, explicitly targeted telepathic communication between soldiers through analysis of pre-vocal neural signals.

China has built a national strategy backed by civil-military fusion and makes no secret of it. The China Brain Project, launched in 2016, is funded at roughly one billion dollars through 2030 and explicitly targets the development of brain-machine intelligence technologies. In December 2021, the US Department of Commerce sanctioned the Academy of Military Medical Sciences of the People’s Liberation Army and eleven affiliated institutes for using biotechnology methods for military purposes, including alleged brain control weapons. Chinese military doctrine now conceptualizes cognitive dominance as a new warfare domain. On the industrial side, BrainCo raised $287 million in January 2025, the largest BCI funding round ever completed outside the United States. A government fund of 11.6 billion yuan for neurosciences was announced in late 2025 in Shenzhen. China now has around 170 active companies in the BCI sector.

Israel, for its part, publicly confirmed in February 2026 that its Defense Ministry’s neurotechnology division is developing interfaces allowing a single operator to control multiple drones via neural signals, in direct response to the drone swarm doctrine of Iranian proxies.

Russia, as usual, remains more opaque. But it’s not sitting this one out: the Balalaika program is developing a multimodal neural interface. Kommersant reported in 2021 that Putin personally approved a research program on brain chips. That has been neither confirmed nor formally denied since.

The financial map ultimately reveals connections that official announcements deliberately keep out of the spotlight. The Qatar Investment Authority stands out as the most active sovereign fund, having invested in both Neuralink and Synchron in their respective latest rounds. G42, a UAE entity backed by Mubadala, participated in that same Neuralink round. Saudi Arabia is involved through NEOM in Paradromics. And Tether, the issuer of the USDT stablecoin, now holds majority control of Blackrock Neurotech after injecting $200 million, placing a cryptocurrency company with a well-documented lack of transparency at the heart of the global neurotechnology industry.

In-Q-Tel, the CIA’s investment vehicle working with fifteen agencies including the FBI and NSA, confirmed its position in Synchron in late 2025. This is the first documented direct investment by In-Q-Tel in a BCI company. And that signal is anything but trivial, because it indicates that the US intelligence community now considers brain-computer interfaces an area of operational interest, well beyond the already well-established military research framework.

The only country to have enshrined neuro-rights in its constitution is Chile, since 2021. Meanwhile, UNESCO is sounding the alarm. The International Committee of the Red Cross published an analysis in August 2025 questioning whether military BCIs comply with international humanitarian law, raising the risk that combatants could be reduced to components of weapons systems. But money flows faster than regulations can keep up. In a single year, $2.3 billion in private investment flooded the sector. The market, currently estimated at between $2 and $3 billion, is projected to reach $9 to $14 billion by 2033. But Morgan Stanley values the potential commercial market at $400 billion in the United States alone if all medical and consumer applications materialize.

Once again, the question is no longer whether brain-computer interfaces will enter our lives. It’s who will control them, for what purpose, and whether we’ll still have a say when that moment arrives. Nothing is less certain!

How are they selling us the decryption of our own brains?

It’s always the same story. When a technology risks triggering rejection, the marketing machine kicks in to make it easier to swallow. For brain-machine interfaces, the people driving this agenda claim it’s for the benefit of people with disabilities. Very clever! Because unless you’re a genuinely horrible human being, who can be against the idea of a quadriplegic person regaining some autonomy? So we’d love to believe it, but the problem is that the people pouring billions into this technology are nowhere near being great philanthropists. Which means they have no reason to bet huge sums on a niche market.

That was the first phase of their plan. The second consists of gradually getting the population used to actually wearing technology on their bodies. That means smartwatches and, above all, connected glasses. On that note, if someone has the misfortune of coming to talk to me wearing glasses like that, they'd better be ready to run, because I'm really not prepared to put up with it! I genuinely don't understand how these completely privacy-invasive gadgets ever got commercial authorization in the first place.

But that’s still not the final phase. The third step, already underway, consists of making the technology disappear into objects that no longer feel unnatural. Like connected wristbands sold as sports accessories or health trackers, which are the most widespread example. And now brain wave measurement headbands are starting down the same path, dressed up as meditation aids or cognitive performance optimizers. Add to that the earbuds millions of people wear for hours every day, and “smart” clothing. Tomorrow it’ll be prescription glasses with connectivity built in that will be nearly indistinguishable from regular glasses! At that point, the question of consent becomes genuinely thorny, because the line between an everyday object and a passive neural data collection device disappears completely. And of course, nobody signs a form before putting in their AirPods. That’s precisely the ultimate goal.

Brain-machine interfaces represent an unprecedented danger for all of humanity

Just imagine the damage when this technology is fully operational. In dictatorships like those run by deranged figures such as Putin or Kim Jong-un, it will be an absolute massacre! With that kind of completely paranoid individual in power, at the slightest suspicion of disloyalty all they'll need to do is strap a headband on their subordinates to find out what they really think. And in a theocracy, plenty of people would risk being fitted with a headband to verify whether they truly believe in God. If the answer is no, they'd end up in re-education, or even executed for apostasy.

As for big tech, they’ve been sparing no effort for years to crack every secret of our private lives. With access to neural data, we’d enter a completely uncharted dimension. It wouldn’t just be your clicks, your purchases, or your movements they’d analyze, but your raw emotions, your intentions before you’ve even articulated them, your unconscious reactions to an ad, a political candidate, or a piece of news. Cambridge Analytica, whose scandal provoked a worldwide outcry, was just a rough draft compared to what even partial access to your brain signals would make possible. And unlike a password or a credit card number, you can’t change your brain once it’s been compromised.

In the workplace, the potential abuses are equally staggering. Employers could require workers to wear neural measurement devices to evaluate their concentration, loyalty, or stress levels. Some insurance companies could tie their rates to neural profiles. Border control systems could integrate brain wave reading as a threat detection tool, alongside facial recognition already deployed in dozens of countries. And in the democracies that are cracking, the ones where the rule of law is receding at accelerating speed, the parties in power would have in their hands the most formidable political control tool ever conceived by human beings.

In the end, the question that needs to be asked is very simple, even if the answer is far less so. Have all the risks really been accounted for? Have governments, regulators, and international institutions fully grasped what is being built right under their noses? The answer is no! And there’s another question that deserves to be asked straight out, one that the major AI ethics conferences carefully sidestep. Is it a coincidence that the main backers of this technology are either tech multinationals whose entire business model depends on exploiting personal data, or sovereign funds tied to Gulf theocracies where fundamental rights stop at the palace gates, or authoritarian regimes that have made mass surveillance a governing instrument? Is it really a coincidence that those putting billions on the table are precisely those who stand to gain the most from knowing what people actually think, before they even open their mouths? The answer belongs to each of us. But the history of surveillance technologies has taught us one thing with absolute consistency: they never stay in the hands of those who claim to control them, and they never serve only the purposes announced at the outset.

At what point does a thought become a crime?

There’s a question nobody is really asking yet. Yet everyone should be asking it right now, before it’s too late. At what point does a thought become a crime? The question might seem philosophical, almost abstract. But it’s not at all. Because as soon as a technology is capable of reading brain signals and interpreting their emotional or potentially intentional content, the temptation to use it as a judicial tool will be irresistible for some. And that temptation already exists, because legal scholars, criminologists, and law enforcement officials are already publicly rubbing their hands at the idea that brain-computer interfaces could revolutionize the justice system. But revolutionize in which direction, exactly?

Let’s take an example everyone will understand because everyone is affected. Sexual fantasies. Without exception, absolutely everyone has them. And the vast majority of people never fully act on them. What happens inside a human being’s head is not an action plan. It’s a mental space where imagination, desire, fear, anger, and transgression coexist without ever necessarily crossing the threshold into action. Starting from that principle, does a sexual thought about someone who hasn’t consented to being the object of that thought constitute assault? If the answer is no, and it is, in every legal system that still operates on the basis of acts rather than intentions, then why would that same thought, once captured by a neural device, suddenly become admissible evidence in a court of law?

And if you push the reasoning into the private sphere, you very quickly arrive at situations that would be funny if they weren’t so revealing. Picture the scene: “Honey, did you cheat on me? Put on your interface, I want to know!” And then, disaster strikes. The person didn’t cheat, but it turns out they fantasized about colleagues, neighbors, strangers they passed on the street. Which is exactly what every human being has done since the species came into existence. The verdict: “I’m filing for divorce!” The court of thoughts will be open to all, and hearings can begin. You can still laugh about it for now, but this grotesque scenario rests on exactly the same logic that some want to apply in real courtrooms. The confusion between thought and act, between desire and its fulfillment, between what crosses a brain and what a person actually chooses to do.

The same reasoning applies to far more mundane situations. Who hasn’t thought on a Monday morning, stuck in traffic or facing an unbearable boss, that they wanted to strangle them? That thought crosses millions of brains every day. Maybe billions. But it translates into action only in a very small minority of cases, generally involving a very specific clinical context. So if a neural device captures that signal at the wrong moment, in a judicial or security setting, what does it become? Evidence? A danger indicator? Grounds for heightened surveillance?

The fundamental problem is that the human brain doesn't operate like a courtroom. It produces contradictory thoughts, fleeting impulses, and imaginary scenarios that consciousness processes, filters, and moves past continuously. Reading a thought at a single point in time, without access to the full mental process before and after it, is like judging a novel based on one sentence ripped from its context. The margin for judicial error would be colossal. And the margin for manipulation would be just as large. Because a neural signal can be misread, because the algorithms that decode such signals are trained on biased data, and because the history of forensic medicine is littered with scientific certainties that turned out to be wrong decades later.

We already have a preview of what this looks like in practice. In India, the BEOS system has been used for years in hundreds of police investigations, and a murder conviction was handed down in 2008 based partly on its results. The Indian Supreme Court has set some limits, but the system continues to be exported to around a dozen countries. It’s the full-scale laboratory of what awaits us at a larger scale if no one imposes serious safeguards.

Because that’s where the most immediate danger lies. Not in the science fiction of a totalitarian regime reading the thoughts of an entire population in real time, which remains technically out of reach for now. But in the progressive normalization of judicial and security uses of partial, imprecise, and highly manipulable neural data that will be presented to juries and judges as objective scientific evidence. And since the word “scientific” has a remarkable ability to shut down debates before they even start, that’s exactly the capacity those pushing neurotechnologies into courtrooms are counting on.

Since when are big tech companies required to tell you their agenda?

There’s a playbook that major tech companies have mastered, because they already used it to build artificial intelligence as we know it today. It consists of having others bear the heaviest costs and the highest risks while capturing the bulk of the value produced. And for brain-computer interfaces, that playbook is already in motion!

The visible part is what gets covered in the media. Quadriplegics, people with ALS or other severe conditions, who agree to have their skulls opened to receive an experimental implant. These patients are willing to do anything. And we understand them completely, because the technology offers them the prospect of regaining a form of autonomy they thought they had lost forever. Nobody questions their courage or their choice. But things need to be called what they are: these implantations allow the companies funding them to obtain the most precise neural data in existence, the kind that only electrodes placed directly on the cortex can produce.

Every implanted patient is also, whether they know it or not, an involuntary contributor to the construction of a neural codex. More plainly, a kind of reference dictionary that maps specific brain signals to intentions, emotions, and motor commands. The richer and more precise that codex becomes, the more powerful the decoding algorithms trained on it become. And the more powerful they become, the less invasive implants will be needed to produce actionable results with non-invasive consumer devices.
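The "codex" idea can be illustrated with a toy sketch, under entirely invented assumptions: annotated (signal-features, intention) pairs accumulate into a reference dictionary, and a trivial nearest-neighbor lookup decodes new signals against it. The point is the mechanism, not the scale: the more labeled examples the dictionary holds, the better the lookup works.

```python
import math
import random

random.seed(1)

# Hypothetical "neural codex": a list of (feature_vector, intention) pairs.
# Purely illustrative; real systems learn a model, not a lookup table.
codex = []

def record(feature_vec, intention):
    """Add one annotated example to the reference dictionary."""
    codex.append((feature_vec, intention))

def decode(feature_vec):
    """Nearest-neighbor lookup against the accumulated codex."""
    nearest = min(codex, key=lambda entry: math.dist(entry[0], feature_vec))
    return nearest[1]

# Two toy intention clusters in a 2-D feature space, as produced by
# implanted patients whose signals were annotated during use.
for _ in range(50):
    record([random.gauss(0, 0.3), random.gauss(0, 0.3)], "rest")
    record([random.gauss(3, 0.3), random.gauss(3, 0.3)], "move-cursor")

print(decode([2.9, 3.1]))  # → "move-cursor" (near the second cluster)
```

This is also why the data asymmetry matters: whoever holds the largest annotated codex can decode signals from devices far cruder than the implants that produced the training data.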

But building these decoding algorithms represents a colossal amount of data annotation and training work. Repetitive, tedious work that doesn’t require highly specialized skills but demands a considerable volume of labor. And that’s where the second mechanism comes in, the one that major AI platforms already perfected before them. When Meta, Google, or OpenAI needed millions of images, texts, or videos annotated to train their models, they turned to click workers in Africa, Southeast Asia, or Latin America, paid a few cents per task through platforms like Mechanical Turk or Remotasks.

Investigative journalism has documented these practices, revealing exhausting working conditions and miserable pay for mentally grueling tasks like annotating violent or pornographic content. There’s no reason to think that training the AI systems specialized in brain decoding will escape this logic. On the contrary, the sensitive nature of neural data makes it even more likely that this work will happen out of sight, in countries where data protection regulations don’t exist or aren’t enforced.

And precisely because all of this is sensitive, none of it gets shouted from the rooftops. The companies working on neural decoding behind the scenes have every reason not to publicly detail their training methods, their data sources, or their partnerships with offshore providers. What the public sees is Neuralink announcing that a quadriplegic man is playing chess by thought, Meta presenting its neural wristband as an accessibility gadget, Apple filing patents worded in the reassuring language of health and wellness. What the public doesn't see is the invisible infrastructure being built in parallel: the neural databases accumulating, the algorithms being refined, and the technical patents quietly being filed in domains nobody is watching yet.

What we’re observing today is therefore only the tip of the iceberg. The experimental implants, the connected wristbands, the EEG-electrode AirPod patents… all of it is just the visible surface of a process of data accumulation and technological construction happening at a completely different scale. And the day the iceberg fully emerges, when neural reading devices are precise enough, miniaturized enough, and normalized enough to blend into everyday objects without anyone paying attention, it will probably be too late to ask the questions we should have been asking much earlier. That’s exactly what happened with social media. That’s exactly what happened with smartphones. And there’s no serious reason to think history won’t repeat itself a third time.

Conclusion: Act now before it’s too late

So what do we do now? Same as always? Best case, you drop a like on this article. Maybe you share it. And then we wait… and wait some more… While neural interfaces quietly settle into the market. Same as with AI: no public debate, so no ethics whatsoever. At first we laugh it off because it's not quite there yet. And three years later nobody's laughing anymore, because it's too late. So right now is the time to put a big STOP sign on this technology. Because yes, when it comes to technological enslavement, every red line was crossed a long time ago. But this one, the decoding of our thoughts, is the ultimate frontier before a point of no return that leads straight to absolute techno-fascism.

So do what you want. You can dismiss all of this as science-fiction raving. But educate yourself on the subject while you're at it; we've given you everything you need for that. Or you can decide that limits urgently need to be set. In that case, the starting point is spreading this information as widely as possible. For example, by sharing this article with the genuine aim of opening a debate that is urgently needed. You can also repost it, print it out… And of course, realize that you too are a media outlet, and take ownership of this topic to help everyone understand what kind of danger we'll be facing in the very near future. On our end, we won't hold back from shining a light on everything the sorcerers' apprentices are up to. But we don't harbor many illusions, because it's as if everyone has become completely numb to just how dangerous big tech really is. In the meantime, see you very soon for more adventures.
