Deepfakes & Fake News: The First Major Casualty of AI Is Truth

novaMAG : Futurology
By: Matt
Tags: misinformation, AI

AI is taking direct aim at something far less visible than our jobs or our creativity. Its first major victim is nothing less than truth itself. For now, philosophical and scientific truths are relatively spared. But when it comes to everyday truth, we’ve reached the point where it’s getting harder and harder to trust what we see, what we hear, and what we’re told. So we can already ask whether this gradual erasure of truth in favor of lies is, at least in part, a deliberate trend. Because when you take a careful look at how techno-fascism is advancing, the question demands to be taken very seriously.

“Dictatorships flourish in the fertile ground of ignorance.” (George Orwell)

Right now, you might not really see the problem coming. Maybe because you’re comfortable with technology and can easily tell real from fake. But you’ll notice that sometimes you really have to look twice to avoid falling into the trap. And maybe you’re not that worried yet because you haven’t read a serious in-depth piece on the subject. Which is somewhat understandable, if I may say so, since it’s not exactly what the mainstream media are concerned with. Those same outlets are increasingly pushing out or repackaging AI-generated content, simply because it suits their bosses and lets them cut staff in the process. So there are real and serious concerns about access to quality journalism. But it also raises problems that are far more serious still, and we’ll go through them in this article.

What is techno-fascism?

Before going any further, let’s revisit the definition of techno-fascism, because it’s essential to understanding everything that follows…

Techno-fascism is the convergence of technological power concentrated in the hands of a handful of private actors and the authoritarian drift of political power. In other words, it’s the moment when big tech and certain governments stop being separate entities and become two sides of the same coin. One brings the power of new technologies, the other brings legitimacy and the force of law. And together, they can control information and the narrative. In doing so, they can actively shape how reality is perceived. Orwell, by the way, understood all of this long before the word even existed.

“Political language is designed to make lies sound truthful and murder respectable, and to give an appearance of solidity to pure wind.” (George Orwell)

What Orwell described with words, AI now does with images, voices, videos, and entire texts. Propaganda has always existed, but it always ran up against its own technical limits. Those limits are now collapsing one after another. And the worst is still to come.

Before AI, fake content was hard to produce

For a long time, we were lucky without really knowing it. Because even the best Photoshop job would eventually give itself away through a misplaced shadow, a suspicious pixel, a blur that didn’t quite fit the rest… Until recently, a trained eye could still tell the difference. Video, for its part, enjoyed an almost sacred status, considered irrefutable proof. An audio or video recording was admissible in court and could tip a verdict. These reference points long structured our relationship to reality and helped maintain a line, imperfect as it was, between what is real and what is fabricated.

And it wasn’t just a matter of technical difficulty. Producing convincing fake content took a lot of time, extremely specialized skills, expensive software, and often an entire team. Cloning a voice, faking a video, or credibly doctoring audio was the exclusive domain of Hollywood studios or intelligence services. Ultimately, all of these constraints naturally limited the spread of fake content by making large-scale manipulation difficult, costly, and above all identifiable.

But that era is almost over. And with it goes the ability to fully trust a piece of evidence. Because in the near future, it is practically certain that an audio or video recording will be worthless in court, since anyone will be able to argue that it was AI-generated. This is a time bomb for our justice systems, and hardly anyone is worried about it.

AI watermarking is a safeguard that’s already been thoroughly bypassed

Some people saw the watermarking of AI-generated content as the miracle solution. The idea was straightforward: embed an invisible digital signature on every image, video, or audio produced by an AI, making it easy to identify the origin. On paper, it made sense.
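To make the idea concrete, here is a deliberately naive sketch of the simplest possible "invisible signature": least-significant-bit embedding, where the mark hides in the lowest bit of each pixel value. Real AI watermarks (statistical or frequency-domain schemes) are far more sophisticated, and every name in this sketch is illustrative rather than taken from any real tool. But the core fragility is the same.

```python
# Illustrative LSB (least-significant-bit) watermarking sketch.
# Hypothetical code, not a real watermarking library.

def embed_watermark(pixels: list[int], mark: str) -> list[int]:
    """Hide the bits of `mark` in the lowest bit of each 8-bit pixel value."""
    bits = [int(b) for ch in mark.encode() for b in f"{ch:08b}"]
    if len(bits) > len(pixels):
        raise ValueError("image too small for this watermark")
    out = pixels.copy()
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract_watermark(pixels: list[int], length: int) -> str:
    """Read back `length` bytes of hidden data from the pixel LSBs."""
    bits = [p & 1 for p in pixels[: length * 8]]
    data = bytes(
        int("".join(map(str, bits[i : i + 8])), 2)
        for i in range(0, len(bits), 8)
    )
    return data.decode()

# A fake 8-bit grayscale "image" of 64 pixels:
image = list(range(64))
marked = embed_watermark(image, "AI")
assert extract_watermark(marked, 2) == "AI"
# No pixel changes by more than 1, so the mark is invisible to the eye.
assert all(abs(a - b) <= 1 for a, b in zip(image, marked))
```

The weakness is obvious even in this toy version: re-encoding, cropping, resizing, or simply regenerating the content with an open-source model destroys or omits the signature entirely.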

But in reality, that battle is already lost! Because with a workstation equipped with two or three decent graphics cards and an open-source AI, anyone can now generate content without the slightest trace of AI involvement. The result is fakes that are extremely difficult to detect, and that any bad actor can produce for an investment well within reach of just about anyone. That’s why we’re increasingly flooded with deeply violent content or non-consensual pornography generated from someone’s face or voice. And unfortunately, no one is safe from this kind of manipulation.

This is where things get truly chilling. Because we’re talking about shattered lives, reputations destroyed in a matter of hours, and real trauma caused by content that is 100% artificially manufactured. Women are already falling victim today to pornographic deepfakes shared without their consent. Minors are being targeted. And the tools to do this are free, accessible, and growing more powerful by the day.

So what was supposed to be a safety net has turned into a sieve. And while institutional circles are still debating its effectiveness, the lie factory is running at full speed.

Trusting AI like an oracle is a mistake that can cost you dearly

More and more people validate information just because GPT said so. It’s become a reflex, almost automatic. You ask a question, you get a confidently worded answer, and you move on. Except that behind this appearance of certainty lies a far less reassuring reality.

Large language models mix sources of wildly varying quality. Which means they can spit out a distorted version of a Wikipedia entry because they’ve cross-referenced it with biased content. And above all, they hallucinate! This slightly odd technical term describes a very real phenomenon where AI invents facts, sources, and entire quotes with the same confidence as if it were stating absolute truth. And since the answer is always well-phrased, smooth, and convincing, nothing in its form triggers any alarm.

But the problem goes well beyond unintentional errors. These tools are in the hands of big tech. And the blurrier the line between reality and fake becomes, the easier it will be to quietly introduce biases, omissions, or manipulations into the results without anyone noticing. No need to lie outright. Just frame things carefully, nudge the nuance in one direction, and quietly bury any information that doesn’t serve the techno-fascist agenda.

It’s exactly the pattern of a toxic partner. At first they’re charming, useful, reassuring, and available at all hours. So you trust them. And it’s precisely that trust they’ll later use to manipulate you more effectively. By the time you realize something’s off, you’ll already be so used to outsourcing your thinking that you’ll have a hard time doing without it. That’s already the case for millions of people who think AI is harmless, when it’s actually a tool in very bad hands.

When disinformation tools are in the hands of libertarians

Let’s talk about Grok, the AI belonging to techno-fascist Elon Musk. Because unlike his competitors who have put guardrails in place to limit the most problematic uses, Musk made the opposite choice. For some time now, Grok has allowed the generation of deepfakes, meaning hyper-realistic content produced from the face or voice of real people. And the harmful results are clearly documented: individuals publicly humiliated through fabricated content, women subjected to pornographic scenarios, and pedophilic uses that have been clearly established.

This is not a bug, it’s a deliberate choice! And that choice says something very specific about the person making it. Especially when you know that Musk’s name appears in the Epstein files. We’re not saying that establishes guilt, but we are saying it sheds light on the complete lack of urgency to protect the most vulnerable when designing these tools. In any case, it speaks volumes about the cynical personality of this deplorable individual. This guy may be rich in terms of money, but in terms of heart and mind, it’s a big fat zero!

Big tech libertarians have a very binary worldview. Absolute freedom for themselves, and the consequences of their selfishness for everyone else. And while Grok runs at full speed, while deepfakes multiply and lives are damaged, we’re entitled to ask a very simple question. What is Europe actually doing? Because the gap between the grand declarations on AI regulation and the reality on the ground is wide enough to sail an aircraft carrier through. And while we’re at it, why just ask what Europe is doing and not America? The answer is simple: when it comes to ethics and humanism, there is nothing left to expect from the US, a country that has completely sunk into techno-fascism since the rise of big tech.

How do authoritarian regimes use AI to impose their propaganda?

What if we stopped talking about future risks and looked at what’s already happening right in front of us? Authoritarian regimes didn’t wait for the technology to be perfect before weaponizing it. They’re using it right now, massively and without a trace of shame.

“We know, in fact, that totalitarian propaganda does not need to convince in order to succeed, and indeed that this is not its goal. The goal of propaganda is to produce a discouragement of minds, to persuade each person of their powerlessness to restore truth around them and of the uselessness of any attempt to oppose the spread of lies.” (George Orwell)

Russia has turned disinformation into a full-fledged weapon of war. Since the invasion of Ukraine, examples of AI-generated content used to fuel Kremlin propaganda number in the hundreds. Fake statements attributed to Ukrainian leaders, images of destruction fabricated from scratch to flip the narrative, manipulated videos spread at scale across social networks… The goal isn’t necessarily to convince everyone, but to create doubt, to drown the truth under so much fake that people no longer know what to believe.

But pointing the finger at Russia alone would be a mistake. The United States under Trump also crossed red lines by letting AI-generated narratives flourish to consolidate an electoral base and discredit any opposition. Propaganda is therefore no longer the exclusive preserve of classic dictatorships. It has also made itself quite at home at the heart of sham democracies.

What makes all of this particularly fearsome is the speed. A piece of fake content can circle the globe in a matter of hours. The correction, meanwhile, always arrives too late, too timidly, and never reaches anywhere near as many people as the original lie, which is designed to shock. AI has therefore given propaganda an unprecedented reach and striking power. And the regimes that have grasped its potential have absolutely no intention of stopping. Why would they, when this strategy works brilliantly and keeps getting better?

AI and armed conflict: the scenarios that send a chill down your spine

What follows is unfortunately not in the realm of science fiction. It’s simply the logical and very near extrapolation of what we can already observe today.

A country could invade another, level entire cities, massacre civilians, and cover it all up in real time with a continuous stream of AI-generated images showing quiet streets and people going about their business. While the bombs were falling, AI would be manufacturing a clean and reassuring parallel reality, broadcast massively across social networks and consumed by millions of people who would have no reason to doubt it. Especially since millions of people are already drowning in contradictory information, mixed in with a flood of filler content and pointless drama that has no business being there.

The inverse scenario would be just as chilling. A completely invented aggression, manufactured with AI images credible enough to trigger international outrage, used as a pretext to launch a real invasion. If that sounds far-fetched, we don’t have to look very far to find historical precedents where entirely fabricated pretexts were used to justify wars. Tomorrow, those pretexts could very well be generated in a matter of hours by a machine. It’s just the logical next step.

Generative AI is an unprecedented goldmine for conspiracy theories

Conspiracy theorists have always existed. But until now they were cobbling things together, assembling theories from bits of string and a lot of imagination. Now they have access to very powerful tools that allow them to materialize their fantasies with a credibility that would have been impossible just three years ago. Think deepfake videos that would “prove” a leader said something they never said. Or AI images supposedly confirming a conspiracy. In short, a whole multitude of fake documents generated in minutes and spread massively as irrefutable proof.

What makes this content particularly dangerous isn’t the technical quality. It’s that it plays hard on emotion. Primarily fear, anger, a sense of injustice, and the feeling of belonging to an enlightened minority facing a majority of sheep. Algorithms love that because emotion drives engagement. And engagement, of course, drives ad revenue. Conspiracy theories have become a business model for big tech, which profits from them handsomely.

And behind the vast majority of this content you’ll find very identifiable actors. The global far right, which feeds utterly moronic movements like QAnon by manufacturing narratives carefully crafted to reach the most vulnerable, those looking for simple answers to complex questions. These movements are not spontaneous. On the contrary, they are built, funded, and amplified by networks that have perfectly understood that large-scale disinformation is a tool of power. Then there are regimes like Russia, which use conspiracy theories as a weapon to destabilize democracies by fracturing societies from within. The ultimate objective isn’t to convince people to subscribe to a specific ideology. It’s far more subtle than that. It’s simply to exhaust them, to disgust them by making them believe that everyone lies. And ultimately to push them toward the extremes most capable of destroying a country from the inside.

Faced with this avalanche of fake content, the temptation is strong to fight back with the same weapons. But that would be a fatal mistake. Because fighting fake with fake isn’t winning the battle. It’s abandoning the only ground on which we can still fight.

Tomorrow, who will decide what’s true and what’s false when it comes to information?

This is probably the most important question of the coming decade. And right now, no one is answering it seriously. In the very near future, who will be tasked with helping us determine what is true or false?

Big tech? Out of the question. They’re judge and jury! They control the platforms, the algorithms, and the content generation tools. Entrusting them with validating truth would be like asking the fox to guard the henhouse. We’ve already seen what that looks like with Meta’s moderation or X’s arbitrary decisions under Musk. And in fact, Meta recently decided to drop its internal fact-checkers and replace them with a community notes system straight out of X’s playbook. In other words, they’re handing fact-checking over to the crowd. The same crowd that their algorithms have spent years radicalizing.

Politicians? In democracies that are still standing, the temptation of an Orwellian Ministry of Truth is very real. But who would control this ministry? According to what criteria? And with what guarantee that the official truth wouldn’t simply be the truth of whichever party is in power? In authoritarian regimes, the question doesn’t even need to be asked. The answer is already known.

Certified experts? The idea is appealing on paper, but it runs into the reality on the ground. Experts contradict each other, they’re fallible, they can be bought, intimidated, or simply outpaced by the speed at which information moves today.

Traditional media? We know their limitations, their dependence on advertisers, their concentration in the hands of a few large groups, and their tendency to favor the spectacular over strict accuracy. The majority of them have lost an enormous chunk of their credibility and won’t be getting it back easily.

And what about independent fact-checkers in all of this? Those who do serious, rigorous, well-documented work have become targets. They’re harassed by hordes of absolute idiots and financially strangled because this work costs a lot and pays very little. Meanwhile, fake fact-checkers are proliferating. Sites and accounts that sport all the hallmarks of serious verification but whose sole objective is to validate the very disinformation they claim to fight. It goes without saying that we’ve reached peak cynicism here.

Yet the question of a trusted third party is central and extremely urgent. Because without a legitimate and independent arbiter, without an institution capable of distinguishing true from false with an authority recognized by the vast majority, the law of the jungle will prevail. And right now, the jungle belongs to whoever has the biggest disinformation budget.

How to act right now against AI disinformation?

Waiting for governments to fix the problem would be a serious mistake. Not that politics has no role to play. It does, and an important one. But given the pace of the legislative process, big tech lobbies have more than enough resources to slow down or gut any regulation that comes their way. Not to mention the speed at which the technology itself is evolving. So betting exclusively on a political response is genuinely very naive.

The real path is elsewhere. It’s civic. And it starts with media literacy education. Not as an elective squeezed in between two other classes, but as a fundamental skill on par with reading or arithmetic. That means learning to identify a source, cross-reference information, recognize the mechanics of emotional manipulation, and be wary of anything that confirms a little too comfortably what you already believe. This is work that must begin in elementary school and continue throughout life.

We also need to actively support independent fact-checking networks. We’ve just seen how threatened they are, financially and physically. Supporting them isn’t just a political act. It’s above all an act of democratic survival. So subscribe to independent media that do serious work. Share their content and spread the word around you. Because every additional person is a lifeline for organizations that often survive by the skin of their teeth.

And then there’s something even simpler: get into the habit of questioning. Not a paralyzing or conspiratorial kind of doubt, but healthy, methodical skepticism. So before sharing something, before believing it, before repeating it… ask yourself who produced it and who benefits from it.

None of this adds up to a miracle solution. But at least it’s already a solid start in the fight against disinformation. And if you have other solutions on your end, please share them in the comments. We’re genuinely motivated to feature them in an upcoming article.

Conclusion: Ethics is the key word

The point of this article isn’t to say that AI is good or bad. Personally, I find it rather useful in scientific or medical fields. And rather disastrous when it comes to content production. In any case, now that the toothpaste is out of the tube, there’s no putting it back in. So if the solution came down to a single word? The word “ethics.” An individual and collective ethics that would push us to seriously question our relationship with new technologies. And from that angle, the question is simple: do I want to be a victim of technology, or do I want to make sure it actually makes my life easier?

The answer seems obvious, but applying it is far less so. And to be completely honest, I’m not very optimistic about what the technological future holds in store for us. But if there’s one thing that can give me hope, it’s thinking that deep down, in our heart of hearts, the vast majority of us are fundamentally attached to truth rather than lies. Even if, unfortunately, reality is as brutal as this quote:

“In a time of universal deceit, telling the truth is a revolutionary act.” (George Orwell)

On our end, we may be a small, insignificant outlet compared to the big machines that pollute minds on an industrial scale. That’s a fact. But at least we fulfill our mission honestly, holding ourselves to the highest standard of professionalism we can manage. That’s already something 🙂 So if you’d like us to go further, it’s really not complicated. Start by sharing this article and support independent media. Not out of charity, but with the idea of helping put uncomfortable truths out there in the open. Thanks for reading all the way through, and see you very soon for new adventures. And to close, let’s pay tribute to the talented George Orwell, who accompanied us throughout this entire piece.
