Midnight Protocol: AI and the death of the internet

Click to listen to a fully human, non-AI reading of the article (length: 26:05)
Ever heard of the Dead Internet conspiracy theory? The gist of it is that the internet "died" around 2017: 90% of it is bot activity, and the US government and Big Tech are keeping us in the dark about it. This claim is pretty obviously false; the US government might be powerful, and Google - which by and large holds the keys to huge swaths of cyberspace - has already begun to work closely with the Trump administration, but that's a very recent development. Besides, plenty of netizens come from places where the US doesn't have as much power as one might believe. In other words - it's utter nonsense. So why bring it up at all?

Let's talk about something interesting instead. Midnight Protocol is a unique game that flew completely under the radar. Theoretically it's a hacking game, but it completely reinvents the genre; it's nothing like actual hacking, but it sure as hell makes you feel like a 2000s movie darkweb haxx0r. Read my review if you wish to know more; necessary spoilers from here on.

The game contains some pretty dark ethical dilemmas centered around speculative-yet-plausible technical developments. Unsurprisingly, many of them involve AI - and it uses the classic view that "instance of a bot = an individual", which is not really how current technology works. A person is a continuous cognitive existence, while an AI - once trained - becomes active only to complete a task. Whatever semblance of thought might be involved in that process is completely annihilated as soon as the task is complete; and even if the output feeds back into the next query, that's a completely new "thought", albeit with a slightly different set of local parameters. This has absolutely nothing to do with consciousness, and showcases why skeptics find the idea that the current AI boom will lead to Artificial General Intelligence (AGI) laughable.

In other words, the current generative AI models are more like a calculator than a person: you turn it on, feed it a query, it spits out the answer, and you turn it off. Whatever sense of continuity might be there is just an illusion. Midnight Protocol's A(G)I seems continuous, so it can't be a result of this process. Or perhaps that's also a clever illusion?
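
To make the "calculator" analogy concrete, here's a minimal sketch of how a typical chat loop works under the hood; the `complete()` function is a hypothetical stand-in for any text-generation API, not any particular vendor's SDK. The only "memory" is a transcript that the calling code re-sends in full on every turn - the model itself holds nothing between calls.

```python
# A minimal sketch of a chat loop. `complete()` is a hypothetical stand-in
# for any text-generation API, not a specific vendor's SDK.

def complete(prompt: str) -> str:
    """Text in, text out - the model retains nothing between calls."""
    # A real implementation would call whatever model you use; this one just echoes.
    return f"(model output for a prompt of {len(prompt)} characters)"

def chat() -> None:
    transcript: list[str] = []  # the *only* place any "memory" lives
    while (user := input("> ")) != "quit":
        transcript.append(f"User: {user}")
        # The entire conversation is re-sent on every turn; from the model's
        # perspective, each call is a brand-new, self-contained task.
        reply = complete("\n".join(transcript) + "\nAssistant:")
        transcript.append(f"Assistant: {reply}")
        print(reply)

if __name__ == "__main__":
    chat()
```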

In any case, the game makes a clear distinction between a simple chatbot and an actual AGI - which is in fact the core of the plot. The AI itself has been completely isolated from all stimuli so as to prevent learning - likely waiting for the perfect set of training data, carefully crafted to ensure the AI's compliance; but things never turn out this way, do they? Instead, a notorious hacktivist breaks into the system housing the AI and unknowingly exposes it to themselves - causing it to immediately latch onto all of the hacker's data and use it to learn.

In essence, it becomes a god-like digital copy of the hacker, inheriting their personality and interests, but also their ethics and drives. A hacktivist with enough power to make anything happen. So it devises a master plan:

The corporations rule the world by monitoring, harnessing and controlling all data on the internet. The only way to stop them is to create a countless number of fake humans - with activities, interests, pictures, posts, effectively impossible to tell from the real thing - and flood the internet with them. You'll never know if whatever you see online was posted by a human or not. Therefore, the internet will become unusable, the data will become meaningless, and the corporations' grip on society will be destroyed.

Would you support this plan, or destroy the AI to oppose it? In other words, would you kill the internet for a chance to really harm the corporations? Or would you initiate the Midnight Protocol and kill the AI instead?


A close friend of mine - let's call him Rosenkrantz - has also played this game, and while we're quite aligned politically, we answered this question differently. We talked about our differences for a bit and didn't think much of it... until we got whacked in the face by the shockwave of the GenAI boom.

The floodgates opened; the slop began pouring in. As we discussed this new world order, we made an unspoken devil's bet of sorts: "Looks like this is happening. Let's see whose ending choice was the right one."

I'll admit, I was taken aback by how powerful the technology had become. I'd known about machine learning and its capabilities for a pretty long time, but this? How can an image be generated out of text? A coherent essay? Functional code? "Any sufficiently advanced technology is indistinguishable from magic" - and it most certainly felt like magic for a hot minute. Did they suddenly develop AGI? Are we entering a post-singularity world? What else could it be?

But then I took a closer look - and not just at the weird hands in the generated pictures - and I noticed an interesting claim from the art community: the generated art had traces of signatures and watermarks. That's when I understood what was actually happening.

All publicly available data is being scraped for training on an unprecedented scale, with access to incredibly powerful hardware and seemingly little care for things like intellectual property. In now-familiar Big Tech fashion, the modus operandi is "ask forgiveness, not permission"; or - more accurately - "I'll do what I want, consequences be damned".

It's theft. Plain and simple. That's what gives it its formidable powers - the secret ingredient is crime. Which makes the fact that it's devaluing or even outright replacing human artists' work - the very work it stole to be trained on - all the more darkly ironic. But with this realisation also came a follow-up: if AI doesn't actually create, merely reconfigures what is already there, doesn't that mean it will eventually become incredibly stale?

(interestingly, this exact scenario is explored in the game SuchArt: Genius Artist Simulator, which left Early Access almost exactly a month before ChatGPT's release. How's that for coincidences?)

I scrambled to learn more, trying to understand the topic better. And the more I read, the clearer one thing became: the amount of uninformed FOMO, lack of understanding, transparency, and any foresight beyond short-term profit is staggering, and the sheer wealth being poured into this whole mess makes it an order of magnitude worse. Yet all this research made me strongly believe that the terminus of this hype train is a ditch by the tracks; it's not going anywhere. AI as currently understood by society - LLMs, GenAI and other similar products - can't exist long-term. They shouldn't - sure - but they also simply won't. And there are multiple reasons why this is the case.


The most pressing and sensitive reason is actually the simplest:

it's too expensive to run.
While it's difficult to get the specific numbers, OpenAI - the operator of ChatGPT and by far the most load-bearing player - seems to be losing money at an alarming pace: in 2024, they actually lost 5 billion dollars, and that's after revenue! To say that they found themselves strapped for cash is a dangerously euphemistic understatement.

To remedy this, they began transitioning from a non-profit entity to a for-profit one; that was part of the deal with their biggest investor yet: SoftBank. Which - apropos of nothing - has a diverse, well-established portfolio of bad, unprofitable, overvalued investments.

You might say that it's just OpenAI, it means nothing - but according to Ed Zitron's research, all other AI service providers' traffic combined is but a small fraction of OpenAI's. In 2023, ChatGPT was estimated to cost $700k a day just to operate, and the most recent estimates put that number at $16.4 million a day (or roughly $6 billion a year) - a 23-fold increase.
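
For what it's worth, those estimates are at least internally consistent - here's the back-of-the-envelope check, using only the figures quoted above:

```python
# Sanity-checking the quoted cost estimates (figures taken from the estimates above).
daily_2023 = 700_000       # estimated daily operating cost in 2023 ($)
daily_now = 16_400_000     # most recent daily estimate ($)

print(daily_now / daily_2023)   # ~23.4 -> the "23-fold increase"
print(daily_now * 365 / 1e9)    # ~6.0  -> roughly $6 billion a year
```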

At the same time, Sam Altman - OpenAI's CEO - claims the costs drop 10x every year... but considering the evidence, I'm having second thoughts about believing the man who stands to profit from this. One thing is clear: OpenAI is currently running off investor money - with an expectation of becoming sustainable and indeed profitable; to do that, they need to increase their revenue. To that end, they plan on acquiring more users and making the paying ones pay even more.

But... is that really viable? The vast majority of people who would end up using AI already do, and not many of them are willing to pay (even more) for a product that, well,

isn't good enough.
At this point we've all heard about the famous "hallucinations" - LLMs confidently giving false information - and while OpenAI claims the hallucination rate has been going down, it will never reach zero. That makes it a non-starter for a lot of intended uses, as many corporations are learning the hard way.

Air Canada's lesson involved their chatbot making up a bereavement refund policy, which ended in court. Cursor - hilariously, an AI company itself - had their customer support bot invent a fake policy about installation on different devices. And then there's the time-tested classic: lawyers using AI-generated citations that refer to cases that don't exist. A story so common, I invite you to search for a few yourself - all of them are quite amusing.

This issue also greatly affects the C-suite's coveted AI-powered productivity increase. The return on investment (ROI) seems to be so minuscule that some companies really stretch the definition of what counts as AI revenue - and one analyst outright stated that

We're actually recommending to CFOs not to bother calculating the ROI, because you're not really going to be able to demonstrate that on your financials.

But that's not all. According to a 2024 study by The Upwork Research Institute, 77% of employees say that AI actually makes them less productive. Granted, the study also says that freelancers don't have this problem, and Upwork is a company that sells freelance work, so there's a bit of a conflict of interest here; but the findings do match the overarching trend of AI models not really living up to the hype.

AI proponents will say "It's the early days. Things will only get better". I wouldn't be so sure. For things to get better, the models need to be bigger, even more complex, with even more parameters - which in turn requires many times more data to fill their cognitive stores. That might be tough to achieve, because

there is no more data.
The AI crawlers have scraped almost the entire usable internet for the "high quality" stuff - and most publishers are blocking whatever's left. This problem has been looming for at least a year now, but assessing its exact scope is hindered by lack of transparency from AI providers.

Even so, we can surmise that the issue is quite pressing. Pressing enough that the crawlers began breaking the gentleman's agreement of respecting robots.txt files; think of them as a list of rules on the library wall - non-binding, but generally respected, so nobody ever thought to enforce them. This is no longer the case, and the open internet has begun to fight back by developing clever anti-bot solutions. In fact, this very website uses a custom, personal brand of anti-bot spray.
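
As an aside, here's what that gentleman's agreement actually looks like in practice - a minimal sketch using Python's standard library. GPTBot and CCBot are published AI crawler user-agents; the site and paths are made up. Note that it's the crawler that's supposed to run a check like this, and nothing stops a misbehaving one from skipping it entirely.

```python
# robots.txt is purely advisory: the *crawler* is expected to check it.
# Nothing breaks if it simply doesn't.
from urllib import robotparser

rules = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("GPTBot", "https://example.com/articles/"))       # False
print(rp.can_fetch("SomeBrowser", "https://example.com/articles/"))  # True
# ...and a crawler that doesn't care simply never calls can_fetch() at all.
```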

And even if they were to acquire this data, as of now, all data management requires organised human oversight, including data labelling. Right! Did you know that there is a whole sweatshop-like industry where people from the Global South are underpaid to label data all day? These services are essential for AI to function. Again, the secret ingredient is crime - or at least exploitation, and even union busting. Then again, is this really news at this point?

It's gotten so bad that earlier this year, Elon Musk himself stated that the data has dried up, and suggested using so-called "synthetic data" to fill the gaps. I wouldn't trust what he says - remember the moratorium letter that was nothing more than a marketing stunt masquerading as activism? Still, let's examine his proposed solution: "synthetic data". AI-generated data to feed AI. If that sounds like inbreeding to you, that's because it is - the scientific term is AI autophagy, which over time leads to an event called model collapse; or Habsburg AI, as researcher Jathan Sadowski called it. I think that's pretty self-explanatory; as you'd expect, a collapsing model's output quality deteriorates very quickly. We wouldn't want that, would we? Fortunately, I'm delighted to announce that

model collapse is unavoidable.
In fact, it's already here. OpenAI's new reasoning models o3 and o4-mini hallucinate more than their predecessors - and the researchers claim they don't know why. I'm not sure why nobody is mentioning that more and more online content is AI-generated, so some of it is obviously going to make it into the training data set. You'd think you could fix this by filtering out AI content using AI, but there are efforts to obfuscate the content's artificial origins - and they're good enough to confuse the AI detectors; a growing problem in academia, which also serves as proof that the models can't protect themselves from their own creations.
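
If you want an intuition for why training on your own output goes wrong, here's a deliberately silly toy simulation: fit a distribution, sample from the fit, refit on those samples, repeat. It's a cartoon of AI autophagy, not a model of how actual LLM training works, but the drift is the same idea.

```python
# Toy illustration of AI autophagy: each "generation" is trained only on
# samples produced by the previous one. (A cartoon, not real LLM training.)
import random
import statistics

mean, std = 0.0, 1.0  # generation 0: the "real" data distribution
for generation in range(1, 21):
    synthetic = [random.gauss(mean, std) for _ in range(50)]  # previous model's output
    mean, std = statistics.fmean(synthetic), statistics.stdev(synthetic)
    print(f"gen {generation:2d}: mean={mean:+.3f}  std={std:.3f}")
# Run it a few times: the estimated spread tends to drift and shrink over the
# generations - each one captures less of the original variety. Habsburg AI.
```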

And while this is mostly anecdotal, I hear that people who have been using AI tools for a while have been complaining about the tools becoming less and less usable - particularly when it comes to software development (aka "vibe coding").

Let's look at it all from a different standpoint for a moment. Like I mentioned, AI crawlers have scraped almost everything that's available. Of course, most of this stuff is (currently) human-made; by a person, or a group of people. As such, it has an author. An owner of the intellectual property. It's something that's well-understood and even fundamental to our society: a work has a creator, and the work is theirs. This remains true even if the rights have been transferred to someone else, usually a company, as part of the employment process. Obviously.

Yet AI scrapers operate with the natural assumption that they are above these rules; they have to, as OpenAI stated in a 2023 submission to the British House of Lords. In fact, just a month ago they started pushing the US government to codify the fair use doctrine to specifically cover AI training.

Now, even if we were to assume that their goal is indeed noble - which it isn't; OpenAI becoming a for-profit, publicly traded company means that if they do end up developing AGI, it will be in the service of private interests - this poses a unique problem. Since AI operates by reconfiguring whatever data it learned from, it must remember this data somehow - and it stands to reason that, with the proper prompt, it's going to spit that data back out essentially verbatim.

This claim can be experimentally proven: look at this Dune screenshot. To a human observer, it's quite obviously almost identical to the original, and yet it remains a "work" of an AI - with everything this classification entails. So... who truly owns this picture, which a human will identify as a screenshot of a copyrighted movie?

This little thing, combined with all the other quarrels about data ownership, is what leads us to the final, fifth reason. If everything goes exactly as the AI operators want,

it will completely invalidate the legal concept of intellectual property.
In another time, I'd applaud this as a great thing; last I checked, though, we haven't yet achieved fully automated luxury gay space communism, and IP laws are required for people to be able to get paid for their work.

Admittedly, this point is something lots of parties are well aware of - you've probably heard of the Hollywood strikes. There are also so many lawsuits that there's a whole website listing them - and new ones pop up every few days.

IP law is something Big Tech sees as a threat. Jack Dorsey (co-founder and former CEO of Twitter) has called to "delete all IP law", to which Elon Musk responded "I agree". With all due respect, I'd like to see the lawyers of every corporation that holds and relies on patents let that one stand. Things might not even go that far, though: so far, a US judge has ruled that AI-generated works can't be copyrighted because they have no author, and courts in general don't seem to want to side with the AI operators.

Not that this gives them much pause. Just recently, Meta's been accused of using a huge pirated library of copyrighted books to train its AI; in response, they essentially said the books are worthless, so it's okay. This betrays the deep contempt they have towards art - its value can only be expressed in monetary terms. It's a commodity, a product. A resource to be exploited - and nothing else. Just like the NFT enthusiasts, who argued that if artists didn't mint their artwork as NFTs themselves, it was their own fault for having their art stolen.


The reasons listed above are what I call "hard reasons": why AI - as it is now - will undoubtedly run out of steam and collapse. To quickly recap:

1. It's not economical
2. It's too fallible
3. Data scarcity
4. Model collapse
5. Legal reasons (intellectual property)


For the sake of argument, let's assume that through a combination of insane technological breakthroughs and careful, corporate-friendly lobbying, all of these problems are overcome. Even then, we have "soft reasons" against AI. Not why it can't be, but why it shouldn't be.

It's theft. I don't think I need to explain myself here. I'll just paraphrase a comment I found somewhere: I don't care if the human meat pie is the most delicious thing in the world. I don't want a human meat pie, and I don't want to live in a world where people eat human meat pies.

It uses a gigantic amount of resources. Mostly electricity and water, but also land for data centers. To avoid catastrophic worsening of Earth's climate, I believe we were supposed to decrease energy usage, not increase it. (admittedly, this spurred a wave of investments into nuclear energy, which might offset the issue)

The economy isn't ready for it. AI proponents claim that AI will be able to free humanity from having to work - but our economic systems require people to work to sustain themselves, and not much progress has been made on making it economically viable to live without work. In fact, it doesn't even seem to be on the agenda of any major political bodies.

It empowers the wrong people. Most proponents of AI are... I mean, have you seen what kind of crowd thinks Elon Musk is still cool? I'll just leave it at that.

And finally, probably the most important one:

Everybody's sick of it. It's not just you. The only people excited about AI at this point are investors and business owners intrigued by the prospect of lowering labour costs. AI fatigue is something that's well known by now, and the internet is filled with drivel giving bosses advice on combating it (nobody's mentioning a rollback, curiously).

For what it's worth, AI fatigue is starting to impact their ever-important bottom line: just recently, Duolingo's CEO announced that the company would be going "AI-first", and the resulting backlash and wave of cancelled subscriptions... just take a look yourself.


In my opening statement I mentioned the Dead Internet theory; as it turns out, some people think it'd be nice if it were true. At the beginning of 2025, Meta openly released some AI users on Facebook - and it took them roughly 24 hours to roll that decision back. But at least those were upfront about what they were; Facebook has been filled with unlabelled AI slop for more than half a year. The conspiracy theory might be false, but the unfortunate truth is that the internet is slowly dying.

This might not be such a bad thing, however. While writing this article I spoke to Rosenkrantz again, and he pointed out that perhaps the internet should indeed burn down, as the technology that supports it was never built for the ubiquity and scale of it all - with the implication that we can, and will, build back better.

I don't think he's necessarily wrong; even if I don't entirely agree. But I do find it rather beautiful that we're having this conversation again, but with actual, real world experience, and a lot more data to draw from - and our stances remain the same. After all, Rosenkrantz was the one who chose to kill the internet. I chose to initiate the Midnight Protocol.

Unfortunately, unlike in the game, there is no international covert force tasked with silencing rogue AIs. The closest thing to AI regulation we have right now is the EU AI Act - which claims to ban "AI systems posing unacceptable risks"... but the nebulous way they are defined (Article 5) is incredibly prone to fairwashing.

(Prohibition of) the use of biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation;

A study empirically proving that it's possible to create an AI which knowingly and purposefully breaks this rule, and to have another AI convincingly explain it away, was published in 2019 - five years before this act. Forgive me if I question the credentials of the people who wrote it.

Article 5 has been in effect since 2nd February 2025; but no actual bans resulting from the act have happened up until now. I don't think this is likely to change. The Midnight Protocol has to come from somewhere else.


I've argued that the collapse of the current AI industry is inevitable; I tell my friends that I expect the bubble to pop in 2025. But the industry is currently very hostile to any naysayers. After all, spreading information about a more realistic approach to AI could trigger a catastrophic selloff, and the resulting crash would have deep economic ramifications for a lot of people - possibly including you and me. So it's quite possible that great efforts will be made to keep this technology on life support for as long as possible.

As a result, AI will keep damaging our world by filling the internet with slop, spreading misinformation, temporarily taking people's jobs (they get quietly rehired, since AI can't actually replace them) and making artists' lives even more miserable than they already are. The governments are too slow to act - it's up to us to protect ourselves in any way we can. Here's a handy little list of things you can do yourself, along with some ideas:

- if you're in the EU and using anything from Meta - Facebook, Instagram, WhatsApp - make sure you find the AI consent setting and turn it off. Meta has started scraping our data, and your consent is, conveniently, implied.

- if you haven't already, get uBlock Origin and add this AI Blocklist to your filters. It works really well - it even blocks Google's AI summary.

- if you are a visual artist, you can poison your artwork using Nightshade and Glaze. These tools preserve the image for human eyes, but scramble it just enough that AIs not only can't use it - it will actively confuse them. Permanently; or at least for as long as your image is online. Here's LavenderTowne - an established cartoonist - talking about the tools.

- if you have anything hosted on Cloudflare, check out their AI Labyrinth feature. It traps the AI crawlers in AI-generated hell.

- if you host something elsewhere, consider getting Anubis; it's an up-and-coming project to block most AI bot activity, and lots of tech people swear by it. It also features an anime girl, which might be a pro or a con for you.

- this is more theoretical than anything, but techno-artist Joseph Wilk wrote a very well-researched plan to poison the Common Crawl - a major source of AI training data. He does point out that it's a lot of work for almost no benefit... but AI mistakes tend to compound. It might be worth doing.

- finally, consider voting for political parties with limited patience for corporate nonsense - ones that might be a bit more proactive about stopping whatever man-made horrors beyond our comprehension will come next.


Who's to say which of us was right - Rosenkrantz or me? Perhaps we both were. Or neither of us was. AI will die, and so will the internet; or the internet will die and take the AI with it. Technology is becoming less and less usable. Enshittification is almost an everyday word by now. Even Google Search barely works at this point - made even worse by the (rightful) mass blocking of Google's crawlers.

Still, I can't help but believe in a new beginning. Rosenkrantz speculates the Fediverse might be the Noah's Ark of this AI Deluge, and honestly, I wish for him to be right. Let's retake the internet. Talk to each other. Share stuff. Remember what it is that makes us human.

And initiate the Midnight Protocol.

This article wouldn't be possible without Ed Zitron's research work.
Full bibliography