Or was it? 🤔
Let’s be honest: the emoji alone already raises suspicion. That pattern gets used to death in AI-generated text ("please Claude, don’t use emojis in the DAMN REPO DOCUMENTATION"), so before even reaching the second sentence the reader is already skeptical (and rightly so). I myself start reading with a bias, hunting for signals instead of ideas, and I use AI for a lot of things.
Those signals show up fast. The structure is just too good. Balanced paragraphs, a clear introduction, organized development, a predictable conclusion. A text built to perform well: SEO, headings, keywords nicely distributed, and that strategic separation (which annoys me) with --- between h2s. Because every “well-written” article needs a visual pause.
And when AI
Creates those texts
That look like a poem?
Full of punchy one-liners??
Yeah, that’s been bothering me. A lot. Sometimes I just want to read without feeling like I’m being led around by an algorithm or by a clearly manipulative text. Don’t get me wrong: none of this is bad on its own. The problem is that, added together, everything starts to sound the same.

The bad side
When I launched the blog on my site, I fell into this trap, not in the way big news portals do, but in a more “juvenile” way. I’d write the article, AI would improve it, I’d read the result, think it looked beautiful, and publish it. The next day, rereading it, I’d feel embarrassed, rewrite it, and add back my human flaws (which I actually like!). But then it doesn’t engage.
Writing without AI makes me realize that I love writing, but I’m painfully bad at it. I’ve always dreamed of writing a book, even if it sells nothing, but my very human limitations get in the way of producing something decent. The short-term solution I found is this:
Use AI to tear the text apart and give that brutally honest critique, but don’t let it rewrite anything. Let me spend half an hour here writing, even if that means publishing one less post per month.
But is it worth it? Probably not. Let’s be real. Look at the most consumed content, the most mainstream stuff, the best-selling books, the most listened-to music, the most watched movies: it’s all formula, polished and predictable. This is how the world works. And this is where the so-called dead internet theory starts to feel less exaggerated and more like a natural consequence: Skynet won’t take over the internet, we won’t become slaves, babies won’t feed the Matrix. But humans have discovered it’s easier to outsource thinking than to sustain an original idea.
Thinking is tiring, making mistakes publicly is costly, and having an opinion exposes you. Oooh, nice punchy sentence! Like it or not, AI creates a comfortable, error-proof safety layer. We stop exposing what we really think, or what we’re really capable of, and instead show what we want people to see, or what they expect from us.
As a result, that text about that amazing project ends up 100% technical, beautiful, but empty of the business reasoning behind it: why was that decision made? AI doesn’t know, and it was the one who wrote it. We end up in that space where everything is well written, everything is correct… and everything is the same.

The good side?
There’s a lot of good here too, even if it can be used for evil or just used poorly. I now have a reviewer, a junior programmer, a DevOps assistant, and hundreds of other roles and tasks just a chat away. Same quality as a real specialist? Of course not. But delivering something decent for a fraction of a fraction of the cost? Absolutely.
Ok, but you’re contributing to the end of jobs, creative work, humanity, the whales.
Here I need to bring up a harsh reality: nobody cares. Often not even us. We don’t actually fight for any of it; we just accept what comes, or get left behind. Small developers, small studios, and creative people without deep technical backgrounds are now shipping genuinely useful things thanks to the accessibility AI brought. And it feels like this is just the beginning. I don’t even know what my professional future will look like in 10 years, but I have to reference that now-dead blog/podcast I loved: Update or Die.
I’m from the era when Dreamweaver was loved (that’s a lie, that era never existed), and even though I was slow to adopt some things, I realized how rewarding it is to have an assistant on your second monitor working on your side project while you do your day job. Or that AI technical review that will save you, and teach you, a lot before an important presentation.
It’s dangerously gratifying. Once again, we’re putting ourselves in the hands of a few big companies. So what can we do? Get very rich and escape to the countryside (lovely dream), or accept it and try to build some financial cushion. Maybe the future of content isn’t about proving it was written by humans, but about accepting that if no one risks saying something real, it hardly matters who wrote it.
Damn, I’m really nailing the punchy lines today.
So tell me — do you think this text was written by AI? Or not? 🤔
