
The Double-Edged Sword: AI

Tags: philosophy, science

Fellow overthinkers, let's talk about the sharpest tool we've ever built, and how we keep waving it around like we're invincible.

AI is that shiny double-edged sword. One side is so beautiful it makes you want to cry a little (or at least makes your imposter syndrome file a resignation letter). The other side is... well, the side that makes your brain whisper, "What if we're speedrunning the bad ending?"

And the wild part is, both sides are the same blade. Same math. Same silicon. Same "wow". Different intent.

The origin story, or: how one guy casually changed the timeline

Back in the day, John McCarthy, the guy people call the father of AI, basically looked at computers and said, "Yeah, those can think." Not "compute", not "calculate", not "sort your Excel columns". Think. Logical reasoning. Problem solving. The whole human brain vibe, but in a box that hums and heats your room.

That was the dream. That was the pitch.

Then reality pulled up like, "That's cute."

AI went through phases where it looked promising, then immediately face-planted. The industry kept doing this cycle where everyone got hyped, funding rained down, papers flew, conferences exploded, and then, when the hype didn't turn into real-world magic fast enough, everything froze. The famous "AI winters." Like the entire field got put into a freezer because it couldn't deliver a talking robot by Tuesday.

But then the 2010s happened, and someone somewhere (okay, a lot of someones) found the cheat codes.

Deep learning started cooking. Neural networks that used to look like overcomplicated toys suddenly started doing real work. GPUs went from "gaming thing" to "science thing." Models got bigger. Data got bigger. Everything got bigger. Typical human response to any problem, honestly (if it doesn't work, make it bigger until it scares you).

GANs showed up and basically said, "Oh, you thought computers couldn't imagine? Hold my electricity bill." Suddenly we weren't just classifying cats vs dogs. We were generating faces that didn't exist, art that never got painted, and images that made your brain do that confused little buffering circle.

By the late 2010s, this stuff started leaking out of labs and into public hands. At first, it was a bit... weak. Clunky text generation. Weird chatbots. "Look mom, it wrote a poem," except the poem sounded like a toaster having a breakup.

But everyone could see it. The trajectory was obvious.

People knew these models could get better. And then we did what humans do best: we poured time, money, ego, competition, and pure obsessive curiosity into it.

Now here we are.

Every year, the models get more powerful. Not just better at writing "hello world", but better at reasoning, coding, researching, and holding conversations that actually sound human.

It's crazy, crazy stuff. It's happening fast. And it only seems to accelerate.

So if AI is getting better, where's the problem?

If the trend is "machines do more work," then shouldn't we be celebrating?

Like, imagine it.

An AI assistant that handles the tedious work, tutors anyone who wants to learn, and helps people build things they couldn't build alone.

In that world, AI isn't the villain. It's the ultimate upgrade. v2.0 of human progress. A turbocharger strapped to civilization.

And if you're like me, you can already feel the temptation to say: "Bro, give me more. Keep scaling. Keep training. Keep going. I want the full sci-fi package."

This is the part where the positive side of the sword glows like an enchanted weapon.

And yeah, it's real. It's not just marketing. It is genuinely life-changing.

The reversal: it would be fantastic... if the world worked like a fairytale

It would be fantastic if technology automatically made life better for everyone.

But it doesn't.

That's not how reality behaves. Reality is not a Disney movie. It's a messy multiplayer server with lag, griefers, pay-to-win mechanics, and absolutely zero tutorial.

Here's the first issue: speed.

AI is advancing so fast that most of us are living in a constant state of "wait what do you mean it can do that now?"

Even if you keep up, only a small percentage of the population can. Not because they're "dumb," but because they have jobs, families, responsibilities, limited access, limited time, limited money, limited bandwidth (both mental and internet).

This creates imbalance.

And imbalance creates friction.

And friction between groups, classes, countries, and generations has never historically ended with everyone holding hands and singing kumbaya (I mean, it could, but I'm not betting my rent on it).

We would want all people to advance together. People of all ages should get to understand what's happening, because this tech changes how we live, how we plan, and how we imagine our future.

And we should have room to talk about it.

Not the fake talk where a company announces, "We care about safety," and then quietly ships the next monster model anyway. I mean real societal conversation. Real debate. Real consent. Real pushback. Real guidance.

Because advancement is not a law of nature. It's a choice.

We don't have to accept every new capability just because it's possible.

The second issue: humans are... humans

This is where my inner optimist fights my inner realist.

Optimist-me says: "AI will help education, medicine, productivity, creativity. We will use it to uplift the world."

Realist-me replies: "My brother in carbon, have you seen global politics?"

Countries don't just see AI as "cool tool." They see it as leverage. Advantage. Dominance. A new kind of arms race.

We can already see hints of secret progress and cold-war vibes. And look, I'm not here to do the whole conspiracy-theory playlist. But it's not exactly a stretch to imagine powerful governments working on AI-driven surveillance, autonomous weapons, and offensive cyber capabilities behind closed doors.

That thought alone gives me goosebumps.

Not because technology is evil, but because humans have a very consistent habit of using powerful tools for power moves.

And the scariest part is the combo: speed + weaponization + autonomy.

Education and assistance? Amazing.

Robotics + autonomous capabilities + weaponization? The risk starts to skyrocket.

It has only been a few years since AI became widely discussed among normal people. Like, "my cousin who never touched a computer is talking about ChatGPT" level of mainstream. And already we're flirting with capabilities that should require extreme caution, planning, and global agreements.

We're basically handing toddlers a lightsaber and saying, "Just be responsible, okay?"

Violence is never the answer (and it also makes us look stupid on the cosmic scale)

This is where I want you to take a moment and read my Fisholophy on Why We Are Not Alone (I want you to actually click it, not just nod and scroll).

Try to comprehend how small and petty we are.

We are tiny creatures living on a small rock orbiting an average ball of fire. That star is one tiny dot in the Milky Way. And the Milky Way is one galaxy among, what, around two trillion galaxies in the observable universe.

In the grand cosmic lobby, humanity is not the final boss.

We're more like that new player who just learned how to craft a wooden sword and is already trying to start a clan war.

And I'm not even saying "aliens are coming." I'm saying the universe itself is hostile enough: asteroids, solar flares, pandemics, and plain old entropy don't need our help.

If an asteroid decided to spawn in our trajectory tomorrow, are we ready? Maybe. Maybe not. And while we're still unsure, we're here trying to nuke each other like it's a personality trait.

This only makes us weaker.

There's a saying in Nepali culture: when two brothers fight, the third one benefits.

And in this case, the "third one" is not some villain in a suit. It's the universe. It's entropy. It's cosmic threats. It's biological threats. It's everything that can wipe us out while we're busy arguing about flags and borders.

We are all brothers to each other (and yes, I mean it in the human-family sense, not the cringe motivational poster sense).

When we fight, the real enemies win.

And I don't even want to bring up world politics in detail because the best single-word review I have is: terrible.

The real problem list (the part that makes my brain go "nope")

Let's get specific.

The nightmare scenario isn't "AI writes poems." The nightmare scenario is powerful institutions pushing AI into the domains where mistakes are irreversible, oversight is weakest, and the stakes are human lives and freedoms.

A few things that genuinely freak me out:

1) Mass surveillance at god-mode scale

If a government wants to watch everyone, it doesn't need 10 million human analysts anymore.

With AI, the scale becomes automatic.

Cameras + face recognition + behavioral prediction + data fusion (phones, internet activity, purchases, movement) becomes a system that doesn't sleep, doesn't forget, and doesn't get tired of watching you exist.

And yes, some people will say, "If you did nothing wrong, you have nothing to fear."

To which I say: that sentence is the most dangerous bedtime story ever told.

2) Autonomous weapons and "oops" disasters

If you connect AI to weapons, the biggest risk isn't always evil intent. Sometimes it's a sensor glitch, a misread signal, or a model that's confidently wrong at the worst possible moment.

When humans mess up, we at least have guilt, hesitation, and the ability to say "stop."

When autonomous systems mess up, the speed of the mistake can outrun our ability to intervene.

3) Supply chain attacks and cyber warfare on steroids

As someone who's into security, this one hits close to home.

AI can be used to automate discovery of vulnerabilities, craft phishing that feels terrifyingly human, generate malware variants, and run large-scale operations that used to require whole teams.

And the kicker is: defenders also use AI. So it becomes an escalating loop.

A cold war, but digital, constant, and invisible.

4) Inequality, but now it has jet engines

If only a few countries, companies, or elites control the most powerful models, they gain an advantage that compounds.

AI can boost productivity, research, and influence. That means whoever gets there first can pull the ladder up (intentionally or unintentionally).

This isn't sci-fi. This is literally how power works.
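To see why a head start compounds, here's a toy back-of-the-napkin sketch. The growth rates are completely made up, purely for illustration; the point is the shape of the curve, not the numbers:

```python
# Toy illustration (hypothetical growth rates): a small per-cycle edge compounds.
leader_rate, laggard_rate = 1.10, 1.05  # assumed yearly productivity multipliers
years = 20

leader = leader_rate ** years    # cumulative advantage of the front-runner
laggard = laggard_rate ** years  # cumulative advantage of everyone else

# The gap between them isn't constant; it widens every single year.
print(round(leader / laggard, 2))
```

Even a modest per-year edge doesn't just persist, it widens, which is exactly why "whoever gets there first" matters so much.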

Here's my controversial take: we've already achieved what we needed

I'm not anti-technology. I'm not against development. I'm not here to scream "AI bad" and go live in a cave with a Nokia phone (though that honestly sounds peaceful sometimes, ngl).

I'm saying something different:

We already have enough AI to change human life for the better.

Right now, AI is already good enough to be a tutor, a research assistant, a translator, and an automation engine for the world's most tedious work.

This is the good edge of the sword. The "build a better Earth" edge.

We can automate tedious processes so humans don't spend their lives doing soul-draining repetitive checks, like manually inspecting every packet for quality or standing all day separating fresh tomatoes from old ones like we're in some medieval produce tribunal.

AI can monitor, filter, flag issues, and let humans do the final pass when needed.

That's not "replacing humans." That's freeing humans.

And education? This one is insane.

Anyone, from anywhere, can learn almost anything now. That's not a small deal. That is a civilization-level advantage.

A brilliant kid in a remote place, in an underdeveloped or badly developed region, can now access top-tier learning, if they get the tools and the mindset. That kid can become a scholar, a builder, a researcher, a problem-solver, and contribute to the development of Earth itself.

That's the dream.

Not "my country wins." Not "your country loses."

Global scale.

Develop the planet. Make it a better habitat for all living creatures. Use AI for agriculture, medicine, education, disaster prediction, and environmental recovery.

This is where AI is not just cool, it's meaningful.

And now I'm going to argue against myself (because I have to be honest)

Okay. Counterpoint.

Some of you will say: "Loki, that's not how progress works. If we stop, someone else won't. And if someone else doesn't stop, we're vulnerable."

That's a valid fear.

In a world where competition exists, unilateral slowing down can feel like surrender. If one group keeps building autonomous weapons and another group says, "We're focusing on farming automation instead," then the farming group might get bullied.

So yeah, I get it. I hate that I get it, but I do.

But here's my response:

Just because the incentive structure is broken doesn't mean we should sprint deeper into the abyss and call it strategy.

Sometimes the bravest move isn't "build bigger." Sometimes it's "build smarter," and push for agreements, oversight, and boundaries.

Because if the endgame is "everyone builds autonomous death machines," then nobody wins; we just reach the point where accidents or bad actors can delete cities like it's a video game save file.

The real suggestion: stop pushing the blade into the scary territories

This is the ultimate message I want you to walk away with:

I'm not opposing AI advancement overall.

I'm saying we should stop taking it to the next level in the domains we will regret later, specifically mass surveillance, autonomous weapons, and offensive cyber warfare.

We already have enough AI to uplift humanity. We should focus on using what we have responsibly, distributing access fairly, improving safety, and aligning incentives toward life, not domination.

We do not need to invent stronger nukes to wipe out our brothers.

If we're going to build cosmic-level sci-fi futures, we need to act like a species that deserves to survive.

Not a species that got a new toy and immediately used it to threaten itself.

Actionables (small steps that don't require being a president)

Here's what I think we can do, as normal people, right now: keep learning so the tech doesn't outrun you, talk about it honestly with the people around you, push back when "safety" is just a press release, and use AI to build things that help instead of things that dominate.

Conclusion: the sword is not the problem, the hand is

AI is not a demon. It's not a savior either.

It's a force multiplier.

It multiplies what we already are.

So the question is not "can we build it?"

The real question is: should we deploy it, and for what?

Because if we keep pushing into surveillance and autonomous violence, we're not building a future, we're building a regret.

And we don't get a rollback.

Let's Talk About It

If you have a take on this, hit me up: message

That's a wrap on this AI existential spiral. If you made it this far, I genuinely appreciate your time and patience; it means more than you think. Feel free to check out the other writings if you haven't already, or come back later when there's something new cooking.

Thank you so much for reading and visiting. Your support keeps this corner of the internet alive. Until next time, stay curious, stay kind, and keep looking up. If you want to add something, feel free to send a message here.
