Capability precedes understanding
I was a dumb teenager
I grew up in a small town. My house was a fifteen minute drive to high school, and ten minutes to the nearest apple orchard.
One summer day at the ripe age of 13 I was hanging out with my two buddies; let's call them Mac and Derrick. Mac had a brilliant idea. According to a YouTube video he consulted, if you mixed gasoline and styrofoam it made a sticky, flammable, napalm-like substance. Sick.
So we got a plastic bucket, filled it a centimeter high with gasoline from the red can you'd normally use to fill a lawnmower, and tossed in a ripped-up styrofoam block.
We had to let it sit for a while, so Derrick and I got bored. We dipped some sticks in the gasoline, lit them up, and waved them around like makeshift sparklers.
I always loved watching fires burn, it's mesmerizing the way they
FWOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOSH
A pillar of flame four feet high erupted from the bucket!
Derrick: OH SHIT
One of the sparks had flown off the stick and ignited the gasoline in the bucket. The bucket's shape funneled the fire upward in a column of intense light, heat and smoke.
Mac: WHAT THE FUCK DUDE
The fire was just a few feet from the side of Mac's house.
Me: I'll go get water!!!
I rushed into the house and managed to get a can of Coke to pour on the fire. It was a gas fire though, so it didn't do anything.
...
Over the next three minutes, we continued to panic and just watched the fire burn itself out. Luckily we only had a tiny bit of gasoline in the bucket on a stone floor, so nothing actually got damaged in the end. Except the bucket. In the aftermath we found the bucket melted down into a flat disk, turned two-dimensional by the heat.
Terrifying.
That was the first time I realized how much agency I had in shaping the world around me. Power I didn't realize I had. And what was truly disturbing was how stupid I was and how easy it would be to burn the whole place down.
The digital analog
I haven't felt that perturbed in a long time. Until now. Spoiler: yes this is another post about AI, but stick with me here! I promise it's more profound than "this is what almost burning my friend's house down taught me about B2B SaaS."
LLMs and AI tools now allow individuals to manipulate the digital world at a pace deemed impossible just a few years back. People with no technical background can vibe code a functional webapp overnight. Script kiddies who don't know how Git works can deploy OpenClaw agents with access to their credit cards. And echoing my blogpost from three months ago: today's AI is the least capable it will ever be.
Excitement aside, the dominant emotion I feel is unease. The same unease you'd feel watching teenagers play with matches over gasoline.
A power we don't comprehend
Part of the discomfort lies in how little intuition we have for why large language models work as well as they do.
In a way, our struggle to understand why they work isn't that surprising. LLM development represents the single greatest concentration of human and economic capital in our species' history. Products like ChatGPT and Gemini are the emergent outcome of trillions of USD in investment.
As a civilization we've now built technological capabilities which eclipse some of our own individual cognitive functions. It's only natural we struggle to explain why it works: it's a super brain-in-a-box that our individual brains can barely comprehend. Sure it has no long-term memory, but it's already a faster coder than any human that's ever lived.
And we're all figuring out their capabilities in real time: we find out what LLMs can do only after they've already done it. This means we can't rely on regulators or top-down mandates to solve safety for us; we're in the driver's seat.
The temptation to move fast and break things
Armed with this newfound power, it's tempting to throw caution to the wind and "just ship". After all, if you don't get there first someone else will. Market forces are powerful and we all want to win in the game of capitalism.
But not everything should be built.
Recently I've caught myself a few times building "on autopilot" in response to market demand. For example, in my industry there's a trend for platforms to create fake personas for mass marketing tactics. Think "jane.doe@gtm360solutions.com". These accounts are generic and enormously profitable because they're ready to use from day one with zero prior setup, so I started building the feature like everyone else. But upon reflection, I want brands to be accountable for any communications they send to real people, not hide behind fake ghost accounts. So I shelved it before production.
To be clear, I'm not taking some grand moral stand here. There may be some tasteful way to ship it after all.
What scares me is how easily I started building something without examining whether I thought it should be built. Something that erodes trust in the entire email ecosystem. Something I don't believe in.
Meanwhile the tools will keep making us even faster. Faster than we can read what's being written, if humans aren't doing the PR reviews like the OpenClaw founder boasts.
Responsibility over capability
These aren't novel concerns. In the 1990 novel Jurassic Park, chaos theory expert Ian Malcolm delivers an iconic line: "Your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should." The book illustrates how things can go wrong when technological confidence outpaces the judgement to wield it.
In this new world where anyone can code, what matters more than anything is the judgement on what to code. When anyone can use AI, what matters most is how you use it.
And the ultimate unit of responsibility is the individual. You. And me. Not market forces. Individuals.
Controlled burn
So we all have a super brain-in-a-box readily available in our computers that we don't really understand. And we're each responsible for wielding that power properly. Now what?
I don't have all the answers, but here's where I think we should start.
First, we ship responsibly. We need to push the boundaries of what's possible while setting the right guardrails in place. A simple principle I follow is to keep a human in the loop for all major side effects: credit card transactions, social media posts, database updates. So no, you shouldn't let OpenClaw buy random things on the internet for you or shill on LinkedIn in your name (as effective as it may be).
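Concretely, that guardrail can be as simple as a wrapper that refuses to execute a side effect until a human approves it. Here's a minimal Python sketch of the idea; `post_to_social` and the confirmation callback are illustrative stand-ins, not any real agent API:

```python
from typing import Callable

def human_in_the_loop(
    action: Callable[..., str],
    confirm: Callable[[str], bool],
) -> Callable[..., str]:
    """Wrap a side-effecting action so it runs only after explicit human approval."""
    def guarded(*args, **kwargs) -> str:
        # Describe the pending side effect so a human can review it first.
        description = f"{action.__name__}{args}"
        if not confirm(description):
            return "blocked: no human approval"
        return action(*args, **kwargs)
    return guarded

def post_to_social(text: str) -> str:
    # Stand-in for a real API call with irreversible side effects.
    return f"posted: {text}"

# Simulate a human denying the request; a real tool would prompt in the terminal.
guarded_post = human_in_the_loop(post_to_social, confirm=lambda desc: False)
print(guarded_post("Buy my course!"))  # blocked: no human approval
```

The point isn't the code itself but the shape: the agent proposes, the human disposes, and the irreversible step never fires without a yes.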
Second, we're truthful about what AI can and cannot do. AI hyperbole sells in the short term but damages credibility in the long term. What irks me most is watching companies like Google justify mass layoffs with "AI cost savings" when it's really just an excuse to offshore labor and correct historic overhiring.
Putting it into practice: I used Wispr Flow and ChatGPT to structure my thoughts for this post. And it was really tempting to just copy-paste the draft they spit out and pass it off as my own writing. But it wasn't me, so I sat down for ~15 hours and painstakingly rewrote this until it felt like my own. Testing the limits with controlled side effects.
Cautious optimism
Despite all this doomer talk, I am actually quite optimistic about AI. Especially for younger folks who I've seen pick up these tools faster than anyone else.
I envision a world where AI democratizes education and lets ever more people contribute to solving humanity's greatest challenges.
I just hope we don't burn the house down getting there.