Capability precedes understanding
I was a dumb teenager
I grew up in a small town. My house was a fifteen-minute drive from high school: the same distance as the nearest apple orchard.
One summer day, at the ripe age of 13, I was hanging out with my two buddies, let's call them Mac and Derrick. Mac had a brilliant idea. According to a YouTube video he consulted, if you mixed gasoline and styrofoam it made a sticky, flammable, napalm-like substance. Sick.
So we got a plastic bucket, filled it a centimeter deep with gasoline from the red can you'd normally use to fill a lawnmower, and tossed in a ripped-up styrofoam block.
We had to let it sit for a while, so Derrick and I got bored. We dipped some sticks in the gasoline, lit them up, and waved them around like makeshift sparklers.
I always loved watching fires burn; it's mesmerizing the way they
FWOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOSH
A pillar of flame four feet high erupted from the bucket!
Derrick: OH SHIT
One of the sparks had flown off the stick and ignited the gasoline in the bucket. The bucket's shape funneled the fire upward in a column of intense light, heat and smoke.
Mac: WHAT THE FUCK DUDE
The fire was just a few feet from the side of Mac's house.
Me: I'll go get water!!!
I rushed into the house and managed to get a can of Coke to pour on the fire. It was a gas fire though, so it didn't do anything.
...
Over the next three minutes, we continued to panic and just watched the fire burn itself out. Luckily we only had a tiny bit of gasoline in the bucket on a stone floor, so nothing actually got damaged in the end. Except the bucket. In the aftermath we found it melted down into a flat disk, turned two-dimensional by the heat.
Terrifying.
That was the first time I realized how much agency I had in shaping the world around me. Power I didn't know I had. And what was truly disturbing was how stupid I was, and how easy it was to burn the whole place down.
The digital analog
I haven't felt that perturbed in a long time. Until now. Spoiler: yes this is another post about AI, but stick with me here! I promise it's more profound than "this is what almost burning my friend's house down taught me about B2B SaaS."
LLMs and AI tools now let individuals manipulate the digital world at a pace that seemed impossible just a few years ago. People with no technical background can vibe-code an app overnight. Script kiddies who don't know how Git works can deploy OpenClaw agents with access to their credit cards. And echoing my blogpost from three months ago: today's AI is the least capable it will ever be.
Outside of excitement, the dominant emotion I feel is unease. Unease at the digital power we now possess but do not understand.
Could. Should?
This isn't a novel concern. In Jurassic Park (the 1993 film adaptation of Michael Crichton's 1990 novel), chaos theorist Ian Malcolm delivers an iconic line: "Your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should."
Among its other themes, the story illustrates how things can go wrong when technological capability outpaces the judgement needed to wield it.
Part of the discomfort lies in how little intuition we have for why large language models work as well as they do. The raw ingredients make sense: lots of transformer weights, compute, internet corpora and some human feedback. But a subset of tasks, like language translation and coding, turns out to be unusually tractable.
In a way, our struggle to understand why they work isn't that surprising. LLM development represents arguably the greatest concentration of human and economic capital in our species' history. ChatGPT is the emergent outcome of all that investment. As a civilization we've now built technological capabilities which eclipse some of our own individual cognitive functions. It's only natural we struggle to explain why it works.
All the while, industries continue to rapidly develop and integrate the technology. Which means now we're all scientists, building to discover what's possible empirically: we find out what LLMs can do after they've already done it.
The temptation to move fast and break things
With AI at your fingertips, it's tempting to throw caution to the wind. After all, if you don't get there first someone else will. Market forces are powerful and we all want to win in the game of capitalism.
Recently I've caught myself a few times building "on autopilot" in response to market demand. Take our recent SmartSenders launch, for example: pre-warmed accounts are all the rage these days in cold-email land. But they're effectively fake personas, with no actual human behind the messages. That felt eerie and irresponsible, so against my business judgement (and the fat margins such a service commands), I chose not to build it.
To be clear, I'm not taking some grand moral stand here. There may be some tasteful way to build it after all.
What scares me is how easily I started building something without examining whether I thought it should be built. And the tools will keep letting us build even faster. Faster than we can read what's being written, if humans aren't doing the PR reviews, as the OpenClaw founder boasts.
Responsibility over capability
In this new world where anyone can code, what matters more than anything is the judgement about what to code. When anyone can use AI, what matters most is how you use it.
And the ultimate unit of responsibility is the individual. You. And me. Not market forces. Not US-China geopolitical forces like in The 2028 Global Intelligence Crisis. Individuals.
If you're reading this, you're likely in the 0.001% of people on the planet at the forefront of this revolution.
Controlled burn
So we all have a super brain-in-a-box readily available in our computers that we don't really understand. And we're each responsible for wielding that power properly. Now what?
We practice, thoughtfully. AI is a tool like any other: the more you use it, the more effectively you can wield it. And we speak truthfully about what it can and cannot do. Through disciplined use we will gain the wisdom to deploy it for everyone's benefit.
I myself used Wispr Flow and ChatGPT to structure my thoughts for this post. And it was really tempting to just copy-paste the draft it spit out in the end and pass it off as my own writing. But it wasn't me, so I sat down for ~10 hours and painstakingly rewrote this until it felt like my own. So at least for now, ChatGPT 5.2 is still pretty bad at writing like me. I'll let you know if that changes!
Despite all this doomer talk, I am actually quite optimistic about AI. Especially for younger folks who I've seen pick up these tools faster than anyone else. I envision a world where AI democratizes education and lets ever more people contribute to solving humanity's greatest challenges.
I just hope we don't burn the house down getting there.