The More Powerful AI Gets, the More Humans You Need to Check Its Work
Source: a16z | Published: 2026-04-07T14:30:00Z
Generation costs are plummeting to near zero, but verification costs are skyrocketing — AI is creating a boom in oversight and validation roles. Meanwhile, distillation attacks can replicate frontier model capabilities at just 2% of original training costs, making decentralized AI potentially the only endgame.
AI is a shortcut — but only if you already know the long way around. This is the single most important line for understanding today's AI economy. Someone who learned e^(iπ) + 1 = 0 properly can spit out the answer instantly, and if you asked them to derive it from the definition of the complex exponential, they could do that too. The pre-AI generation learned the fundamentals, so using AI as a shortcut works fine for them: when AI screws up, they can debug it. But if you never walked the long road, AI isn't a shortcut. It's a black box, and you have no idea where it took a wrong turn.
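The Euler example makes the shortcut/long-way split concrete. A minimal Python sketch: the "long way" computes e^(iπ) from the defining power series and lands on the same answer the shortcut memorizes — and only someone who knows the series can tell when a wrong answer is wrong:

```python
import math

def exp_series(z: complex, terms: int = 40) -> complex:
    """The 'long way': e^z from its defining power series, sum of z^n / n!."""
    total, term = 0 + 0j, 1 + 0j
    for n in range(terms):
        total += term
        term = term * z / (n + 1)  # next term: z^(n+1) / (n+1)!
    return total

# The memorized shortcut says e^(i*pi) + 1 = 0; the series derivation agrees
# up to floating-point noise.
residual = abs(exp_series(1j * math.pi) + 1)
print(residual < 1e-12)  # True
```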
Trust Accelerates Within Tribes, Decelerates Between Them
There's an underappreciated prerequisite for AI productivity gains: trust. Share your entire codebase with AI inside a small, trusted team, and efficiency soars. But beyond the trust boundary, AI creates costs — spam, AI-generated junk decks, unverifiable résumés.
The cost of generation has plummeted, but the cost of verification is skyrocketing. A well-written cover letter used to be a signal in itself. AI has flattened that signal, forcing hiring managers to spend even more effort separating real from fake. Some companies have already started requiring in-person written tests and on-site interviews — the mere threat of an offline exam is enough to keep candidates from using AI on online assessments.
AI will create a massive number of verification and proctoring jobs. This is counterintuitive: the more powerful AI gets, the more human effort goes into confirming it didn't mess up.
AI Is Making the Entire Internet Look Like the Chinese Internet
China's tech ecosystem was born in a low-trust society — if my data sits on your server, you're probably going to snoop on it. So Chinese companies tend to build everything in-house rather than rely on third-party SaaS. That used to mean an efficiency penalty, but AI is changing the equation.
Now any company outside China can operate the way Chinese companies do — spin up internal tools with AI, reduce dependence on external services. The classic "build vs. buy" question? AI is tipping the scales toward "build." The friction cost of digital self-sufficiency is dropping fast.
Visual Output Is Verifiable; Backend, Watch Out
There's a clean dividing line between what AI is good at and what it isn't: verification cost.
Images and video are easy — the human brain comes with a built-in GPU. You can instantly spot a mangled hand or a misaligned UI. Frontend code is the same: generate a page, glance at it, done. Backend code is different. Amazon already learned this the hard way: they went all-in on AI-generated code, systems broke, and they had to call a company-wide meeting. Reviewing individual pull requests is fine; full automation is not.
Physical-world AI is actually easier to verify — move a hundred boxes from this pallet to that one, and either they're there or they're not. No ambiguity. Self-driving eventually cracked the problem of getting a car safely from A to B because there's only one physical world and all sensor data converges on the same reality. The digital world is full of self-constructed environments — Harry Potter fanfiction, Star Wars fan communities — where boundaries are forever blurry.
"Don't Ship Undisclosed AI in Public"
There's a simple logic behind this rule: when you receive a presentation that's obviously AI-generated, what's your first reaction?
Three possibilities: lazy — they typed a few words and dumped AI's raw output without even trimming it; stupid — they thought you couldn't tell the difference between AI slop and carefully crafted work; or dishonest — they're trying to pass off sloppy material as legitimate.
"No matter how advanced AI gets, the default output has a generic look. It's like someone who never changes the Windows default wallpaper — most people don't change defaults, so default AI looks like AI."
If this is how the most tech-forward people react to AI content, imagine the backlash from those who were already skeptical.
Humans Are Sensors, AI Is the Actuator
The conversation around "taste" and "agency" makes more sense through a precise framework: humans sense the world — market conditions, political winds, user needs — then compress those perceptions into a prompt. AI executes.
What is taste? Taste is perception. And that's exactly what AI can't do right now. AI waits for your prompt, executes on command, then stops. If it doesn't stop, it burns tokens and ceases to be economically useful. Digital AI is architecturally designed to be controlled. China, the country mass-producing physical robots, won't even loosen control over its own people — it's certainly not going to loosen it over machines.
In the near term, the smarter you are, the more useful AI is to you — this pattern has held for years. Some will argue AI will eventually surpass humans in taste and agency, but the current evidence doesn't support that claim.
AI Can't Read Your Mind, but It Might Read Your Body
Neuralink's concept deserves serious consideration, but it has a fundamental constraint: you still need to form a thought before characters appear on screen. Brain-computer interfaces don't skip the "thinking" step.
Biological data is different. Stanford professor Mike Snyder ran an experiment years ago — he ran every test he could on himself, tracked continuously. He discovered that shifts in antibodies and white blood cells revealed illness before he had any symptoms. Your body constantly generates massive telemetry — gene expression levels, small molecule concentrations, spatiotemporal changes across tissues — data that can serve as a prompt for AI without you saying a word.
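As a toy illustration of the idea (not Snyder's actual method — the data, window, and threshold below are all invented), a rolling z-score over a biomarker stream can flag a pre-symptomatic shift without the person saying a word:

```python
import statistics

def flag_anomalies(series, window=7, z_thresh=3.0):
    """Flag indices where a reading deviates sharply from its trailing window.

    Toy rolling z-score detector; real multi-omics monitoring is far richer.
    """
    flags = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu = statistics.fmean(baseline)
        sd = statistics.stdev(baseline)
        if sd > 0 and abs(series[i] - mu) / sd > z_thresh:
            flags.append(i)
    return flags

# Synthetic inflammation-marker readings: a stable baseline, then a spike
# that appears before any symptom would.
readings = [1.0, 1.1, 0.9, 1.0, 1.2, 1.0, 0.9, 1.1, 1.0, 4.8]
print(flag_anomalies(readings))  # [9]
```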
AI may not be able to read your mind, but it can read your body. Biotech can feed inputs to AI while you sleep. This means AI breakthroughs in healthcare could run far deeper than most people expect.
AI Won't Take Your Job — AI Makes You the CEO
Throughout history, people could cheaply test their athletic talent, math ability, or musical gifts as early as high school — try it and you'd know where you stood. But "being a CEO" was never something you could test on the cheap, which is why so many people assumed there was nothing special about it.
AI changes this. You can now "hire" AI at near-zero cost to execute tasks — which is essentially being a CEO: sensing the market, writing clear instructions, verifying output. Smart people from so-called "poor countries" around the world — the Calendly founder from Nigeria, entrepreneurs in India and Latin America — can go remarkably far on nothing but internet access and AI.
"AI doesn't take your job — AI makes you the CEO. Another version: AI doesn't take your job — AI takes the last generation of AI's job. Claude took ChatGPT's gig."
There's a third version: AI lets you hit 60–70% competence in any field. That's exactly what a CEO needs — before you hire specialists, you are the designer, the accountant, the product manager. AI dramatically lowers that bar.
SaaS Won't Die, but It Will Be Forced to Evolve
The "SaaS apocalypse" narrative is too simplistic. Sure, AI can help you clone an interface fast, but clone all of Facebook's code and launch facebook2.com — who's going to sign up? Distribution is something AI can't clone.
If Notion, Figma, and Replit are smart enough, they can use AI to ship new features to existing users faster than ever. What's actually vulnerable are products coasting on their installed base without innovating. But AI accelerates both incumbents and challengers — there's no one-directional revolution that only benefits attackers.
One trend worth watching: users may increasingly prefer local data. Obsidian's competitive position against Notion improves, because local Markdown files have an edge in the AI era — you can feed all your data to a local model for analysis, and data network effects can compound locally too.
The Blind Spot of American AI Companies: Modeling Only One Variable
Silicon Valley AI companies are scalar thinkers, not vector thinkers — they're modeling AI's disruptive potential but ignoring the other singularities happening simultaneously: shifting political landscapes, the collapse in solar energy costs, the copyright backlash wave.
These variables matter because they change the leverage of political factions. If you only extrapolate the AI curve without tracking other variables that are spiking or crashing at the same time, your world model is wrong. The copyright litigation backlash is building momentum, and China's open-source and decentralized models don't carry that baggage — "Pirate Bay–style AI" might end up freer and better.
Things compound until they hit the S-curve constraint — and that constraint usually comes from social and political backlash, not from the technology itself.
Bitcoin Is Becoming "Provably Global, Institutional-Grade Collateral"
Bitcoin in 2026 is less about personal digital cash and more about verifiable assets between institutions. When Bukele tweets "I hold this much BTC at this address, and now I'm moving it to this other address," anyone on earth can verify his reserves at near-zero cost — enormously valuable in an era where AI can fake any gold audit video.
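A sketch of why that verification is near-free: public block explorers expose per-address UTXO summaries, and recomputing a claimed balance is one subtraction. The field names below follow Blockstream's Esplora-style responses, but treat the exact shape — and every number — as an assumption for illustration:

```python
def confirmed_balance_sats(address_stats: dict) -> int:
    """Balance = total satoshis ever received minus total ever spent."""
    chain = address_stats["chain_stats"]
    return chain["funded_txo_sum"] - chain["spent_txo_sum"]

# Hypothetical explorer response for a treasury address (illustrative numbers,
# placeholder address).
stats = {
    "address": "bc1q...",
    "chain_stats": {
        "funded_txo_sum": 612_300_000_000,  # sats ever received
        "spent_txo_sum": 12_300_000_000,    # sats ever spent
        "tx_count": 418,
    },
}
print(confirmed_balance_sats(stats) / 1e8)  # 6000.0 (BTC)
```

Compare that one free API call with auditing a gold vault — and note that the same cheap arithmetic is what makes de-anonymizing individual users cheap, too.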
But this also means Bitcoin is becoming increasingly vulnerable at the personal privacy level. On-chain analysis plus AI will de-anonymize a huge swath of Bitcoin usage. Only institutions can tolerate that degree of transparency — public companies are designed for public scrutiny, but individuals are not.
Quantum computing points in the same direction: institutions can migrate assets from old addresses to quantum-safe ones within days, but a billion small holders cannot. Bitcoin as "digital gold" is quantum-safe. Bitcoin as "digital cash" is not.
Zcash and the Thirty-Year Dream of Digital Cash
Gold comes in big bricks, shipped between institutions in armored trucks — large denominations, low frequency. Cash is the opposite: for individuals, high frequency, small denominations. If Bitcoin is becoming digital gold, who fills the role of digital cash?
Milton Friedman predicted in the 1990s a form of "reliable e-cash": A can transfer money to B without A knowing who B is and without B knowing who A is, just like handing over a twenty-dollar bill. Zero-knowledge proof technology has evolved over thirty years, from theory to Zcash's commercialization to efficient mobile implementations, and Apple and Google have finally loosened restrictions on crypto apps. Through Zashi, a Zcash wallet, that dream is becoming reality.
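To make "prove without revealing" concrete, here is the simplest primitive in that family: a hash commitment. This is a toy, not how Zcash's zk-SNARKs actually work, but it shows the commit-now, reveal-later pattern such systems build on — observers see only a digest until the committer chooses to open it:

```python
import hashlib
import secrets

def commit(value: str) -> tuple[str, bytes]:
    """Commit to a value without revealing it: publish H(nonce || value)."""
    nonce = secrets.token_bytes(16)  # random blinding factor, kept secret
    digest = hashlib.sha256(nonce + value.encode()).hexdigest()
    return digest, nonce

def verify(digest: str, nonce: bytes, claimed: str) -> bool:
    """Anyone holding the opened (nonce, value) pair can check the digest."""
    return hashlib.sha256(nonce + claimed.encode()).hexdigest() == digest

# Sender commits to a payment; the public sees only the digest.
digest, nonce = commit("pay 20 USD to B")
print(verify(digest, nonce, "pay 20 USD to B"))  # True
print(verify(digest, nonce, "pay 90 USD to B"))  # False
```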
Zcash's strategy is deliberate simplicity — no smart contracts, just private transactions. Just as Twitter's simplicity made it a standalone product even though Facebook already had status updates, a simple, scalable, private digital cash for a billion people is a big enough target on its own.
Decentralized AI May Be the Only Endgame
Distillation attacks are eroding the moats of large models — a relatively small number of API queries can distill a large model's capabilities into a smaller one at just 2% of the original training cost. It's hard to argue against this on moral grounds: these companies built their models by copying the entire internet's content — it's as indefensible as Facebook trying to block others from scraping the data it once scraped itself.
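A minimal sketch of the mechanism (the standard knowledge-distillation objective, not any specific attack): the student is trained to match the teacher's softened output distribution, which means API responses alone — no weights — become the training signal:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened outputs.

    Minimizing this over many sampled prompts is the core of distillation:
    the temperature smooths the teacher's distribution so the student also
    learns relative preferences among wrong answers.
    """
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(p * math.log(q) for p, q in zip(t, s))

# A student that matches the teacher scores strictly lower loss than one
# that inverts the teacher's preferences.
teacher = [2.0, 1.0, 0.1]
matched = distill_loss(teacher, [2.0, 1.0, 0.1])
inverted = distill_loss(teacher, [0.1, 1.0, 2.0])
print(matched < inverted)  # True
```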
If a major capital market crash hits, frontier model training could stall — nuclear energy is the precedent, where massive investment was followed by decades of stagnation. But existing models plus distillation might be enough for ten years. In that scenario, decentralized, personalized, programmable AI isn't just one possible future — it may be the only future. Crypto handles trust between tribes; AI boosts efficiency within them. The intersection of those two is where the real bet should be placed.