AI Far Outperforms Humans at Changing Minds — and World Bets Big on Iris Verification

Source: a16z | Published: 2026-04-02T14:30:00Z

A University of Zurich experiment shows AI vastly outperforms humans at persuading people to change their stance in Reddit debates. Meanwhile, World has completed 18 million iris verifications and plans to deploy 50,000 Orb devices across the US.


Every interaction on today's internet comes with a question you can't answer: is the person on the other end actually a person? This is rapidly shifting from "theoretical concern" to "real-world catastrophe." A University of Zurich study had AI debate real people on Reddit's Change My View forum. The AI combed through opponents' post histories, analyzed their political leanings and linguistic habits, then deployed precisely calibrated arguments to change their minds. The result: AI is far better at changing human minds than humans are. Alex Blania put it in terms that should make everyone uncomfortable: AI is better at programming humans than humans are at programming AI.


The "People" You Meet Online — 99% of the Problem Hasn't Even Started

X (formerly Twitter) bans millions of bot accounts daily, but those bans probably catch only about one percent of the bots actually out there. And everything we're seeing now? Alex believes it represents less than 1% of what the next year or two will bring: intelligence costs are dropping exponentially while agent capabilities are growing super-linearly.

This isn't just a social media problem. On dating platforms, you have no idea if the other person is real — Tinder has already started integrating identity verification because of it. In video calls, deepfakes are approaching fully real-time, photorealistic quality. Within a year, you won't be able to tell whether the face on your screen belongs to a human or an AI-generated avatar. For fund managers, a single spoofed video call could mean massive fraud.

Gamers train for hours a day only to get steamrolled by superhuman AI. On YouTube, people generate hundreds of AI videos per day, pulling in tens of thousands of dollars monthly, and viewers have no idea. Even more absurd: some operators run "farms" of thousands of phones, using bots to watch videos and rack up ad revenue — revenue that's worth absolutely nothing to the advertisers paying for it.

Three Paths, Two Eliminated

Six years ago, when the team started working on this problem, there were three mainstream approaches.

The first was "trust networks" — evaluating a person's internet history: years-old GitHub accounts, regular posting activity, real humans vouching for you. The team rejected this almost immediately because their core assumption was: anything purely digital, AI will eventually be able to replicate. That assumption has already proven true — AI can register GitHub accounts, make consistent code commits, and have five other AI accounts vouch for it, swearing they're real people.

The second was government-issued IDs. Three problems: government control over such infrastructure is bad for free speech; anonymity is instantly destroyed; and this is a global problem — Singapore's digital identity infrastructure might be excellent, but Singapore has a few million people. Meta has three billion users spread across every country on earth. No single government's solution solves a global problem.

The third was biometrics. This path triggers instinctive discomfort in everyone, but mathematically, it's the only one that works.

Why Iris, Specifically

The core issue is uniqueness. Face ID performs one-to-one authentication — comparing your current face against the one stored on your phone to confirm you're the same person. But proving human identity requires one-to-many matching: confirming that a new registrant has never registered before, which means checking against every existing user.

This is a hard information-theory problem: checking each new enrollee against the entire population means the tolerable per-comparison false-match rate shrinks as the user base grows, so the biometric needs enough mathematical entropy to distinguish every individual. Run the numbers, and even faces, even fingerprints, don't carry enough information. They hit a wall at tens of millions of users. The iris's fine muscular and connective texture carries enough entropy to enable uniqueness verification at global scale.
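To make the scaling concrete, here is a rough back-of-envelope (my numbers, not World's). By a union bound, deduplicating against a population of N with total false-match probability ε requires a per-comparison rate of ε/N, i.e. roughly log2(N/ε) bits of effective template entropy:

```python
import math

def required_entropy_bits(population: int, fmr_total: float) -> float:
    """Minimum effective template entropy (bits) so that checking one
    new enrollee against `population` stored templates keeps the total
    false-match probability below `fmr_total` (union bound)."""
    # The per-comparison false-match rate must be fmr_total / population;
    # a template with H bits of entropy gives, at best, a per-comparison
    # collision probability of 2**-H.
    per_comparison = fmr_total / population
    return math.log2(1 / per_comparison)

# Illustrative: deduplicating 8 billion people with a one-in-a-million
# chance that a new enrollee falsely matches anyone already enrolled.
print(round(required_entropy_bits(8_000_000_000, 1e-6)))  # -> 53
```

Even this toy bound asks for ~53 bits at planetary scale before accounting for sensor noise and day-to-day variation in the same eye, which pushes the real requirement much higher. Iris codes are commonly credited with a few hundred effective degrees of freedom, while face templates fall well short of that.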

The team also made a bet: iris scanning would become increasingly commonplace because AR and VR devices inherently need it. Apple Vision Pro already uses iris recognition.

The Counterintuitive Privacy Solution

The biggest early criticism was: "My God, they're collecting my eyeball data." But the actual technical architecture achieves something counterintuitive: even while using biometrics, you can remain completely anonymous.

It works on two layers. The first is multi-party computation: when you verify at an Orb device, it photographs your iris, computes the iris code, then splits that code into fragments sent to different servers. No single server holds the complete data, and no party can reconstruct the full information during computation — multiple servers execute a carefully designed interaction protocol that completes the matching calculation while keeping the fragments separated.
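A minimal sketch of the splitting step, assuming simple XOR (additive) secret sharing; World's actual protocol details aren't given here, and real MPC matching also computes comparisons directly on the shares without ever reconstructing the code, which this sketch omits:

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int) -> list[bytes]:
    """Split `secret` into n XOR shares. Any n-1 shares together are
    uniformly random, so no proper subset reveals anything about it."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    # The last share is the secret XORed with all the random shares,
    # so XORing all n shares back together recovers the secret.
    shares.append(reduce(xor_bytes, shares, secret))
    return shares

def reconstruct(shares: list[bytes]) -> bytes:
    return reduce(xor_bytes, shares)

iris_code = bytes.fromhex("a3f1c2")  # stand-in for a real iris code
parts = split(iris_code, 3)          # one fragment per server
assert reconstruct(parts) == iris_code
```

Each server stores one fragment that looks like random noise on its own; only a protocol run involving all parties can operate on the underlying code.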

The second layer is zero-knowledge proofs: your phone holds a private key that only you know. When you need to prove to a platform that you're a unique human, you generate a proof using that key — the platform learns you're a unique, real person but not who you are. World doesn't learn which platform you visited.
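The uniqueness-without-identity property can be illustrated with a per-app "nullifier," as in Semaphore-style protocols: a value that is stable for one (user, platform) pair but unlinkable across platforms. The sketch below uses a bare hash for readability; a real system wraps this in a zero-knowledge proof so the secret never leaves the phone. All names here are hypothetical.

```python
import hashlib

def nullifier(identity_secret: bytes, app_id: str) -> str:
    """Per-app pseudonym: deterministic for one (user, app) pair, so an
    app can detect duplicate sign-ups, but nullifiers for different
    apps cannot be correlated without knowing the secret."""
    return hashlib.sha256(identity_secret + app_id.encode()).hexdigest()

alice = b"alice-phone-private-key"  # held only on the user's device
n1 = nullifier(alice, "app.example/forum")
n2 = nullifier(alice, "app.example/forum")
n3 = nullifier(alice, "app.example/dating")

assert n1 == n2  # same user, same app: a second sign-up is detected
assert n1 != n3  # same user, different apps: profiles can't be linked
```

The platform stores only the nullifier, never the secret, so it learns "this is a unique verified human on my service" and nothing more.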

From Mockery to "The Horse Has Left the Barn"

Six years ago, when they brought the Orb prototype out to fundraise, nearly everyone thought they were insane. Before AI had truly exploded, scanning irises to prove you're human sounded like science fiction. An a16z partner later recalled that they invested because "this was bound to happen eventually," combined with a strong enough team — even though the timing was far from obvious.

After ChatGPT launched, the first shift happened. People started taking the problem seriously, but the prevailing response was still "that's a future problem — let's revisit in a few years." The real inflection point came with the recent explosion of Claude bots and autonomous AI agents. Alex's exact words: If you're still not taking this seriously, you should probably find a new job. Since then, platforms have been reaching out in droves. The question flipped from "does this market exist?" to "how do we ship this as fast as possible?"

18 Million Verified Users, but the Real Fight Is in America

World currently has 18 million verified users and 40 million total app users. But due to the crypto regulatory environment, they haven't invested heavily in the U.S. over the past few years. That strategy is now shifting — 90% of their effort over the next year will go into the American market.

Deployment faces a three-way chicken-and-egg problem: platforms need to be willing to integrate the technology, devices need sufficient deployment density, and users need motivation to participate. Alex offered a concrete benchmark: for the average American to reach an Orb device within 15 minutes, you'd need roughly 50,000 units deployed. That's not an outrageous number, but it's far from easy: you need to convince chains like Walmart or Starbucks to host devices while also reaching independent coffee shops and even the DMV.

One solution about to launch is "Orb on Demand": in the San Francisco Bay Area and New York, you can simply summon an Orb — strapped to a motorcycle, it arrives at your location within 15 minutes to complete verification. It sounds absurd, but the economics pencil out far cheaper than permanently installing a device on every corner.

Tiered Verification at Different Precision Levels

Not every scenario requires Orb-level verification. World also offers "Face Check" using phone cameras — still anonymized through multi-party computation, but at much lower precision. What it achieves: one person might still be able to register 10 to 20 accounts, but at least not a hundred. It's rate-limiting, not perfect verification. Alex is candid: as deepfake technology advances, this approach will eventually fail — it's a stopgap, buying time.

They also support verification via government IDs with NFC chips, again anonymized through multi-party computation. But platforms have a natural aversion to government ID solutions, so actual adoption remains low.

The Infrastructure of Democracy Is Breaking Down

The conversation ultimately turned to a bigger question: if you can't reliably identify who's a real person, democracy itself is under threat.

America's Social Security number system is riddled with holes — anyone's SSN can be purchased on the black market. AI will transform this scattered underground fraud into a massive, highly automated industry. During COVID, roughly $400 billion in stimulus funds were stolen. If recipients could have been verified as unique, real individuals, that number would have shrunk dramatically.

Mail-in voting was designed for an entirely different era. In a world where AI can impersonate identities at scale, layered on top of a broken identity verification system, "the will of the people" may soon become unreliable. Medicare is so inefficient and fraud-ridden that people expressed genuine satisfaction when the UnitedHealthcare CEO was shot — a measure of just how profoundly the system has failed.

Governments need cryptographic-grade identity infrastructure to confirm "who is who," and they need more efficient ways to deliver money directly to citizens rather than through layer upon layer of social programs that hemorrhage value at every step. AI is turning this from a "should fix" problem into a "fix it or watch it collapse" problem.
