AI Now More Persuasive Than Humans — World Bets on Iris Scans to Prove You're Real
Source: a16z | Published: 2026-04-02T16:26:55Z
A University of Zurich experiment shows AI can dig through opponents' post history to persuade with surgical precision. World co-founder Alex Blania warns current AI impersonation is less than 1% of what's coming in 1–2 years, and is building iris-based global identity verification to counter it.
On r/ChangeMyView — the Reddit forum famous for persuading people to change their minds — the University of Zurich ran an experiment: they put AI in the debate ring. The AI crushed humans in persuasiveness. It combed through opponents' posting histories, analyzed their political leanings and communication styles, then surgically pushed all the right buttons. World co-founder Alex Blania cited this experiment during a conversation at a16z, summing up our current predicament in one line: AI is far better at programming humans than humans are at programming AI.
This isn't some distant threat. Alex's assessment: everything we see today — social media bots, deepfakes, AI-generated content — amounts to less than 1% of the actual scale we'll face within the next year or two.
Why Proving "You're Human" Is So Hard
The core problem isn't identity verification — it's uniqueness. Face ID does one-to-one authentication: your phone stores your facial features and compares each new scan against the original. But "proving you're human" requires one-to-N matching — you have to prove you've never registered a second account in the system, where N is the entire network's user base.
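The distinction can be sketched in a few lines. This is an illustrative toy (all names and thresholds are mine, not World's): templates are bit vectors, and a "match" is a Hamming distance below a threshold.

```python
# Toy sketch of 1:1 authentication vs. 1:N uniqueness checking.
# Templates are bit vectors; a match is a Hamming distance below a threshold.

def hamming(a: list[int], b: list[int]) -> int:
    """Number of differing bits between two equal-length bit vectors."""
    return sum(x != y for x, y in zip(a, b))

def authenticate_1to1(stored: list[int], scan: list[int], threshold: int = 2) -> bool:
    """Face ID-style check: one comparison against the user's own enrolled template."""
    return hamming(stored, scan) <= threshold

def is_unique_1toN(scan: list[int], enrolled: list[list[int]], threshold: int = 2) -> bool:
    """Proof-of-humanity check: the scan must be far from *every* enrolled template."""
    return all(hamming(scan, t) > threshold for t in enrolled)
```

The 1:1 check stays cheap and local forever; the 1:N check has to compare against every template in the network, and its tolerance for near-misses shrinks as N grows.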
This is an information-entropy problem that compounds as the network grows: the chance of a false match between some pair of users rises with the square of N, so the biometric must carry far more distinguishing bits than 1:1 authentication ever requires. Alex's team did the math: the information contained in faces or even fingerprints isn't sufficient; the system hits a wall at tens of millions of users. They ultimately settled on the iris, whose texture patterns carry enough mathematical entropy to guarantee uniqueness at a global scale.
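The wall can be seen with a birthday-bound estimate. The entropy figures below are illustrative assumptions on my part (a few tens of bits for face-scale biometrics; roughly 250 bits of freedom for iris codes, per often-cited estimates in the literature), not numbers from the talk:

```python
import math

# Birthday bound: with d bits of distinguishing entropy, the probability that
# at least one pair among N users falsely collides is ~ 1 - exp(-N^2 / 2^(d+1)).

def collision_probability(n_users: float, entropy_bits: float) -> float:
    return 1.0 - math.exp(-(n_users ** 2) / (2 ** (entropy_bits + 1)))

# ~40 bits (assumed face-scale entropy) collapses at tens of millions of users;
# ~249 bits (assumed iris-code-scale entropy) stays safe at 8 billion.
print(collision_probability(50e6, 40))   # ≈ 1.0: collisions are certain
print(collision_probability(8e9, 249))   # ≈ 0.0: effectively impossible
```

Doubling the population only demands two more bits of entropy, but a biometric either has those bits or it doesn't; the iris is the one that does.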
When the team reached that conclusion six years ago — "we need to build a dedicated hardware device to scan irises" — they scared themselves. It meant raising billions of dollars and distributing devices worldwide.
Three Paths, Only One That Works
In World's early days, the industry had three mainstream approaches to solving "proof of humanity."
The first was "trust networks": examine a person's internet history — how many accounts they own, posting frequency, GitHub activity — plus mutual vouching between real-world friends to build a trust graph. This was popular at the time, but Alex's team rejected it almost immediately — any purely digital behavior can eventually be perfectly replicated by AI. AI can maintain GitHub accounts, commit code regularly, and have five AI accounts mutually vouch for each other as "real people."
The second was government-issued IDs. The problems are layered: it destroys anonymity; government identity systems were never designed for this; and most critically, this is a global problem — Singapore might have perfect digital infrastructure, but Singapore has a few million people, while Meta has 3 billion users worldwide.
The third was biometrics. Intuitively the most unsettling, but after ruling out the first two, it's the only approach that works mathematically.
The Privacy Paradox: Biometrics Can Actually Protect Anonymity
"They're taking my eyeball data" was the most common early criticism. But the actual engineering works in exactly the opposite direction.
When a user verifies at an Orb device, it photographs the iris and computes an iris code locally, then splits that code into multiple fragments and sends them to different servers. No single server holds the complete data, and no party can reconstruct the full information during computation — this is multi-party computation (MPC). Multiple servers collaborate through carefully designed interactions, ultimately outputting just one result: whether this person is unique.
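The fragment-splitting step can be illustrated with the simplest secret-sharing scheme, XOR-based additive sharing. This is a toy sketch of the principle, not World's actual MPC protocol: every share but one is pure randomness, and any single share on its own is statistically independent of the iris code.

```python
import secrets

def split(iris_code: bytes, n_servers: int = 3) -> list[bytes]:
    """n-1 shares are random; the last is the code XORed with all of them."""
    shares = [secrets.token_bytes(len(iris_code)) for _ in range(n_servers - 1)]
    last = bytearray(iris_code)
    for share in shares:
        last = bytearray(a ^ b for a, b in zip(last, share))
    return shares + [bytes(last)]

def reconstruct(shares: list[bytes]) -> bytes:
    """Only the XOR of *all* shares recovers the code; any subset reveals nothing."""
    out = bytearray(len(shares[0]))
    for share in shares:
        out = bytearray(a ^ b for a, b in zip(out, share))
    return bytes(out)
```

Real MPC goes further: the servers compute the uniqueness comparison directly on shares like these, so the full code is never reassembled anywhere, even transiently.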
On top of that, zero-knowledge proofs are layered in. A user's phone holds a private key known only to them, which they can later use to prove to any platform: "I am a verified, unique real person" — without the platform knowing who they are, and without World knowing which platform they visited. It's a counterintuitive property: a biometrics-based system actually achieves extreme privacy protection and anonymity.
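The unlinkability property has a simple core, often called a nullifier. World ID's real construction uses zero-knowledge proofs; this toy hash-based version (names are mine) only demonstrates the shape of the guarantee: the same secret yields a stable pseudonym within one app, so a second account is detectable, while pseudonyms across different apps share no common information.

```python
import hashlib

def nullifier(phone_secret: bytes, app_id: str) -> str:
    """Per-app pseudonym derived from a secret that never leaves the phone."""
    return hashlib.sha256(phone_secret + app_id.encode()).hexdigest()
```

A zero-knowledge proof then lets the phone convince the app that this pseudonym was derived from a verified-unique credential, without revealing the secret or anything that links the user across platforms.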
Deepfakes Are Coming for Video Calls
Dating platforms are the obvious use case — Tinder is already piloting in Japan, where Orb-verified users get a "real person" badge. But Alex pointed to a less obvious scenario: video conferencing.
Fund managers, people handling large transactions — their video calls are inherently high-value targets. Someone could impersonate you, make a call, and request a wire transfer. Deepfakes haven't quite achieved real-time, photorealistic quality yet, but Alex predicts that within a year, this will become a cheap, widely available capability — at which point you won't be able to tell whether the person on the other end of a video call is real or AI.
Gaming is another sector poised to explode. Players train for hours every day, only to get steamrolled by an AI that outclasses humans in every dimension — especially in competitive scenarios involving real money.
Content Platforms' Business Models Are Under Threat
Alex mentioned a case: someone uses AI to generate roughly 100 YouTube videos per day, earning tens of thousands of dollars monthly, with viewers completely unaware. This raises a fundamental question: is YouTube willing to pay ad revenue share on AI-generated content?
There's another side to this. He saw a video that day of a "YouTube farm" — thousands of phones playing videos around the clock, inflating view counts. AI-generated content plus AI-inflated views means advertisers are paying for zero-value impressions.
For creator-economy platforms — Substack, Patreon, Spotify — the core driver behind fans supporting creators is a personal connection with a real human being. TikTok content is compelling largely because it has some connection to reality. If that connection breaks, the entire logic of the creator economy unravels.
From Ridicule to "If You Don't Believe This, Find a New Job"
Six years ago, when Alex brought the Orb prototype to a16z for a pitch, AI hadn't truly arrived yet — bots were still crude. Host Ben recalled that the project felt "too far ahead of its time," and timing was the biggest concern. But the team was strong enough and the problem real enough, so they invested.
For a long stretch afterward, the outside world's reaction was widespread mockery. The shift came in two stages: after ChatGPT launched, people started acknowledging the problem existed but still thought "this is a few years out — let's stay in touch." The real inflection point was the recent Claude bot and Moltbook incidents — large-scale, high-quality AI impersonation went from hypothetical to reality. Alex's exact words: if you're still not taking this seriously, you should find a new job.
In Alex's framing, World no longer faces a market risk or a persuasion problem. What remains is pure execution.
18 Million Verified Users, and the Challenge of 50,000 Devices
World currently has 18 million verified users and 40 million total app users. But due to the crypto regulatory environment of the past few years, they've barely invested in the U.S. market. Over the next year, 90% of their effort will be focused on the U.S.
The core challenge is device distribution. Alex used a specific metric: how many minutes does it take for the average person to reach the nearest Orb device? On a global average, that number right now is probably "several days," since many people would need to fly. Bringing that number below 15 minutes in the U.S. would require deploying roughly 50,000 devices.
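A rough geometric sanity check supports the 50,000 figure. The inputs here are my assumptions, not numbers from the talk: contiguous-US land area and an average mixed urban/suburban travel speed, with devices spread uniformly so each covers a disk of equal area.

```python
import math

US_AREA_KM2 = 8.1e6     # contiguous United States, approximate
AVG_SPEED_KMH = 30.0    # assumed average door-to-device travel speed

def minutes_to_nearest(n_devices: int) -> float:
    """Worst-case travel time if each device serves one equal-area disk."""
    radius_km = math.sqrt(US_AREA_KM2 / (math.pi * n_devices))
    return radius_km / AVG_SPEED_KMH * 60

print(round(minutes_to_nearest(50_000), 1))  # → 14.4, just under the 15-minute target
```

In practice deployment would be weighted toward population density, so the typical trip would be shorter than this uniform-coverage worst case.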
The distribution strategy is multi-pronged: major partners (Walmart, even Starbucks-tier), independent cafés one by one, even the DMV. Alex also revealed an upcoming service — Orb on Demand: in the San Francisco Bay Area or New York, you can order online, and within 15 minutes a motorcycle courier shows up at your door with an Orb device to complete verification. The team probably wasn't thrilled he announced this early.
Democracy's Infrastructure Is Expiring
The conversation's finale went bigger than product. Ben pointed out: America's Social Security number system is a disaster — everyone's SSN is available on the black market. During COVID stimulus programs, roughly $400 billion was stolen through fraud — if they could have confirmed each payment went to a unique real person, even without verifying citizenship, things would have been far better.
AI will transform this kind of fraud from a loosely organized underground industry into large-scale, highly automated operations. Mail-in ballots, Social Security, Medicare — these systems were designed for an entirely different era. Ben's assessment was blunt: without building some form of cryptographic-grade strong identity infrastructure, democracy itself will collapse. Public anger toward existing systems has reached extremes; when the UnitedHealthcare CEO was assassinated, the public reaction was cheering. That tells you the level of systemic failure is far beyond patchable.
World's "proof of humanity" is just one piece of this massive puzzle — but it may be the most foundational one.