Google paid $17.1 million to 747 security researchers in 2025. Record year. 40% jump over 2024. Cumulative total since 2010 now sits at $81.6 million.

Big numbers. Good press. But what does $17 million actually buy in the vulnerability market?

More complicated than the headline.

Android and Google Devices accounted for over $2.9 million. Chrome paid out $3.7 million to 100+ researchers. The Cloud VRP, in its first full year, hit $3.6 million. Highest single payout was $250,000.

These are real vulnerabilities. Hundreds of them, found and fixed before exploitation. That’s genuine defensive value. But do the average math. $17.1 million divided by 747 researchers comes out to roughly $22,900 each. Many reported multiple bugs, so the per-vulnerability average is even lower. For a talented researcher spending weeks on a complex exploit chain, that number invites comparison shopping.
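The back-of-the-envelope math is easy to check. A quick sketch, using the figures above (the per-report count is a hypothetical for illustration, not a Google statistic):

```python
# Rough averages from Google's published 2025 VRP figures.
total_payout = 17_100_000   # USD, 2025 total
researchers = 747

per_researcher = total_payout / researchers
print(f"${per_researcher:,.0f} per researcher")  # ≈ $22,892

# If the average researcher filed, say, three reports (an assumption,
# not a published number), the per-vulnerability average drops further:
reports_each = 3
print(f"${per_researcher / reports_each:,.0f} per report")
```

Whatever the real report count per researcher, it only pushes the per-bug average down from $22,900, never up.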

And here’s the comparison. Zerodium’s published rates: Android full-chain zero-click exploit, up to $2.5 million. iOS equivalent, up to $2 million. Chrome RCE with sandbox escape, $500,000. Those are list prices. The actual private market, including nation-state buyers who don’t publish their rates, almost certainly pays more.

Google’s top bounty was $250,000. Zerodium’s published top for a comparable Android exploit is ten times that.

So do bug bounties fail? No. They compete effectively for a certain category of vulnerabilities and lose the competition for others. The medium-to-high severity individual bugs flow through bounty programs. Memory corruption in browser components, auth flaws in cloud services, logic errors in APIs. Real, consequential stuff.

Bounties also win on breadth. 747 researchers looking from 747 different angles will find things internal teams miss. That’s math, not criticism. And legality matters. Selling to Zerodium is a legal gray zone. Reporting through a VRP is clean. For a lot of researchers, that’s the deciding factor.

Where bounties struggle is at the top of the severity curve. The multi-step chains that can compromise a fully patched device. The kind of thing intelligence agencies need. Those don’t show up in bounty reports. The researchers capable of that work either work internally at big tech companies, work for governments, or sell to brokers. The bounty economics can’t compete.

Google also launched AI-specific bounty categories in 2025 for Gemini-related vulnerabilities. Smart move. Prompt injection, training data poisoning, model extraction, and adversarial inputs don’t map neatly onto traditional bug categories. Whether external researchers can find meaningful AI vulnerabilities at scale without internal access to model architectures is still an open question.


For everyone else: $17.1 million is a lot of money. It’s also about seven high-end zero-day exploit chains on the private market. Both things are true at the same time. And that tension is the entire story of vulnerability economics in 2026.
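The "about seven chains" figure is just division against Zerodium's published Android ceiling, which also shows the 10x gap at the top of the market:

```python
# How far Google's 2025 bounty budget goes at private-market list prices.
# Zerodium's numbers are published ceilings; actual private sales may pay more.
google_total = 17_100_000    # Google's 2025 VRP payouts, USD
android_chain = 2_500_000    # Zerodium ceiling: Android zero-click full chain
google_top = 250_000         # Google's highest single 2025 payout

print(f"{google_total / android_chain:.1f} chains")  # ≈ 6.8
print(f"{android_chain / google_top:.0f}x gap")      # 10x
```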


Read the full post on gNerdSEC