Brunswick, ME • (207) 245-1010 • contact@johnzblack.com
Two reports landed this week that look like separate stories. Taken together, they describe something more fundamental.
Interpol’s 2026 Global Financial Fraud Threat Assessment found that criminal organizations using generative AI are 4.5 times more profitable than those running traditional fraud. That’s not a marginal improvement. It’s a complete restructuring of the economics of crime.
Meanwhile, a consumer study from biometric security firm iProov describes what they’re calling “The Great Trust Recession.” People can no longer reliably tell what’s real online. Not just in their inbox but broadly, as a cultural condition: the assumption that you can trust what you see has been dismantled.
These aren’t separate trends. Criminals are 4.5x more profitable specifically because they’ve destroyed the ability to trust perception. The profitability and the trust collapse are the same story.
The Interpol number has one predictable consequence: money flows toward the winning approach. Criminal networks that haven’t upgraded to AI-assisted methods are now at a structural disadvantage. The ones that survive will be the ones that do upgrade.
Interpol describes AI-using criminal networks able to “scale operations exponentially with minimal investment.” Deepfake-as-a-service kits are available at affordable prices. This isn’t artisanal fraud anymore – it’s industrialized. A network with no particular technical expertise can now rent the tools that used to require real talent or real money.
When deepfakes are common enough, something breaks. People stop trusting. Not just “they become more cautious” – they genuinely can no longer rely on seeing something as evidence it’s real.
That has real consequences. Banking KYC processes were built on the assumption that a video call or document image is meaningful evidence of identity. Telemedicine requires both parties to be who they say they are. Remote hiring depends on video calls and work samples that can now be fabricated. Government digital identity programs face the same problem.
The deepidv startup that raised $1M for counter-deepfake tools put it plainly: they’re “building from the ground up for a post-deepfake world.” The phrase acknowledges what a lot of enterprise technology hasn’t: the old world is already gone.
The trust recession isn’t a side effect of AI fraud. It’s the mechanism. Criminals are 4.5x more profitable because they’ve broken the verification systems that used to stop them. Broken verification makes fraud easier. Easier fraud erodes trust further, which weakens verification again. The loop feeds itself.
There are real technical responses – biometric liveness detection, hardware-bound identity credentials, out-of-band verification. None are perfect. All create friction. But the harder problem is cultural: people’s baseline assumption about whether to trust what they see has already shifted.
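To make the out-of-band idea concrete, here is a minimal sketch of that pattern: a one-time code issued on one channel and checked on another. Every name and value here (the function names, the 300-second TTL) is illustrative, not any vendor’s real API.

```python
# Sketch of out-of-band verification: the code travels over a channel
# the attacker presumably doesn't control (SMS, authenticator app, a
# phone call), not over the video session being verified.
import hmac
import secrets
import time

CODE_TTL_SECONDS = 300  # how long an issued code stays valid (assumed value)

def issue_code() -> tuple[str, float]:
    """Generate a 6-digit one-time code and its expiry timestamp."""
    code = f"{secrets.randbelow(10**6):06d}"
    return code, time.time() + CODE_TTL_SECONDS

def verify_code(submitted: str, issued: str, expires_at: float) -> bool:
    """Reject expired codes; compare in constant time to avoid timing leaks."""
    if time.time() > expires_at:
        return False
    return hmac.compare_digest(submitted, issued)
```

The point of the design: a deepfaked caller can imitate a face and a voice, but cannot read a code delivered to a device only the real person holds. That’s also where the friction the text mentions comes from: every session now requires an extra step on a second device.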
The iProov study isn’t a warning about where we’re headed. It’s a description of where we are. And if the return on investment for destroying digital trust is 4.5x, criminal networks are going to keep making that investment.