Brunswick, ME • (207) 245-1010 • contact@johnzblack.com
Everyone’s worried about AI models going rogue. Meanwhile, the tools we use to build AI systems are getting owned through completely conventional attack paths.
Flowise, the popular open-source AI workflow builder, has three CVEs under active exploitation right now. The worst, CVE-2025-59528, earned a perfect CVSS 10.0. A malicious MCP server hands Flowise poisoned tool definitions through normal protocol behavior. Flowise processes them. Code executes. The thing Flowise is designed to do is the attack vector.
The other two aren’t gentle either: unauthenticated OS command execution (9.8) and arbitrary file upload leading to RCE (8.9). Between the three, an attacker gets code execution, credential theft, and full application control. Censys shows 12,000 to 15,000 Flowise instances exposed to the internet. Flowise 3.1.1 patches all three. Go update.
Then there’s GrafanaGhost. Researchers at Noma Labs showed you can inject prompts into Grafana’s AI features through metric names, label values, and dashboard titles. The AI reads them as instructions. The attacker doesn’t even need Grafana credentials. If they can write metrics to any system Grafana monitors (a compromised app, an open Prometheus endpoint), they can inject prompts the AI assistant will process.
Grafana acknowledged the issue and released a fix, though they dispute the severity and say exploitation would require significant user interaction.
Different stages of the threat lifecycle, same thesis: the attack surface of AI systems isn’t the model. It’s the infrastructure. Workflow orchestrators, monitoring dashboards, vector databases, plugin systems. All deployed fast, with default configs, by teams focused on getting AI running rather than hardening the plumbing.
Patch Flowise. Audit your Grafana AI features. Then inventory every tool in your AI pipeline and treat it like the internet-facing software it is.
Read the full technical analysis of both attack chains and what they mean for your AI stack