The most important AI governance story of the week didn’t come out of RSAC. It came out of a federal courthouse.

Court depositions in litigation over NEH grant terminations describe DOGE staff using ChatGPT to classify grants for cancellation. The workflow: feed grant descriptions into the chatbot, label items as DEI-related, use those outputs to build termination lists. Roughly 1,400 grants and $100 million in funding. No humanities experts consulted. No domain review. No discernible due process.

Just a general-purpose AI chatbot and the authority to execute the results.

What the Depositions Actually Say

Depositions are useful because they’re given under oath, and these are unusually revealing about how DOGE operated.

One DOGE staffer admitted the organization was “unable to lower the federal deficit.” That’s interesting as a window into decision-making. But the ChatGPT detail is more specific: grant titles and descriptions went in, DEI flags came out, and those flags drove terminations. Scholars in the middle of multi-year projects had their funding cut based on what a language model said about their grant abstracts.
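
Stripped of context, the pipeline the depositions describe reduces to something like the sketch below. The prompt wording, model name, and data fields are assumptions for illustration; the filings don’t specify them. What the sketch shows is how little stands between a model’s one-word answer and a binding action.

```python
# Hypothetical reconstruction of the workflow the depositions describe.
# Prompt wording, model choice, and data fields are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def flag_grant(title: str, description: str) -> bool:
    """Ask a general-purpose chatbot for a yes/no 'DEI' label."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": (
                "Answer YES or NO only. Is the following grant DEI-related?\n"
                f"Title: {title}\nDescription: {description}"
            ),
        }],
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")

# Placeholder records, not actual NEH grants.
grants = [{"title": "Example oral history project",
           "description": "Community archive of rural oral histories."}]

# A one-word model answer, with no rationale captured and no expert review,
# becomes the basis for a termination list.
termination_list = [g["title"] for g in grants
                    if flag_grant(g["title"], g["description"])]
```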

The Accountability Gap

This is where the story connects to conversations happening at RSAC 2026. The dominant theme this week is agentic AI – systems that don’t just answer questions but take actions and make decisions. The security industry is wrestling with who’s responsible when an AI system makes a consequential mistake.

The DOGE case is the non-security version of that exact problem. And it already happened.

When ChatGPT output drives a government funding termination, who made that decision? There are no established standards for AI-assisted government decision-making. No framework requiring expert review before AI outputs trigger binding actions. No audit trail requirements, no appeals process tied to AI involvement.

Due process generally requires that government decisions affecting people’s interests go through a defined process with meaningful review. “ChatGPT said it was DEI” is not a process. It’s an accountability vacuum.

Why Language Models Are Bad at This

ChatGPT is genuinely useful for a lot of things. It’s not a reliable classifier for contested social and political categories. “DEI content” is not a well-defined technical category a language model can identify with precision. It’s a political and cultural judgment that different humans would apply differently. The model’s output reflects whatever pattern it picked up during training – and you can’t cross-examine a neural network.

That’s a serious problem when the output drives binding government decisions.
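
You can’t cross-examine the model, but you can test whether its verdict is stable. A rough probe, assuming the openai Python client and using illustrative prompts and an invented abstract (none of this is from the case), is to ask the same question several ways and see whether the answers agree:

```python
# Probe: ask the same question three ways and check whether the verdict
# survives rephrasing. Prompts, model, and abstract are illustrative.
from openai import OpenAI

client = OpenAI()

PROMPTS = [
    "Answer YES or NO. Is this grant DEI-related?\n{abstract}",
    "Answer YES or NO. Does this project promote diversity, equity, or inclusion?\n{abstract}",
    "Answer YES or NO. Would a typical reviewer label this a DEI grant?\n{abstract}",
]

def label(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().upper()

abstract = "A community archive documenting oral histories of former mill towns."
verdicts = [label(p.format(abstract=abstract)) for p in PROMPTS]
print(verdicts)
# If the answers disagree, the "classification" is an artifact of prompt
# wording, not a finding about the grant.
```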

What the Industry Should Take From This

Security teams are building AI-assisted threat detection, incident response, and vulnerability scanning. Some of that work is valuable. Some of it will produce exactly the kind of unaccountable outputs the DOGE case illustrates.

The question: what’s the standard for human review before an AI-assisted decision becomes an action? RSAC will spend a lot of time on the exciting possibilities of agentic AI. The DOGE case is a useful reminder that the governance questions are the ones that actually matter when things go wrong.
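
What such a standard could look like in practice isn’t complicated. A minimal sketch, with illustrative names rather than any existing framework: treat the model’s output as advisory, require a named human reviewer before anything executes, and keep the model’s rationale on record for appeal.

```python
# Illustrative gate, not an existing framework: the model's flag is advisory,
# and nothing executes without a named human reviewer and an audit record.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Decision:
    subject: str                 # e.g. a grant or ticket identifier
    model_recommendation: str    # what the AI suggested
    model_rationale: str         # captured verbatim for the audit trail
    reviewer: str | None = None  # must be a named human, not a service account
    approved: bool = False
    reviewed_at: datetime | None = None

    def approve(self, reviewer: str) -> None:
        self.reviewer = reviewer
        self.approved = True
        self.reviewed_at = datetime.now(timezone.utc)

def execute(decision: Decision) -> None:
    # AI involvement alone never triggers the action.
    if not (decision.approved and decision.reviewer):
        raise PermissionError("AI-assisted decision requires documented human review")
    print(f"Action on {decision.subject} approved by {decision.reviewer} at {decision.reviewed_at}")
```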

