Judge Strikes Down DOGE's ChatGPT-Powered Grant Cancellations

Federal judge rules Department of Government Efficiency illegally used ChatGPT to cancel $100M+ in grants, citing unconstitutional process targeting diversity programs.
In a significant legal setback for the Department of Government Efficiency, a federal judge has ruled that the agency's sweeping cancellation of over $100 million in government grants was fundamentally unconstitutional. The decision, handed down Thursday by US District Judge Colleen McMahon, represents a major challenge to DOGE's grant termination process and raises serious questions about the use of artificial intelligence in federal decision-making.
The 143-page ruling centers on DOGE's controversial methodology for identifying and eliminating grants, which relied heavily on ChatGPT to scan applications for references to diversity, equity, and inclusion (DEI) initiatives. According to Judge McMahon's comprehensive analysis, this approach violated constitutional protections and demonstrated a troubling disregard for proper administrative procedures that have governed federal grant allocation for decades.
The lawsuit, filed in 2025 by a coalition of humanities groups, challenged DOGE's authority to unilaterally cancel funding without proper review or justification. The plaintiffs argued that the agency's use of automated systems to identify and eliminate grants based solely on the presence of protected characteristics represented an abuse of executive power and violated fundamental fairness principles embedded in the Administrative Procedure Act.
Judge McMahon's decision contains particularly scathing language regarding DOGE's approach to grant evaluation. The judge wrote that "it could not be more obvious that DOGE used the mere presence of particular, protected characteristics to disqualify grants from continued funding" from the National Endowment for the Humanities (NEH), which had been a primary target of the agency's cancellation efforts.
The ruling highlights what legal experts are characterizing as a dangerous precedent for using artificial intelligence in high-stakes federal decision-making. By relying on ChatGPT to screen grants, DOGE appears to have outsourced critical policy decisions to a language model without human oversight, professional expertise, or adherence to established administrative standards. This approach raised immediate red flags among legal scholars and government accountability advocates.
The implications of the decision extend far beyond the $100 million in canceled grants. Judge McMahon's ruling effectively establishes that algorithmic decision-making in federal grant programs must meet the same constitutional and procedural requirements as traditional human-led processes. Agencies cannot simply delegate their decision-making authority to AI systems, particularly when those systems are being used to target protected classes or characteristics.
The Department of Government Efficiency, which was established through an executive order and tasked with identifying wasteful government spending, had promoted its use of ChatGPT as an efficient tool for reviewing the thousands of active federal grants. However, this efficiency came at the cost of due process and constitutional compliance, according to the judge's analysis.
Humanities organizations that brought the lawsuit celebrated the ruling as a vindication of their concerns about the government's approach. They argued that DEI-focused grants were being unfairly targeted despite their legitimate educational and cultural value. Many of these grants supported important work in arts education, historical research, and cultural preservation that would have been lost without the court's intervention.
The decision also raises broader questions about the Trump administration's push to eliminate DEI initiatives across the federal government. While the administration has the authority to set policy priorities, Judge McMahon's ruling suggests that the methods used to implement those priorities must still comply with constitutional standards and administrative law requirements that protect against arbitrary and discriminatory decision-making.
Legal experts have noted that the ruling could have significant implications for how federal agencies use technology in their operations. Going forward, agencies will likely face increased scrutiny if they attempt to automate high-stakes decisions using AI tools without proper oversight, human review, and adherence to established procedural safeguards. The decision suggests that efficiency cannot be the sole justification for eliminating traditional checks and balances in federal decision-making.
The National Endowment for the Humanities, which had been significantly impacted by DOGE's cancellations, is expected to begin the process of reinstating affected grants in light of the court's decision. This will require substantial administrative work to identify which grants were improperly canceled and to restore funding to projects that had already begun making plans based on the original grant awards.
DOGE officials have not yet indicated whether they will appeal the ruling or modify their approach to grant evaluation. However, the comprehensive nature of Judge McMahon's decision, which thoroughly documented the constitutional problems with the agency's methodology, suggests that any appeal would face significant legal hurdles.
The case represents one of several legal challenges to DOGE's aggressive approach to federal spending reduction. Other lawsuits have questioned the agency's authority, its transparency, and its compliance with federal employment laws. Collectively, these legal challenges suggest that DOGE's efforts to rapidly reduce federal spending are running into significant constitutional and procedural obstacles.
The ruling also underscores the importance of maintaining human judgment and expertise in federal decision-making, particularly on matters that affect educational institutions, cultural organizations, and research initiatives. While technology can be a useful tool for managing information and identifying potential areas of concern, critical policy decisions require the careful analysis and accountability that only trained government officials can provide.
Moving forward, this decision will likely influence how federal agencies across the government approach the use of artificial intelligence in administrative processes. Agencies will need to ensure that any adoption of AI tools includes appropriate oversight mechanisms, human review procedures, and safeguards against discriminatory outcomes. The ruling effectively establishes that technological efficiency cannot override constitutional and statutory protections.
Source: The Verge