Policy Shifts and Energy Costs: Lessons from the 2025 ForumGlobal US AI Summit

by Robert Buccigrossi, TCG CTO

On June 3, ForumGlobal held its 2025 US AI Summit with federal, state, and industry leaders. The Summit made it clear that tomorrow’s AI landscape is governed by two variables we don’t control: policy whiplash and cost upswings. Everyone operating in the Federal space will need to keep an eye on both.

Key Takeaways

  • States remain the nation’s AI “policy lab,” but a surprise House budget rider could freeze their enforcement powers for ten years.
  • Agencies and integrators lament “POC-purgatory”; the only AI automations escaping the lab are small, governed micro-workflows.
  • Energy is a looming bottleneck to generative AI—and the current LLM price war is propped up by venture subsidies that could disappear overnight.

The state-house sprint—and a new cloud on the horizon

At the start of the conference, Delegate Michelle Maldonado of Virginia asked a ballroom of 200 participants to imagine “an all-hands-on-deck moment” for AI policy. By the end, technologists, lawyers and legislators seemed to agree on one thing: the policy deck is being reshuffled so quickly that anyone building solutions for public-sector work must design for motion, not marble.

Texas Representative Giovanni Capriglione said: “Congress has passed twelve laws this year; I passed thirty-eight today.” Utah Senator Kirk Cullimore followed with an ode to the state’s new AI Policy Lab, where companies sign mitigation agreements and test AI in a controlled sandbox.

Then the mood shifted. Panelists called out an ambiguous moratorium quietly tucked into the House-passed budget on May 22. The language would bar states from enforcing AI-related laws for a decade. Delegate Maldonado warned it could “erase Virginia’s child-privacy and do-not-train acts overnight.”

Will the Senate strip the rider? Unknown. The episode is a reminder that policy rules may evaporate (or re-appear) between RFP release and award.

“POC-purgatory” is the real blocker

Yet the tech panels were remarkably short on vendor hype and long on operational angst, particularly around being stuck in the proof-of-concept phase. “Everybody has twenty 90-percent solutions that never make it to production,” confessed Mike Horton, DOT’s Chief AI Officer. GAO’s science chief Taka Ariga echoed him: “Our analysts still have to own the judgment. The model’s job is to get them to the right paragraphs faster.”

The success stories were practical:

  • GAO’s Claude-based bot that skims hundreds of audit reports so a rookie can locate precedent in minutes.
  • DOT’s classifier that pre-sorts FOIA requests and flags PII before a human ever opens the ticket.
  • A regional utility’s call-center tool that converts voice logs to text, then lets a coach search for error patterns.

Each example obeyed three rules:

  1. Narrow, measurable task (routing, summarizing, classifying)
  2. Human-in-the-loop with explicit acceptance criteria
  3. Governance first, GPU second—pilot inside a sandbox, record error rates, publish a rollback plan.
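The three rules above can be sketched in code. This is a minimal, hypothetical illustration (the PII patterns, class names, and thresholds are my own assumptions, not any agency's actual pipeline): a narrow classifier, a human reviewer who owns the final call, and an error-rate ledger to feed the rollback plan.

```python
import re
from dataclasses import dataclass, field

# Hypothetical sketch of a "governed micro-workflow" following the three
# rules above. The PII patterns are illustrative stand-ins; a real
# deployment would use an agency-approved detector and acceptance criteria.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

@dataclass
class FoiaTriage:
    flagged: list = field(default_factory=list)
    reviewed: int = 0
    errors: int = 0  # human-reported model mistakes

    def classify(self, request_id: str, text: str) -> bool:
        """Rule 1: narrow, measurable task -- flag likely PII for review."""
        hits = [name for name, pat in PII_PATTERNS.items() if pat.search(text)]
        if hits:
            self.flagged.append((request_id, hits))
        return bool(hits)

    def human_review(self, request_id: str, model_was_right: bool) -> None:
        """Rule 2: a human owns the judgment; the model only routes."""
        self.reviewed += 1
        if not model_was_right:
            self.errors += 1

    def error_rate(self) -> float:
        """Rule 3: governance first -- track errors for the rollback plan."""
        return self.errors / self.reviewed if self.reviewed else 0.0
```

The point of the sketch is the shape, not the regexes: the model's output is never terminal, and every human verdict is recorded so the published error rate stays honest.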

Energy is the new latency—and the subsidy clock is ticking

Utility executives and DOE officials warned that data-center load curves now resemble the 2001 Internet boom “only steeper.” The department is evaluating on-site small-modular reactors to feed GPU farms at national labs.

But the eyebrow-raiser came during the Q&A session: running an LLM on “serverless” endpoints is roughly one-sixteenth the cost of renting the same GPUs directly. The spread exists because cloud providers are discounting tokens far below hardware break-even, a loss financed—so far—by venture capital eager for market share. If that subsidy dries up, usage-based LLM prices could spike without any policy change, driven solely by real electricity costs.

Delegate Maldonado closed her session by saying, “We can’t legislate ourselves back to yesterday—so we’d better engineer for tomorrow.”