by Robert Bruce, TCG Deputy CTO
In 2025, we’re past novelty. We’re entering a phase where using AI well is a professional skill that any developer must have to meet the expectations of clients and employers. Whether it’s GitHub Copilot or a few well-aimed prompts into ChatGPT, AI is already writing code in today’s government projects. But writing code faster is not the same as delivering value, so how do agencies fit AI into a project’s architecture, compliance model, and team structure?
In this two-part post, we answer that question.
Part 1 described AI best practices that improve outputs, including test-driven development, code reviews, prompt engineering, and documentation.
Part 2, this post, describes how to use AI to empower small teams, balance speed with quality, practice the principle of trust but verify, and more.
Part 2: Principles
The following principles help smaller teams deliver high-quality, secure, and compliant systems for our customers:
Empowering Small Teams
There’s a growing shift toward using AI to improve team dynamics, not just output. A small team (say, two developers and one QA specialist) might use AI to offload rote testing, generate boilerplate scaffolding, or draft API documentation. That doesn’t mean the QA person is obsolete; it means they can spend their time on exploratory testing, accessibility checks, or validating edge cases. Developers, meanwhile, can focus on solving real business problems instead of writing yet another pagination helper.
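To make that concrete, here is the kind of boilerplate a developer might hand off to AI instead of writing by hand. This is a minimal sketch in Python; the names (Page, paginate) are illustrative, not drawn from any particular project:

    from dataclasses import dataclass
    from typing import Generic, List, Sequence, TypeVar

    T = TypeVar("T")

    @dataclass
    class Page(Generic[T]):
        items: List[T]   # records on this page
        page: int        # 1-based page number
        page_size: int   # requested page size
        total: int       # total records across all pages

    def paginate(records: Sequence[T], page: int = 1, page_size: int = 25) -> Page[T]:
        """Slice a sequence into a single page of results."""
        if page < 1 or page_size < 1:
            raise ValueError("page and page_size must be positive")
        start = (page - 1) * page_size
        return Page(
            items=list(records[start:start + page_size]),
            page=page,
            page_size=page_size,
            total=len(records),
        )

Code like this is easy for AI to draft and easy for a human to verify at a glance, which is exactly the profile of work worth offloading.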
Balancing Speed with Quality
These aren’t hypothetical efficiencies. Studies like McKinsey’s 2023 generative AI report show that developers working with AI complete tasks up to 55% faster. But speed alone isn’t the metric that matters. As DORA’s 2024 State of DevOps report warns, quality often drops if that extra speed isn’t balanced by stronger testing and review. In fact, teams with high AI adoption sometimes saw overall product quality decline even as documentation and delivery metrics improved. That’s a red flag in federal work, where software isn’t considered “done” until it’s secure, traceable, and formally authorized.
Augmentation, Not Automation
The goal isn’t to replace people; it’s to improve the efficiency and capability of each team member. AI can take on low-level, repetitive work, such as generating unit tests for edge cases, scaffolding boilerplate code, or refactoring verbose logic, so that developers can work at a higher level and deliver more value. The result is smaller teams with more capable members: AI makes each contributor more effective, more creative, and more aligned with the project’s goals.
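For example, a developer might ask an assistant to draft edge-case unit tests for the pagination helper sketched above, then review them and keep only the ones that reflect real requirements. A minimal pytest sketch, assuming the helper is importable from a hypothetical pagination module:

    import pytest

    from pagination import paginate  # hypothetical module holding the helper above

    def test_empty_sequence_returns_empty_page():
        page = paginate([], page=1, page_size=10)
        assert page.items == []
        assert page.total == 0

    def test_last_page_may_be_partial():
        page = paginate(list(range(25)), page=3, page_size=10)
        assert page.items == [20, 21, 22, 23, 24]

    def test_page_past_the_end_is_empty_not_an_error():
        assert paginate([1, 2, 3], page=5, page_size=10).items == []

    def test_nonpositive_arguments_are_rejected():
        with pytest.raises(ValueError):
            paginate([1], page=0, page_size=10)

The assistant generates the scaffolding; the human decides which edge cases actually matter to the system.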
Trust, but Verify
The risks, of course, are real. AI-generated code can be wrong, insecure, or biased, and developers new to the tools may over-trust the results. That’s why practices like automated code reviews and validation against unit tests are so critical: they provide a structured way to assess AI-generated code before it’s merged or deployed. The DORA report highlights this need, along with training in how to validate what AI produces. That’s especially true in government work, where a bug isn’t just inconvenient; it could create a compliance violation or a security vulnerability.
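One lightweight way to build that structure is a pre-merge gate that every change, AI-written or not, must pass before a human reviews it. A sketch in Python; the specific tools here (pytest for unit tests, bandit for a security scan) are assumptions, not a prescription:

    import subprocess
    import sys

    # Each check runs in order; any failure blocks the merge.
    # Substitute your project's own test runner and security scanner.
    CHECKS = [
        ("unit tests", ["pytest", "--quiet"]),
        ("security scan", ["bandit", "-r", "src/", "-q"]),
    ]

    def main() -> int:
        for name, cmd in CHECKS:
            print(f"Running {name}: {' '.join(cmd)}")
            result = subprocess.run(cmd)
            if result.returncode != 0:
                print(f"FAILED: {name}. Blocking merge pending human review.")
                return result.returncode
        print("All checks passed. Ready for human code review.")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

The point isn’t these particular tools; it’s that validation is automated, repeatable, and runs before, not instead of, human review.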
A Shift in Culture
Integrating AI into coding practices isn’t a plug-and-play solution; it’s a complex cultural and procedural shift. Just as methodologies and practices like DevOps, Agile, TDD, and Scrum emerged to help teams adopt new ways of working, successful AI integration requires the same level of intentionality. The teams doing it well aren’t chasing the flashiest tools; they’re deliberately adapting their workflows and team dynamics to make AI a meaningful part of how they deliver value.

They start with clarity: clear specs, clear prompts, clear policies. They implement structure: automated checks, reproducible pipelines, human-in-the-loop reviews. And they reflect on outcomes: tracking where AI helps, where it falters, and how to improve next time.

None of this is theoretical. These are working patterns across real teams in government settings. They’re not always easy to implement, but they’re worth it, because the reward is clear: smaller teams delivering faster, with better coverage, fewer regressions, and more bandwidth to solve problems that move the project forward.