AWS Executive in Residence Blog
AI, Technical Debt, and the Path to Real Fluency

Every enterprise leader I talk to right now is wrestling with the same three problems. They’re not unique to any one industry or company size; I’ve seen them appear in financial services, government, and healthcare. And they tend to show up together.
The first problem is that most organizations don’t actually know what systems, tools and applications they have. Their technical estate is broad, poorly documented, and in many cases understood only by people who left years ago. They know they have problems and feel the drag, but they can’t quite put their finger on where the bodies are buried. Every time they start something new, they uncover another “Yes, but.”
I lived this firsthand as CTO for the State of Indiana. We knew we had challenges. But we couldn’t consistently identify them with enough precision to act on them systematically.
The second problem is encouraging widespread adoption of AI. Technical teams are trying to figure out how to use it, but most are stuck in code generation or maybe some test writing. They don’t have clear use cases or a framework for determining where AI-driven change is needed and what incentives are required. And without that, broad AI implementation remains an idea on a slide deck.
The third problem is the hardest to name but becomes apparent when you observe teams using AI tools. It’s the gap between the mechanical competence that training provides and the problem-solving fluency that comes from getting your hands dirty in your own environment. Getting teams from one to the other is a matter of experience, not training.
Here’s what I tell customers: A single approach can address all three problems, and it starts with requiring documentation artifacts that are accurate to the last commit.
Start Where You Are: Making the Unknown Known
In the thirty years I’ve worked in this industry, I’ve never seen good documentation. Since effective documentation is time-consuming and conflicts with the pressure to deliver, I don’t think that will change unless we stop expecting developers to write it.
I’ve recently seen something more promising: teams using AI to programmatically generate documentation and other useful artifacts in real time, as a byproduct of the work itself.
AI doesn’t mind being asked to write comments in code or documentation artifacts, plus it’s great at making sense of what it sees. If you give it enough context and point it at your code, it will surface things like dependencies, patterns, risks, and architectural decisions baked into the logic that your team either forgot or never knew about.
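To make the mechanics concrete, here is a minimal sketch of that setup step: combining a source file with enterprise-specific context into a single analysis prompt. Every function and field name here is my own illustration, not part of AWS Transform custom or any real API.

```python
# Hypothetical sketch: assemble code plus enterprise context into one
# analysis prompt for a model. Names are illustrative assumptions.

def build_analysis_prompt(source_code: str, context: dict) -> str:
    """Combine a source file with enterprise-specific context so the
    model can surface dependencies, patterns, risks, and decisions."""
    context_lines = "\n".join(f"- {k}: {v}" for k, v in context.items())
    return (
        "You are documenting a legacy application.\n"
        "Enterprise context:\n"
        f"{context_lines}\n\n"
        "Identify dependencies, architectural decisions, and risks in:\n\n"
        f"{source_code}"
    )

# Usage: the enterprise context rides along with every file analyzed.
prompt = build_analysis_prompt(
    "import payments_db  # legacy Sybase driver",
    {"approved databases": "PostgreSQL 15+", "time zone standard": "UTC only"},
)
```

The point of the sketch is the shape, not the strings: the model sees your standards and constraints next to the code, so its findings are judged against how you actually do things.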
A practical starting point is the modernization agent AWS Transform custom. It provides out-of-the-box transformation definitions (TDs) and lets you customize them to your needs. One TD can read your code and produce information about it without changing your application or forcing a migration.
Pick one or two applications from your legacy estate, run the analysis, and see what AWS Transform custom tells you. You’ll likely find some things you already knew, some you suspected, and some that genuinely surprise you. Take the time to validate the output against what your team actually knows about those applications. Get a feel for the accuracy, then ask yourself: What context could the team add to make this more useful?
No off-the-shelf tool is an expert in how—or why—you do things. But you can extend these tools with enterprise-specific context, such as architectural standards, known constraints, technology decisions, and standards for software versions.
A bank team I spoke with recently was particularly concerned about date and time zone handling across a complex, multi-system legacy environment. A great solution could be to customize the AWS Transform custom code analysis TD to surface date and time logic across their estate. That’s a targeted investigation into something that matters to the future of the business, not a generic AI use case.
The artifacts from this process, whether markdown files stored in your repos or another text format you choose, are the beginning of something valuable: a searchable, describable body of knowledge about your legacy estate. Automate the process and call it what you want: living documentation or real-time artifacts. The name doesn’t matter; the point is that it exists, it’s accurate, and it updates as the code changes.
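One way to picture such an artifact, with a file layout and field names that are my own assumptions rather than a prescribed format: one markdown file per application, overwritten on each run and stamped with the commit that produced it, so staleness is always detectable.

```python
# Hypothetical sketch of a "living documentation" artifact: a markdown
# file regenerated on every commit. Layout and names are assumptions.
from datetime import datetime, timezone
from pathlib import Path

def write_artifact(app_name: str, commit: str, findings: list[str],
                   out_dir: Path) -> Path:
    """Write one artifact per application; rerunning overwrites it,
    so the file always reflects the latest analysis of the code."""
    body = [
        f"# {app_name}: analysis artifact",
        f"Generated: {datetime.now(timezone.utc).isoformat()}",
        f"Commit: {commit}",
        "",
        "## Findings",
        *[f"- {f}" for f in findings],
    ]
    path = out_dir / f"{app_name}.analysis.md"
    path.write_text("\n".join(body) + "\n")
    return path
```

Because the artifact is plain text living next to the code, it is diffable, searchable, and reviewable like any other file in the repo.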
The Hidden Benefit: Building AI Fluency Through Real Work
An unexpected outcome of this approach is that it becomes an AI fluency exercise. When your team sits down to describe what they want the AI to look for, provide context, and refine the output, they are practicing skills that transfer to every other AI use case. How do you describe what you want? How do you manage context? How do you iterate toward something useful?
These are not abstract skills. They are the practical mechanics of working with AI effectively, whether you’re analyzing code, drafting a document, or solving a problem that has no precedent. The discipline of starting something, refining it, managing the context, and getting to something you want is the core of AI problem-solving. Your team learns this by doing real work on real problems, not in a training module.
You don’t solve the adoption problem by mandating AI use. You solve it by giving teams a concrete, meaningful task that requires them to engage with AI seriously.
Tying It Together: Make It an OKR
So you have a method for understanding your technical estate. You have a way to build AI fluency through that work. Now how do you make it stick organizationally?
Consider setting an objective that every application in your portfolio has an accurate, up-to-date set of real-time artifacts by the end of the calendar year—not documentation written once and forgotten, but artifacts that update automatically every time there’s a commit, a push to main, or a change in code. Create living records that reflect the application as it actually runs today.
The OKR doesn’t mandate AI usage; it defines an outcome.
The path to that objective has three steps:
- Deliver the mechanism to teach the mechanics. Show your teams how to use the tools to generate these artifacts. Make it concrete and hands-on.
- Encourage experimentation. Let teams explore, refine their definitions, and develop their own fluency. The variation is a feature, not a bug.
- Automate the outcome. Once the process is understood, wire it into your pipeline as a CI/CD hook, a scheduled job, or an agent trigger. Make artifact generation automatic so it doesn’t depend on anyone remembering to do it.
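The third step can be sketched as a staleness check, runnable from a CI job or a scheduled trigger, that flags any source file whose content has changed since its artifact was last generated. The repo layout and names here are illustrative assumptions, not a prescribed design.

```python
# Hypothetical CI-side staleness check: compare each source file's
# current content hash with the hash recorded when its artifact was
# last generated. Names and layout are illustrative assumptions.
import hashlib
from pathlib import Path

def source_hash(path: Path) -> str:
    """Content hash of a source file, recorded at artifact generation."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def stale_artifacts(sources: dict[str, str], recorded: dict[str, str]) -> list[str]:
    """Return source files whose current hash differs from the recorded
    one, or that have no recorded artifact at all."""
    return sorted(
        name for name, h in sources.items() if recorded.get(name) != h
    )

# In a pipeline, any stale entry triggers regeneration of its artifact:
current = {"billing.py": "aaa", "ledger.py": "bbb"}
recorded = {"billing.py": "aaa", "ledger.py": "old"}
stale = stale_artifacts(current, recorded)  # ["ledger.py"]
```

Whether the check runs as a CI hook or a nightly job matters less than the property it enforces: an artifact can never silently drift from the code it describes.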
When you’ve done all three steps, you’ve solved more than a documentation problem. You’ve created a consistent, automated process for generating institutional knowledge about your technical estate.
Those artifacts can:
- Feed AI agents that need accurate context about how your applications work.
- Support audit and compliance workflows.
- Accelerate onboarding.
- Surface risk before it becomes an incident.
The Outcome
The customers I talk to who are struggling with AI adoption are often looking for a use case that’s big enough to matter but safe enough to start. This is it.
You’re not changing your code or replacing your team. You’re using AI to do something your organization has always needed and never quite managed: understand itself. And in the process, you’re building the fluency, habits, and institutional knowledge that make every subsequent AI investment more likely to succeed.