AWS Executive Insights

The Disruptor's Playbook: Winning with Agentic AI

A conversation with AWS Executives in Residence Jake Burns and Tom Soderstrom

In this episode...

Dive into the "AI multiverse" with AWS Executives in Residence Jake Burns and Tom Soderstrom as they explore why some organizations are seeing massive productivity gains with agentic AI while others struggle to see value. Drawing from their extensive enterprise experience, they reveal how treating AI agents like human team members - with clear specifications, iterative feedback, and proper context - can unlock unprecedented productivity gains that far exceed the reported 30% improvements. Burns and Soderstrom share practical insights on creating a culture of experimentation, emphasizing why traditional ROI metrics may hinder innovation and why "return on attention" could be a better early indicator of success. From leveraging multi-agent systems to democratizing AI adoption through "centers of engagement," this episode offers essential guidance for leaders seeking to position their organizations on the right side of AI transformation. Don't miss these actionable strategies for winning in the age of agentic AI!

Transcript of the conversation

Featuring AWS Executives in Residence Jake Burns and Tom Soderstrom

Tom Soderstrom:
Hello, and welcome to the Executive Insights podcast. I'm Tom Soderstrom, I'm an enterprise strategist at AWS, and I'm joined by my dear friend.

Jake Burns:
I'm Jake Burns, I'm an enterprise strategist at AWS as well. Nice to talk to you, Tom.

Tom Soderstrom:
And you've said something fun that I'm always puzzled about. You said that with this whole wave of innovation and agentic AI, you feel like you're living in A Tale of Two Cities.

Jake Burns:
Yeah. Almost like it's an agentic AI multiverse.

Tom Soderstrom:
Would you dive into that a little bit? That's such a cool point.

Jake Burns:
Yeah, it's strange. And I wasn't looking for this, but a large percentage of people are getting and reporting massive productivity gains using agentic AI. Of course, a lot of these are companies reporting on their own results, so you take it with a grain of salt. There are some stats we could go through from various organizations, about their own products but also about other people's products, and the studies they've done show the productivity gains developers are getting with AI coding companions, for example. But at the same time, there was a recent study that went viral that actually showed developers were less productive.

And I see quite a few people online very frustrated with these tools, making bold claims that they're spending more time fixing bugs than the code generation is saving them. And there are enough people on both sides that it's not as if one side is clearly wrong. I think there are a lot of people on both sides of this, and each side is kind of living in its own bubble. I don't know, it's just a very interesting phenomenon.

Tom Soderstrom:
Yeah. We've seen technologies come and go, of course, but this is really revolutionary because it'll change how we work.

Jake Burns:
Yes.

Tom Soderstrom:
And I looked at some of the stats, and Deloitte found that developers using Amazon Q were 30% more productive. And of course, Amazon saved 3,500 work years and $250 million upgrading Java using generative AI and agentic AI. So what about the people who are finding that it's not productive?

Jake Burns:
So how can both of these be true at the same time? Well, for one, I think this is a very real phenomenon. And if you're in one of these camps, you really think the other one's wrong.

Tom Soderstrom:
Always.

Jake Burns:
Now, I can say that I believe the people who say they're getting productivity gains, for the simple fact that I'm getting productivity gains. I'm using a lot of these tools, and I'm finding value in them. And I know people personally who are as well. But I wasn't at first, so there's definitely a learning curve. I heard this articulated online: if you think about human beings as agents, and think about developers as agents, you don't go to a team of developers and say, “Build an awesome product,” and expect a great result. You don't even go to them and say, “Build a product with this feature and that feature,” and expect them to just come back with an awesome result.

There's a back-and-forth that takes place. There's a design specification that you give them, a very detailed one if you have a very ambitious product. Then they deliver version one, you give them feedback, and they come back. Now, what I found is that the people who are using these tools effectively, myself included, are treating these agentic AI tools, these coding companions for example, just like you would treat human agents, human developers: giving very detailed prompts, doing multi-shot back-and-forth, fixing and clarifying. And one of the biggest underrated benefits of this whole agentic AI rollout so far is that it's forcing organizations to really understand their workflows and to be able to define specifications for things. I think that's an underrated skill. But if you develop that skill, you'll be in the camp that's getting these productivity gains.

Tom Soderstrom:
I think that's well said. Remember robotic process automation? That was the latest great thing at the time, but what people did wrong is they automated their old processes. And if you automate a bad process, you just have a bad process that runs faster. So it's what you said: rethink how work is done, and this is the opportunity to do so. It's quite amazing.

Jake Burns:
You rethink it. But I would also say, go back to how we were successful in the past without this technology. Take context, for example. Context engineering is trending right now for good reason, because context is what I'm talking about when I talk about the specification. That's context. The prompt that you give an LLM is context. The RAG retrieval data that you get back is context. The history of the conversation, if it's a chatbot, is context. So when you're developing enterprise-grade tools, giving the model the context it needs to do the job properly is the difference between success and failure. And oftentimes it's the difference between disillusionment, thinking these tools are all hype and can't produce anything because you got a bad result after giving them bad context, and being one of those statistics, which I think really undersell the impact, because I know people who are getting far more than 30% productivity gains.
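
To make that point concrete, here is a minimal sketch of what assembling context for a model call can look like: instructions in a system prompt, retrieved reference data (the RAG piece), and conversation history, all passed in together. The retrieve_docs and call_llm functions below are hypothetical placeholders, not any particular AWS or vendor API.

```python
from typing import Dict, List


def retrieve_docs(query: str) -> List[str]:
    """Placeholder for a RAG retrieval step (vector search, keyword search, etc.)."""
    return ["Refund policy: purchases may be returned within 30 days of delivery."]


def call_llm(messages: List[Dict[str, str]]) -> str:
    """Placeholder for the actual model call via whichever SDK you use."""
    return "<model response>"


def answer(question: str, history: List[Dict[str, str]]) -> str:
    docs = retrieve_docs(question)
    messages = [
        # 1. Instructions and guardrails: the system prompt is context.
        {"role": "system",
         "content": "You are a support assistant. Answer only from the "
                    "reference documents; say you don't know otherwise."},
        # 2. Retrieved data: the RAG results are context.
        {"role": "system", "content": "Reference documents:\n" + "\n".join(docs)},
        # 3. Conversation history: prior turns are context too.
        *history,
        # 4. The user's actual question.
        {"role": "user", "content": question},
    ]
    return call_llm(messages)


if __name__ == "__main__":
    print(answer("Can I return this after three weeks?", history=[]))
```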

Tom Soderstrom:
With all the customers I talk to across the world, I've seen that it's energizing the workforce. About a third are afraid that AI is going to take their work, and the other two-thirds are saying, “Man, I'm learning something new.” In fact, you can just use prompts to code something in Python and then look at the code. You can learn that way, and you can learn what quality code looks like. So I think one opportunity here for leaders who are watching this is to energize their workforce by just trying it. Be a technology teenager. Don't worry about perfection, just try it.

Jake Burns:
Well, I think you've also got to stick with the experiments longer, because you can't try something once and draw conclusions from one result that maybe wasn't what you were expecting. Again, you wouldn't hire a brand-new developer, give them vague instructions, and expect a beautiful result. Maybe if you get lucky, but it takes work with human agents as well. So it takes work to produce great quality outputs with these AI agents. Now, I would argue that it takes less work. The leverage is much higher, and the productivity you can get is much higher, but...

Tom Soderstrom:
It's a much faster iteration.

Jake Burns:
Absolutely.

Tom Soderstrom:
And that's really the key. I've talked to some of the people I used to work with when they were young programmers and asked them, “How are you using agentic AI and generative AI now?” And many of them say, “I use two of them. I give them the same prompts, the same code, and they disagree. And then I get something I can use,” just like we would with people. So your point about treating agents like people makes sense. And are people ever wrong? Yeah. Agents will be wrong too, but you just have to use your judgment.

Jake Burns:
People hallucinate, people misremember things, right?

Tom Soderstrom:
Exactly.

Jake Burns:
Especially if you don't give them access to the knowledge they need to do the job they have. I think it's about giving great instructions, like system prompts, to these agentic AI systems, but also giving them access to the data they need and giving them guardrails. And then, to your point, some really interesting patterns are emerging right now with these multi-agent systems, or swarms of agents, more cutting-edge technology where you give different agents different system prompts, different instructions, different personalities. I was listening to a podcast on the plane ride over here about a really interesting use case. A gentleman was giving system prompts to different agentic AI systems, giving them personalities with different political leanings and different socioeconomic situations, and then asking them what would be a good policy to implement for whatever the political issue was.

And it was interesting. You could get them all to give different answers. You can also have them have conversations with each other and reach consensus, maybe not 100% agreement, but find things that everyone could live with. Now of course, this is going to be an approximation. This isn't going to be the final output. Right now, there still needs to be a human in the loop for the most part. But you can get a lot of the way there in solving a lot of these problems. Again, though, you have to know how to use the tools, and you need to use them in a way that's going to give you the kind of quality output you're looking for.
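
As a rough illustration of the multi-agent pattern described here, the sketch below gives each agent a different persona via its system prompt, collects their independent answers, and then asks a moderator agent to look for points everyone could live with. The call_llm function is again a hypothetical placeholder, the personas are invented for the example, and a human would still review the output.

```python
from typing import Dict, List


def call_llm(messages: List[Dict[str, str]]) -> str:
    """Placeholder for a real model call."""
    return "<model response>"


# Invented personas; each one becomes a different system prompt.
PERSONAS = {
    "urban renter": "You are a budget-conscious renter in a large city.",
    "rural business owner": "You run a small business in a rural town.",
    "retiree": "You are a retiree living on a fixed income.",
}


def deliberate(question: str) -> str:
    # Each persona answers the same question from its own point of view.
    answers = {
        name: call_llm([
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ])
        for name, persona in PERSONAS.items()
    }
    # A moderator agent then looks for an option everyone could live with.
    summary = "\n".join(f"{name}: {text}" for name, text in answers.items())
    return call_llm([
        {"role": "system",
         "content": "You are a neutral moderator. Identify points of agreement "
                    "and propose an option all perspectives could accept."},
        {"role": "user", "content": f"Question: {question}\n\nPerspectives:\n{summary}"},
    ])


if __name__ == "__main__":
    print(deliberate("What would be a reasonable policy on downtown parking fees?"))
```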

Tom Soderstrom:
You talk about thousands of developers; now it's thousands of agents, agents that never get tired. I think we're going to have people managing thousands of agents. And AWS just released AgentCore, where you can spin up thousands of agents. That was really interesting. And Kiro, where you can just give it the spec and it comes back with the code. So it's a new way of working, and many are already getting there, but to your point, it's early going. And if they give up too soon, they're going to be left behind.

Jake Burns:
And just to wrap up this point, because I know we want to get to what's actionable, what can leaders do about this?

Tom Soderstrom:
Yeah. What can we do next?

Jake Burns:
But with Kiro, for example, you said give it a specification. And that's exactly what I'm talking about. If you go into a tool like that and say, “Write me a very cool app that does this feature and that feature,” and you don't give it the context it needs, it's likely to give you something you're not going to be happy with. The more effort you put into that specification, the better the output will be. And I think the people who are doing that are getting a disproportionate return on investment.
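
As an illustration of how much the specification matters, here is a made-up contrast between a vague prompt and a spec-style prompt; the app and its requirements are invented for the example and not tied to Kiro or any particular tool.

```python
# The same request, two ways. The vague version leaves the agent guessing;
# the spec-style version spells out scope, constraints, and acceptance criteria.
# Both prompts are invented for illustration.
VAGUE_PROMPT = "Write me a very cool expense-tracking app."

SPEC_PROMPT = """\
Build a command-line expense tracker.

Requirements:
- Store expenses (date, amount, category, note) in a local SQLite file.
- Commands: add, list, summarize-by-month.
- Validate that amount is a positive number; reject bad input with a clear error.
- Python 3.11+, standard library only, type hints throughout.

Acceptance criteria:
- `add 12.50 food "team lunch"` creates a row and prints the new record's ID.
- `summarize-by-month 2025-06` prints totals per category for June 2025.
- Unit tests cover input validation and the monthly summary.
"""
```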

Tom Soderstrom:
So what should leaders do about it? Number one is to create a culture of experimentation in their companies. Now, how do they do that? It's not about the technology. It's working backwards from the business outcome and how you can get there. And you iterate and iterate and iterate. One of the things that resonates with leaders is when you say the first version will be the most expensive and the worst, so set expectations accordingly. With generative AI, everybody can participate in that culture of experimentation. You don't have to be a programmer anymore, and that's very different. Now you get the perspective of the entire company. And I think it's about lowering the friction to experiment, psychological safety, and storytelling: when you actually solve a business problem, tell the story fast and acknowledge that it's imperfect.

Jake Burns:
Your point in one of your recent talks was that return on investment isn't always the best metric. So if return on investment is not the metric, what is?

Tom Soderstrom:
Return on investment conjures up this idea of a business plan. And if you're going to run an experiment in two weeks but the business plan takes four weeks, it's kind of a waste of time, because experimentation is itself a way of prioritizing the things that actually have a business outcome. So what we did before, and what I see customers do, is measure something else: return on attention. What on earth is that? If you create this experiment and you get the attention of the end users, they say, “This is good.” And if you get it from two different end-user communities, now you're getting their attention. Now you go for the investment, and you can measure it the way you normally do.

We say think big, start small, scale fast. Think big is the working backwards: what are we going to do? Start small with these experiments, and for the ones that matter, once you've got the return on attention, scale fast. And if you build them in the cloud, they will scale up and down automatically if you build them right, which lowers the risk.

Jake Burns:
Right, yeah. What I really took away from your talk at re:Invent last year on this topic was that a lot of organizations have it backwards. So what's the difference between the organizations that are very successful with these cutting-edge technologies, adopting agentic AI and getting good results, and the organizations that are still operating in a legacy manner?

Tom Soderstrom:
Well, they need to start with the business use case and experiment with the technology to see if they can solve it.

Jake Burns:
Exactly. Right. But you're not going to know what the solution is until you iterate many times, experimenting.

Tom Soderstrom:
Or which technology makes sense.

Jake Burns:
Exactly. And so I think that the legacy organizations are doing it backwards. They're saying, “We need to know what the technological solution is, what technology we need to buy, how much it's going to cost so we can calculate our return on investment before we even try.” And the problem with that is you don't get very many iterations doing it that way, and you're much less likely to come to the right or the optimal solution.

Tom Soderstrom:
And the reason for that was, when we both worked at enterprise companies, you had to buy up front all the tools you needed for all the people. Now it's pay by the drink, and it's very different. Pay only for what you use, and if you don't like it anymore, it's a two-way-door decision. My favorite Amazon concept.

Jake Burns:
Right. So applying this to agentic AI, how does return on attention apply in this context?

Tom Soderstrom:
It's about what business problem you need to solve. And especially if it involves partners, that's the really exciting thing here. Agents can now go across companies. Take a use case that involves data from other companies, solve it using agentic AI, and now you can move forward faster than any of your competitors can.

Jake Burns:
I see.

Tom Soderstrom:
It's not easy, but it's doable now at low cost. If it doesn't work, don't worry about it.

Jake Burns:
And I would say don't give up on these experiments too quickly.

Tom Soderstrom:
I think that's key.

Jake Burns:
Don't draw conclusions too quickly. Realize that there's a learning curve, and that the more time and effort you invest, the higher the return you're going to get, and I know I'm using that term, ROI, but...

Tom Soderstrom:
Eventually you measure ROI.

Jake Burns:
Right. But even just with these experiments, there's ROI that isn't in financial terms, it's productivity. Think of ROI as productivity.

Tom Soderstrom:
If I can give you another out-there thought like ROA: what does COE mean?

Jake Burns:
I don't know, center of excellence?

Tom Soderstrom:
It usually does. It shouldn't. It should mean center of engagement. Because with a center of excellence, a few people get really good, everybody else is left behind, and they start not liking the people who are very good because they think they're very good. So now you get a shadow center of excellence instead of a center of engagement. Get everybody to try this technology, learn by doing, pick a business problem, and have people from the business community, the IT community, and cybersecurity solve it in two weeks. All of a sudden, you're democratizing the adoption of this technology. And that, I think, is the key. That's how you move fast with agentic AI.

Jake Burns:
Realize that you're going to get productivity gains disproportionate to the effort you put into these tools. I think that's the conclusion most people are coming to who are really putting in the effort, really diving deep into these things. So there is a very big net positive, but it's also not a zero-effort situation. And I think that's ultimately going to determine which of these universes you live in. Are you going to live in the universe of disillusionment, of disappointment, of not really understanding these tools? I think eventually everyone will come around and it will become the new normal, but you'll be a little bit late. Or will you be among the early adopters and enjoy all the benefits of the productivity gains you can get with today's technology?

Tom Soderstrom:
And I think that's another lesson: when you try something new, don't start with the most important thing, because that has had years to be perfected. Start with something internal, something that saves time. If you can save 10% of the time by automating a manual process, that's across the whole company. Everybody gets 10% back to innovate, and you don't risk your reputation. So learn on the simple things and then apply it to the big things. I think the companies that have done that have been very successful.

Jake Burns:
All right. Any other lessons that can put them on the right side of history?

Tom Soderstrom:
I would say, think about teenagers. When we have problems at home, teenagers will solve them. They don't worry about all the details; they solve the problem and then figure out what they did. We need to become technology teenagers. So just like you're doing and I'm doing, experiment with agentic AI and generative AI hands-on. You don't have to be a PhD IT person anymore. You can actually try it at home with AWS tools or other tools. The point is to try it, but start with the business case.

Jake Burns:
Yeah, I agree. And it's interesting. I think the trend is heading toward the majority of people. In the United States, I read a report that about 40% of people are using AI tools just in their personal lives, to come up with recipes, stories for their children, things like that. That number will increase over time, and they're getting good at using these tools. They're getting more productivity the more they use them and the more expertise they gain.

Tom Soderstrom:
And I think for the companies out there, jumping into this creates a competitive advantage, because you can pivot very quickly. There's never been a better time to be in IT or to be a business leader, because you have all these tools at your disposal, if you use them.

Jake Burns:
If you use them.

Tom Soderstrom:
So which universe do you want to live in?

Jake Burns:
Well, I'm already living in it, and I'm trying to take as many people along for the ride as possible.

Tom Soderstrom:
Good. Awesome. Well thank you, Jake. It's always a pleasure.

Jake Burns:
Indeed. Thanks, Tom.

A large percentage of people are reporting massive productivity gains using agentic AI... At the same time, I see quite a few people online very frustrated with these tools... And there are enough people on both sides that it's not as if one side is clearly wrong.

Jake Burns, AWS Executive in Residence

Subscribe and listen

Listen to the episode on your favorite podcast platform.