Paul Vixie (11:10):
So, you have to look past the hype and say, “What will it be once it settles down?” And in some cases, we don't know. It's a pretty new technology and a lot of hardware and software is being crafted for it. And you never really know what the impact of a tool is going to be until it is used by somebody other than its maker. If you're going to use a wrench as a hammer, that's probably not what the wrench maker thought you were going to do, but it might work in some situations.
We haven't seen strong indications of what this will really make possible once the hype cycle dies down and we have something else that's grabbing the headlines. That having been said, we at Amazon have been doing research, development, and deployment of AI-based solutions for at least the last dozen years. And so, this was not a complete surprise to us.
We already have an example in the CodeWhisperer system of something that uses generative AI techniques but doesn't look anything like what's been grabbing the headlines. I see that happening on all sorts of systems. For example, when you're doing anomaly detection, you're looking at telemetry flows from your system, and you're looking at either events that indicate maybe something is going wrong or events that may indicate somebody is attacking you. It's going to be possible to cross-correlate those better now that we have this technology. And again, I feel like we've barely seen 1% of what will be possible.
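To make the cross-correlation idea concrete, here is a minimal sketch, not anything Vixie describes as an actual AWS implementation: it buckets two hypothetical telemetry streams, operational anomalies and security alerts, by time window and resource so that overlapping events can be looked at together (or handed to an LLM-based triage step). All field names and event shapes are assumptions for illustration.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def bucket(events):
    """Group events by (5-minute window start, resource) for cheap correlation."""
    grouped = defaultdict(list)
    for e in events:
        ts = datetime.fromisoformat(e["time"])
        window_start = ts - timedelta(minutes=ts.minute % 5,
                                      seconds=ts.second,
                                      microseconds=ts.microsecond)
        grouped[(window_start, e["resource"])].append(e)
    return grouped

def cross_correlate(ops_events, sec_events):
    """Yield (window, resource, ops, sec) where both streams fired together."""
    ops, sec = bucket(ops_events), bucket(sec_events)
    for window, resource in ops.keys() & sec.keys():
        yield window, resource, ops[(window, resource)], sec[(window, resource)]

# Hypothetical events: an error-rate anomaly and a failed-login burst on the
# same host in the same five-minute window are worth one joint look.
ops = [{"time": "2024-05-01T10:03:00", "resource": "web-1", "signal": "error_rate_spike"}]
sec = [{"time": "2024-05-01T10:04:30", "resource": "web-1", "signal": "failed_login_burst"}]
for window, resource, o, s in cross_correlate(ops, sec):
    print(window, resource, [e["signal"] for e in o + s])
```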
So, while on the one hand I despise the hype cycle and I wish we could just be serious from the get-go, I also understand there is some real merit here. I'm working with some teams inside of AWS Security who are trying to answer that exact question: “What can we do to better serve our customers now that this is generally available and generally understood?”
Clarke Rodgers (13:14):
And then, from a technology perspective, sort of help that human security practitioner with a lot of the grunt work using generative AI tooling?
Paul Vixie (13:25):
Yes, and I don't mean this to be a product plug, but Amazon's biggest success with our cloud has always been the workflows we enable our customers to adopt and build. And so, one of the first things that we did in the large language model space was Bedrock. The idea is if you want to use a large language model, do you also want to pay the training cost? Do you want to have to build the model?
Because that can take thousands of hours, or tens of thousands of hours, of very expensive compute time. Instead, there are various pre-built models, sort of on a menu, and you get to pick which ones you want, but you don't have to pay to copy them to your own system. You can simply put logic in your VPC, or whatever it is you're doing in our cloud environment, that has direct access to APIs for the models you've subscribed to.
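As a rough illustration of that model-on-a-menu idea, here is a minimal sketch, assuming Python with boto3 1.28 or later (which includes the Bedrock runtime client) and an account that has been granted access to a pre-built foundation model; the model ID, prompt, and region below are placeholders, not details from the conversation.

```python
import json

import boto3

# The client runs wherever your application logic lives (e.g. inside your VPC);
# no model weights are trained or copied to your own systems.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Request body for a Titan text model; the prompt is purely illustrative.
body = json.dumps({
    "inputText": "Summarize the last hour of unusual API activity.",
    "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.2},
})

# Invoke a pre-built model from the catalog by ID instead of building your own.
response = bedrock.invoke_model(
    modelId="amazon.titan-text-express-v1",  # example model ID; pick from the menu
    contentType="application/json",
    accept="application/json",
    body=body,
)

print(json.loads(response["body"].read())["results"][0]["outputText"])
```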
And so the original premise of the cloud, which I didn't know at the time and have had to learn since coming here, turned out to be that having an elastic amount of compute, as much as you really need, next to an elastic amount of storage, again as much as you really need, with no access penalty, is how we got big. And now we've just replicated that inside of generative AI, so that people who are maybe very ambitious in their own segment of the market can do with our cloud and LLMs what they've always done with our cloud without LLMs. We love that. I love that because the real power of this will turn out to be what our customers do with it.
Clarke Rodgers (15:13):
And customers bring the built-in trust from all the security tooling they've used for years, and from other aspects of the platform, which now applies to tools like Bedrock and whatever else may be coming down the road.
Well, Paul, thank you so much for joining me today.
Paul Vixie (15:26):
It has been great. Thanks again for having me.