Clarke Rodgers:
Welcome to the Executive Insights podcast, brought to you by AWS. I'm Clarke Rodgers, Director of Enterprise Strategy, and I'll be your guide through a series of conversations with security leaders.
Today's guest is the very fashionable Hart Rossman, Vice President of Global Security Services at AWS. Join us for our discussion on emotional intelligence, compliance, engineering, and incident response. Enjoy.
Hart, thank you so much for joining me.
Hart Rossman:
Oh, thanks for having me, Clarke.
Clarke Rodgers:
It's been a couple of years. You were one of our first guests on Conversations with Security Leaders. A lot of things have happened with your career. Please get us up to date.
Hart Rossman:
Well, I mean, first of all, very clearly, I've leveled up my wardrobe.
Clarke Rodgers:
You have indeed.
Hart Rossman:
I've come a long way from layered t-shirts back in 2022. Since then, we've also formed the Global Services organization, which is really how we've brought together all of our field delivery services across AWS to better accelerate customer outcomes.
And in the process, we created a dedicated security organization, which I look after, called, very imaginatively, Global Services Security.
Clarke Rodgers:
Has a nice ring to it. And what does that encompass? If I'm a customer, what am I coming to you for?
Hart Rossman:
Yeah, so we do two things. First and foremost, we're focused on helping customers build, deploy, and operate securely in AWS and also build, deploy, and operate security solutions in AWS. So that's everything from compliance, engineering, incident response, threat detection, cryptography, identity, whatever it might be. Then the other thing we do is we look after security internally in the field at AWS. And so we help Amazonians who are looking to raise the bar for security in the field.
Clarke Rodgers:
Oh, interesting. So what would that look like? When you say an Amazonian in the field, is that at a customer site, or is it an Amazonian building a service and then you help them-
Hart Rossman:
Actually, it's both. Right?
So from what we call engagement security, we might help solution architects, our salespeople, a consultant in the field get the right security outcome for their customer and do it in the right way to protect both the customer and the employee. Then the other thing we'll do is we'll help our builders build faster at a higher security bar. So a good example of that is that gen AI has become a bit popular-
Clarke Rodgers:
I've heard.
Hart Rossman:
...over the last 18 months or so. And we wanted to make sure that across the field, our solutions architects, our cloud support engineers, and our consultants could quickly and securely build really innovative gen AI solutions. So we collaborated internally with AWS Security and our service teams to essentially create these golden paths that allow our builders to innovate quickly, effectively, efficiently, and securely with this new technology.
Clarke Rodgers:
That's very cool. I meet with a lot of CISOs. You meet with a lot of CISOs. I've seen a general trend over the last 18 months of going from, "How do I secure gen AI inside of my environment?" So maybe I'm buying a third-party tool, maybe I'm using Bedrock, whatever the case may be; how do I make that as secure as possible? And then transitioning to, "How do I use gen AI tooling for a security outcome?"
Hart Rossman:
Yeah.
Clarke Rodgers:
Are you seeing that? And if so, how is your organization helping customers?
Hart Rossman:
Yeah, we absolutely are. I think there's a couple of interesting angles to that. First is that, with a lot of these things, it's difficult to protect what you don't understand. So step one is just encouraging these security-minded organizations to use the technology. Get comfortable with it, kick the tires. Have some frivolous use cases that are valuable, like building a recipe book out of it, or a recipe chatbot, whatever it might be.
Clarke Rodgers:
Just to get comfortable with it.
Hart Rossman:
Just to get comfortable with it, and just sort of think deeply about the types of use cases and the types of data that you might apply. At AWS, we've published the Generative AI Security Scoping Matrix, which helps you think through, in a very disciplined way, what the right outcomes are from a business and security standpoint, and then allows you to apply the right controls and technology around that. That's kind of one element of it.
The other is, as you're pointing out, how do we use the technology specifically to get good security outcomes? And when this was becoming popular, the AWS Customer Incident Response Team (CIRT) was looking around to sort of understand how we could best help customers in this space.
One of the things we quickly keyed in on is that there was a lot of information out there about how to pen test, red team, or do AppSec reviews of LLMs and gen AI. There really wasn't any publicly available information on how you do incident response if an LLM was involved, or if an LLM might even be part of the cause of the issue. So we dug really deep. We did some experimentation, and we developed some runbooks and playbooks for it.
Because we think it's valuable for customers, we published it. We made these automated runbooks and playbooks available, and we published a methodology for responding to security issues when gen AI might be involved. We've gotten really, really great feedback from customers about that. And so then we thought, well, we ought to be using this more ourselves. So we've worked with a couple of teams across AWS Security, and we have an internally built security responder chatbot, essentially. What it allows our responders to do, when they have an inbound ticket, is ask this chatbot questions that help them prioritize, help them triage, and help them discover resolution paths or courses of action much quicker than if they had to follow a traditional investigative workflow.
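For illustration, here is a minimal sketch (in Python, using boto3) of the kind of automated first step such a runbook might take: pulling recent Amazon Bedrock API activity from CloudTrail so a responder can see who invoked which model APIs during the window under investigation. This is a hypothetical example under stated assumptions, not the published AWS playbooks; the 24-hour window is an assumption.

    import boto3
    from datetime import datetime, timedelta, timezone

    # Hypothetical first triage step from a gen AI incident response runbook:
    # list recent CloudTrail events for Amazon Bedrock to see who called
    # which model APIs during the window under investigation.
    cloudtrail = boto3.client("cloudtrail")

    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=24)  # investigation window (assumption)

    paginator = cloudtrail.get_paginator("lookup_events")
    pages = paginator.paginate(
        LookupAttributes=[{
            "AttributeKey": "EventSource",
            "AttributeValue": "bedrock.amazonaws.com",
        }],
        StartTime=start,
        EndTime=end,
    )

    for page in pages:
        for event in page["Events"]:
            # Surface the caller and API action so the responder can prioritize.
            print(event["EventTime"], event.get("Username", "?"), event["EventName"])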
Clarke Rodgers:
Can you share an example of what they might ask the chatbot?
Hart Rossman:
It sort of depends on the nature of the investigation, but I will share one of the most interesting things. When we put it out there in production, we thought these folks are battle-hardened veterans. They're going to ask it a question, treat it a little bit like a Tinkertoy, and then move on. It turns out that, on average, they ask 11 questions of the bot.
One of the reasons for that is that, early in their use, they were able to get to an effective course of action so quickly that they feel it's a good use of their time to work with the AI rather than pursuing a traditional investigative approach. Now that we've seen that, one of the things we really want to do is say, "Okay, well, can we essentially guarantee that they get the right answer in three to five questions instead of maybe as many as 11?"
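As a rough illustration of the kind of responder assistant Hart describes, here is a minimal sketch in Python using Amazon Bedrock's Converse API. The model ID, system prompt, and ticket text are all assumptions made up for the example; this is not the internal AWS tool.

    import boto3

    # Hypothetical security responder assistant: the responder pastes an
    # inbound ticket, then asks follow-up questions; the message list keeps
    # the conversation context across turns.
    bedrock = boto3.client("bedrock-runtime")

    SYSTEM = [{"text": "You are a security incident triage assistant. "
                       "Suggest a priority, likely causes, and next actions."}]
    messages = []

    def ask(question):
        messages.append({"role": "user", "content": [{"text": question}]})
        response = bedrock.converse(
            modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumption
            system=SYSTEM,
            messages=messages,
        )
        reply = response["output"]["message"]
        messages.append(reply)  # keep the assistant turn for context
        return reply["content"][0]["text"]

    print(ask("Inbound ticket: unusual spike in Bedrock InvokeModel calls "
              "from a build account at 02:00 UTC. How should I prioritize?"))
    print(ask("Which logs should I pull first to confirm or rule out misuse?"))

Carrying the full message history into each call is what would let a responder treat the 11 (or, ideally, three to five) follow-up questions as one continuous investigation rather than isolated prompts.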