Danielle Ruderman (02:14):
I think the third thing that's been very interesting is the promise of AI to help even mature security professionals lean into what they have to do. Imagine you're analyzing logs and you've learned to code in a certain language, maybe Python, but now you need to write SQL queries to do that analysis. The barrier there is having to learn these different languages to extract the data. You know where the data is, but you have to learn how to pull it out with different code. Imagine if you could just tell the AI what you want to do and what data you want to pull together for your analysis, without having to learn all these different esoteric coding languages.
There's a lot of power there that can really speed up investigations and help everyone, from our junior security professionals to our most seasoned professionals, do their jobs faster.
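To make that workflow concrete, here is a minimal sketch of the pattern Danielle is describing: an analyst asks a question in plain English and a model drafts the SQL. Everything here is an illustrative assumption rather than anything from the conversation: the OpenAI Python client, the model name, and the auth_logs schema are all stand-ins for whatever tooling and data a team actually has.

```python
# Sketch of "describe the analysis in English, let the model write the SQL."
# Assumptions (not from the transcript): the OpenAI Python client, a
# "gpt-4o" model, and an illustrative SQLite table of authentication logs.
import sqlite3

from openai import OpenAI

SCHEMA = """
CREATE TABLE auth_logs (
    ts         TEXT,    -- ISO-8601 timestamp
    source_ip  TEXT,
    username   TEXT,
    succeeded  INTEGER  -- 1 = success, 0 = failure
);
"""


def nl_to_sql(question: str, schema: str) -> str:
    """Ask the model to translate a plain-English question into one SQL query."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Translate the analyst's question into a single SQLite "
                    f"query for this schema:\n{schema}\n"
                    "Return only the SQL, with no explanation or markdown."
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content.strip()


if __name__ == "__main__":
    # Small in-memory dataset so the sketch is self-contained.
    db = sqlite3.connect(":memory:")
    db.executescript(SCHEMA)
    db.executemany(
        "INSERT INTO auth_logs VALUES (?, ?, ?, ?)",
        [
            ("2024-05-01T03:12:00", "203.0.113.7", "admin", 0),
            ("2024-05-01T03:12:30", "203.0.113.7", "admin", 0),
            ("2024-05-01T09:01:00", "198.51.100.2", "dthomas", 1),
        ],
    )

    sql = nl_to_sql("Which source IPs have the most failed logins?", SCHEMA)
    print(sql)
    # In a real pipeline an analyst would review the generated query
    # before executing it against production data.
    for row in db.execute(sql):
        print(row)
```

The analyst never has to hand-write the GROUP BY or remember SQLite's dialect; the query language becomes an implementation detail behind the plain-English question, which is the barrier reduction Danielle is pointing at.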
Chris Rothe (02:58):
The big challenge in security, ultimately, is that there aren't enough people to go around. And so that's why the work AWS does to make the platform and the services more secure, inch by inch, mile by mile, is so important.
Generally speaking, we want everyone across the Red Canary team using generative AI in a way that makes sense for their roles. Whether you're a salesperson who's just had a great call with a customer and you need to put together a follow-up email, let's make that faster and make the quality of that communication better. Because ultimately, that's better for the customer and better for you, because it took you five minutes instead of maybe an hour.
So that's been our approach: make sure everyone can use it in a safe way. But I think we're early in learning what the pitfalls and challenges are. What kinds of legal questions are going to come up over the next several years as it relates to generative AI?