Clarke Rodgers:
I love it. With the speed that technology is advancing, and of course with the generative AI tools that are out there, what do you see as that future SOC? I don't know if you can speak to whether you're using any generative AI tools today, or plan to, or are investigating them, whatever the case may be, but how do you see that helping your SOC analysts and the other roles as well? And then from the attacker side, how are you thinking about how they may be using it, so that you can either detect their activities or react to them?
Tom Avant:
We're starting to use it to create automated responses for some of our customers. We're also looking at automated workflows, so we can say, "Okay, we know these are common workflows that come in. Our metrics tell us these are things customers are asking for a lot. How do we take what the data is telling us and route those requests more directly to the solutions they're looking for? And where they don't require human judgment, why don't we remove the human from that chain completely?" That's what we're working on right now.
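The kind of routing Tom describes could look something like the minimal Python sketch below. The request categories, the handlers, and the needs_judgment flag are all hypothetical illustrations of the idea, not a description of any actual system.

```python
# Minimal sketch (hypothetical names): route incoming requests to an
# automated handler when they match a known, low-judgment workflow,
# and escalate everything else to a human analyst.
from dataclasses import dataclass


@dataclass
class Request:
    category: str         # e.g. "password_reset", "phishing_report"
    needs_judgment: bool  # true when context or intent is ambiguous

# Workflows the metrics show up frequently and that are safe to automate.
AUTOMATED_WORKFLOWS = {
    "password_reset": lambda req: "reset link issued",
    "access_revocation": lambda req: "stale credentials revoked",
}


def route(request: Request) -> str:
    handler = AUTOMATED_WORKFLOWS.get(request.category)
    if handler and not request.needs_judgment:
        return f"automated: {handler(request)}"
    return "escalated: queued for a human analyst"


if __name__ == "__main__":
    print(route(Request("password_reset", needs_judgment=False)))
    print(route(Request("phishing_report", needs_judgment=True)))
```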
Clarke Rodgers:
That's fantastic. And then from the adversary side?
Tom Avant:
The threat side's a really interesting one. It's such a new playing field. You're hearing so many new things about injections into... People want to play with the technology, and they're just running out to all these websites and downloading things. Half the time they don't even know what they're downloading. You don't want people who are going to run into the fire. You want to assess the fire first and look for the best point of entry.
So, it's the same thing when we're talking about gen AI. What are the safe places to go? How do we make sure we validate that usage before we incorporate it? What are the different checks we can run in the background so we can say, "Yeah, we feel really good about what we're doing," before we proliferate this? Because once it's in and it starts to propagate, that's not the time to find out that, uh-oh, you did something wrong, because now you're doing cleanup and trying to catch up to the propagation. And that's just not fun, for those of us who've done it before for other things. So you definitely want to do those pre-checks before you even bring things in.
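The pre-adoption checks Tom describes could take a shape like this minimal Python sketch. The specific checks, the allowlist, and the ToolCandidate fields are hypothetical, assumed for illustration only.

```python
# Minimal sketch (hypothetical checks and names): gate a generative AI tool
# behind a set of pre-adoption checks so problems surface before the tool
# propagates across the environment.
from dataclasses import dataclass, field


@dataclass
class ToolCandidate:
    name: str
    source_domain: str
    data_handling_reviewed: bool
    security_scan_passed: bool
    findings: list = field(default_factory=list)

# Example allowlist of vetted download sources.
APPROVED_SOURCES = {"pypi.org", "github.com/verified-vendor"}


def pre_adoption_checks(tool: ToolCandidate) -> bool:
    """Record any failed checks and approve only when none are found."""
    if tool.source_domain not in APPROVED_SOURCES:
        tool.findings.append("source not on the approved list")
    if not tool.data_handling_reviewed:
        tool.findings.append("data handling review missing")
    if not tool.security_scan_passed:
        tool.findings.append("security scan failed or not run")
    return not tool.findings


if __name__ == "__main__":
    candidate = ToolCandidate(
        name="example-assistant",
        source_domain="random-download-site.example",
        data_handling_reviewed=False,
        security_scan_passed=True,
    )
    if pre_adoption_checks(candidate):
        print(f"{candidate.name}: approved for rollout")
    else:
        print(f"{candidate.name}: blocked -> {', '.join(candidate.findings)}")
```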
And I think another threat tied to that, one we're starting to see and probably an uncommon one, is regulation. It's one of the biggest trends I'm seeing as we move more and more workloads to the cloud and more and more customers come to the cloud. We go into more environments and more countries, and we're seeing sovereign cloud pop up in more and more locations. Regulation is something you actually have to think about. Before, it was an afterthought; now it's at the forefront of a lot of our discussions. Before we go in and think about anything else, the question is: how are we able to adopt, comply, and still operate and deliver maximum value for the customer, while staying in compliance and being able to communicate that?
Clarke Rodgers:
I think that's an incredible trend I've seen as well. It used to be that the conversation was about security by design, right? Build security in, maybe even in the prototyping and ideation stages. And now we're at the point where it's, "Oh, yes, and privacy and compliance and regulatory obligations as well."
Tom Avant:
Absolutely.
Clarke Rodgers:
I'm glad you're seeing that, that people are pushing it further down into the stack, so that when it comes to release time, you're actually aligned with those obligations.
Tom Avant:
Absolutely.
Clarke Rodgers:
Well, this has been fantastic, Tom. I really appreciate your time today. Thank you.
Tom Avant:
Thank you so much for having me. I really appreciate it as well.