How Security Teams Can Leverage Generative AI to Strengthen Security and Provide Contextual Guidance
Clicker works. Yeah, there we go. Hi, I'm Fritz Councilor. I'm a principal security consultant, and I've been with AWS for about 7.5 years now. A few years ago I started, with some colleagues, doing experiments around how we might apply data science and analytics to help customers strengthen their identity and access management. That ultimately led to work at the intersection of generative AI and security, which has been my focus for about a year now, and I'm really excited to talk to you about that today.

I'm Anna Macie. I am a senior security specialist solutions architect. I've been at AWS about 3.5 years; before I joined, I worked as an AWS customer in a variety of security roles, from incident response to red teaming, that kind of thing. Recently I've been focusing on generative AI and how our customers can use it to really make security better and, hopefully, easier. Looking forward to speaking with you today.

I'm Marshall Jones. I've been at AWS roughly 4.5 years in a couple of different roles: professional services as a security consultant, and for the last 2.5 years as a security SA. My background is in threat detection and incident response as well, and I essentially spend most of my time now helping customers adopt the native AWS security services and build and operationalize them.

Cool. So, whatever your job, the fact that you are here with us today thinking about this topic means that you are a security leader. And leaders recognize that generative AI, like cloud, is a powerful technology that is going to change the way we work, and in some cases is already changing the way we work. With that in mind,
it's important for security leaders to ask: how can I minimize the fear, uncertainty, and doubt around this technology, and use generative AI to drive better, faster, safer security outcomes for my organization? We hope to help you answer that question here today. Now, generative AI and security are obviously both very broad topics, so we're going to start with an explanation of the specific opportunities and challenges that we'll cover in our talk today. Then we'll walk you through the use case for our virtual security assistant and do a quick demo. Next we'll present our solution architecture, followed by some detailed demos where we really get into the nitty-gritty of it, and then we'll share some resources that you can use to do this yourself. And if we're lucky and get the timing right, we'll close out with Q&A.

But first, the bottom line. We're going to share some really cool and interesting things here in just a minute, but if you only remember one thing from this talk today, I hope it'll be what's on this slide. In our opinion, AI is unlikely to replace skilled security experts anytime soon, but you do want to get smart on this technology, because individuals and teams that make effective use of generative AI will outperform. In other words, AI is probably not going to take your job anytime soon, but someone who is making better use of AI than you? Well, they might.

AWS services like Amazon Bedrock and Amazon Kendra are service experiences that make it easy to get started and to keep your data private and secure. No customer data is used in the training of the underlying models in Bedrock, and your data is encrypted and secure; it does not leave your VPC.
Additionally, you can deploy, integrate, and secure your generative AI application using familiar AWS tools and capabilities like the AWS CDK, PrivateLink, AWS Identity and Access Management, KMS, WAF, and more. This technology, generative AI, is not magic, and you don't need a PhD to build with it. In fact, as a general-purpose technology, generative AI is much easier to use than many of the earlier specialized ML models. And with Bedrock, using your choice of foundation model or large language model, it's just another API call. You can get up and running in a matter of hours, and we're going to show you how.

Awesome. So I want to get started by thinking about how much there is to generative AI and security, and clarify what we're going to cover today. First and foremost, there's security of generative AI: the applications that actually use it. This could be things like the controls you have in place around public AI applications, or just exercising your opt-out mechanisms; these are your risk-management-type things. Next, we have security from generative AI threats. If you're not familiar, OWASP actually recently released a new Top 10 that goes through threats like prompt injection and training data poisoning, which is a super interesting topic. And what we're actually here to focus on today is this last item. There's definitely a lot to those first two, but I wanted to clarify that we're focusing on this last aspect: how can you use this emerging technology to make your lives easier? I suspect if you're here, maybe you're a developer or a security professional who's interested in exactly that, so we really want to focus on that aspect today. So how can you actually do it? We believe there are a number of ways you can strengthen security with generative AI.
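To make the "just another API call" point concrete, here is a minimal sketch of invoking a model through the Bedrock runtime with boto3. This is illustration, not the talk's demo code: the helper function and prompt are ours, and the request body shown follows the legacy Anthropic Claude text-completion format, so check the Bedrock documentation for the exact body your chosen model expects.

```python
import json

def build_claude_request(question: str, max_tokens: int = 300) -> str:
    """Build a JSON request body in the legacy Anthropic Claude
    text-completion format (an assumption; verify against the
    Bedrock docs for your model)."""
    return json.dumps({
        "prompt": f"\n\nHuman: {question}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    })

# The actual invocation is a single API call. It needs AWS credentials
# and Bedrock model access, so it is shown here only as a comment:
#
#   import boto3
#   bedrock = boto3.client("bedrock-runtime")
#   resp = bedrock.invoke_model(
#       modelId="anthropic.claude-v2",
#       body=build_claude_request("Can you help me with an IAM strategy?"),
#   )
#   print(json.loads(resp["body"].read())["completion"])

body = json.loads(build_claude_request("What are IAM best practices?"))
print(body["max_tokens_to_sample"])  # 300
```

That one call, plus the serverless plumbing around it, is essentially the whole integration surface.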
To start: automating mundane tasks. If you're like me, one of the boring parts of your job is reporting, or summarizing information, and there's a lot of opportunity for security professionals to automate those types of tasks. Next: expediting your decisions and operations. Say you get a security finding, whether it's a threat detection (we think something is going on in your environment) or a configuration risk. You may want to have a playbook in place for it. Well, how do you write those? You may say, we're a group of five people, I don't have time to write the playbooks. This could be an opportunity: given this finding, how would you respond? You could actually use generative AI in that space to write, and even run, those playbooks. And lastly: increasing focus on interesting topics. Hopefully, after this, you can take some of those mundane tasks, like reporting, off your plate, so your security SMEs can really focus on novel, interesting, high-value problems.

So, hopefully you're excited about how you can use generative AI. We definitely want to explain some of the potential pitfalls too, and then we'll also go through how to mitigate them. I want to start by saying we're talking here about a public AI chatbot service, nothing custom. If you were looking for security advice from one, what would that look like? What are some of the risks? Say I'm a developer at a company, I use AWS, and I need help: how do I secure my application? Well, if you go to some sort of public chatbot, it may say, OK Anna, I need your company's security policy; that would help me actually provide guidance.
And if you're a customer, what you might say is: as a company, I don't want to provide that information to a public chatbot service, which is a totally fair response. So what might the public chatbot do then? It may respond with an assumption or speculation. This is a pitfall of using a public service without the proper layering on top of it. Again, I still feel there's a lot of opportunity in this space; we just want to recognize what could happen.

So I want to dive a bit deeper into these considerations. To start, if you're not familiar with hallucinations: this means a model is providing a response, but the response is simply incorrect. Here's one thing I personally do: I'll say, I want a travel itinerary; I'm a busy person, and I would love for generative AI to figure that out for me. It's not the end of the world there if it's wrong. But there obviously are spaces where you want to make sure the output is correct, so we need to consider hallucinations and think through how they affect things, and we'll go through keeping humans in the loop. Next is data privacy. We're going to go through a RAG architecture (I know we're just introducing RAG here; we'll dive deeper into it later): how do you keep sensitive data out? You want to ensure that not just anyone can ask, say, "what's the biggest risk in my environment?"; you want to make sure that data is locked down. And lastly is quality control. This is still a human-in-the-loop operation: given the current state of generative AI, a human in the loop must review these outputs to make sure they are high quality, and validate that the output is sane and matches the source data.
That's an important aspect of this. All right, so let's start with a scenario. Anna, you talk to many people as a security SA; what's one of the common scenarios that you hear about?

Yeah, there are a lot of different scenarios, but the one we want to go through is: I need an IAM strategy. My IAM is just not scaling well, and it's becoming a blocker; it's slowing us down. What do I do? How would you respond to that from an AWS security consultant perspective?

Yep. Fritz, you've got an IAM background. What's probably your most natural response when a customer says that to you?

Well, first I would say this is a very common scenario, particularly where adoption started organically and then scaled out in a hurry. What we'd say is: first, let's take a look at your delegation model and your workflows, and then I can work with you to develop a new IAM strategy, and that's something that could take around six weeks.

And my response would be, as you would probably guess: why does it take that long? I need to get this done; I need an IAM strategy tomorrow. What's the blocker?

It's not so much a blocker. The fact is that a lot of analysis needs to be done in these situations. We need to understand the current state of your environment, all of your organizational security policies, and the associated documentation. Then we might need to go talk to some teams and have them show us how they've implemented things in their individual AWS accounts, so that we can create a roadmap that's prioritized appropriately and actually feasible to implement. And the bottom line is, all of that takes time. Great.
So, with that scenario in mind, we're now going to have a similar conversation, except with the generative AI chatbot that we experimented with and built for this talk, and see what that conversation looks like. One thing: I promise we're going to get into the exact details of how we did this further into the talk, so bear with me.

All right. We're essentially going to start by asking the exact same question: my team's approach to IAM isn't scaling well and it's slowing us down; can you help me with an IAM strategy? We ask this pretty basic question and see what it gives us. It starts generating some suggestions, and as you can see, it's bringing some things back: use roles instead of users whenever possible (sounds like a good idea); use groups to assign permissions to sets of users with common access needs; leverage roles to define permissions based on job function, like developer and analyst; automate provisioning and deprovisioning; use temporary credentials; monitor activity with CloudTrail; regularly review role definitions, policies, and resource permissions. And then it also says the key is starting with a least-privilege approach and leveraging automation. So if you've got a little bit of background, that sounds like pretty good advice so far.

Based on that, we'll probably have more questions, right? We're going to want to ask a follow-up and see what other information we can get. As we go into this next question, we're going to look for a little more context that's specific to our environment, building on the suggestions it gave us before and sticking with that strategy of least privilege. So now we follow up with: based on my strategy above, are there any security findings in my environment?
So: are there any security findings, in this demo environment that we built, that we should prioritize based on the suggestions given above? It starts to provide some information back. Based on the information provided, it seems the top priority would be to address the finding that an IAM policy allows full administrative privileges. The recommendation suggests following the principle of least privilege, once again staying with the theme we saw in the suggestions above. And if we look at the third and fourth columns, it gets a lot more specific about our environment. It says that addressing this finding by reviewing and restricting the demo-everything IAM policy to only grant required permissions would be the top priority. The other findings, related to ensuring IAM policies don't allow full admin access, are also important, but the demo-everything policy seems to be the highest risk right now based on the information provided. So now you can see we got some general suggestions, but we were also able to use this to get context that's specific to our environment, so we can go take action and improve the security posture of what we built here.

As we get into this a little more: we wanted to establish some tenets as we were building out this experiment and this demo. If you're not familiar with tenets, tenets are fundamental at AWS. Every team and every project has a set of tenets, and they rely on them to make tough decisions. If I'm working on a project and I'm trying to decide whether to go one way or another, I can always look back to the tenets and ask: does this align with the guidelines we've set out for this project or for our team? Does this align with what I should be doing? And then I make my decision based on that.
So, the tenets we built for this application. First, we use responses as inputs to a process, not as authoritative guidance. We should look at these outputs, also consider the other context that we know about as humans, and use them as inputs to the decisions we're going to make. That rolls into the second tenet: keep a human security expert in the loop. The security experts at your company (a lot of you are probably in this crowd) have an understanding of the people, processes, and technology, and the benefits or constraints that exist at your organization, and it's important to bring that understanding when you're getting outputs, context, or suggestions from a generative AI application. Third: protect and verify the knowledge-base content used in RAG providers. We're definitely going to dive a lot deeper into RAG providers and the benefits they bring, but essentially, think of this as garbage in, garbage out. You want to make sure you're providing context that's going to be helpful to your use case and what you're trying to accomplish. If you're feeding it any and all context, you might not get the best results, so keeping your context clean, and understanding what context you're giving it, is important. And fourth: devise automated and manual tests to continuously validate and improve answers over time. Think about the progression of building out a generative AI application: over time, you're going to want to give it more context, maybe internal policy documents or other public documents that provide great context for your different use cases.
You're going to want to continue to validate, through manual or automated tests, that the added context is helping drive better answers, better suggestions, and better context in this application, and not degrading anything you've built so far.

All right, before we get into the technical architecture, I want to return for a moment to a point that Anna made earlier. This is similar to a previous slide. We got really excited when we saw the first results out of this experiment we were doing (and to be clear, this is an experiment; we're not talking about a full-blown service here). The results actually looked really good. But without access to proprietary information, a large language model can only respond based on its training data, and sometimes that might be really good and convincing information. In fact, the capabilities of pretrained LLMs like the one you saw in the demo today, which is Anthropic's Claude v2 model, are truly amazing. But for a security application like ours, we cannot assume that the model's response is always going to be appropriate and accurate. That could be for several different reasons, but the one we're particularly concerned about is hallucinations, and all large language models are prone to this problem. That's basically where the model responds with erroneous information in an authoritative way. If the output is not reviewed by a subject matter expert who can really tell the difference, you can end up in a situation where bad guidance is given and used. And if that guidance is being used to secure your environment, it could obviously lead to bad outcomes, especially over time. So there are several techniques that we can use to minimize that risk.
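The fourth tenet above, automated tests to validate answers over time, can be sketched as a simple regression check over generated answers. This is a minimal illustration under our own assumptions, not part of the talk's demo: the rule lists and the sample answer are hypothetical.

```python
def check_answer(answer: str, must_mention: list[str], forbidden: list[str]) -> list[str]:
    """Return a list of rule violations for a generated answer.
    An empty list means the answer passed this (very basic) check."""
    text = answer.lower()
    problems = []
    for term in must_mention:
        if term.lower() not in text:
            problems.append(f"missing expected term: {term}")
    for term in forbidden:
        if term.lower() in text:
            problems.append(f"contains forbidden term: {term}")
    return problems

# Hypothetical answer, as might come back from the assistant:
answer = "Use IAM roles with temporary credentials and apply least privilege."
print(check_answer(answer, ["least privilege", "roles"], ["root access keys"]))
# []
```

Checks like this can run on every knowledge-base update, so a new document that degrades answers is caught before a human ever relies on them.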
And it's not just about the risk of hallucinations; there are other factors too. But the several techniques we can use here are all basically about doing the same thing: providing context to the model along with the input we're giving it in the form of prompts, in order to ensure that the answers and information we get back are specific to our environments, our organizations, and our use cases.

Prompt engineering is the basic technique. I spent a bunch of time trying to figure out how best to describe this, but long story short, it's about iterating on the questions or inputs you provide until you've optimized them to the point that you get the best answers back. One way to think about this: if you're a native English speaker like I am, at some point in elementary school an English teacher answered your "Can I go outside? Can I go to the bathroom?" with "Well, yes, you can." Likewise, I can ask the model, "Can you write me an IAM strategy?" and its answer could just be "Yes." That's not really what I'm looking for. So that's how I like to think about prompt engineering: really getting your inputs optimized so you get the answers you want.

Now, a level above that is RAG, or Retrieval Augmented Generation, which we've mentioned several times. That's one way to customize the outputs from a general-purpose model to perform domain-specific and business-differentiating functions, in our case security ones. And the thing about RAG is that using that method, you can achieve those things at a tiny fraction of the cost and time required to train a custom model, and even more cheaply than fine-tuning. In between the two, there's another option you see listed there, called model fine-tuning.
You can do some really interesting things with fine-tuning, but we're not going to go into it right now, because we didn't find it necessary to achieve the outcomes we were after for this demo and this topic.

So, the demo you saw a moment ago is an example of a RAG application (again, RAG: Retrieval Augmented Generation). To produce the answers we got, we weren't just relying on the base model and its training. Our virtual security assistant needed to know three things. First, it needs knowledge of our organization-specific security policies, standards, control frameworks, et cetera. For the purposes of this demo, the stand-in we used was the AWS Security Reference Architecture. Second, it needs technical knowledge of the target systems. In this case, the targets were AWS Identity and Access Management and related services, and those services' documentation packages are how we gave the model the technical knowledge it needed. And third, it needs knowledge of the current state of security in our AWS environment. You saw this reflected in the fact that when Marshall asked a follow-up question, he actually got back guidance about a very specific role and policy that needed attention. Obviously, the model itself is not going to have that information at hand. The way we handled this for the demo was to use findings from GuardDuty and from Security Hub that we extracted from Amazon Security Lake.

So, now that you know what information we need to provide to the model as context, I want to dive a little into exactly how that's used in a RAG architecture application. Note that we're using Kendra in this diagram as our RAG provider, but this flow applies to any RAG provider.
Kendra could be just about anything here. So, first, our end user makes a request; in the case of our demo application, that looks like typing a question into the chatbot. Second, our generative AI app (again, the chatbot in this case) takes that question and formats it into a query that it sends to Kendra using the Kendra Retrieve API. Third, Kendra responds by returning relevant excerpts of documents from the index. These documents would be the ones I mentioned on the previous slide: your internal, proprietary security policies, standards, and guidelines; a control catalog like NIST 800-53; AWS technical documentation; et cetera. All of that could be in there and part of the excerpts that come back. Fourth, our generative AI app in the middle sends the original user request, along with the excerpts it got from Kendra, as context to the large language model. The parameters we send along with all of that to Bedrock say, in effect: you must answer the question being asked from the context provided, and if you can't do that, you have to say, sorry, I don't know. We're not going to let the model make up an answer, because the stakes are reasonably high if we're going to use this for security purposes. Then, ideally, the LLM comes back with a succinct response to the user's original request based on the provided context. And finally, our generative AI app formats that and sends it back to the user as a reply from the chatbot, just like you would see in a typical chat conversation.

This architecture may look familiar, because there are so many examples of it out there, probably a bunch of them in your organization too.
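The retrieve-then-generate flow just described can be sketched in a few lines. This is a hedged illustration, not the talk's actual code: `build_rag_prompt` is a hypothetical helper, and the Kendra retrieval call is shown only as a comment because it needs AWS credentials and a real index ID.

```python
def build_rag_prompt(question: str, excerpts: list[str]) -> str:
    """Assemble a grounded prompt: context excerpts first, then the
    instruction that forbids the model from inventing an answer."""
    context = "\n\n".join(f"<excerpt>{e}</excerpt>" for e in excerpts)
    return (
        "Answer the question using ONLY the context below. "
        "If the context is not sufficient, reply: Sorry, I don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

# Step two/three, retrieval (not executed here; requires an AWS session):
#   import boto3
#   kendra = boto3.client("kendra")
#   resp = kendra.retrieve(IndexId="<index-id>", QueryText=question)
#   excerpts = [r["Content"] for r in resp["ResultItems"]]

# Step four: the assembled prompt goes to Bedrock as the model input.
prompt = build_rag_prompt(
    "Are there findings I should prioritize?",
    ["Finding: IAM policy 'demo-everything' allows full admin privileges."],
)
print("Sorry, I don't know" in prompt)  # True
```

The important design choice is in the instruction text: the model is steered to refuse rather than hallucinate when the retrieved context doesn't cover the question.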
I mean, this could be pretty much any serverless app of any kind deployed on AWS, right? And we're showing it because I want to make the point that all I need to do is add that Bedrock box to this diagram, and now I've got the architecture for a generative AI application. That was a really powerful realization for me. Personally, my background is not in AI/ML, it's in security, and when I realized it was potentially this easy, that was pretty exciting.

Now, that previous diagram is a bit of an oversimplification. This is the actual architecture diagram that reflects our virtual security assistant. It's pretty close, but you can see that on the far right-hand side we have added a couple of RAG providers: Kendra, which I mentioned previously, as well as Amazon OpenSearch, and I'll talk a little about that in a moment. Down below is Security Lake, which, as I mentioned, is the source of the findings data we use to give the model context about the current state of our environment.

So why those RAG providers? There are lots of options for RAG providers out there, and, just like with practically any other kind of technology, no single tool is likely to solve all RAG use cases. There are a few specific reasons why we chose Kendra and OpenSearch as ours. For Kendra, the available connectors to common enterprise knowledge management systems are super valuable. There are a bunch of them; I don't remember how many, but dozens for sure. They make it really easy for you to ingest proprietary security documentation, or really anything else for that matter, from your existing enterprise knowledge management systems without writing any custom code. That's really valuable.
And you're going to see how easy it is to set up in just a moment in the demo. OpenSearch gives us a high-performance, scalable, and cost-effective vector database, which is great for indexing the findings data we extracted from Security Lake; that data is more structured than the typical documentation use case we rely on Kendra for. And, well, I guess we're about to show that to you now. I've got to use two hands.

All right, let's jump in. Like I said before, we're going to show the entire configuration of the chatbot you saw earlier, every little detail of the configuration needed to get it set up. So let's get started. The first thing we do (this is on GitHub, and we'll share a link later) is a git clone, then change directory into the repository. We run a couple of build commands: npm install and npm run build. It builds for a few seconds, and once it's done, we set up the configuration for the chatbot; it asks us a few different questions that we'll go through. One thing to note: all of these steps are on that GitHub page as well.

Next, we do npm run create, which is the configuration process. We give it a name; I'm going to call it sec210. We say yes, we have access to Bedrock. We use us-east-1 and the standard Bedrock endpoint. We don't need a cross-account role because we're working in the same account. We're not going to use any SageMaker models for this, though that's something you could easily add on. Do you want to enable RAG? We say yes, and we select OpenSearch and Kendra; we let the chatbot create these for us, and that's what we're selecting here.
It also asks whether we already had a Kendra index, and we said false. Then it asks, do you want to confirm this configuration, and we say yes. Next, we do an npx cdk deploy. If you haven't used the CDK, the Cloud Development Kit, it essentially deploys a CloudFormation stack in the background, and that's what you see happening here. So those resources we talked about earlier (the Lambda functions, CloudFront, Cognito) all get set up now, along with the RAG providers: the OpenSearch Serverless cluster and the Kendra index. This whole setup took me about 45 minutes, so we've accelerated that process for this demo. After the deployment finishes, we'll jump into Kendra and work on the configuration we need there, as well as the rest of the configuration in the actual chatbot itself.

Cool. So the total time was about 1,700 seconds, as you can see there. Now we switch over to the Kendra console and go to Indexes, and you can see the index that was created for us. We need to add our data source now and tell Kendra what to index. You can see the different connectors that Fritz mentioned briefly: Jira, and some other, maybe internal, tools. We pick a web crawler and give this data source a name. As you can see, I love dashes and I will continue to use them as we walk through this: sec210-iam-docs. We use a site map, and we put in the site map for the IAM documentation. I leave everything else as the default, and I let Kendra create the IAM role, because I'd rather let Kendra create the IAM role. Once again, we give it a name (like I said, I'm a big dash guy).
So we click Next. There are some additional configurations to consider here. I leave everything as default, but there's crawl depth, for example: how deep do you want to go into that documentation? And how often do you want it crawled and indexed? That really depends on the documentation and how often it might be updated, so there are some things to consider when you go through this yourself. I go ahead and click Add data source, and it creates an IAM role, which takes roughly 30 seconds. We sped this up a little; maybe not quite enough. Yeah, we're learning that.

All right. Now that we have our data source created, like I said before, I set it to run on demand, so now we're going to sync. If you can read that tiny lettering at the top, it says a few minutes to a few hours. In our case, we're crawling and indexing the IAM documentation, which is rather large, so ours was closer to the few-hour mark. Essentially, we let it scan and scan; it goes on and on, and then you go to lunch, you get your wallpaper screen, and when you come back from lunch it will be done.

So: you come back from lunch, Kendra is done indexing, and you're all good to go. The next thing we do is go grab some security data from Security Lake. If you're not familiar with Security Lake, it's a managed security data lake; it stores security findings and logs in S3. For this demonstration, we essentially just grab some of this data to put into OpenSearch, fairly easily, and we do that with Athena. So I'm using Athena to grab some data from Security Lake now. Essentially, we run a query looking for data over one day, and we see there are some Security Hub and some GuardDuty findings.
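An Athena query like the one run here can be sketched as follows. This is our own illustration, not the talk's query: the table name is a hypothetical placeholder (Security Lake generates its own Glue database and table names per region), and the column names assume an OCSF-style findings schema, so verify both against your Athena console.

```python
def findings_query(table: str, days: int = 1, limit: int = 1000) -> str:
    """Build an Athena SQL query string for recent findings.
    The caller supplies the table name, since Security Lake
    generates its own Glue database/table names."""
    return (
        "SELECT time, metadata.product.name AS product, finding.title "
        f"FROM {table} "
        f"WHERE time > current_timestamp - interval '{days}' day "
        f"LIMIT {limit}"
    )

# Hypothetical table name, for illustration only:
sql = findings_query("amazon_security_lake_glue_db.sh_findings")
print(sql.startswith("SELECT"))  # True
```

Running that query in the Athena console and exporting the results as CSV is what produces the file uploaded to OpenSearch in the next step.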
I'm just going to download that CSV. Paused it correctly this time. I'm jumping to the next part, which is the CloudFormation console. The reason we're going here: as I told you before, CDK deploys that CloudFormation template on the back end, so you can go to the Outputs tab of that CloudFormation stack and you will see the URL for the user interface. We want to go ahead and grab that. As we jump to that, I do want to say that I skipped one step as part of this, which is creating the user in Cognito: you essentially just create a user, and then you have to reset your password on first sign-in. Once you do that, you'll get into the chatbot and it'll look exactly as it looks here. So we're going to finish the configuration of the RAG providers. On the left navigation pane, we select Workspaces under the retrieval-augmented generation, or RAG, section. As you can see, we don't have any workspaces, so we're going to create one. I first do the OpenSearch Service workspace and give it a name. This is where we're going to put our Security Lake data, like we said before: security-lake-data. We look at the additional settings, keep all the defaults, and click Create workspace. It takes a second to create, so in the meantime we'll create the Kendra workspace. We select Kendra and name this one iam-docs. If you remember the demo from earlier, this was at the bottom of the screen. We select the Kendra index that was created, select Use all data, and then select Create workspace. So now we've got to finish the Security Lake data. We went and grabbed some security findings with Athena; now we're going to actually upload that data directly into the chatbot, or rather into the OpenSearch collection that was created through the chatbot.
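What gets uploaded here is just the Athena CSV turned into one document per finding. The column names in this sketch are illustrative; your export will have whichever OCSF fields your query selected.

```python
# Sketch: parse the downloaded Athena CSV into per-finding documents,
# the shape of data the OpenSearch workspace ends up holding.
# Column names below are illustrative assumptions.
import csv
import io

sample_csv = """title,severity,account_id
IAM policy grants full administrative permissions,HIGH,111122223333
Root user activity detected,MEDIUM,111122223333
"""

def csv_to_docs(text: str) -> list:
    """Parse a findings CSV into one dict per finding."""
    return [dict(row) for row in csv.DictReader(io.StringIO(text))]

docs = csv_to_docs(sample_csv)
```

The chatbot's file-upload flow handles this parsing and indexing for you; the sketch just shows what the data looks like on the way in.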
So we'll select Choose files and pick that CSV file from the Athena output we grabbed earlier. It'll upload, and once it's all complete, we can carry on. One of the first things you probably want to do next is test that this thing works. Luckily, now that it's deployed, hopefully there are no issues with the demo, so thank you all for your patience with that. We showed this chatbot briefly earlier, and now we know how it's deployed. I want to go back and ask some questions, and we're also going to show some of the metadata and some of the back end, to hopefully help you understand some of the decisions we made here. We're going to start with a simple question: what are some of the best practices with IAM roles? As you can see here, and I'm going to pause, and hopefully I have better luck than Marshall, these are definitely valid and valuable responses: using least privilege, having temporary credentials, using identity federation. These are definitely best practices for IAM roles. But let's make this a little bit harder. We're going to ask how you would actually implement an IAM strategy. One thing that's interesting when you look at this response is that you can actually see, and this is something you would want, the model saying in effect, "I need more context." So I'm going to pause here so we can look through it: it says it doesn't have quite enough context, but it still provides interesting and relevant information, and I think that's really important.
One thing I thought was interesting that Fritz mentioned earlier is the role of the prompt. The prompt engineering here does play a part in how you're actually asking these questions. And I want to note, while we're working with the chatbot right now, you can see we have no RAG data source. So we're going to quickly flip, so you can see what this would look like with a RAG data source. All right, here we go; got nervous there. Now we're thinking about what IAM issues exist in my environment. We added Security Lake as a RAG provider because we want to understand the issues that may exist. We now have that contextual information that Marshall just added, and we should get a response that understands our environment. As we look at this, this is important: we can see we have an IAM policy here that grants full administrative permissions, there's a specific account number being called out, and we're getting prescriptive guidance from this chatbot on things we need to change. We're now switching to the multi-chat option so you can see the difference between a chat with Security Lake as a RAG provider and one without it. We're going straight to the LLM on the left, and we're adding that RAG provider on the right. On the left, we get: "I actually don't have the correct information to fully answer your question." It's telling us it doesn't have information about our environment and can't really tell us what the IAM risks are. On the right, it's actually telling us. So you can see the difference between the RAG data source and the plain LLM. Now we're going back to the chat, and we're just saying: provide me a link to the IAM documentation.
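The side-by-side difference just shown comes down to what ends up in the prompt. This is a minimal sketch of that idea, not the chatbot's actual prompt template: the wording and the example finding are assumptions.

```python
# Sketch of why the RAG-backed chat answers environment questions while the
# plain LLM cannot: retrieved findings are stitched into the prompt before
# the model call. Template wording is an assumption.
def build_prompt(question, context_chunks=None):
    """Assemble a prompt, optionally grounding it in retrieved context."""
    if not context_chunks:
        # Plain LLM path: the model sees only the question and knows
        # nothing about your environment.
        return question
    context = "\n".join(context_chunks)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

with_rag = build_prompt(
    "What IAM issues exist in my environment?",
    ["Finding: IAM policy grants full administrative permissions "
     "(account 111122223333)"],
)
without_rag = build_prompt("What IAM issues exist in my environment?")
```

On the left-hand chat, `without_rag` is effectively what the model sees; on the right, the workspace retrieval fills in the context block.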
The reason I'm trying this is that I want to show you some of the metadata around the calls that are actually being made. From here you can see a little bit of what's going on in the back end: how these decisions are made and how these links are provided. What you can actually see here is the various page content, the various paths it was looking at, and the content on those pages. I'm not going to go through and bore you with each of these, but I did want to show that you can get some insight into how these applications are making decisions. I know we kind of jumped all over the place with the demo, but I hope you can at least see how you can use this as a job aid for security, whether you're the one person doing security for your entire organization or you're part of a massive organization; it can really help. And I hope you saw how you can use this to make your life easier and move quicker. So, back to some key takeaways. Again, our opinion is that AI is unlikely to replace skilled security experts anytime soon. You can see from this demo, and by the way, this is all real, there's nothing synthetic here, we actually did everything you saw, that it's pretty impressive as far as technology you can deploy in an hour goes, but it's not going to replace any of you as security experts anytime soon. You do want to get smart on the technology, though, because it's easy to see how teams that use it effectively can, in pretty short order, outperform ones that do not. Services like Amazon Bedrock and Amazon Kendra are serverless experiences that make it easy to get started and to keep your data secure.
With Bedrock, none of your proprietary information, no customer information, period, is used to train the underlying models, and your data is never going to leave your VPC. So it's safe. You can also deploy and secure the generative AI applications you want to run using these tools in exactly the same way you're deploying and securing traditional applications on AWS today. We're talking about things like the CDK, AWS IAM, KMS, WAF, and the like. Finally, this technology is not magic, and as I said before, you don't need a PhD to deploy it. You can do this, and you should experiment with it, not only for the sake of your own security team, increasing its performance and reducing toil and mundane tasks, but because if you're responsible for the security of your organization, the sooner you're experimenting with this stuff, the easier it will be for you to help the business units and others inside your organization make smart security decisions about the applications they build with this technology. So we certainly encourage you to get started experimenting with it as soon as you can. And if you're interested in doing that the easy way, or an easy way, you can scan this QR code that will take you to GitHub, which has the demo chat application we showed you today. We did not write that chat application, just to be clear; we're not taking credit for that front-end component. Another team within AWS did that. But you can get it, and you should use it. It's a fantastic piece of open-source technology.
Just quickly: there are a bunch of people who aren't on this stage today who contributed to this talk, and without them we would not have been able to stand here and tell you all about it. So special thanks to the folks you see mentioned there. In case you're not familiar with it, AWS re:Inforce is our annual security conference, and in 2024 it's going to be in Philadelphia; we hope to see you there. Thank you very much for your time. If you would like to contact any of us, you can see there how to do it; we'd love to hear from you. I'm watching this clock below me, which is down to 45 seconds, which means they're not going to let us do any further Q&A from the stage, but you can find us back there in the back corner. If you have questions, we'll be happy to hang around for a little while and answer them as best we can. Thank you very much.

How Security Teams Can Leverage Generative AI to Strengthen Security and Provide Contextual Guidance

Nov 22, 2024

Amazon Web Services

This video from AWS re:Invent 2023 explores how security teams can leverage generative AI to strengthen security outcomes. The speakers demonstrate a virtual security assistant built using Amazon Bedrock, Kendra, and Security Lake that can provide contextual security guidance. They walk through the architecture, implementation, and key considerations for using generative AI safely and effectively for security use cases. The demo shows how the assistant can answer questions about IAM best practices and identify specific security findings in an environment. The speakers emphasize that while AI won't replace security experts, teams that effectively utilize it will outperform others. They encourage experimentation with these technologies to improve security operations and provide better guidance to builders.

