AWS Partner Network (APN) Blog
Securing Amazon Bedrock and Amazon SageMaker with Orca Security
orca security |
By Jason Patterson, Sr. WW Security PSA – AWS
By Deborah Galea, Director of Product Marketing – Orca Security
The integration of artificial intelligence (AI) technologies is gaining momentum across various industries, offering a variety of business advantages. According to Gartner’s Software predictions, the AI software market is poised for an anticipated growth of 19.1% annually, projected to reach $298 billion by 2027. Amazon Bedrock and Amazon SageMaker, two of the leading AI services offered by AWS, have surge in adoption due to their widespread popularity.
However, implementing robust AI security measures is crucial for organizations to mitigate potential risks, including model poisoning and sensitive data breaches. While these risks are common to any cloud-based asset, others are unique to AI models and their deployment.
In this blog, we shall provide a view into AI risks that organizations should be aware of, and provide guidance on how Orca Security provides effective strategies to mitigate and prevent these potential threats.
What are the risks to AI services?
The risks to AI models and services, such as Amazon Bedrock and Amazon SageMaker, are around the data used to train models. If attackers are able to access the training data and tamper with it, this could affect the output. If the data used to train the model includes sensitive data, bad actors could also manipulate models into giving them this data.
As depicted in Figure 1, Orca uses its patented SideScanning™ technology to read AWS workloads without requiring any agents. After one-time integration into your Amazon Web Services (AWS) environment with read-only permissions, Orca will start scanning all your AWS asset configurations, data storage, and IAM resource configurations, network layout, and security settings. This provides visibility into specific configurations used by Amazon SageMaker and Amazon Bedrock as well as the data storage, data pipelines, users, and roles used to within the AI workload.
Figure 1: Orca uses AWS APIs and snapshot scanning to generate a comprehensive view
The Open Web Application Security Project (OWASP) foundation has published the OWASP Machine Learning Security Top Ten list that provides an overview of the top 10 security issues of machine learning systems. In addition, Orca Security’s recent 2024 State of AI Security Report provides insights regarding these risks, and how prevalent they already are in current cloud production environments.
Below are explanations of the risk terminology specific to AI covered in this blog:
- Prompt injection: When an attacker inputs malicious prompts into a Large Language Model (LLM) to manipulate the LLM into performing unintended actions, such as leaking sensitive data or spreading misinformation.
- Data poisoning: A bad actor intentionally corrupting an AI training dataset and machine learning model to reduce the accuracy of LLM’s output.
- Model poisoning: An attacker manipulating an AI model to introduce vulnerabilities, biases, or backdoors that could compromise the model’s security, effectiveness, or ethical behavior.
- Model inversion: This happens when an attacker reconstructs sensitive information or original training data from a model’s outputs.
For more information on the different types of AI risks, the OWASP foundation has published the OWASP Machine Learning Security Top Ten list that provides an overview of the top 10 security issues of machine learning systems.
Orca Security’s recent 2024 State of AI Security Report also provides insights required to gain more understanding of these risks, and just how prevalent they already are in current cloud production environments.
AI Security challenges
So, what are the top 5 challenges to keeping your AI models and data secure?
- Pace of innovation: The speed of AI development continues to accelerate, with AI innovations introducing features that promote ease of use over security.
- Shadow AI: Security teams don’t always know which AI models are in use and aren’t able to discover shadow AI.
- Datasets from many sources: Training datasets can be aggregated from many sources, some of which may be public and others private, requiring them all to have the same level of security precaution as the most sensitive data source set (which might be prohibitive).
- Nascent technology: Due to its nascent stage, AI security lacks comprehensive resources and seasoned experts. Organizations must often develop their own solutions to protect AI services without external guidance or examples.
- Resource control: Resource misconfigurations often accompany the rollout of a new service. Users overlook securely configuring settings related to roles, buckets, users, and other assets, which introduce risks to the environment.
AI Security Posture Management (AI-SPM)
AI-SPM is a new category of solutions that help organizations secure the infrastructure of their machine learning (ML) and AI systems, models, packages, and data.
AI-SPM solutions detect risks that are applicable to other AWS cloud assets, including misconfigurations, overprivileged permissions, insecure secrets, and Internet exposure. However, AI-SPM solutions also cover use cases unique to AI security, such as detecting sensitive data in training sets, which can lead to unintentional exposure through the legitimate use of an AI service.
AI-SPM also helps ensure compliance with regulatory mandates and industry standards. This includes ongoing compliance monitoring and reporting.
How does AI-SPM work?
The first step in AI-SPM is discovering all AI deployments in their AWS environment. This includes a detail inventory of all AI projects, AI models and AI packages used with Amazon Bedrock and Amazon Sagemaker.
Next, it detects risks that endanger AI models and prioritizes them according to the likelihood of a breach as well as its potential business impact.
Finally, an AI-SPM solution offers remediation options to help mitigate risks.
AI-SPM solutions perform risk detection across the entire software development lifecycle, so issues can be addressed early in development, before they reach production.
About Orca AI-SPM on AWS
Orca Security’s AI-SPM capabilities uses our patented, agentless SideScanning™ technology to provide the same visibility, risk insight, and deep data for AI models as it does for AWS resources, while also addressing unique AI use cases.
Orca’s AI-SPM solution covers over 50 AI models and packages used within Amazon Bedrock and Amazon SageMaker allowing you to confidently build AI enabled solutions while maintaining visibility and security of your tech stack.
Figure 2: AWS assets running vulnerable AI software applications
Orca Platform Capabilities
With the Orca Platform you can get a complete AI and ML inventory and bill of materials (BOM) of your AWS environment. Figure 3, below, depicts how the Orca Platform gives you a complete view of all AI models that are deployed in your AWS environment—both managed and unmanaged, including any shadow AI.
Figure 3: Inventory of all the deployed AI models in your AWS cloud
Orca ensures that AI models are configured securely, covering network security, data protection, access controls, and IAM. For instance, Orca will check whether encryption for data at rest is enabled with Customer Managed Keys (CMK) using AWS Key Management Service (KMS) to safeguard sensitive information used and generated by models. Figure 4, below, depicts how the Orca Platform addresses AI misconfigurations.
Figure 4: Orca checks if Amazon Bedrock custom models are encrypted with CMK
Orca also has the ability to detect sensitive data in AI models. Orca alerts if any AI models or training data contain sensitive information so you can take appropriate action to prevent unintended exposure. For instance, if an Amazon Bedrock deployment uses training data from an Amazon Simple Storage Service (S3) bucket that includes Personally Identifiable Information (PII), Orca will send out an alert to notify about the possible risk.
In addition to identifying sensitive data, the Orca platform detects when keys and tokens to AI services and software packages are unsafely exposed in code repositories.
For each detected risk, Orca offers automated and guided remediation options, including AI-generated code that can be copied and pasted into a command line interface or Infrastructure as Code (IaC) provisioning tool as shown below in figure 5.
Figure 5: Amazon Bedrock built-in AI engine automatically generates remediation steps and code
Conclusion
Although organizations face significant security challenges due to the rapid pace of AI innovation, compounded by the technology’s nascency and the overemphasis on speed of development, both traditional and new solutions are available to address these risks. By deploying Orca Security’s AI-SPM capabilities on AWS when leveraging Amazon Bedrock or Amazon SageMaker, you can benefit AI services without sacrificing security.
To learn more, request a demo or go to Orca Security on the AWS Marketplace.
.
.
Orca Security – AWS Partner Spotlight
Orca Security is an AWS Specialization Partner that provides cloud-wide, workload-deep, context-aware security and compliance for AWS without the gaps in coverage, alert fatigue, and operational costs of agent-based solutions.