    Booz Allen AI Security Assessment

    As the single largest provider of AI services to the US federal government, Booz Allen has worked closely with government agencies and businesses to build and deploy machine learning systems that are secure and trustworthy. Booz Allen’s AI Security Risk Assessment can help your organization identify and implement practices that deliver mission advantage while thwarting adversarial attacks.

    Overview

    Booz Allen’s 8-week AI Security Assessment follows a three-step process: evaluating AI systems, identifying and characterizing vulnerabilities and risks, and recommending mitigations and controls:

    Step 1 - Information Gathering. Booz Allen will perform documentation reviews and conduct a series of collaborative workshops to understand an organization’s AI use case(s) (including GenAI), automation objectives, model(s), pipeline(s), existing adversarial attack prevention measures, security controls, user personas, data usage, and impact on the business.

    Step 2 - Threat Modeling. Using the structure and vocabulary associated with MITRE ATLAS, Booz Allen will perform a threat landscape analysis that identifies models, pipelines, and processes that may expose the client to risks, such as adversarial attacks and data leakage. The analysis will call attention to specific, likely exploitable vulnerabilities (e.g. published training data, insecure outputs, and API calls), with all findings backed by relevant academic or industry literature.

    Step 3 - Security Recommendations. Booz Allen will provide one or more mitigation options for each AI security risk and vulnerability identified, along with a written summary of the trade space for each mitigation and a recommendation for the option best suited to the environment.

    Booz Allen’s AI Security Assessment concludes with three deliverables:

    1. Discovery Report summarizing the information gathered during Booz Allen’s documentation reviews and workshops.

    2. AI Risk Report detailing potential risks, vulnerabilities​, and recommended measures for improving system security posture.

    3. AI Engineering Documentation Protocol, in the form of a questionnaire that can be incorporated into organizational engineering processes and model registries to promote consideration of the risks, controls, and mitigations relevant to securing existing and future AI systems.

    Highlights

    • Assess your AI security posture by identifying AI model-level and system-level vulnerabilities
    • Identify best practices for defending against traditional ML and generative AI security attacks (e.g. data poisoning, model evasion, prompt injection, data extraction, model theft, and malware)
    • Establish a baseline for AI red teaming procedures, ranging from highly specific, domain-focused attacks to broader AI security concerns
    • Adopt standardized AI security and risk vocabulary, documentation, and threat modeling protocols

    Details

    Pricing

    Custom pricing options

    Pricing is based on your specific requirements and eligibility. To get a custom quote for your needs, request a private offer.

    Legal

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.
