AWS Security Blog

Application Security at re:Inforce 2024

Join us in Philadelphia, Pennsylvania, on June 10–12, 2024, for AWS re:Inforce, a security learning conference where you can enhance your skills and confidence in cloud security, compliance, identity, and privacy. As an attendee, you will have access to hundreds of technical and non-technical sessions, an Expo featuring Amazon Web Services (AWS) experts and AWS Security Competency Partners, and keynote sessions led by industry leaders. AWS re:Inforce offers a comprehensive focus on six key areas, including Application Security.

The Application Security track helps you understand and implement best practices for securing your applications throughout the development lifecycle. This year, we are focusing on several key themes:

  • Building a culture of security – Learn how to define and influence organizational behavior to speed up application development while reducing overall security risk by implementing best practices, training your internal teams, and defining ownership.
  • Security of the pipeline – Discover how to embed governance and guardrails to allow developer agility, while maintaining security across your continuous integration and delivery (CI/CD) pipelines.
  • Security in the pipeline – Explore tooling and automation to reduce the mean time of security reviews and embed continuous security into each stage of the development pipeline.
  • Supply chain security – Gain improved awareness of how risks are introduced through the third-party components your software depends on, learn to track those dependencies, and identify vulnerabilities in the software you use.

Additionally, this year the Application Security track will have sessions focused on generative AI (gen AI), covering how to secure gen AI applications and use gen AI for development. Join these sessions to deepen your knowledge and up-level your skills, so that you can build modern applications that are robust, resilient, and secure.

Breakout sessions, chalk talks, lightning talks, and code talks

APS201 | Breakout session | Accelerate securely: The Generative AI Security Scoping Matrix
As generative AI ignites business innovation, cybersecurity teams need to keep pace with this rapidly accelerating domain. Security leaders are seeking tools and answers to help drive requirements around governance, compliance, legal, privacy, threat mitigations, resiliency, and more. This session introduces you to the Generative AI Security Scoping Matrix, which is designed to provide a common language and thought model for approaching generative AI security. Leave the session with a framework, techniques, and best practices that you can use to support responsible adoption of generative AI solutions designed to help your business move at an ever-increasing pace.

APS301 | Breakout session | Enhance AppSec: Generative AI integration in AWS testing
This session presents an in-depth look at the AWS Security Testing program, emphasizing its scaling efforts to help ensure new products and services meet a high security bar pre-launch. With a focus on integrating generative AI into its testing framework, the program showcases how AWS anticipates and mitigates complex security threats to maintain cloud security. Learn about AWS’s proactive approaches to cross-team collaboration and vulnerability mitigation, enriched by case studies that highlight the program’s flexibility and dedication to security excellence. Ideal for security experts and cloud architects, this session offers valuable insights into safeguarding cloud computing technologies.

APS302 | Breakout session | Building a secure MLOps pipeline, featuring PathAI
DevOps and MLOps are both software development strategies that focus on collaboration between developers, operations, and data science teams. In this session, learn how to build modern, secure MLOps using AWS services and tools for infrastructure and network isolation, data protection, authentication and authorization, detective controls, and compliance. Discover how AWS customer PathAI, a leading digital pathology and AI company, uses seamless DevOps and MLOps strategies to run their AISight intelligent image management system and embedded AI products to support anatomic pathology labs and bio-pharma partners globally.

APS401 | Breakout session | Keeping your code secure
Join this session to dive deep into how AWS implemented generative AI tooling in our developer workflows. Learn about the AWS approach to creating the underlying code scanning and remediation engines that AWS uses internally. Also, explore how AWS integrated these tools into the services we offer through reactive and proactive security features. Leave this session with a better understanding of how you can use AWS to secure code and how the code offered to you through AWS generative AI services is designed to be secure.

APS402 | Breakout session | Verifying code using automated reasoning
In this session, AWS principal applied scientists discuss how they use automated reasoning to mathematically certify code as bug-free and help secure underlying infrastructure. Explore how to use Kani, an AWS-created open source engine that analyzes, verifies, and detects errors in safe and unsafe Rust code. Hear how AWS built and implemented Kani internally, with examples taken from real-world AWS open source code. Leave this session with the tools you need to get started using this Rust verification engine for your own workloads.
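To give a flavor of what Kani usage looks like, here is a minimal sketch of a proof harness. The `clamp` function and the property checked are illustrative examples, not code from the session; the harness follows Kani's documented pattern of gating proofs behind `#[cfg(kani)]` so the file still compiles and runs as ordinary Rust when Kani is not installed.

```rust
// Illustrative function to verify: clamps a value into the range [lo, hi].
fn clamp(x: i32, lo: i32, hi: i32) -> i32 {
    if x < lo {
        lo
    } else if x > hi {
        hi
    } else {
        x
    }
}

// Kani proof harness: compiled only under `cargo kani`, which sets the
// `kani` cfg flag, so a plain `cargo build` ignores this module entirely.
#[cfg(kani)]
mod proofs {
    use super::clamp;

    #[kani::proof]
    fn clamp_stays_in_bounds() {
        // kani::any() produces a symbolic value covering every possible i32,
        // so the assertion below is checked for all inputs, not just samples.
        let x: i32 = kani::any();
        let lo: i32 = kani::any();
        let hi: i32 = kani::any();
        kani::assume(lo <= hi); // precondition: the range must be well-formed
        let r = clamp(x, lo, hi);
        assert!(lo <= r && r <= hi);
    }
}

fn main() {
    // Ordinary execution still works without Kani installed.
    println!("{}", clamp(42, 0, 10)); // prints 10
}
```

Running `cargo kani` on a crate containing this harness would explore every input symbolically and report whether the assertion can ever fail.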

APS232 | Chalk talk | Successful security team patterns
We often hear what a security team does, but rarely how it does it; we hear whom the team works with, but not how it was designed to work. Organizational design is often demoted to a secondary consideration behind a security team’s goals, even though intentional design is usually what empowers, or hinders, a team in achieving those goals. Security must work across the organization, not in isolation. This chalk talk focuses on designing effective security teams for organizations moving to the cloud, outlining both what the security team works on and how it achieves that work.

APS331 | Chalk talk | Verifiable and auditable security inside the pipeline
In this chalk talk, explore platform engineering best practices at AWS. AWS deploys more than 150 million times per year while maintaining 143 different compliance framework attestations and certifications. Internally, AWS has learned how to make security easier for builder teams. Learn key risks associated with operating pipelines at scale and Amazonian mechanisms to make security controls inside the pipeline verifiable and auditable so that you can shift compliance and auditing left into the pipeline.

APS233 | Chalk talk | Threat modeling your generative AI workload to evaluate security risk
As the capabilities and possibilities of machine learning continue to expand with advances in generative AI, understanding the security risks introduced by these advances is essential for protecting your valuable AWS workloads. This chalk talk guides you through a practical threat modeling approach, empowering you to create a threat model for your own generative AI applications. Gain confidence to build your next generative AI workload securely on AWS with the help of threat modeling and leave with actionable steps you can take to get started.

APS321 | Lightning talk | Using generative AI to create more secure applications
Generative AI revolutionizes application development by enhancing security and efficiency. This lightning talk explores how Amazon Q, your generative AI assistant, empowers you to build, troubleshoot, and transform applications securely. Discover how its capabilities streamline the process, allowing you to focus on innovation while ensuring robust security measures. Unlock the power of generative AI to help you build secure, cutting-edge applications.

APS341 | Code talk | Shifting left, securing right: Container supply chain security
Supply chain security for containers helps ensure you can detect software security risks in third-party packages and remediate them during the container image build process. This prevents container images with vulnerabilities from being pushed to your container registry and causing potential harm to your production systems. In this code talk, learn how you can apply a shift-left approach to container image security testing in your deployment pipelines.
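As one hypothetical illustration of the shift-left pattern described above, a build stage can scan the image and fail before anything is pushed. The buildspec below is a sketch, not material from the talk: the image name, the `ECR_REPO_URI` variable, and the choice of Trivy as the scanner are all assumptions for illustration.

```yaml
# Hypothetical AWS CodeBuild buildspec sketch: scan before push.
# Image name, ECR_REPO_URI, and the Trivy scanner are illustrative choices.
version: 0.2
phases:
  build:
    commands:
      - docker build -t myapp:latest .
      # Fail the build (non-zero exit code) if HIGH or CRITICAL CVEs are
      # found, so a vulnerable image never reaches the registry.
      - trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:latest
  post_build:
    commands:
      - docker tag myapp:latest $ECR_REPO_URI:latest
      - docker push $ECR_REPO_URI:latest
```

Because the scan runs in the build phase, a failed check stops the pipeline before the push in post_build, which is the "securing right" outcome the session title refers to.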

Hands-on sessions

APS373 | Workshop | Build a more secure generative AI chatbot with security guardrails
Generative AI is an emerging technology that is disrupting multiple industries. An early generative AI use case is interactive chat in customer service applications. As users interact with generative AI chatbots, there are security risks, such as prompt injection and jailbreaking resulting from specially crafted inputs sent to large language models. In this workshop, learn how to build an AI chatbot using Amazon Bedrock and protect it using Guardrails for Amazon Bedrock. You must bring your laptop to participate.

APS351 | Builders’ session | Implement controls for the OWASP Top 10 for LLM applications
In this builders’ session, learn how to implement security controls that address the OWASP Top 10 for LLM applications on AWS. Experts guide you through the use of AWS security tooling to provide practical insights and solutions to mitigate the most critical security risks outlined by OWASP. Discover technical options and choices you can make in cloud infrastructure and large-scale enterprise environments augmented by AWS generative AI technology. You must bring your laptop to participate.

APS271 | Workshop | Threat modeling for builders
In this workshop, learn threat modeling core concepts and how to apply them through a series of group exercises. Topics include threat modeling personas, key phases, data flow diagrams, STRIDE, and risk response strategies, as well as the introduction of a “threat grammar rule” with an associated tool. In exercises, identify threats and mitigations through the lens of each threat modeling persona. Assemble in groups and walk through a case study, with AWS threat modeling experts on hand to guide you and provide feedback. You must bring your laptop to participate.

APS371 | Workshop | Integrating open source security tools with AWS code services
AWS, open source, and partner tooling work together to accelerate your software development lifecycle. In this workshop, learn how to use the Automated Security Helper (ASH), an open source application security tool, to quickly integrate various security testing tools into your software build and deployment flows. AWS experts guide you through the process of security testing locally on your machines and within the AWS CodeCommit, AWS CodeBuild, and AWS CodePipeline services. In addition, discover how to identify potential security issues in your applications through static analysis, software composition analysis, and infrastructure-as-code testing. You must bring your laptop to participate.

This blog post highlighted some of the unique sessions in the Application Security track at the upcoming re:Inforce 2024 conference in Philadelphia. If these sessions pique your interest, register for re:Inforce 2024 to attend them, along with the numerous other Application Security sessions offered at the conference. For a comprehensive overview of sessions across all tracks, explore the AWS re:Inforce catalog preview.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Daniel Begimher
Daniel is a Senior Security Engineer specializing in cloud security and incident response solutions. He holds all AWS certifications and authored the open-source code scanning tool, Automated Security Helper. In his free time, Daniel enjoys gadgets, video games, and traveling.

Ipolitas Dunaravich
Ipolitas is a technical marketing leader for networking and security services at AWS. With over 15 years of marketing experience and more than 4 years at AWS, Ipolitas is the Head of Marketing for AppSec services and curates the security content for re:Inforce and re:Invent.