
AWS Startups

Seeing is believing: Using AI to validate media authenticity


Generative AI has the power to create high-fidelity images, video, and audio, limited only by imagination. But doctored photos, deepfake videos, and voice cloning pose significant ethical risks. From scammers posing as extended family members to fake political ads, a crisis of reality looms unless we take action. Hear from the founders of DeepMedia, an AI communications company, on how they're using AI to build deepfake detection software that exposes false media and its implications. We'll explore real-life examples of how detection software helps validate the authenticity of media, and how we can create a future where what we see and hear is real, not replicant.
