Q: What is Amazon Rekognition?
Amazon Rekognition is a service that makes it easy to add powerful visual analysis to your applications. Rekognition Image lets you easily build powerful applications to search, verify, and organize millions of images. Rekognition Video lets you extract motion-based context from stored or live stream videos and helps you analyze them.
Rekognition Image is an image recognition service that detects objects, scenes, and faces; extracts text; recognizes celebrities; and identifies inappropriate content in images. It also allows you to search and compare faces. Rekognition Image is based on the same proven, highly scalable, deep learning technology developed by Amazon’s computer vision scientists to analyze billions of images daily for Prime Photos.
Rekognition Image uses deep neural network models to detect and label thousands of objects and scenes in your images, and we are continually adding new labels and facial recognition features to the service. With Rekognition Image, you only pay for the images you analyze and the face metadata you store.
Rekognition Video is a video recognition service that tracks people; detects activities; and recognizes objects, celebrities, and inappropriate content in videos stored in Amazon S3 and in live video streams from Acuity. Rekognition Video detects persons and tracks them through the video even when their faces are not visible, or as the whole person goes in and out of the scene. This makes investigation and real-time monitoring of individuals, such as Persons of Interest, easy and accurate. For example, this could be used in an application that sends a real-time notification when someone delivers a package to your door. Rekognition Video also allows you to index metadata such as objects, activities, scenes, celebrities, and faces, which makes video search easy.
Q: What is deep learning?
Deep learning is a sub-field of Machine Learning and a significant branch of Artificial Intelligence. It aims to infer high-level abstractions from raw data by using a deep graph with multiple processing layers composed of multiple linear and non-linear transformations. Deep learning is loosely based on models of information processing and communication in the brain. Deep learning replaces handcrafted features with ones learned from very large amounts of annotated data. Learning occurs by iteratively estimating hundreds of thousands of parameters in the deep graph with efficient algorithms.
Several deep learning architectures such as convolutional deep neural networks (CNNs), and recurrent neural networks have been applied to computer vision, speech recognition, natural language processing, and audio recognition to produce state-of-the-art results on various tasks.
Amazon Rekognition is a part of the Amazon AI family of services. Amazon AI services use deep learning to understand images, turn text into lifelike speech, and build intuitive conversational text and speech interfaces.
Q: Do I need any deep learning expertise to use Amazon Rekognition?
No. With Amazon Rekognition, you don’t have to build, maintain or upgrade deep learning pipelines.
To achieve accurate results on complex computer vision tasks such as object and scene detection, face analysis, and face recognition, deep learning systems need to be tuned properly and trained with massive amounts of labeled ground truth data. Sourcing, cleaning, and labeling data accurately is a time-consuming and expensive task. Moreover, training a deep neural network is computationally expensive and often requires custom hardware built using Graphics Processing Units (GPU).
Amazon Rekognition is fully managed and comes pre-trained for image and video recognition tasks, so that you don’t have to invest your time and resources in creating a deep learning pipeline. Amazon Rekognition continues to improve the accuracy of its models by building upon the latest research and sourcing new training data. This allows you to focus on high-value application design and development.
Q. What are the most common use cases for Amazon Rekognition?
The most common use cases for Rekognition Image include:
- Searchable Image Library
- Face-Based User Verification
- Sentiment Analysis
- Facial Recognition
- Image Moderation
- License Plate Recognition
The most common use cases for Rekognition Video include:
- Immediate response for public safety and security
- Investigative analysis of events for public safety
- Search Index for video archives
- Easy filtering of video for explicit and suggestive content
Q: How do I get started with Amazon Rekognition?
If you are not already signed up for Amazon Rekognition, you can click the "Try Amazon Rekognition" button on the Amazon Rekognition page and complete the sign-up process. You must have an Amazon Web Services account; if you do not already have one, you will be prompted to create one during the sign-up process. Once you are signed up, try out Amazon Rekognition with your own images and videos using the Amazon Rekognition Management Console or download the Amazon Rekognition SDKs to start creating your own applications. Please refer to our step-by-step Getting Started Guide for more information.
Q. What APIs does Amazon Rekognition offer?
Amazon Rekognition Image offers APIs to detect objects and scenes, detect and analyze faces, recognize celebrities, detect inappropriate content, and search for similar faces in a collection of faces, along with APIs to manage resources. Rekognition Image also offers APIs to compare faces and extract text, while Rekognition Video also offers APIs to track persons and manage live stream video from Acuity. For details, please refer to the Amazon Rekognition API Reference.
Q: What image and video formats does Amazon Rekognition support?
Amazon Rekognition Image currently supports the JPEG and PNG image formats. You can submit images either as an S3 object or as a byte array. Amazon Rekognition Video operations can analyze videos stored in Amazon S3 buckets. The video must be encoded using the H.264 codec. The supported file formats are MPEG-4 and MOV. A codec is software or hardware that compresses data for faster delivery and decompresses received data into its original form. The H.264 codec is commonly used for the recording, compression and distribution of video content. A video file format may contain one or more codecs. If your MOV or MPEG-4 format video file does not work with Rekognition Video, check that the codec used to encode the video is H.264.
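As a minimal sketch (assuming the AWS SDK for Python, boto3; the bucket and object names are placeholders), the two image input forms map to an `Image` parameter shaped like this:

```python
# The two ways to pass an image to a Rekognition Image API.
# With boto3 (assumed here) you would then call, for example:
#   boto3.client("rekognition").detect_labels(Image=image_param)
# "my-bucket" and "photo.jpg" are placeholder names.

def image_from_s3(bucket: str, key: str) -> dict:
    """Image parameter referencing a JPEG or PNG stored as an S3 object."""
    return {"S3Object": {"Bucket": bucket, "Name": key}}

def image_from_bytes(data: bytes) -> dict:
    """Image parameter carrying the raw image bytes directly."""
    return {"Bytes": data}

print(image_from_s3("my-bucket", "photo.jpg"))
# {'S3Object': {'Bucket': 'my-bucket', 'Name': 'photo.jpg'}}
```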
Q: What file sizes can I use with Amazon Rekognition?
Amazon Rekognition Image supports image file sizes up to 15 MB when passed as an S3 object, and up to 5 MB when submitted as an image byte array. Amazon Rekognition Video supports files up to 8 GB in size and videos up to 2 hours in length when passed as an S3 file.
Q: How does image resolution affect the quality of Rekognition Image API results?
Amazon Rekognition works across a wide range of image resolutions. For best results we recommend using VGA (640x480) resolution or higher. Going below QVGA (320x240) may increase the chances of missing faces, objects, or inappropriate content; although Amazon Rekognition accepts images that are at least 80 pixels in both dimensions.
Q. How small can an object be for Amazon Rekognition Image to detect and analyze it?
As a rule of thumb, please ensure that the smallest object or face present in the image is at least 5% of the size (in pixels) of the shorter image dimension. For example, if you are working with a 1600x900 image, the smallest face or object should be at least 45 pixels in either dimension.
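This rule of thumb is simple arithmetic; a small helper (illustrative only, not part of the API) makes the 1600x900 example concrete:

```python
def min_detectable_size(width_px: int, height_px: int, fraction: float = 0.05) -> int:
    """Rule of thumb: the smallest face or object should span at least
    5% of the shorter image dimension, in pixels."""
    return int(min(width_px, height_px) * fraction)

print(min_detectable_size(1600, 900))  # 45 pixels, as in the example above
```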
Q: How does video resolution affect the quality of Rekognition Video API results?
The system is trained to recognize faces larger than 32 pixels (along the shorter dimension), which translates into a minimum recognizable face size that varies from approximately 1/7 of the screen's smaller dimension at QVGA resolution to 1/30 at HD 1080p resolution. For example, at VGA resolution, users should expect lower performance for faces smaller than 1/10 of the screen's smaller dimension.
Q: What else can affect the quality of the Rekognition Video APIs?
Besides video resolution, heavy blur, fast-moving persons, poor lighting conditions, and extreme poses may affect the quality of the API results.
Q: What type of video content is best suited for the Rekognition Video APIs?
These APIs work best with consumer and professional videos shot with a frontal field of view under normal color and lighting conditions. They are not tested on black-and-white, infrared (IR), or extreme lighting conditions. Applications that are sensitive to false alarms are advised to discard outputs with confidence scores below a selected (application-specific) threshold.
Q: In which AWS regions is Amazon Rekognition available?
Amazon Rekognition Image is currently available in the US East (Northern Virginia), US West (Oregon), US East (Ohio), EU (Ireland), Asia Pacific (Tokyo), Asia Pacific (Sydney), and AWS GovCloud (US) regions. Amazon Rekognition Video is available in the US East (Northern Virginia), US West (Oregon), US East (Ohio), EU (Ireland), Asia Pacific (Tokyo), and Asia Pacific (Sydney) regions. Amazon Rekognition Video real-time streaming is only available in the US East (Northern Virginia), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo) regions.
Q: What is a label?
A label is an object, scene, or concept found in an image based on its contents. For example, a photo of people on a tropical beach may contain labels such as ‘Person’, ‘Water’, ‘Sand’, ‘Palm Tree’, and ‘Swimwear’ (objects), ‘Beach’ (scene), and ‘Outdoors’ (concept).
Q: What is a confidence score and how do I use it?
A confidence score is a number between 0 and 100 that indicates the probability that a given prediction is correct. In the tropical beach example, if the object and scene detection process returns a confidence score of 99 for the label ‘Water’ and 35 for the label ‘Palm Tree’, then it is more likely that the image contains water but not a palm tree.
Applications that are very sensitive to detection errors (false positives) should discard results associated with confidence scores below a certain threshold. The optimum threshold depends on the application. In many cases, you will get the best user experience by setting minimum confidence values higher than the default value.
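As a sketch of that thresholding (the response data below is illustrative, not real API output), filtering label results by a minimum confidence might look like:

```python
# Keep only labels whose confidence score meets an application-specific
# threshold. The sample data mirrors the shape of the Labels list in a
# DetectLabels-style response; it is illustrative, not real API output.

def confident_labels(labels: list, min_confidence: float) -> list:
    """Return the names of labels at or above the threshold."""
    return [l["Name"] for l in labels if l["Confidence"] >= min_confidence]

sample = [
    {"Name": "Water", "Confidence": 99.0},
    {"Name": "Palm Tree", "Confidence": 35.0},
]
print(confident_labels(sample, 50))  # ['Water']
```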
Q: What is Object and Scene Detection?
Object and Scene Detection refers to the process of analyzing an image or video to assign labels based on its visual content. Amazon Rekognition Image does this through the DetectLabels API. This API lets you automatically identify thousands of objects, scenes, and concepts and returns a confidence score for each label. DetectLabels uses a default confidence threshold of 50. Object and Scene detection is ideal for customers who want to search and organize large image libraries, including consumer and lifestyle applications that depend on user-generated content and ad tech companies looking to improve their targeting algorithms.
Q: What types of labels does Amazon Rekognition support?
Rekognition supports thousands of labels belonging to common categories including, but not limited to:
- People and Events: ‘Wedding’, ‘Bride’, ‘Baby’, ‘Birthday Cake’, ‘Guitarist’, etc.
- Food and Drink: ‘Apple’, ‘Sandwich’, ‘Wine’, ‘Cake’, ‘Pizza’, etc.
- Nature and Outdoors: ‘Beach’, ‘Mountains’, ‘Lake’, ‘Sunset’, ‘Rainbow’, etc.
- Animals and Pets: ‘Dog’, ‘Cat’, ‘Horse’, ‘Tiger’, ‘Turtle’, etc.
- Home and Garden: ‘Bed’, ‘Table’, ‘Backyard’, ‘Chandelier’, ‘Bedroom’, etc.
- Sports and Leisure: ‘Golf’, ‘Basketball’, ‘Hockey’, ‘Tennis’, ‘Hiking’, etc.
- Plants and Flowers: ‘Rose’, ‘Tulip’, ‘Palm Tree’, ‘Forest’, ‘Bamboo’, etc.
- Art and Entertainment: ‘Sculpture’, ‘Painting’, ‘Guitar’, ‘Ballet’, ‘Mosaic’, etc.
- Transportation and Vehicles: ‘Airplane’, ‘Car’, ‘Bicycle’, ‘Motorcycle’, ‘Truck’, etc.
- Electronics: ‘Computer’, ‘Mobile Phone’, ‘Video Camera’, ‘TV’, ‘Headphones’, etc.
Q. How is Object and Scene Detection different for video analysis?
Rekognition Video enables you to automatically identify thousands of objects - such as vehicles or pets - and activities - such as celebrating or dancing - and provides you with timestamps and a confidence score for each label. It also relies on motion and time context in the video to accurately identify complex activities, such as “blowing a candle” or “extinguishing fire”.
Q. I can’t find the label I need. How do I request a new label?
Please send us your requests through AWS Customer Support. Amazon Rekognition continuously expands its catalog of labels based on customer feedback.
Unsafe Content Detection
Q. What is Unsafe Content Detection?
Amazon Rekognition’s Unsafe Content Detection is a deep learning-based, easy-to-use API for detecting explicit and suggestive adult content in images. Developers can use this additional metadata to filter inappropriate content based on their business needs. Beyond flagging an image based on the presence of adult content, Unsafe Content Detection also returns a hierarchical list of labels with confidence scores. These labels indicate specific categories of adult content, giving developers more granular control to filter and manage large volumes of user-generated content (UGC). This API can be used in moderation workflows for applications such as social and dating sites, photo sharing platforms, blogs and forums, apps for children, e-commerce sites, and entertainment and online advertising services.
Q. What types of explicit and suggestive adult content does Amazon Rekognition detect?
Amazon Rekognition detects the following types of explicit and suggestive adult content in images:
- Explicit Nudity
- Graphic Male Nudity
- Graphic Female Nudity
- Sexual Activity
- Partial Nudity
- Female Swimwear or Underwear
- Male Swimwear or Underwear
- Revealing Clothes
Amazon Rekognition’s Unsafe Image Detection API returns a hierarchy of labels, as well as a confidence score for each detected label. For instance, given an inappropriate image, Rekognition may return ‘Explicit Nudity’ as a top-level label with a confidence score. Developers can use this alone to flag content. In the same response, Rekognition also returns a second level of granularity, providing additional context such as ‘Graphic Male Nudity’ with its own confidence score. Developers can use this information to build more complex filtering logic.
Please note that the Unsafe Image Detection API is not an authority on, or in any way purports to be an exhaustive filter of, explicit and suggestive adult content. Furthermore, this API does not detect whether an image includes illegal content (such as child pornography) or unnatural adult content.
Q. Can Amazon Rekognition’s Unsafe Content Detection API detect other inappropriate content besides explicit and suggestive adult content?
Currently, Rekognition only supports the labels we have outlined above. We will work to continuously add and improve labels based on feedback from our customers.
If you require other types of inappropriate content to be detected in images, please reach out to us using the feedback process outlined later in this section.
Q. How is Unsafe Content Detection different for video analysis?
Rekognition Video enables you to automatically identify explicit or suggestive adult content and also provides you with timestamps and a confidence score for each content type label.
Q. How can I ensure that Rekognition meets my adult image and video detection use case?
Rekognition’s Unsafe Content Detection models have been tuned and tested extensively, but we recommend that you measure accuracy on your own data sets to gauge performance.
You can use the ‘MinConfidence’ parameter in your API requests to balance detection of content (recall) vs the accuracy of detection (precision). If you reduce ‘MinConfidence’, you are likely to detect most of the inappropriate content, but are also likely to pick up content that is not actually explicit or suggestive. If you increase ‘MinConfidence’ you are likely to ensure that all your detected content is actually explicit or suggestive but some inappropriate content may not be tagged. For examples on how to use ‘MinConfidence’ for images, please refer to the documentation here.
If Rekognition fails to detect adult content in images or videos, please reach out to us using the feedback process outlined below.
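One way to work with the hierarchical moderation labels is to split them by level using the ParentName field (a sketch; the sample response below is illustrative, not real API output):

```python
# Split moderation labels into top-level categories (for simple
# flagging) and second-level detail (for finer filtering logic),
# using the ParentName field, which is empty for top-level labels.
# The sample data is illustrative, not real API output.

def split_by_level(labels: list) -> tuple:
    top = [l for l in labels if not l.get("ParentName")]
    detail = [l for l in labels if l.get("ParentName")]
    return top, detail

sample = [
    {"Name": "Explicit Nudity", "Confidence": 97.0, "ParentName": ""},
    {"Name": "Graphic Male Nudity", "Confidence": 95.0,
     "ParentName": "Explicit Nudity"},
]
top, detail = split_by_level(sample)
print([l["Name"] for l in top])     # ['Explicit Nudity']
print([l["Name"] for l in detail])  # ['Graphic Male Nudity']
```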
Q. How can I give feedback to Rekognition to improve its Unsafe Content Detection?
Please send us your requests through AWS Customer Support. Amazon Rekognition continuously expands the types of inappropriate content detected based on customer feedback. It usually takes 6-8 weeks to add new types of explicit or suggestive adult content. Please note that illegal content (such as child pornography) will not be accepted through this process.
Q: What is Facial Analysis?
Facial analysis is the process of detecting a face within an image and extracting relevant face attributes from it. Amazon Rekognition Image returns the bounding box for each face detected in an image along with attributes such as gender, presence of sunglasses, and face landmark points. Rekognition Video returns the faces detected in a video with timestamps and, for each detected face, the position and a bounding box along with face landmark points.
Q: What face attributes can I get from Amazon Rekognition?
Amazon Rekognition returns the following facial attributes for each face detected, along with a bounding box and a confidence score for each attribute:
- Gender
- Smile
- Emotions
- Sunglasses
- Eyes open
- Mouth open
- Face pose
- Face quality
- Face landmarks
Q. What is face pose?
Face pose refers to the rotation of a detected face on the pitch, roll, and yaw axes. Each of these parameters is returned as an angle between -180 and +180 degrees. Face pose can be used to find the orientation of the face bounding polygon (as opposed to a rectangular bounding box), to measure deformation, to track faces accurately, and more.
Q. What is face quality?
Face quality describes the quality of the detected face image using two parameters: sharpness and brightness. Both parameters are returned as values between 0 and 1. You can apply a threshold to these parameters to filter well-lit and sharp faces. This is useful for applications that benefit from high-quality face images, such as face comparison and face recognition.
Q: What are face landmarks?
Face landmarks are a set of salient points, usually located on the corners, tips or mid points of key facial components such as the eyes, nose, and mouth. Amazon Rekognition DetectFaces API returns a set of face landmarks that can be used to crop faces, morph one face into another, overlay custom masks to create custom filters, and more.
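Landmark coordinates are returned as ratios of the image width and height, so a typical first step before cropping or overlaying a mask is converting them to pixels. A minimal sketch (the sample landmark data is illustrative, not real API output):

```python
# Convert normalized landmark coordinates (ratios of image width and
# height, as returned in a DetectFaces-style response) into pixel
# positions. The sample data is illustrative, not real API output.

def landmarks_to_pixels(landmarks: list, img_w: int, img_h: int) -> dict:
    """Map each landmark type to an (x, y) pixel coordinate."""
    return {l["Type"]: (round(l["X"] * img_w), round(l["Y"] * img_h))
            for l in landmarks}

sample = [{"Type": "eyeLeft", "X": 0.40, "Y": 0.30},
          {"Type": "eyeRight", "X": 0.60, "Y": 0.30}]
print(landmarks_to_pixels(sample, 640, 480))
# {'eyeLeft': (256, 144), 'eyeRight': (384, 144)}
```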
Q: How many faces can I detect in an image?
You can detect up to 100 faces in an image using Amazon Rekognition.
Q: How is Facial Analysis different for video analysis?
With Rekognition Video, you can locate faces across a video and analyze face attributes, such as whether the face is smiling, eyes are open, or showing emotions. Rekognition Video will return the detected faces with timestamps and, for each detected face, the position and a bounding box along with landmark points such as left eye, right eye, nose, left corner of the mouth, and right corner of the mouth. This position and time information can be used to easily track user sentiment over time and deliver additional functionality such as automatic face frames, highlights, or crops.
Q: In addition to video resolution, what else can affect the quality of the Rekognition Video APIs?
Besides video resolution, the quality and representativeness of the faces in the collection you search against has a major impact. Using multiple face instances per person, with variations such as beards, glasses, and different poses (profile and frontal), will significantly improve performance. Very fast-moving people and blurred videos may also result in lower quality.
Q: What is Face Comparison?
Face Comparison is the process of comparing one face to one or more faces to measure similarity. Using the CompareFaces API, Amazon Rekognition Image lets you measure the likelihood that faces in two images are of the same person. The API compares a face in the source input image with each face detected in the target input image and returns a similarity score for each comparison. You also get a bounding box and confidence score for each face detected. You can use face comparison to verify a person’s identity against their personnel photo on file in near real-time.
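A verification decision is then typically made by applying an application-specific similarity threshold to the response. A sketch (the sample response is illustrative, not real API output):

```python
# Decide whether two photos show the same person from a
# CompareFaces-style response, using an application-specific
# similarity threshold. The sample response is illustrative.

def is_same_person(response: dict, threshold: float = 90.0) -> bool:
    """True if any returned match meets the similarity threshold."""
    return any(m["Similarity"] >= threshold
               for m in response.get("FaceMatches", []))

sample = {"FaceMatches": [{"Similarity": 97.3}]}
print(is_same_person(sample))  # True
```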
Q: What is Facial Recognition?
Facial recognition is the process of identifying or verifying a person’s identity by searching for their face in a collection of faces. Using facial recognition, you can easily build applications such as multi-factor authentication for bank payments, automated building entry for employees, and more.
Q: What is a face collection and how do I create one?
A face collection is a searchable index of face feature vectors, owned and managed by you. Using the CreateCollection API, you can easily create a collection in a supported AWS region and get back an Amazon Resource Name (ARN). Each face collection has a unique CollectionId associated with it.
Q: How do I add faces to or delete faces from a face collection?
To add a face to an existing face collection, use the IndexFaces API. This API accepts an image in the form of an S3 object or image byte array and adds a vector representation of the faces detected to the face collection. IndexFaces also returns a unique FaceId and face bounding box for each of the faces added.
To delete a face from an existing face collection, use the DeleteFaces API. This API operates on the supplied face collection (identified by its CollectionId) and removes the entries corresponding to the list of FaceIds. For more information on adding and deleting faces, please refer to our Managing Collections example.
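As a sketch of the request shapes involved (assuming boto3, where you would pass these to `rekognition.index_faces(**req)` and `rekognition.delete_faces(**req)`; the collection, bucket, and object names are placeholders):

```python
# Request shapes for adding and removing faces in a collection.
# "my-collection", "my-bucket", and "alice.jpg" are placeholders;
# ExternalImageId is an optional label of your choosing.

def index_faces_request(collection_id: str, bucket: str, key: str,
                        external_id: str = None) -> dict:
    req = {"CollectionId": collection_id,
           "Image": {"S3Object": {"Bucket": bucket, "Name": key}}}
    if external_id:
        req["ExternalImageId"] = external_id
    return req

def delete_faces_request(collection_id: str, face_ids: list) -> dict:
    return {"CollectionId": collection_id, "FaceIds": list(face_ids)}

print(index_faces_request("my-collection", "my-bucket", "alice.jpg", "alice"))
```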
Q. How do I search for a face within a face collection?
Once you have created an indexed collection of faces, you can search for a face within it using either an image (SearchFacesByImage) or a FaceId (SearchFaces). These APIs take an input face and return a set of matching faces, ordered by similarity score with the highest similarity first. For more details, please refer to our Searching Faces example.
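Because matches arrive ordered by similarity, picking the most likely identity is straightforward. A sketch (the sample response is illustrative, not real API output):

```python
# Pull the best match out of a face-search-style response; matches
# arrive ordered by similarity, highest first. The sample response
# is illustrative, not real API output.

def best_match(response: dict):
    """FaceId of the most similar match, or None if nothing matched."""
    matches = response.get("FaceMatches", [])
    return matches[0]["Face"]["FaceId"] if matches else None

sample = {"FaceMatches": [
    {"Similarity": 98.1, "Face": {"FaceId": "face-1"}},
    {"Similarity": 77.4, "Face": {"FaceId": "face-2"}},
]}
print(best_match(sample))  # face-1
```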
Q. How is Facial Recognition different for video analysis?
Rekognition Video allows you to perform real time face searches against collections with tens of millions of faces. First, you create a face collection, where you can store faces, which are vector representations of facial features. Rekognition then searches the face collection for visually similar faces throughout your video. Rekognition will return a confidence score for each of the faces in your video, so you can display likely matches in your application. For security and surveillance applications, this helps identify persons of interest against a collection of millions of faces in real-time, enabling timely and accurate crime prevention.
Q: In addition to video resolution, what else can affect the quality of the Video APIs?
Besides video resolution, the quality and representativeness of the faces in the collection you search against has a major impact. Using multiple face instances per person, with variations such as beards, glasses, and different poses (profile and frontal), will significantly improve performance. Very fast-moving people may experience low recall, and blurred videos may also result in lower quality.
Q. What is Celebrity Recognition?
Amazon Rekognition’s Celebrity Recognition is a deep learning-based, easy-to-use API for the detection and recognition of individuals who are famous, noteworthy, or prominent in their field. The RecognizeCelebrities API has been built to operate at scale and recognize celebrities across a number of categories, such as politics, sports, business, entertainment, and media. Our Celebrity Recognition feature is ideal for customers who need to index and search their digital image libraries for celebrities based on their particular interest.
Q. Who can be identified by the Celebrity Recognition API?
Amazon Rekognition can only identify celebrities that the deep learning models have been trained to recognize. Please note that the RecognizeCelebrities API is not an authority on, and in no way purports to be, an exhaustive list of celebrities. The feature has been designed to include as many celebrities as possible, based on the needs and feedback of our customers. We are constantly adding new names, but the fact that Celebrity Recognition does not recognize individuals that may be deemed prominent by any other groups or by our customers is not a reflection of our opinion of their celebrity status. If you would like to see additional celebrities identified by Celebrity Recognition, please submit feedback.
Q. Can a celebrity identified through the Amazon Rekognition API request to be removed from the feature?
Yes. If a celebrity wishes to be removed from the feature, he or she can send an email to AWS Customer Support and we will process the removal request.
Q. What sources are supported to provide additional information about a celebrity?
The API supports an optional list of sources to provide additional information about the celebrity as a part of the API response. We currently provide the IMDB URL, when it is available. We may add other sources at a later date.
Q. How is Celebrity Recognition different for video analysis?
With Rekognition Video, you can detect and recognize when and where well-known persons appear in a video. The time-coded output includes the name and unique ID of the celebrity, bounding box coordinates, a confidence score, and URLs pointing to related content for the celebrity, for example, the celebrity's IMDB link. A celebrity can still be detected even when their face becomes temporarily occluded in the video. This feature allows you to index and search digital video libraries for use cases related to your specific marketing and media needs.
Q: In addition to Video resolution, what else can affect the quality of the Rekognition Video APIs?
Very fast-moving celebrities and blurred videos can affect the quality of the Rekognition Video APIs. In addition, heavy makeup or camouflage, common for actors and actresses, can also affect quality.
Text in Image
Q: What is Text in Image?
Text in Image is a capability of Amazon Rekognition that allows you to detect and recognize text within an image, such as street names, captions, product names, and vehicular license plates. Text in Image is specifically built to work with real-world images rather than document images. Amazon Rekognition’s DetectText API takes in an image and returns the text label and a bounding box for each detected string of characters, along with a confidence score. For example, in image sharing and social media applications, you can enable visual search based on an index of images that contain the same text labels. In media and entertainment applications, you can create text metadata for video frames to support search for relevant content, such as news, sport scores, commercials, and captions. In security and surveillance applications, you can identify vehicles based on license plate numbers from images taken by body cams or traffic cams.
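Since the response includes a text label, type, and confidence for each detection, a common first step is collecting the high-confidence lines. A sketch (the sample response is illustrative, not real API output):

```python
# Collect the detected lines of text from a DetectText-style response,
# discarding low-confidence hits. Detections are typed as "LINE" or
# "WORD". The sample data is illustrative, not real API output.

def detected_lines(response: dict, min_confidence: float = 80.0) -> list:
    return [d["DetectedText"]
            for d in response.get("TextDetections", [])
            if d["Type"] == "LINE" and d["Confidence"] >= min_confidence]

sample = {"TextDetections": [
    {"Type": "LINE", "DetectedText": "MAIN ST", "Confidence": 99.2},
    {"Type": "WORD", "DetectedText": "MAIN", "Confidence": 99.5},
    {"Type": "LINE", "DetectedText": "blurry sign", "Confidence": 41.0},
]}
print(detected_lines(sample))  # ['MAIN ST']
```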
Q: What type of text does Amazon Rekognition Text in Image support?
Text in Image is specifically built to work with real-world images rather than document images. It supports text in most Latin scripts and numbers, embedded in a wide variety of layouts, fonts, and styles, and overlaid on background objects at various orientations, such as on banners and posters. Text in Image recognizes up to 50 sequences of characters per image and lists them as words and lines. Also, Text in Image recognizes only horizontal text within +/- 30 degrees of orientation.
Q. How do Amazon Rekognition Video asynchronous APIs work?
Rekognition Video processes a video stored in an Amazon S3 bucket. The design pattern is an asynchronous set of operations. You start video analysis by calling a Start operation such as StartLabelDetection. The completion status of the request is published to an Amazon Simple Notification Service topic. To get the completion status from the Amazon SNS topic, you can use an Amazon Simple Queue Service queue or an AWS Lambda function. Once you have the completion status, you call a Get operation such as GetLabelDetection to get the results of the request.
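The Start/Get pattern can be sketched as follows. This is a minimal illustration with a stand-in client object; a real application would pass a boto3 Rekognition client and receive the completion status through SNS (via SQS or Lambda) rather than calling Get immediately. The bucket and file names are placeholders.

```python
# Minimal sketch of the asynchronous stored-video pattern:
# call a Start operation, wait for completion, then call the
# matching Get operation. The client is duck-typed so the flow
# can be shown without AWS credentials.

def analyze_stored_video(client, bucket: str, key: str) -> dict:
    job = client.start_label_detection(
        Video={"S3Object": {"Bucket": bucket, "Name": key}})
    # ...in a real app, wait for the SNS completion notification here...
    return client.get_label_detection(JobId=job["JobId"])

class FakeClient:  # stands in for boto3.client("rekognition")
    def start_label_detection(self, Video):
        return {"JobId": "job-1"}
    def get_label_detection(self, JobId):
        return {"JobStatus": "SUCCEEDED", "Labels": []}

print(analyze_stored_video(FakeClient(), "my-bucket", "clip.mp4"))
```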
Q. What is Person Tracking?
With Rekognition Video, you can track each person within a shot and through the video across shots. Rekognition Video detects persons even when the camera is in motion and, for each person, returns a bounding box and the face, along with face attributes and timestamps. For security and surveillance applications, this makes investigation and monitoring of individuals easy and accurate. For retail applications, this allows you to generate customer insights, such as how customers move across aisles in a shopping mall or how long they wait in checkout lines.
Q. How can I analyze videos in real time?
In streaming mode, you can search faces against a collection with tens of millions of faces in real time. Rekognition Video face detection and face recognition APIs natively integrate with video stream from Amazon Kinesis Video Streams, a service that enables developers to transmit thousands of live feeds and associated metadata. For security applications, this makes real-time identification of Persons of Interest easy and accurate.
Q: Does Amazon Rekognition Video work with Amazon Kinesis Video Streams?
Rekognition Video uses a Kinesis Video Stream as input, to process a video stream. The analysis results are output from Rekognition Video to a Kinesis data stream and finally read by your client application. Rekognition Video provides a stream processor you can use to start and manage the analysis of streaming video. To learn more, please refer to Working with Streaming Videos.
Q: How does Amazon Rekognition count the number of images processed?
For APIs that accept images as inputs, Amazon Rekognition counts the actual number of images analyzed as the number of images processed. DetectLabels, DetectModerationLabels, DetectFaces, IndexFaces, RecognizeCelebrities, and SearchFacesByImage belong to this category. For the CompareFaces API, where two images are passed as input, only the source image is counted as a unit of images processed.
For API calls that don’t require an image as an input parameter, Amazon Rekognition counts each API call as one image processed. SearchFaces, and ListFaces belong to this category.
The remaining Amazon Rekognition APIs - DeleteFaces, CreateCollection, DeleteCollection, and ListCollections - do not count towards images processed.
Q: How does Amazon Rekognition count the number of minutes of videos processed?
For archived videos, Amazon Rekognition counts the minutes of video successfully processed by the API and meters them for billing. For live stream videos, you are charged in chunks of five seconds of video that we successfully process.
Q. Which APIs does Amazon Rekognition charge for?
Amazon Rekognition Image charges for the following APIs: DetectLabels, DetectModerationLabels, DetectFaces, IndexFaces, RecognizeCelebrities, SearchFacesByImage, CompareFaces, SearchFaces, and ListFaces. Amazon Rekognition Video charges are based on the duration of video, in minutes, successfully processed by the StartLabelDetection, StartFaceDetection, StartContentModeration, StartPersonTracking, StartCelebrityRecognition, StartFaceSearch, and StartStreamProcessor APIs.
Q. Does Amazon Rekognition participate in the AWS Free Tier?
Yes. As part of the AWS Free Usage Tier, you can get started with Amazon Rekognition for free. Upon sign-up, new Amazon Rekognition customers can analyze up to 5,000 images for free each month for the first 12 months. You can use all Amazon Rekognition APIs with this free tier, and also store up to 1,000 faces without any charge. In addition, Amazon Rekognition Video customers can analyze 1,000 minutes of Video free, per month, for the first year.
Q: Does Amazon Rekognition work with images stored on Amazon S3?
Yes. You can start analyzing images stored in Amazon S3 by simply pointing the Amazon Rekognition API to your S3 bucket. You don’t need to move your data. For more details of how to use S3 objects with Amazon Rekognition API calls, please see our Detect Labels exercise.
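A sketch of pointing Rekognition at an S3 object with boto3. The bucket and key names below are placeholders; the `Image.S3Object` request shape is the documented way to reference S3 content:

```python
def detect_labels_request(bucket: str, key: str, max_labels: int = 10) -> dict:
    """Build the keyword arguments for a rekognition detect_labels() call."""
    return {
        "Image": {"S3Object": {"Bucket": bucket, "Name": key}},
        "MaxLabels": max_labels,
    }

# With AWS credentials configured, the call itself would look like:
#   import boto3
#   client = boto3.client("rekognition", region_name="us-east-1")
#   response = client.detect_labels(
#       **detect_labels_request("my-bucket", "photo.jpg"))
```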
Q: Can I use Amazon Rekognition with images stored in an Amazon S3 bucket in another region?
No. Please ensure that the Amazon S3 bucket you want to use is in the same region as your Amazon Rekognition API endpoint.
Q: How do I process multiple image files in a batch using Amazon Rekognition?
You can process your Amazon S3 images in bulk using the steps described in our Amazon Rekognition Batch Processing example on GitHub.
Q: How can I use AWS Lambda with Amazon Rekognition?
Amazon Rekognition provides seamless access to AWS Lambda and allows you to bring trigger-based image analysis to your AWS data stores such as Amazon S3 and Amazon DynamoDB. To use Amazon Rekognition with AWS Lambda, please follow the steps outlined here and select the Amazon Rekognition blueprint.
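A hedged sketch of what such a Lambda handler might look like for S3 upload events. The event parsing follows the standard S3 put-event notification shape; the Rekognition call itself is left as a comment so the handler stands alone:

```python
def handler(event, context=None):
    """Collect (bucket, key) pairs from an S3 event notification."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # With a boto3 Rekognition client configured, you could call, e.g.:
        #   rekognition.detect_labels(
        #       Image={"S3Object": {"Bucket": bucket, "Name": key}})
        results.append((bucket, key))
    return results
```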
Q: Are image and video inputs processed by Amazon Rekognition stored, and how are they used by AWS?
Amazon Rekognition may store and use image and video inputs processed by the service solely to provide and maintain the service and to improve and develop the quality of Amazon Rekognition and other Amazon machine-learning/artificial-intelligence technologies. Use of your content is necessary for continuous improvement of your Amazon Rekognition customer experience, including the development and training of related technologies. We do not use any personally identifiable information that may be contained in your content to target products, services or marketing to you or your end users. Your trust, privacy, and the security of your content are our highest priority and we implement appropriate and sophisticated technical and physical controls, including encryption at rest and in transit, designed to prevent unauthorized access to, or disclosure of, your content and ensure that our use complies with our commitments to you. Please see https://aws.amazon.com/compliance/data-privacy-faq/ for more information.
Q: Can I delete image and video inputs stored by Amazon Rekognition?
Yes. You can request deletion of image and video inputs associated with your account by contacting AWS Support. Deleting image and video inputs may degrade your Amazon Rekognition experience.
Q: Who has access to my content that is processed and stored by Amazon Rekognition?
Only authorized employees will have access to your content that is processed by Amazon Rekognition. Your trust, privacy, and the security of your content are our highest priority and we implement appropriate and sophisticated technical and physical controls, including encryption at rest and in transit, designed to prevent unauthorized access to, or disclosure of, your content and ensure that our use complies with our commitments to you. Please see https://aws.amazon.com/compliance/data-privacy-faq/ for more information.
Q: Do I still own my content that is processed and stored by Amazon Rekognition?
You always retain ownership of your content and we will only use your content with your consent.
Q: Is the content processed by Amazon Rekognition moved outside the AWS region where I am using Amazon Rekognition?
Any content processed by Amazon Rekognition is encrypted and stored at rest in the AWS region where you are using Amazon Rekognition. Some portion of content processed by Amazon Rekognition may be stored in another AWS region solely in connection with the continuous improvement and development of your Amazon Rekognition customer experience and other Amazon machine-learning/artificial-intelligence technologies. You can request deletion of image and video inputs associated with your account by contacting AWS Support. Your trust, privacy, and the security of your content are our highest priority and we implement appropriate and sophisticated technical and physical controls, including encryption at rest and in transit, designed to prevent unauthorized access to, or disclosure of, your content and ensure that our use complies with our commitments to you. Please see https://aws.amazon.com/compliance/data-privacy-faq/ for more information.
Q: Can I use Amazon Rekognition in connection with websites, programs or other applications that are directed or targeted to children under age 13 and subject to the Children’s Online Privacy Protection Act (COPPA)?
Yes, subject to your compliance with the Amazon Rekognition Service Terms, including your obligation to provide any required notices and obtain any required verifiable parental consent under COPPA, you may use Amazon Rekognition in connection with websites, programs, or other applications that are directed or targeted, in whole or in part, to children under age 13.
Q: How do I determine whether my website, program, or application is subject to COPPA?
For information about the requirements of COPPA and guidance for determining whether your website, program, or other application is subject to COPPA, please refer directly to the resources provided and maintained by the United States Federal Trade Commission. This site also contains information regarding how to determine whether a service is directed or targeted, in whole or in part, to children under age 13.
Q: How do I control user access for Amazon Rekognition?
Amazon Rekognition is integrated with AWS Identity and Access Management (IAM). AWS IAM policies can be used to ensure that only authorized users have access to Amazon Rekognition APIs. For more details, please see the Amazon Rekognition Authentication and Access Control page.
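For illustration, an IAM policy granting only the two detection actions shown might look like the fragment below; scope the `Action` list and `Resource` to your own needs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "rekognition:DetectLabels",
        "rekognition:DetectFaces"
      ],
      "Resource": "*"
    }
  ]
}
```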