Posted On: Jul 9, 2020

We are excited to announce the launch of normalized model scores in Amazon Fraud Detector (Preview). Customers use scores to sideline high-risk events while allowing low-risk events to pass with no friction. Prior to this launch, score distributions could shift between models, forcing customers to manually analyze the distributions and update their business logic (e.g., rules) to account for these shifts.
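As a sketch of the score-based business logic described above, a rule might route events by comparing the model score against thresholds. The threshold values and outcome names below are illustrative assumptions, not Amazon Fraud Detector defaults:

```python
# Hypothetical rule logic applying thresholds to a normalized model score.
# Thresholds (900, 700) and outcome labels are illustrative assumptions.

def route_event(score: float) -> str:
    """Map a normalized fraud model score to a business outcome."""
    if score >= 900:   # high risk: sideline for manual review
        return "review"
    if score >= 700:   # medium risk: add friction, e.g. step-up verification
        return "friction"
    return "approve"   # low risk: pass with no friction

print(route_event(950))  # review
print(route_event(750))  # friction
print(route_event(100))  # approve
```

Because normalized scores are consistent across model versions, thresholds like these would not need to change when a new model is deployed.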

The new normalized scores are consistent across all models in Amazon Fraud Detector, making it easy for customers to compare models and select appropriate score thresholds for their business. The new scores are directly correlated with false positive rates, so customers always know how much friction is being applied to their business. Because the score is consistent across model versions, there is no need to update business logic when deploying a new model, saving time and reducing the chance of introducing errors into production fraud systems.

Amazon Fraud Detector is available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Asia Pacific (Singapore), and Asia Pacific (Sydney) regions. Learn more and sign up for the Fraud Detector preview here. For details about model scores, see our documentation.