
Reviews from AWS customers

0 AWS reviews
  • 5 star: 0
  • 4 star: 0
  • 3 star: 0
  • 2 star: 0
  • 1 star: 0

External reviews

327 reviews from G2
External reviews are not included in the AWS star rating for the product.


    Information Technology and Services

Statsig helps me move fast

  • July 31, 2025
  • Review provided by G2

What do you like best about the product?
I love how Statsig handles all of the statistics and math for us. This really helps us take the guesswork out of whether something is statistically significant. I also like how easy it is to evaluate an experiment on different axes.
What do you dislike about the product?
I think Statsig could do a better job at alerting when something is amiss. For example, I mistakenly had too many exposures and someone had to manually notice and message our team.
What problems is the product solving and how is that benefiting you?
I work on the marketing site and we have lots of "launches". I love how I can schedule changes to features and experiments.


    Computer Software

Many features for growth engineering teams to run effective tests

  • July 31, 2025
  • Review provided by G2

What do you like best about the product?
Lots of features for both engineers and data scientists to make educated decisions on A/B experiments, such as filtering out outliers, custom assignment sources, winsorization, etc. Analyzing experiments from within the web interface is easy, and there are many tools available, such as Explore queries, where you can cut by custom dimensions, adjust the CI percentage, and add or remove metrics. Additionally, customer support is very helpful and friendly whenever we have questions or encounter issues.
What do you dislike about the product?
Some of the "edge cases" are not clear to the user - Segments are not supposed to be more than 10K rows, but the system doesn't stop you from going over that limit - it simply becomes unstable once you do. Other things - when calling Statsig API from offline jobs (such as cron) there is a default 500 batch limit and a 60-second flush interval, so if your job has < 500 exposures and finishes in < 60 seconds, those exposures get lost unless you explicitly call Statsig.flush() before the pod exits.
What problems is the product solving and how is that benefiting you?
How new features perform on our platform, what kinds of user cohorts utilize vs. shy away from certain features. Also helps with gradual rollout / immediate rollback of risky changes.
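The flush-before-exit pattern this reviewer describes might look roughly like the sketch below, written in TypeScript against a Statsig server SDK. The gate and event names are hypothetical, and the method names follow the review and common server-SDK conventions rather than any specific SDK version.

```typescript
// Sketch of flushing queued exposures before a short-lived job exits.
// Assumes the statsig-node server SDK; verify exact method names
// against the SDK version you use.
import Statsig from 'statsig-node';

async function runCronJob(): Promise<void> {
  await Statsig.initialize(process.env.STATSIG_SERVER_SECRET ?? '');

  const user = { userID: 'job-user-123' }; // hypothetical job identity

  // Gate checks and event logs queue exposures; per the review they are
  // batched (roughly 500 events or a 60-second flush interval).
  if (await Statsig.checkGate(user, 'new_pipeline_enabled')) {
    // ... gated work ...
  }
  Statsig.logEvent(user, 'cron_job_completed');

  // A job that logs few events and finishes quickly may exit before the
  // periodic flush fires, so flush (or shut down) explicitly.
  await Statsig.flush();
  await Statsig.shutdown();
}

runCronJob().catch((err) => {
  console.error(err);
  process.exit(1);
});
```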


    Fabricio N.

Great for FF and even more

  • July 31, 2025
  • Review provided by G2

What do you like best about the product?
The feature flag dashboard is very flexible and easy to use.
Managing different environments is straightforward and quite flexible.
The SDK makes integration a breeze.
What do you dislike about the product?
Organizing feature flags as their numbers grow is not as easy as I expected; it could be better.
What problems is the product solving and how is that benefiting you?
Mostly feature gating, but also product analytics.


    Ke W.

Great product, easy to use and insightful

  • July 31, 2025
  • Review provided by G2

What do you like best about the product?
Very easy to use; setting up experiments and feature gates is quite straightforward. Dynamic config is also quite practical for daily operations.
What do you dislike about the product?
It is a little hard to understand its throughput limitations, e.g. whether we should put something in a database or in a Statsig dynamic config. If the dashboards / insights were more powerful, we could save a lot by not needing other tools such as Mixpanel.
What problems is the product solving and how is that benefiting you?
Feature rollout and A/B testing.
It is much easier to do this in Statsig than to build an in-house solution.


    Brian L.

A powerful and developer-friendly experimentation platform with room to grow

  • July 31, 2025
  • Review provided by G2

What do you like best about the product?
Statsig makes it easy to run experiments and manage feature flags with minimal setup. The SDKs are straightforward, the dynamic configs are flexible, and the event logging is well-structured. I especially appreciate how experimentation logic can stay on the backend and remain decoupled from UI, which fits perfectly with our architecture. The ability to evaluate flags and variants in real time using console rules is also incredibly powerful. Lastly, the team is responsive and open to feedback, which makes a real difference.

Very smooth. The SDK was easy to integrate into our backend service, and initial setup of feature gates and dynamic configs was quick. We were up and running with our first experiment in a matter of hours.

We use Statsig consistently for experimentation logic, feature rollout control, and real-time configuration updates. It’s now an essential part of our development and deployment workflow.

Straightforward. Backend integration (in our case, .NET) was well-supported. The SDK offers a clean API, and the event logging + variant evaluation flow was simple to embed into our existing services.
What do you dislike about the product?
The documentation could go a bit deeper in some areas — especially around advanced configuration and production-ready patterns. We also noticed that the console UI can feel a bit clunky at times when dealing with large numbers of configs or gates. Some limitations around segment targeting and rule flexibility required us to build custom logic on top.
What problems is the product solving and how is that benefiting you?
Statsig helps us decouple experimentation logic from frontend clients and manage feature rollouts safely and efficiently. It solves the complexity of running A/B tests by providing real-time evaluation, clear variant assignment, and automatic metric tracking — all without reinventing the wheel internally.

By using Statsig, we can confidently experiment with new features, validate assumptions with data, and gradually roll out changes with minimal risk. It’s also helping promote a culture of experimentation across teams by making the tooling accessible and reliable.
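For illustration, a backend flow like the one this reviewer describes (gate checks, dynamic configs, experiment variants, and event logging kept server-side, decoupled from the UI) might look like the TypeScript sketch below. The reviewer's stack is .NET; all gate, config, experiment, and event names here are hypothetical placeholders.

```typescript
// Illustrative server-side evaluation sketch using the statsig-node SDK.
// Assumes Statsig.initialize(secretKey) was awaited at service startup.
import Statsig from 'statsig-node';

async function handleCheckout(userID: string, country: string) {
  const user = { userID, country };

  // Feature gate: safe rollout of a backend change.
  const useNewPricing = await Statsig.checkGate(user, 'new_pricing_engine');

  // Dynamic config: tune behavior in real time without redeploying.
  const config = await Statsig.getConfig(user, 'checkout_settings');
  const maxRetries = config.get('max_retries', 3);

  // Experiment: variant assignment stays on the backend.
  const experiment = await Statsig.getExperiment(user, 'checkout_flow_test');
  const layout = experiment.get('layout', 'classic');

  // Event logging feeds automatic metric tracking for the experiment.
  Statsig.logEvent(user, 'checkout_started', null, { layout });

  return { useNewPricing, maxRetries, layout };
}
```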


    Music

Intuitive insights that the whole team can access and understand.

  • July 31, 2025
  • Review provided by G2

What do you like best about the product?
The ability to have confidence in product decisions by being able to visually see the impact of an experiment at a glance, while also being able to deep dive into specific behaviours.
What do you dislike about the product?
Some functionality was initially missing but the team were responsive to prioritise and add this.
What problems is the product solving and how is that benefiting you?
It allows us to run experiments and roll out features with confidence and reliability. Its intuitive interface and well-explained functionality give me as a PM the tooling and context I need to feel certain of the impact our features are having.


    Florent B.

A Reliable Platform for Scalable Experimentation

  • July 31, 2025
  • Review provided by G2

What do you like best about the product?
We've been using Statsig for about a year and a half on our experimentation team, primarily for A/B testing and progressive rollouts. One of the biggest advantages is how quickly and easily we can spin up new metrics. The interface is clean and intuitive, especially the way it displays confidence intervals, which gives us a lot of clarity when interpreting experiment results. The built-in SRM (Sample Ratio Mismatch) alerting has also been incredibly helpful for catching issues early.
What do you dislike about the product?
The interface, while generally solid, can feel a bit rigid at times. We'd love to have more flexibility in setting up experiments and customising views or configurations to better match our internal processes. It's not a dealbreaker, but more adaptability would really take the platform to the next level for us.
What problems is the product solving and how is that benefiting you?
Statsig helps us confidently validate product decisions through experimentation. Before using it, setting up A/B tests and tracking meaningful metrics often required a lot of manual work and coordination. Now, with Statsig, we can quickly launch experiments, monitor real-time performance, and catch data integrity issues early thanks to features like SRM alerts. The ability to create custom metrics on the fly, combined with seamless integration with Amplitude, allows our team to move faster and make decisions backed by data. It's helped us scale our experimentation practice without adding operational overhead.


    Computer Software

Straightforward and intuitive experimentation platform

  • July 31, 2025
  • Review provided by G2

What do you like best about the product?
Statsig is easy to use and highly intuitive, even for non-technical users. The interface is clear, and it simplifies experimentation setup and analysis. It's been very helpful for running quick experiments and understanding impact with minimal effort. I especially appreciate the ability to integrate it seamlessly into our workflow and quickly validate changes.
What do you dislike about the product?
The main challenge we've faced is controlling the consumer-side experience from the creator-side configuration. When trying to roll out experiments across both roles (e.g., creator and consumer), the default 50/50 split no longer behaves as expected, and this limits our ability to test full end-to-end user flows. A clearer mechanism to manage these kinds of multi-actor experiments would make a big difference.
What problems is the product solving and how is that benefiting you?
Statsig allows us to move fast with confidence by validating product decisions through experimentation.
It helps us understand how different experiences impact user behavior and business metrics, allowing our team to iterate quickly while minimizing risk. We also use it to monitor feature adoption and test hypotheses that guide product development. The platform has become essential for making data-informed decisions across teams.


    Sandra M.

Flexible, user-friendly experimentation platform

  • July 31, 2025
  • Review provided by G2

What do you like best about the product?
Statsig is incredibly easy to use and lets our team spin up and manage dozens of A/B tests in days. The UI for exploring metrics is intuitive, and you can slice and dice results by any dimension on the fly. I love how quickly you can see real-time results and pivot analyses without writing code. The frequent feature releases and constant product improvements keep it cutting-edge. Plus, the Slack support team is fantastic—whenever I have a question they respond immediately.
What do you dislike about the product?
When something breaks or an experiment isn’t showing the expected data, it can be hard to debug and pinpoint the root cause. There’s no easy way to stream real-time events from my local machine, and in the Explore view you can only group by one dimension at a time, which makes deeper analysis more cumbersome. The chatbot assistant also feels a bit limited right now, though I use it often.
What problems is the product solving and how is that benefiting you?
Statsig centralizes all of our experimentation and feature‐flagging in one place, eliminating the need to build and maintain custom pipelines. It solves the problem of slow, error-prone manual analysis by giving us instant access to real-time metrics and easy grouping by user segments. As a result, we can roll out new features more confidently, detect impact early, and iterate faster—driving better product decisions and shortening our release cycles.


    Entertainment

Seamless rollout (and its observability)

  • July 30, 2025
  • Review provided by G2

What do you like best about the product?
Flexibility in the rules used to roll out gates and configs. Custom fields enable many use cases beyond user-focused features, and the use of schemas for dynamic configs protects us against user error. The fact that metrics are seamlessly integrated with rollouts is also convenient, providing an easy-to-access, easy-to-interpret interface that translates a feature into the data points it impacts, considerably speeding up data analysis. I use it all the time.
What do you dislike about the product?
No support for mapping users with different ID types: we have both string and integer IDs, but we're limited in which one we can use (integer) if we want metrics. It took quite a while for a Rust SDK to be provided. There's also no easy way to define "static" gates, i.e. a rule that simply returns true a given % of the time; full gates feel like overkill for that goal.
What problems is the product solving and how is that benefiting you?
We don't have an in-house way to set up remote configurations, so Statsig providing the entire infrastructure saves us that development cost. It's also already integrated with feature rollouts, which lets us focus our data analysis on complex questions rather than event handling. More recently we've started using funnel analysis, which has finally let us spread these kinds of analyses across the company.
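As a rough illustration of the custom-field targeting and the percentage-style gate this reviewer mentions, a TypeScript sketch might look like the following; the field names and gate name are hypothetical, and the pattern assumes a Statsig server SDK.

```typescript
// Sketch: attach non-user attributes to the StatsigUser's `custom`
// object so rollout rules can target things other than end users
// (e.g. a service or region). Names here are illustrative only.
import Statsig from 'statsig-node';

async function shouldEnableForService(serviceName: string, region: string) {
  const user = {
    userID: serviceName,                  // the ID can represent a service, not a person
    custom: { region, tier: 'internal' }, // custom fields used by targeting rules
  };

  // A gate whose only rule is "pass X% of traffic" approximates the
  // "static" percentage boolean the reviewer wishes were simpler.
  return Statsig.checkGate(user, 'enable_new_ingest_path');
}
```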