Reviews from AWS Marketplace
0 AWS reviews
5 star: 0
4 star: 0
3 star: 0
2 star: 0
1 star: 0
External reviews

External reviews are not included in the AWS star rating for the product.
Using Statsig daily to release and experiment with new features
What do you like best about the product?
Our initial use case for Statsig was for feature rollout using gates. Statsig has revolutionised this process for us - to a point where it seems strange that we never had it in the first place. Through Statsig's feature gates we've had a lot more confidence in rolling out features. In particular, we love the automated rollouts as it means that we can expose a feature, monitor its performance against key metrics, and then proceed once our confidence is high.
Monitoring is also very easy in Statsig. I have set up a dashboard for my team which includes all of the feature gates and experiments that we are currently running. I love how easy it is to set up the dashboards by just tagging any relevant gate/experiment with my team's custom tag.
I also love how easy it is to configure the UI. I now have things set up so that when I open the app I am presented with all the information I need straight away without having to dive into all the menus. This is very helpful when I just want to check the status of an experiment or stage of a rollout.
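The automated rollouts described above are typically built on deterministic bucketing: each user hashes to a stable bucket, so ramping the percentage up only ever adds users and never reshuffles existing ones. A minimal sketch of the idea (illustrative only, not the Statsig SDK; `in_rollout` is a hypothetical helper):

```python
import hashlib

def in_rollout(user_id: str, gate_name: str, rollout_pct: float) -> bool:
    """Deterministically decide whether a user is in a percentage rollout.

    Hashing (gate_name + user_id) gives each user a stable bucket in
    [0, 10000), so increasing rollout_pct only ever adds users.
    """
    digest = hashlib.sha256(f"{gate_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10000
    return bucket < rollout_pct * 100

# Ramping from 10% to 50%: any user passing at 10% still passes at 50%.
print(in_rollout("user-42", "new_checkout", 10))
```

Because the bucket depends only on the gate name and user ID, the same user gets the same answer on every device and server, which is what makes a gradual, monitored rollout safe.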
What do you dislike about the product?
I don't feel like there are major downsides to Statsig - all of the functions work great and we haven't had any trouble setting things up or using the features that are provided.
The one thing that would be nice from an interface perspective would be more inline documentation about a given feature. This would lower the barrier to entry. I find myself having to flick between the docs and the app at times, which can be frustrating. At the very least it would be great to have a button which links to the docs for a particular feature. E.g. if I want to implement a layer but I'm not 100% sure of the functionality, I should be able to get to the docs immediately from the `Layer` tab in the web app.
What problems is the product solving and how is that benefiting you?
As mentioned previously we use Statsig primarily for rolling out features using the feature gate tooling and performing experiments on certain parts of our app.
Feature gates benefit us by:
1. Allowing us to roll back features easily if there is a problem. This gives us a lot more confidence in our deployments.
2. Allowing us to monitor their success. Because we have configured our own events, it is easy to see if a feature is damaging key metrics.
We also use holdouts to see the impact that multiple features have had on a team-by-team basis. This is great for all the engineers, who can then see how the work they are doing is impacting the performance of our key metrics.
Used to roll my own feature flags but this is so much better
What do you like best about the product?
My favourite thing is how easy it is to manage feature rollouts and experiments on Statsig. The SDK makes sense, and I love checking the Pulse results during a rollout. One thing I've noticed from using Statsig is that our team is better at defining experiments beforehand, particularly in terms of what metrics we want to evaluate. It's also made us much more confident when making risky changes. Finally, I find it helpful that all of our gates and experiments are defined in one place, and I can see what experiments other departments of the company are running if needed.
I would 100% recommend it!
What do you dislike about the product?
I largely think Statsig is very good, but there are two pain points that I find challenging:
Firstly, when doing a rollout it's not possible to reverse progress while maintaining the same cohort of experimental data; you need to start a new rollout. For metrics that have a long lead time this can slow down experiments, and it makes me think it's not necessarily the right tool for assessing the longer-term impact of experimental changes.
Secondly, ad blockers can impact whether feature gates are evaluated as true. This is a known issue, but we do receive some (albeit very few) customer complaints from people expecting a feature that is currently blocked behind a rollout.
What problems is the product solving and how is that benefiting you?
We use Statsig to evaluate changes to our application and back-end systems in a controlled way before rolling changes out to 100% of users. The ability to track outcomes of certain metrics by cohorts in the experiment, and see the impact of a feature, is a great benefit. We also benefit from not having to develop our own feature gate system in each repository that we work in.
Pretty positive - makes continuous roll-out easy
What do you like best about the product?
The flexibility it allows for custom rollouts. Layers have been a great feature, used for more complex ML model rollout solutions.
In addition, simple binary feature flags are a great way of getting things off the ground.
What do you dislike about the product?
We have noticed some weird behaviours when a Statsig client's connection fails.
What problems is the product solving and how is that benefiting you?
We wanted a custom ML model rollout solution where we could track the success of new models based on key metrics.
We developed a routing system which relied on a Statsig layer to deliver traffic to different ML models.
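A layer-driven routing system like this can be pictured as weighted, hash-based variant selection: each user lands at a stable point on a weighted line of variants. This is illustrative only; `route_model` and the hard-coded `MODEL_WEIGHTS` are hypothetical stand-ins for values a real Statsig layer would serve from the console:

```python
import hashlib

# Hypothetical variant weights; a real layer would serve these remotely.
MODEL_WEIGHTS = {"model_v1": 80, "model_v2": 20}

def route_model(user_id: str, layer_name: str, weights: dict[str, int]) -> str:
    """Pick an ML model for a user via a stable hash over weighted variants."""
    total = sum(weights.values())
    digest = hashlib.sha256(f"{layer_name}:{user_id}".encode()).hexdigest()
    point = int(digest, 16) % total  # stable point on the weighted line
    cumulative = 0
    for model, weight in weights.items():
        cumulative += weight
        if point < cumulative:
            return model
    return next(iter(weights))  # unreachable when weights are positive

print(route_model("user-42", "ml_routing", MODEL_WEIGHTS))
```

The stable hash means a given user always hits the same model for a given layer name, which keeps per-user metrics attributable to one variant.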
Makes experimentation easy
What do you like best about the product?
It's just very straightforward to roll out features and experiments to our customers and understand their impact. We use it across our tech stack with different languages, and the SDKs all work well and without issues. We also get quick responses via the Slack support.
What do you dislike about the product?
Takes a little time to understand the nuances of the different features (e.g. why you can't change what percentage of users go to a group in an experiment once it's started)
What problems is the product solving and how is that benefiting you?
The main three things for me are:
1. Enabling a gradual rollout of features so we can gain confidence with a small subset of customers
2. Allowing access to different groups of users (e.g. internal team, beta testers, etc.), which is super straightforward
3. Experiments and layers, which are great tools for testing variants and understanding their impacts
A brilliant platform for feature flags, A/B tests and experimentation
What do you like best about the product?
We use Statsig for all our feature flags, which allows us to release new features more quickly and safely.
We also use Statsig for all our experiments, which enables all our product teams to use the platform.
What do you dislike about the product?
Honestly, no downsides; it's one of the best platforms out there for product teams and engineers.
What problems is the product solving and how is that benefiting you?
- Feature Flags
- Running A/B tests
- Running Experiments
- Using Holdout Groups
- Allowing our internal teams to test features early
Scaling experimentation with Statsig
What do you like best about the product?
Statsig offers a variety of features helping to experiment at scale, such as a metric library, alerts and sanity checks for your experiments. It is in active development and new features are constantly added.
What do you dislike about the product?
Given it is a younger company and product, some features are not available yet but likely will be in the near future.
What problems is the product solving and how is that benefiting you?
Standardising the experimentation process, accelerating the rollout of new features.
A game-changer for decision makers
What do you like best about the product?
Our team absolutely loves Statsig. Prior to Statsig we had to custom-implement feature flags, interpret product analytics, and argue about whether a change was working or not. With Statsig we can be completely data-driven by quickly spinning up an experiment and receiving clear and objective results. We also now have fine-grained control over what users see in our app, whether it's through an experiment or a feature flag. Another great thing about the feature flags is that they are free!
I can't imagine going back to a world without Statsig!
What do you dislike about the product?
While I'm sure this is a hard thing to solve, if there is any detected bias in an experiment it's hard to determine exactly why bias was detected and what to do about it. That being said, it's great that Statsig is able to detect experiment issues.
What problems is the product solving and how is that benefiting you?
Statsig enables us to make objective decisions when pushing changes to our application. Our onboarding process is critical for educating our users and getting them involved with the product. With Statsig we can see how small and large changes affect key performance metrics. There have been many instances where a small change has made a big impact.
Super easy to use, their UX is great
What do you like best about the product?
Feature gates helped us improve our dev cycle.
What do you dislike about the product?
Not much to say here; maybe the timezone issue, since we're in South Korea, but it's not that big an issue.
What problems is the product solving and how is that benefiting you?
Deployment became easy, especially for features built in collaboration with other companies. That used to be super complex and time-consuming, but after we adopted Statsig it became easy.
Exposures in warehouse are gamechanging
What do you like best about the product?
Being able to connect Statsig to our existing data warehouse is massive. The UI is also amazing to use.
What do you dislike about the product?
We need a solution to provide a long list of IDs to bucket into two groups.
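Until such a feature exists, a deterministic two-group split of an explicit ID list can be sketched locally. This is illustrative only and not part of the Statsig API; `split_ids` is a hypothetical helper:

```python
import hashlib

def split_ids(ids: list[str], salt: str = "experiment-1") -> tuple[list[str], list[str]]:
    """Deterministically bucket an explicit list of IDs into two groups.

    Each ID always lands in the same group for a given salt, so the split
    is reproducible across runs and machines.
    """
    group_a, group_b = [], []
    for id_ in ids:
        digest = hashlib.sha256(f"{salt}:{id_}".encode()).hexdigest()
        (group_a if int(digest, 16) % 2 == 0 else group_b).append(id_)
    return group_a, group_b

a, b = split_ids(["u1", "u2", "u3", "u4"])
print(len(a), len(b))
```

Changing the salt produces an independent split, which is handy when the same ID list feeds several experiments.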
What problems is the product solving and how is that benefiting you?
We run our experiments through Statsig.
Best product for start ups
What do you like best about the product?
Statsig was the standout solution when we were shopping around.
It has great programs for early-stage startups, reasonable and transparent pricing, and by far the best and most intuitive product.
From their account executive team to their support team and the numerous features they are shipping quarterly, there hasn't been a single point where we regretted going with Statsig.
What do you dislike about the product?
I know that we are currently under-leveraging Statsig's in-house analytics tools. I wish there were more guides and tutorials on how best to set them up, with simple integrations like Segment or other CDP tools.
What problems is the product solving and how is that benefiting you?
Generally deploying A/B tests fast, monitoring results.
We also use it a lot for feature management, like feature-flagging in-progress features, etc.