Automated browser testing has reduced manual effort and saved significant testing time
What is our primary use case?
My main use case for BrowserStack is automating some of our test cases. A specific example is automated checks against some of our application URLs.
What is most valuable?
BrowserStack has helped us automate our test cases, reducing testing time by almost 60%. The best features BrowserStack offers are the automated ways to simulate different browsers and devices; I mostly use the browser simulation features. BrowserStack has positively impacted my organization by reducing the human capacity we need by 50%, with that reduction coming mostly from manual testing efforts.
What needs improvement?
I feel there is not much to improve about BrowserStack.
For how long have I used the solution?
I have been using BrowserStack for a year now.
What do I think about the scalability of the solution?
I have had no issues with BrowserStack's scalability so far.
How are customer service and support?
The customer support for BrowserStack is great.
Which solution did I use previously and why did I switch?
Before choosing BrowserStack, I did not evaluate other options.
What was our ROI?
I have seen a return on investment with BrowserStack, specifically a 50% reduction in the human capacity needed for testing.
What's my experience with pricing, setup cost, and licensing?
My experience with pricing, setup cost, and licensing was good.
What other advice do I have?
My advice for others looking into using BrowserStack is to consider purchasing via AWS Marketplace if the purchase price is high. I would rate this solution a ten out of ten.
Which deployment model are you using for this solution?
Public Cloud
Lightning-Fast Parallel Testing with Seamless CI Integration
What do you like best about the product?
Parallelism that keeps pipelines lean. Our Appium tests fan out to ~40 real Android and iOS devices simultaneously (Pixel, Samsung, iPhone, iPad), cutting validation time from 60 minutes to 15.
Artifacts that tell the full story. Each session link bundles video, network/HAR, console, logcat/syslog, and device metadata, so debugging feels like being on the phone itself.
Stable CI integrations. With GitHub Actions and Jenkins triggers, every PR posts a pass/fail matrix to Slack and links back to the exact failing session for instant triage.
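A minimal sketch of how such a device fan-out might look from a CI job. The hub URL is BrowserStack's standard Appium endpoint; the device names, app id, and the body of run_on_device are illustrative placeholders, not the reviewer's actual suite.

```python
# Sketch: fan an Appium suite out across a BrowserStack device matrix.
# Device entries, the app id, and run_on_device are placeholders.
from concurrent.futures import ThreadPoolExecutor

HUB = "https://{user}:{key}@hub-cloud.browserstack.com/wd/hub"

DEVICES = [
    {"deviceName": "Google Pixel 7", "platformName": "android", "osVersion": "13.0"},
    {"deviceName": "Samsung Galaxy S23", "platformName": "android", "osVersion": "13.0"},
    {"deviceName": "iPhone 14", "platformName": "ios", "osVersion": "16"},
    {"deviceName": "iPad 9th", "platformName": "ios", "osVersion": "15"},
]

def build_caps(device, app_url, build_name):
    """Merge per-device settings with shared session options."""
    caps = dict(device)
    caps["app"] = app_url  # bs:// id returned by an earlier app upload
    caps["bstack:options"] = {"buildName": build_name, "debug": True}
    return caps

def run_on_device(caps):
    # Placeholder for a real Appium session, roughly:
    #   driver = appium.webdriver.Remote(HUB.format(...), options=...)
    #   ...test steps...; driver.quit()
    return f"ran on {caps['deviceName']}"

def fan_out(app_url, build_name, max_parallel=4):
    """Run the suite on every device, max_parallel sessions at a time."""
    caps_list = [build_caps(d, app_url, build_name) for d in DEVICES]
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        return list(pool.map(run_on_device, caps_list))
```

In a real pipeline, each worker would open a remote Appium session against `HUB` and the CI step would post the resulting pass/fail matrix to Slack.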
What do you dislike about the product?
1. App resigning quirks (push/universal-link entitlements) sometimes add setup friction.
2. Tunnel drops on long tests under corporate proxies.
What problems is the product solving and how is that benefiting you?
Replaces local device labs. No USB juggling; real-world gestures, sensors, and OS versions are covered automatically.
Faster, cleaner signal. Parallel runs plus rich logs mean fewer flaky results and 60-70% faster triage.
Confidence before release. We validate upgrade paths, deep links, locale/RTL behavior, and network throttling pre-merge, cutting escaped mobile bugs by roughly 35%.
Quick Results but Pricing Could Be Better
What do you like best about the product?
I use BrowserStack primarily for testing, and I've found it to be incredibly valuable in automating our testing scenarios. The platform excels in delivering quick results, which is essential for keeping up with our fast-paced workflow. This efficiency improvement has been a standout feature, as it streamlines our testing process and saves valuable time. Additionally, integrating it with React has been smooth, complementing our existing development stack well.
What do you dislike about the product?
I think the pricing is an issue. Additionally, while we've experienced improvements in efficiency, this aspect also poses some challenges.
What problems is the product solving and how is that benefiting you?
I use BrowserStack to automate testing scenarios quickly, improving our team's efficiency significantly.
Essential for Pre-Launch Testing and Bug Prevention
What do you like best about the product?
Chrome 3PC/ITP readiness sweeps. We run a cookie matrix (first-party only, partitioned, SameSite=Lax) across Chrome/Safari in Live/Automate to catch auth and cross-subdomain bugs before launch, no local hacks.
DST/locale “calendar chaos” runs. One pass sets devices to DST-switch dates and non-Gregorian locales to flush out date math, invoice due-date, and countdown bugs that unit tests miss.
Install/uninstall sanity for mobile. App Live lets us validate clean-install vs upgrade paths (cold cache, SW/asset refresh), uncovering stale WebView assets and versioned deep-links.
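The cookie matrix above expands into one Automate session per (browser, mode) combination. A minimal sketch of that expansion, assuming illustrative mode flags and a placeholder check_auth; neither is BrowserStack API surface:

```python
# Sketch: expand the cookie-policy matrix into (browser, mode) combos,
# one per Automate session. Mode flags and check_auth are placeholders.
from itertools import product

BROWSERS = ["chrome", "safari"]
COOKIE_MODES = [
    {"mode": "first-party-only", "third_party": False, "same_site": "Lax", "partitioned": False},
    {"mode": "partitioned", "third_party": True, "same_site": "None", "partitioned": True},
    {"mode": "samesite-lax", "third_party": True, "same_site": "Lax", "partitioned": False},
]

def cookie_matrix():
    """One entry per (browser, cookie-mode) pair to cover before launch."""
    return [{"browser": b, **m} for b, m in product(BROWSERS, COOKIE_MODES)]

def check_auth(combo):
    # Placeholder: a real check would drive login plus a cross-subdomain
    # navigation in an Automate session configured for this combo.
    return {"browser": combo["browser"], "mode": combo["mode"], "passed": True}

results = [check_auth(c) for c in cookie_matrix()]
```

Keeping the matrix as data makes it cheap to add a new browser or cookie mode when the next deprecation milestone lands.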
What do you dislike about the product?
No first-class device reservations for release hour; popular Safari/macOS queues still bite.
What problems is the product solving and how is that benefiting you?
Prevents “day-one” auth failures: Cookie/ITP sweeps catch SSO fall-throughs early, avoiding hotfix Fridays.
Stops calendar/localization bugs: DST/locale matrices reveal off-by-one and formatting issues before customers do.
De-risks mobile upgrades: Clean-install vs upgrade checks surface cached asset and link-routing regressions, cutting MTTR on app releases.
Effortless Real-Device Testing with Powerful Debugging Tools
What do you like best about the product?
I drag an APK/IPA, jump into a real Pixel/iPhone in ~60 seconds, and capture video + screenshots + logcat/syslog from one session URL, perfect for Jira.
True “real-world” toggles. I can flip device language (including RTL), time zone, geolocation, and 3G/4G/offline profiles to surface auth, deep-link, and caching bugs that emulators miss.
WebView + native in one view. Seeing the JS console next to native logs pinpoints whether a failure is app code or web content, cutting triage time.
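The same language/time zone/geolocation/network toggles can be set programmatically for automated runs. A minimal sketch; the option names under `bstack:options` follow my reading of BrowserStack's App Automate documentation and should be treated as assumptions to verify:

```python
# Sketch: capabilities for a "real-world" run: RTL language, shifted
# time zone, spoofed GPS, and a throttled network profile. Option names
# under bstack:options are assumptions; verify against current docs.
def real_world_caps(device, os_version, app_id,
                    language="ar", region="AE", timezone="Dubai",
                    gps="25.20,55.27", network="3g-good"):
    """Build one session's capabilities for a locale/RTL + geo + slow-network pass."""
    return {
        "platformName": "android",
        "appium:app": app_id,            # bs:// id from an app upload
        "appium:language": language,     # "ar" exercises RTL layouts
        "appium:locale": region,
        "bstack:options": {
            "deviceName": device,
            "osVersion": os_version,
            "timezone": timezone,        # assumed option name
            "gpsLocation": gps,          # "lat,long" string, assumed
            "networkProfile": network,   # e.g. "3g-good", assumed value
        },
    }

caps = real_world_caps("Google Pixel 7", "13.0", "bs://sample-app-id")
```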
What do you dislike about the product?
Entitlement hiccups on some IPAs (universal links, push) make first-time setup fiddly.
What problems is the product solving and how is that benefiting you?
Reproducible, ticket-ready bugs: one link holds video, device/OS, and logs, ending “can’t reproduce” loops.
Coverage without a device cart: real iOS/Android versions expose layout/gesture/localization issues early.
Faster PR unblocks: quick sanity passes on feature branches behind VPN catch issues in minutes, not days.
Essential Accessibility Testing with Real Device Validation
What do you like best about the product?
True screen-reader validation on real devices. In App Live I enable TalkBack (Android) and VoiceOver (iOS) to verify focus order, rotor actions, hints/labels, and custom controls, no simulation guesses.
Shift-left checks in CI. On App Automate we run Espresso Accessibility Checks and XCTest a11y assertions to fail builds on missing labels, tiny hit targets, dynamic type clipping, and contrast snapshots before manual passes.
What do you dislike about the product?
OS settings at scale are manual. System toggles like Dynamic Type, Reduce Motion, and High-Contrast can’t always be scripted reliably across big device grids.
What problems is the product solving and how is that benefiting you?
Catches real-world a11y bugs early: focus traps, unlabeled icons, motion sensitivity, and text scaling regressions surface pre-merge, not in production.
Repro you can trust: video + logs + device details end “can’t reproduce” loops and satisfy compliance reviews.
Simplifying Multi-Device Testing for Teams
What do you like best about the product?
BrowserStack offers a very convenient way to test on real devices without the need for any setup. I appreciate the smoothness of the interface and the speed with which I can switch between various browsers and operating system versions. It saves me a great deal of effort and helps keep my testing workflow straightforward and efficient.
What do you dislike about the product?
Sometimes, the loading time is a little longer than I would expect, but this doesn’t take away from the overall excellent experience.
What problems is the product solving and how is that benefiting you?
BrowserStack spares me the effort of setting up and maintaining a personal device lab, as it enables immediate testing on real browsers and various operating system versions. Thanks to this platform, I can detect layout and performance issues early, minimize the time I spend on setup, and achieve a much more reliable and efficient testing workflow.
Excellent Test Tools and Process
What do you like best about the product?
Test Tools & Process – Accessibility & Coverage
What do you dislike about the product?
The performance is an issue, and it's not possible to run this in closed or secured environments.
What problems is the product solving and how is that benefiting you?
The platform provides efficient QA coverage, which has helped streamline our testing process. Additionally, we've experienced noticeable cost savings on devices, making it a practical solution for our needs.
Effortless Test Case Writing, Impressive Results
What do you like best about the product?
Writing test cases now takes seconds instead of hours.
What do you dislike about the product?
The accuracy of AI-generated results should ideally be 100%. Anything less places an unnecessary burden on the tester.
What problems is the product solving and how is that benefiting you?
Writing test cases, maintaining a shared repository, and manually adding data sets are all valuable practices. Combining these methods with AI-generated test cases makes the process even more effective and enjoyable.
Intuitive and Easy to Use—A Pleasure to Work With
What do you like best about the product?
The software is easy to use and feels intuitive. It offers a wide range of functionalities to suit various needs.
What do you dislike about the product?
I can't think of anything I dislike; BrowserStack answers my needs.
What problems is the product solving and how is that benefiting you?
The team saves time by generating test cases with AI, and test coverage is higher than before.