Continuous cross-browser testing has reduced production defects and improved team collaboration
What is our primary use case?
My main use case for BrowserStack, which I have been using for four years, is testing solutions that must support older devices and operating systems, such as the iPhone 8 or iOS 12, especially for applications used by customers who do not have the most recent devices on the market and are still on Internet Explorer or earlier versions of Edge. BrowserStack provides us with devices and browsers for those legacy solutions.
When using BrowserStack for those legacy devices and browsers, we typically use it for both manual and automated testing, with Appium as well as Selenium.
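As a rough illustration of the automation side, below is a minimal Selenium sketch that targets a legacy browser through BrowserStack; the credentials and page URL are placeholders, and the exact capability values (browser, OS, version) should be confirmed against BrowserStack's capability builder.

```java
import java.net.URL;
import java.util.HashMap;
import org.openqa.selenium.MutableCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class LegacyBrowserSmokeTest {
    public static void main(String[] args) throws Exception {
        MutableCapabilities caps = new MutableCapabilities();
        caps.setCapability("browserName", "IE");       // legacy Internet Explorer session
        caps.setCapability("browserVersion", "11.0");

        HashMap<String, Object> bstackOptions = new HashMap<>();
        bstackOptions.put("os", "Windows");
        bstackOptions.put("osVersion", "10");
        bstackOptions.put("sessionName", "Legacy IE smoke test"); // shows up in the Automate dashboard
        caps.setCapability("bstack:options", bstackOptions);

        // YOUR_USERNAME / YOUR_ACCESS_KEY are placeholders for the account credentials.
        RemoteWebDriver driver = new RemoteWebDriver(
                new URL("https://YOUR_USERNAME:YOUR_ACCESS_KEY@hub-cloud.browserstack.com/wd/hub"),
                caps);
        try {
            driver.get("https://example.com/login"); // hypothetical page from the legacy solution
            System.out.println("Page title on legacy IE: " + driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}
```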
What is most valuable?
The best features BrowserStack offers for us include App Live, which has really helped us; the quick availability of real devices as soon as new ones are launched, such as when the iPhone 17 was released; integration with project management tools including Jira and Slack, which is very handy; and access to network logs, which we have made good use of.
From a productivity standpoint, the integration with the wider ecosystem of project management tools has the biggest impact for us, specifically with Jira and Slack, as it helps us log tickets and bugs directly, providing evidence for the tickets we are logging. This was much slower before, especially when dealing with flaky applications or newly live releases that have numerous problems. The integration helps us quickly log bugs using the evidence provided by BrowserStack.
BrowserStack has positively impacted our organization by improving collaboration and showing quality improvements in releases, with the number of defects leaking into production significantly reduced.
We track two metrics: the number of bugs leaking to production per application and the number of customer support issues reported. We have seen a reduction of close to 20 to 25% in defects compared to past releases. When we started using BrowserStack, release four had about 150 bugs reported compared to 200 in release three. Incidents reported by end customers have also seen a reduction.
What needs improvement?
I think false positives are an area where BrowserStack can improve, as I have often seen things working fine on actual devices, but on BrowserStack devices, issues arise due to network slowness or AWS region connectivity problems that cause lag.
In addition to false positives and network slowness, feature improvements could include monitoring dashboards, or consolidated dashboards for multiple releases across different domains, allowing us to see scheduled runs and linking to reports of passed and failed cases.
For how long have I used the solution?
I have been using BrowserStack for a total period of close to four years across two organizations.
What do I think about the stability of the solution?
BrowserStack is mostly stable for our needs, though sometimes there is slowness in the network, especially when working with AWS-based hosting.
What do I think about the scalability of the solution?
Currently, BrowserStack's scalability for our organization meets our needs as we have relatively limited use cases, and so far what we have scaled has worked fine for us.
How are customer service and support?
I have not had to interact with BrowserStack's customer support team, as most issues were addressed locally.
How would you rate customer service and support?
Which solution did I use previously and why did I switch?
Before using BrowserStack, we used LambdaTest and physical devices.
What was our ROI?
We have definitely seen a return on investment with BrowserStack, particularly when tracking the value realized per automated test case and the time saved testing apps across multiple clouds, browsers, and operating systems; this has led to cost savings, since we previously had many resources engaged in that work.
What's my experience with pricing, setup cost, and licensing?
The setup cost and licensing were handled at the enterprise level, as our bank is a large organization, and these central negotiations were managed by the finance team, so I have limited exposure to that.
Which other solutions did I evaluate?
Before choosing BrowserStack, we mostly evaluated LambdaTest as an option.
What other advice do I have?
We have a local version of BrowserStack for direct access, and we also access BrowserStack from AWS EC2 machines, so both kinds of access are available to us.
My advice for others looking into using BrowserStack is to evaluate options, perform an ROI calculation beforehand, and identify the specific use cases BrowserStack excels at, as this will lead to a much higher return for your organization than using it for everything, including manual testing. I would rate my overall experience with BrowserStack an 8 out of 10.
Which deployment model are you using for this solution?
Hybrid Cloud
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Amazon Web Services (AWS)
Comprehensive Testing Tools for All Devices
What do you like best about the product?
A range of testing tools is available, allowing us to perform tests on a variety of different devices.
What do you dislike about the product?
At this moment, I have nothing to dislike.
What problems is the product solving and how is that benefiting you?
There is a wide range of devices available for us to test on, which makes things much more convenient since we don't have to rely on having any physical devices ourselves.
Essential Tool for Cross-Browser and Device Testing
What do you like best about the product?
As a tester, I find it very useful. For manual testing, if I want to do cross-browser testing or multi-device testing, I prefer BrowserStack. In addition, AI tools are now available as well, and I would like to explore them further.
What do you dislike about the product?
Whenever I use this platform, I find it helpful.
What problems is the product solving and how is that benefiting you?
Primarily testing features across multiple devices and platforms.
Effortless Testing with Smooth Workflows and Real-Device Coverage
What do you like best about the product?
The build upload -> run -> debug cycle is extremely smooth. I don’t have to think twice; it just works.
Parallel execution actually saves my day during regression sprints. Instead of waiting hours, I wrap things up much faster.
Real-device coverage is impressive. Half of the issues we catch now come from devices we’d never be able to afford or maintain in-house.
The session recordings and live logs feel like someone recorded the entire investigation trail for me, super helpful when collaborating with devs.
What do you dislike about the product?
Sometimes older devices take too long to boot, and that slows down fast feedback.
When the dashboard gets busy, the UI feels a bit sluggish. Not a blocker, but noticeable.
What problems is the product solving and how is that benefiting you?
We don’t maintain a physical device lab anymore, no storage, no upgrades, no complaining devices. It helps us catch platform-specific issues early (especially those odd Android OEM quirks). Since everything runs in parallel, our regression cycle has gone from a full day to a few hours. Developers get cleaner, reproducible bug reports because each session has logs + video + device details neatly packaged.
Lightning-Fast Parallel Testing with Seamless CI Integration
What do you like best about the product?
Parallelism that keeps pipelines lean. Our Appium tests fan out to ~40 real Android and iOS devices simultaneously (Pixel, Samsung, iPhone, iPad), cutting validation time from 60 minutes to 15.
Artifacts that tell the full story. Each session link bundles video, network/HAR, console, logcat/syslog, and device metadata, so debugging feels like being on the phone itself.
Stable CI integrations. With GitHub Actions and Jenkins triggers, every PR posts a pass/fail matrix to Slack and links back to the exact failing session for instant triage.
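As a rough sketch of how one worker in such a fan-out might be configured, the snippet below sets BrowserStack App Automate capabilities with a shared build name so parallel sessions group under one build; the app hash, device names, credentials, and build name are hypothetical placeholders, and the capability names should be checked against BrowserStack's current documentation.

```java
import java.net.URL;
import java.util.HashMap;
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.MutableCapabilities;

public class ParallelDeviceWorker {
    // One worker in the parallel fan-out; CI would launch many of these with
    // different device/OS pairs taken from a device matrix.
    public static void runOnDevice(String device, String osVersion) throws Exception {
        MutableCapabilities caps = new MutableCapabilities();
        caps.setCapability("platformName", "android");
        // The bs:// hash is returned when the APK is uploaded to App Automate (placeholder here).
        caps.setCapability("appium:app", "bs://<app-hash>");

        HashMap<String, Object> bstackOptions = new HashMap<>();
        bstackOptions.put("deviceName", device);
        bstackOptions.put("osVersion", osVersion);
        // A shared buildName groups every parallel session from one PR/pipeline run,
        // so the Slack report can link back to a single build dashboard.
        bstackOptions.put("buildName", "pr-1234-regression");
        bstackOptions.put("sessionName", "Checkout flow on " + device);
        caps.setCapability("bstack:options", bstackOptions);

        AndroidDriver driver = new AndroidDriver(
                new URL("https://YOUR_USERNAME:YOUR_ACCESS_KEY@hub-cloud.browserstack.com/wd/hub"),
                caps);
        try {
            // ... test steps for this device ...
        } finally {
            driver.quit();
        }
    }
}
```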
What do you dislike about the product?
1. App resigning quirks (push/universal-link entitlements) sometimes add setup friction.
2. Tunnel drops on long tests under corporate proxies.
What problems is the product solving and how is that benefiting you?
Replaces local device labs. No USB juggling; real-world gestures, sensors, and OS versions are covered automatically.
Faster, cleaner signal. Parallel runs + rich logs mean fewer flaky results and 60 to 70% faster triage.
Confidence before release. We validate upgrade paths, deep links, locale/RTL behavior, and network throttling pre-merge, cutting escaped mobile bugs by roughly 35%.
Essential for Pre-Launch Testing and Bug Prevention
What do you like best about the product?
Chrome 3PC/ITP readiness sweeps. We run a cookie matrix (first-party only, partitioned, SameSite=Lax) across Chrome/Safari in Live/Automate to catch auth and cross-subdomain bugs before launch, with no local hacks (a sketch of one matrix cell follows below).
DST/locale “calendar chaos” runs. One pass sets devices to DST-switch dates and non-Gregorian locales to flush out date math, invoice due-date, and countdown bugs that unit tests miss.
Install/uninstall sanity for mobile. App Live lets us validate clean-install vs upgrade paths (cold cache, SW/asset refresh), uncovering stale WebView assets and versioned deep-links.
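As a minimal sketch of one cell of such a cookie matrix, the snippet below seeds a SameSite=Lax session cookie on a parent domain and checks that the user stays signed in after navigating to a subdomain; the cookie name, domain, URLs, and the "Sign out" marker are hypothetical.

```java
import org.openqa.selenium.Cookie;
import org.openqa.selenium.WebDriver;

public class SameSiteLaxCheck {
    // One cell of the cookie matrix: seed a SameSite=Lax session cookie on the
    // parent domain and confirm the app still treats the user as signed in after
    // a cross-subdomain navigation.
    static void checkLaxCookieAcrossSubdomains(WebDriver driver) {
        driver.get("https://www.example.com/login"); // must be on the domain before adding its cookie

        Cookie sessionCookie = new Cookie.Builder("session_id", "test-token")
                .domain(".example.com")
                .path("/")
                .isSecure(true)
                .sameSite("Lax")
                .build();
        driver.manage().addCookie(sessionCookie);

        driver.get("https://app.example.com/dashboard");
        boolean stillSignedIn = driver.getPageSource().contains("Sign out");
        if (!stillSignedIn) {
            throw new AssertionError("SameSite=Lax cookie did not survive cross-subdomain navigation");
        }
    }
}
```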
What do you dislike about the product?
No first-class device reservations for release hour; popular Safari/macOS queues still bite.
What problems is the product solving and how is that benefiting you?
Prevents “day-one” auth failures: cookie/ITP sweeps catch SSO fall-throughs early, avoiding hotfix Fridays.
Stops calendar/localization bugs: DST/locale matrices reveal off-by-one and formatting issues before customers do.
De-risks mobile upgrades: clean-install vs upgrade checks surface cached-asset and link-routing regressions, cutting MTTR on app releases.
Effortless Real-Device Testing with Powerful Debugging Tools
What do you like best about the product?
I drag an APK/IPA, jump into a real Pixel/iPhone in ~60 seconds, and capture video + screenshots + logcat/syslog from one session URL, perfect for Jira.
True “real-world” toggles. I can flip device language (including RTL), time zone, geolocation, and 3G/4G/offline profiles to surface auth, deep-link, and caching bugs that emulators miss (a capability sketch follows below).
WebView + native in one view. Seeing the JS console next to native logs pinpoints whether a failure is app code or web content, cutting triage time.
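The same toggles flipped manually in App Live can also be approximated in automated runs. Below is a hedged sketch of App Automate capabilities for language, locale, geolocation, and a throttled network profile; the capability names and values are assumptions based on BrowserStack's documented options and should be verified against the current capability builder.

```java
import java.util.HashMap;
import org.openqa.selenium.MutableCapabilities;

public class RealWorldToggles {
    // Builds capabilities that approximate the "real-world" toggles: device
    // language/locale (standard Appium capabilities) plus BrowserStack-specific
    // geolocation and network-profile options. All values are illustrative.
    public static MutableCapabilities rtlThrottledProfile() {
        MutableCapabilities caps = new MutableCapabilities();
        caps.setCapability("platformName", "android");
        caps.setCapability("appium:language", "ar");   // RTL language
        caps.setCapability("appium:locale", "AE");

        HashMap<String, Object> bstackOptions = new HashMap<>();
        bstackOptions.put("gpsLocation", "25.2048,55.2708");   // illustrative lat,long
        bstackOptions.put("networkProfile", "2g-gprs-lossy");  // profile name per BrowserStack docs (assumed)
        caps.setCapability("bstack:options", bstackOptions);
        return caps;
    }
}
```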
What do you dislike about the product?
Entitlement hiccups on some IPAs (universal links, push) make first-time setup fiddly.
What problems is the product solving and how is that benefiting you?
Reproducible, ticket-ready bugs: one link holds video, device/OS, and logs, so there are no “can’t reproduce” loops.
Coverage without a device cart: real iOS/Android versions expose layout/gesture/localization issues early.
Faster PR unblocks: quick sanity passes on feature branches behind VPN catch issues in minutes, not days.
Essential Accessibility Testing with Real Device Validation
What do you like best about the product?
True screen-reader validation on real devices. In App Live I enable TalkBack (Android) and VoiceOver (iOS) to verify focus order, rotor actions, hints/labels, and custom controls, with no simulation guesses.
Shift-left checks in CI. On App Automate we run Espresso Accessibility Checks and XCTest a11y assertions to fail builds on missing labels, tiny hit targets, dynamic type clipping, and contrast snapshots before manual passes (a sketch follows below).
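For the Espresso side, here is a minimal sketch of how the Accessibility Test Framework checks can be enabled so that CI fails on basic a11y violations; AccessibilityChecks.enable() is the standard Espresso accessibility entry point, while the test class, content description, and interaction are hypothetical placeholders.

```java
import static androidx.test.espresso.Espresso.onView;
import static androidx.test.espresso.action.ViewActions.click;
import static androidx.test.espresso.matcher.ViewMatchers.withContentDescription;

import androidx.test.espresso.accessibility.AccessibilityChecks;
import androidx.test.ext.junit.runners.AndroidJUnit4;
import org.junit.BeforeClass;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(AndroidJUnit4.class)
public class CheckoutA11yTest {

    @BeforeClass
    public static void enableAccessibilityChecks() {
        // Runs the Accessibility Test Framework on every view Espresso interacts
        // with; violations (missing labels, tiny touch targets, low contrast)
        // fail the test, so the CI build is blocked before any manual pass.
        AccessibilityChecks.enable().setRunChecksFromRootView(true);
    }

    @Test
    public void payButtonPassesAccessibilityChecks() {
        // "Pay" is a hypothetical content description from the app under test;
        // the a11y checks run automatically when the click action executes.
        onView(withContentDescription("Pay")).perform(click());
    }
}
```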
What do you dislike about the product?
OS settings at scale are manual. System toggles like Dynamic Type, Reduce Motion, and High-Contrast can’t always be scripted reliably across big device grids.
What problems is the product solving and how is that benefiting you?
Catches real-world a11y bugs early: focus traps, unlabeled icons, motion sensitivity, and text-scaling regressions surface pre-merge, not in production.
Repro you can trust: video + logs + device details end “can’t reproduce” loops and satisfy compliance reviews.
Simplifying Multi-Device Testing for Teams
What do you like best about the product?
BrowserStack offers a very convenient way to test on real devices without the need for any setup. I appreciate the smoothness of the interface and the speed with which I can switch between various browsers and operating system versions. It saves me a great deal of effort and helps keep my testing workflow straightforward and efficient.
What do you dislike about the product?
Sometimes, the loading time is a little longer than I would expect, but this doesn’t take away from the overall excellent experience.
What problems is the product solving and how is that benefiting you?
BrowserStack spares me the effort of setting up and maintaining a personal device lab, as it enables immediate testing on real browsers and various operating system versions. Thanks to this platform, I can detect layout and performance issues early, minimize the time I spend on setup, and achieve a much more reliable and efficient testing workflow.
Excellent Test Tools and Process
What do you like best about the product?
Test Tools & Process – Accessibility & Coverage
What do you dislike about the product?
The performance is an issue, and it is not possible to run it in closed or secured environments.
What problems is the product solving and how is that benefiting you?
The platform provides efficient QA coverage, which has helped streamline our testing process. Additionally, we've experienced noticeable cost savings on devices, making it a practical solution for our needs.