Testing your apps for all browsers
What do you like best about the product?
Very helpful in testing web and mobile apps in different browsers for compatibility
What do you dislike about the product?
For whatever limited use of BrowserStack we have done, it looks good.
What problems is the product solving and how is that benefiting you?
Testing web and mobile apps in different browsers for compatibility.
Effortless Testing with Real Devices
What do you like best about the product?
I really appreciate BrowserStack for offering a full range of real devices, which is essential for us as a service company with customers who are very particular about application compatibility. The ease of use when it comes to installing APK or IPA files is a big plus, especially when I want to upgrade the APK with new features or test a web app on multiple browsers. I'm also starting to like the AI features. The initial setup is easy, and it fits well with our needs.
What do you dislike about the product?
I find the iOS app installation slower than I'd like. When installing an app after selecting the device, it takes a good amount of time.
What problems is the product solving and how is that benefiting you?
BrowserStack provides a full range of real devices for testing, which is crucial for our service company. It's easy to install APK or IPA files and test web apps across multiple browsers. I also enjoy the new AI features.
Effortless Cross-Browser Testing with Seamless Automation Integration
What do you like best about the product?
It saves a lot of time because everything runs online and setup is easy. The interface is simple, and it works well with automation tools like Selenium and CI/CD pipelines. It’s very helpful for teams that need cross-browser testing. However, it can feel expensive; sometimes sessions are slow or laggy, and device availability can be an issue during busy times. Overall, BrowserStack is a good and reliable testing platform, but the price and performance issues may not suit everyone.
What do you dislike about the product?
Nothing as of now, but sometimes the session disconnects a lot, though it's not usual.
What problems is the product solving and how is that benefiting you?
It removes the hassle of setting up multiple environments and saves a lot of time by providing instant access to real devices in the cloud. This helps me quickly catch cross-browser and device-specific issues, improves test coverage, and speeds up both manual and automated testing.
Effortless Parallel Testing and Seamless Integrations
What do you like best about the product?
Parallel execution actually works reliably. When we run our full suite across multiple OS-browser combinations, the time savings are huge. The integration with popular frameworks (Selenium, Appium, Playwright, Cypress) is straightforward; you don’t waste time wrestling with configs. The session dashboard is clean: logs, video, network timelines, console output, all in one place. Debugging flakiness becomes much easier. Being able to run tests on real mobile + desktop devices without maintaining hardware is a massive operational relief. Build insights (pass/fail trends, flaky test detection, failure clustering) help identify problem tests faster.
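For readers wiring up the same kind of framework integration, here is a minimal sketch of a single remote session on BrowserStack Automate, assuming Selenium 4's Python bindings; the username, access key, and session name are placeholders, and the exact capability layout may vary by SDK version:

```python
# Minimal sketch: one remote Selenium session on BrowserStack Automate.
# Credentials and session name below are placeholders, not real values.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.set_capability("bstack:options", {
    "os": "Windows",
    "osVersion": "11",
    "userName": "YOUR_USERNAME",
    "accessKey": "YOUR_ACCESS_KEY",
    "sessionName": "Homepage smoke test",
})

driver = webdriver.Remote(
    command_executor="https://hub-cloud.browserstack.com/wd/hub",
    options=options,
)
try:
    driver.get("https://example.com")
    assert "Example" in driver.title  # trivial smoke check
finally:
    driver.quit()
```

Scaling this across OS-browser combinations is then mostly a matter of parameterizing the capability dictionary per target and letting the test runner fan the sessions out in parallel.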
What do you dislike about the product?
Occasional device allocation delays during peak hours can slow down CI pipelines. Video playback during debugging sometimes feels slightly laggy for animation-heavy flows. Network logs are useful but could be more detailed for deep investigation. If the suite has many retries, the dashboard can get cluttered without proper naming conventions.
What problems is the product solving and how is that benefiting you?
It eliminates the need to manage Selenium grids, Appium servers, or physical device racks; zero time wasted on infra. It makes cross-platform testing scalable: we can validate the same feature across Chrome, Safari, Firefox, Edge, and multiple OS versions in parallel. It reduces investigation time dramatically because every failure comes with video, logs, screenshots, and environment details. It fits cleanly into CI pipelines; our automated suite runs on every pull request, which improves code quality before merging. It helps catch environment-specific bugs early (especially WebKit/Safari quirks and Android OEM inconsistencies) and increases release confidence because automation coverage becomes both broader and more stable.
Continuous cross-browser testing has reduced production defects and improved team collaboration
What is our primary use case?
My main use case, over the four years I have been using BrowserStack, is mostly testing solutions that must support older devices and operating systems, such as the iPhone 8 or iOS 12, especially for applications whose customers do not have the most recent devices on the market and are still using Internet Explorer or earlier versions of Edge. BrowserStack provides us with devices and browsers for those legacy solutions.
When using BrowserStack for those legacy devices and browsers, we use it for both manual and automation testing, which allows us to utilize Appium as well as Selenium in both modes.
What is most valuable?
The best features BrowserStack offers for us include App Live, which has really helped us; the quick availability of real-time devices as soon as new ones are launched, such as when iPhone 17 was released; integration with project management tools including Jira and Slack, which is very handy; and access to network logs, something we have made good use of.
From a productivity standpoint, the integration with the wider ecosystem of project management tools has the biggest impact for us, specifically with Jira and Slack, as it helps us log tickets and bugs directly, providing evidence for the tickets we are logging. This was much slower before, especially when dealing with flaky applications or newly live releases that have numerous problems. The integration helps us quickly log bugs using the evidence provided by BrowserStack.
BrowserStack has positively impacted our organization by improving collaboration and showing quality improvements in releases, with the number of defects leaking into production significantly reduced.
We track two metrics: the number of bugs leaking to production per application and the number of customer support issues reported. We have seen a reduction of close to 20 to 25% in defects compared to past releases. When we started using BrowserStack, release four had about 150 bugs reported compared to 200 in release three. Incidents reported by end customers have also seen a reduction.
What needs improvement?
I think false positives are an area where BrowserStack can improve, as I have often seen things working fine on actual devices, but on BrowserStack devices, issues arise due to network slowness or AWS region connectivity problems that cause lag.
In addition to false positives and network slowness, feature improvements could include monitoring dashboards or consolidated dashboards for multiple releases across different domains, allowing us to see runs scheduled and link us to reports of passed and failed cases.
For how long have I used the solution?
I have been using BrowserStack for a total period of close to four years across two organizations.
What do I think about the stability of the solution?
BrowserStack is mostly stable for our needs, though sometimes there is slowness in the network, especially when working with AWS-based hosting.
What do I think about the scalability of the solution?
Currently, BrowserStack's scalability for our organization meets our needs as we have relatively limited use cases, and so far what we have scaled has worked fine for us.
How are customer service and support?
I have not had to interact with BrowserStack's customer support team, as most issues were addressed locally.
How would you rate customer service and support?
Which solution did I use previously and why did I switch?
Before using BrowserStack, we had used LambdaTest and physical devices as our prior solutions.
What was our ROI?
We have definitely seen a return on investment with BrowserStack, particularly in the value realized per automated test case and the time saved testing apps across multiple clouds, browsers, and operating systems, which leads to cost savings since we previously had many resources engaged in that work.
What's my experience with pricing, setup cost, and licensing?
The setup cost and licensing were handled at the enterprise level, as our bank is a large organization, and these central negotiations were managed by the finance team, so I have limited exposure to that.
Which other solutions did I evaluate?
Before choosing BrowserStack, we mostly evaluated LambdaTest as an option.
What other advice do I have?
We have a local version of BrowserStack for direct access, and we also access BrowserStack from AWS EC2 machines, so both kinds of interaction are available to us.
My advice for others looking into using BrowserStack is to evaluate options, perform an ROI calculation beforehand, and identify the specific use cases BrowserStack excels at, as this will lead to a much higher ROI return for your organization rather than using it for everything, including manual testing. I would rate my overall experience with BrowserStack an 8 out of 10.
Which deployment model are you using for this solution?
Hybrid Cloud
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Amazon Web Services (AWS)
Comprehensive Testing Tools for All Devices
What do you like best about the product?
A range of testing tools is available, allowing us to perform tests on a variety of different devices.
What do you dislike about the product?
At this moment, I have nothing to dislike.
What problems is the product solving and how is that benefiting you?
There is a wide range of devices available for us to test on, which makes things much more convenient since we don't have to rely on having any physical devices ourselves.
Essential Tool for Cross-Browser and Device Testing
What do you like best about the product?
As a tester, I find it very useful. For manual testing, when I want to do cross-browser testing or multiple-device testing, I prefer BrowserStack. In addition to this, we now have AI tools as well; I would like to proceed further with these.
What do you dislike about the product?
Whenever I use this platform, I find it helpful.
What problems is the product solving and how is that benefiting you?
Primarily, testing features across multiple devices and platforms.
Effortless Testing with Smooth Workflows and Real-Device Coverage
What do you like best about the product?
The build upload -> run -> debug cycle is extremely smooth. I don’t have to think twice; it just works.
Parallel execution actually saves my day during regression sprints. Instead of waiting hours, I wrap things up much faster.
Real-device coverage is impressive. Half of the issues we catch now come from devices we’d never be able to afford or maintain in-house.
The session recordings and live logs feel like someone recorded the entire investigation trail for me, super helpful when collaborating with devs.
What do you dislike about the product?
Sometimes older devices take too long to boot, and that slows down fast feedback.
When the dashboard gets busy, the UI feels a bit sluggish. Not a blocker, but noticeable.
What problems is the product solving and how is that benefiting you?
We don’t maintain a physical device lab anymore, no storage, no upgrades, no complaining devices. It helps us catch platform-specific issues early (especially those odd Android OEM quirks). Since everything runs in parallel, our regression cycle has gone from a full day to a few hours. Developers get cleaner, reproducible bug reports because each session has logs + video + device details neatly packaged.
Lightning-Fast Parallel Testing with Seamless CI Integration
What do you like best about the product?
Parallelism that keeps pipelines lean. Our Appium tests fan out to roughly 40 real Android and iOS devices simultaneously (Pixel, Samsung, iPhone, iPad), cutting validation time from 60 minutes to 15 minutes; a sketch of the fan-out pattern follows below.
Artifacts that tell the full story. Each session link bundles video, network/HAR, console, logcat/syslog, and device metadata, so debugging feels like being on the phone itself.
Stable CI integrations. With GitHub Actions and Jenkins triggers, every PR posts a pass/fail matrix to Slack and links back to the exact failing session for instant triage.
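As an illustration of that device fan-out, here is a hedged sketch using the Appium Python client (3.x+ option classes) against BrowserStack's hub; the device names, app id, and credentials are placeholders rather than our actual configuration:

```python
# Hedged sketch: fan one Appium check out to several real devices in parallel.
# Device list, app id, and credentials are illustrative placeholders.
from concurrent.futures import ThreadPoolExecutor

from appium import webdriver
from appium.options.android import UiAutomator2Options

HUB = "https://hub-cloud.browserstack.com/wd/hub"

DEVICES = [
    {"deviceName": "Google Pixel 8", "osVersion": "14.0"},
    {"deviceName": "Samsung Galaxy S23", "osVersion": "13.0"},
]

def run_on(device):
    options = UiAutomator2Options()
    options.set_capability("platformName", "Android")
    options.set_capability("appium:app", "bs://<hashed-app-id>")  # previously uploaded build
    options.set_capability("bstack:options", {
        "deviceName": device["deviceName"],
        "osVersion": device["osVersion"],
        "userName": "YOUR_USERNAME",
        "accessKey": "YOUR_ACCESS_KEY",
    })
    driver = webdriver.Remote(HUB, options=options)
    try:
        # ... drive the app and make assertions here ...
        return device["deviceName"], "passed"
    finally:
        driver.quit()

# One thread per device keeps all sessions running concurrently.
with ThreadPoolExecutor(max_workers=len(DEVICES)) as pool:
    for name, status in pool.map(run_on, DEVICES):
        print(name, status)
```

In practice the same pattern is driven from CI, with the device list generated per pipeline run and the per-device results rolled up into the pass/fail matrix posted to Slack.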
What do you dislike about the product?
1. App resigning quirks (push/universal-link entitlements) sometimes add setup friction.
2. Tunnel drops on long tests under corporate proxies.
What problems is the product solving and how is that benefiting you?
Replaces local device labs: no USB juggling, and real-world gestures, sensors, and OS versions are covered automatically. Faster, cleaner signal: parallel runs plus rich logs mean fewer flaky results and 60 to 70% faster triage. Confidence before release: we validate upgrade paths, deep links, locale/RTL behavior, and network throttling pre-merge, cutting escaped mobile bugs by roughly 35%.
Essential for Pre-Launch Testing and Bug Prevention
What do you like best about the product?
Chrome 3PC/ITP readiness sweeps. We run a cookie matrix (first-party only, partitioned, SameSite=Lax) across Chrome/Safari in Live/Automate to catch auth and cross-subdomain bugs before launch, with no local hacks.
DST/locale “calendar chaos” runs. One pass sets devices to DST-switch dates and non-Gregorian locales to flush out date math, invoice due-date, and countdown bugs that unit tests miss.
Install/uninstall sanity for mobile. App Live lets us validate clean-install vs upgrade paths (cold cache, SW/asset refresh), uncovering stale WebView assets and versioned deep-links.
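To make the cookie-matrix idea concrete, here is a rough sketch of how such a sweep could be scripted with Selenium against BrowserStack Automate; the staging URLs, the cookie_policy query parameter, and the credentials are hypothetical stand-ins for an internal setup, not product or BrowserStack APIs:

```python
# Hedged sketch: one Automate session per (browser, cookie policy) pair.
# The staging endpoint and its cookie_policy parameter are hypothetical.
from itertools import product

from selenium import webdriver

BROWSERS = [
    {"browserName": "Chrome", "os": "Windows", "osVersion": "11"},
    {"browserName": "Safari", "os": "OS X", "osVersion": "Sonoma"},
]
COOKIE_POLICIES = ["first-party-only", "partitioned", "samesite-lax"]

for browser, policy in product(BROWSERS, COOKIE_POLICIES):
    options = (webdriver.ChromeOptions() if browser["browserName"] == "Chrome"
               else webdriver.SafariOptions())
    options.set_capability("bstack:options", {
        "os": browser["os"],
        "osVersion": browser["osVersion"],
        "userName": "YOUR_USERNAME",
        "accessKey": "YOUR_ACCESS_KEY",
        "sessionName": f"{browser['browserName']} / {policy}",
    })
    driver = webdriver.Remote("https://hub-cloud.browserstack.com/wd/hub",
                              options=options)
    try:
        # Log in with the chosen cookie policy, then hop across subdomains
        # and check the session survives.
        driver.get(f"https://staging.example.com/login?cookie_policy={policy}")
        driver.get("https://app.staging.example.com/dashboard")
        assert "Sign in" not in driver.page_source, f"auth dropped for {policy}"
    finally:
        driver.quit()
```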
What do you dislike about the product?
No first-class device reservations for release hour; popular Safari/macOS queues still bite.
What problems is the product solving and how is that benefiting you?
Prevents “day-one” auth failures: cookie/ITP sweeps catch SSO fall-throughs early, avoiding hotfix Fridays.
Stops calendar/localization bugs: DST/locale matrices reveal off-by-one and formatting issues before customers do.
De-risks mobile upgrades: clean-install vs upgrade checks surface cached-asset and link-routing regressions, cutting MTTR on app releases.