Reviews from AWS Marketplace
0 AWS reviews
External reviews
External reviews are not included in the AWS star rating for the product.
W&B helped me increase my productivity
What do you like best about the product?
I like the web UI, especially the manipulation of plots and reports, as it simplifies and visualizes many metrics and parameters.
I also like the artifact store and the model registry; they help manage the countless models created during an ML/DL project.
Sweep management is also cool! We built an automation tool around it that simplifies ML sweeps and thus helps us get better results.
Finally, I love the prompt and kind assistance we (Nvidia) get on the dedicated Slack channel. Really appreciated!
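For context, a minimal sketch of what the artifact and model-versioning workflow mentioned above typically looks like in the wandb Python client (the project, artifact, and file names here are hypothetical):

```python
import wandb

# Start a run in a (hypothetical) project.
run = wandb.init(project="my-dl-project", job_type="training")

# ... training code produces model.pt ...

# Version the trained model as an artifact so it can be tracked and reused.
artifact = wandb.Artifact("classifier-weights", type="model")
artifact.add_file("model.pt")
run.log_artifact(artifact)

run.finish()
```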
What do you dislike about the product?
Not too much actually :)
I guess sometimes the web UI is a bit slow.
What problems is the product solving and how is that benefiting you?
It solves the management of many (many, many) ML experiments; it helps us improve our KPIs and track that improvement.
This benefits us by saving a lot of time when making dev decisions based on results (e.g., deciding on an algorithm change or a set of hyperparameters).
Best existing machine learning experiment tracker, including great hyperparameter tuning
What do you like best about the product?
Extremely easy to use (both in the browser and via the API), plus a sweep launcher that distributes experiments across different machines
What do you dislike about the product?
There's no easy pipeline for cross-validation unless you play around a bit... In any case, it never gets as smooth as the other default functionality.
It is designed for the setting where you have fixed train, val, and test sets
What problems is the product solving and how is that benefiting you?
- Experiment tracker
- Hyperparameter tuning (Sweep)
Wandb makes integrating both of the above quite easy in machine learning experiments, as the sketch below illustrates
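A minimal sketch of that sweep workflow using the public wandb API (the project name, metric, and parameter range are made up for the example):

```python
import wandb

def train():
    # Inside an agent, wandb.init() picks up this run's sweep parameters.
    run = wandb.init()
    lr = run.config.learning_rate
    # ... train a model with this learning rate ...
    run.log({"val_loss": 0.123})  # placeholder; log the real validation loss

sweep_config = {
    "method": "bayes",
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {"learning_rate": {"min": 1e-5, "max": 1e-2}},
}

sweep_id = wandb.sweep(sweep_config, project="my-dl-project")

# Each machine can start an agent against the same sweep_id, which is
# how a sweep distributes experiments across different machines.
wandb.agent(sweep_id, function=train, count=10)
```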
Great product -- too expensive
What do you like best about the product?
graphs, experiment logs, easy to share within team
What do you dislike about the product?
Too expensive. Latency sometimes sucks too.
What problems is the product solving and how is that benefiting you?
experiment tracking
Best for data science and NLP tasks
What do you like best about the product?
I used Weights & Biases for a natural language processing task where I had to fine-tune pre-trained models like BERT and RoBERTa for classification. By using Weights & Biases I don't have to manage the weights, loss, and accuracy charts. All I have to do is log in and initialize. You can log in with your GitHub account.
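That "log in and initialize" flow is essentially two calls; a minimal sketch (the project name and logged values are illustrative):

```python
import wandb

wandb.login()  # prompts for, or reuses, an API key

run = wandb.init(project="bert-classification")
for epoch in range(3):
    # ... fine-tune the model for one epoch ...
    run.log({"epoch": epoch, "loss": 0.1, "accuracy": 0.9})  # placeholder values
run.finish()
```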
What do you dislike about the product?
Honestly nothing; the only confusing part is extracting valuable information from the many different analytics charts. The default names are also a little bit long and confusing.
What problems is the product solving and how is that benefiting you?
It makes life easier because you don't have to manage weights for model training, and all data is automatically saved and can be accessed any time you want by simply logging in to the platform. And it's free.
A great tool for ML
What do you like best about the product?
W&B is an excellent tool, particularly for collaborating on and maintaining a record of machine learning experiments. I think its role is in closing the gap between training and analysis, which it does very well.
What do you dislike about the product?
* Reports performance can be slow at times for a large number of displayed runs (e.g. 300-600)
* Tools such as Sweeps don't allow for an alternative backend, and available frontend tools are somewhat clunky without specific customization towards the end use (e.g. using reports for analyzing tune results). As it exists now, I think W&B offers more to ML teams that don't have a supporting SW infrastructure team.
What problems is the product solving and how is that benefiting you?
* Experiment tracking
* Quickly performing basic analysis early on in experiments
The most impactful ML product in the last 5 years
What do you like best about the product?
This is the rare product that both engineers and researchers love, and it has been transformative for our team's ability to work on large, complex problems together. In particular, the Reports feature has become our main medium for collaboration, almost more essential than Github. It makes it easy to keep a shared ground truth for baselines while enabling everyone to fork their own versions. You can pull in as much or as little of the other team members’ work to your own current workspace and Reports – the filtering by tag, time, etc. makes this easy. We primarily use tags to make runs available with a quick semantic hook.
The ability to create custom visualizations (via Vega) and filter across many runs during sweeps has been very useful. We’ve made everything from embedding projections (tsne/umap) to sweep overviews here, and then been able to share them for everyone to use.
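As an illustration of the custom-visualization workflow described above, logging tabular data and rendering it with one of W&B's built-in chart presets (which are defined as Vega specs under the hood) looks roughly like this; the project, columns, and data are hypothetical:

```python
import wandb

run = wandb.init(project="embedding-viz")

# Hypothetical 2-D embedding coordinates (e.g. t-SNE/UMAP output).
data = [[0.1, 0.2], [0.3, 0.7], [0.8, 0.5]]
table = wandb.Table(data=data, columns=["x", "y"])

# Render the table with a Vega-based scatter preset.
run.log({"embedding_projection": wandb.plot.scatter(table, "x", "y")})
run.finish()
```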
What do you dislike about the product?
I was pretty skeptical initially that it would help to have more collaborative visualization tools beyond Tensorboard, etc – and I was completely wrong! I wish I’d realized this sooner, wandb seems to know the current flaws in our workflow better than we do :)
What problems is the product solving and how is that benefiting you?
Wandb has been key for us to find regressions or mistakes that might have taken months to uncover without it (or never found them at all!). Many of the biggest advances in generative modeling that we've seen in the last 5 years (language models, text-to-image) were made by teams using wandb, and I wouldn't be surprised if the field was nearly a year behind its current frontier if wandb didn't exist. Especially for generative modeling, visualization and tracking is so essential that it saves you time you didn't realize you were wasting (both in experimental mistakes and collaboration/communication cost). None of the other tools we've tried (Tensorboard and similar) or experiment tracking systems we've built internally have been nearly as good as wandb for this. Also, logging/experiment tracking/visualization is surprisingly difficult to get right as you scale in team and model complexity, so the fact that wandb is very simple to integrate into any codebase makes one almost forget how much it is handling.
Weights and Biases Review -- Sky Voice Team
What do you like best about the product?
Very clean and easy-to-understand UI. Easy integration with TensorFlow; it's nice to see metrics per epoch
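For reference, the TensorFlow integration the reviewer mentions has long been a one-line Keras callback; a minimal, runnable sketch with a stand-in model (the project name is hypothetical):

```python
import numpy as np
import tensorflow as tf
import wandb
from wandb.keras import WandbCallback

wandb.init(project="sky-voice-demo")  # hypothetical project

# Tiny stand-in model and data just to make the example self-contained.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")
x, y = np.random.rand(32, 4), np.random.rand(32, 1)

# The callback streams per-epoch metrics to the run automatically.
model.fit(x, y, epochs=3, callbacks=[WandbCallback()])
```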
What do you dislike about the product?
Would be nice if there was model deployment functionality. Also, it would be nice to have a service-user option or a team API key. Since our runs are triggered by AWS SageMaker pipelines, we have had to hardcode one team member's user API key, which isn't the nicest solution, since he isn't always the person triggering the run yet it's still linked to him.
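A common mitigation for the hard-coded key problem (though it does not solve the attribution issue the reviewer raises) is to inject the key through the WANDB_API_KEY environment variable at job launch; a sketch with a SageMaker estimator, where the secret name, image, and role are hypothetical placeholders:

```python
import boto3
from sagemaker.estimator import Estimator

# Fetch a shared key from AWS Secrets Manager instead of hardcoding it.
secrets = boto3.client("secretsmanager")
api_key = secrets.get_secret_value(SecretId="wandb/team-api-key")["SecretString"]

estimator = Estimator(
    image_uri="<training-image-uri>",
    role="<execution-role-arn>",
    instance_count=1,
    instance_type="ml.g4dn.xlarge",
    # The wandb client reads WANDB_API_KEY from the environment automatically.
    environment={"WANDB_API_KEY": api_key},
)
estimator.fit()
```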
What problems is the product solving and how is that benefiting you?
We use Weights and Biases to track runs of our model training pipeline. It benefits us by being able to analyse and compare our runs, sorting runs into groups is very useful for experimenting with different model architectures and datasets.
A Must Have Tool if You are a Serious ML Practitioner
What do you like best about the product?
W&B is so user-friendly and useful for any ML practitioner, but if you are a serious one, you need to get your hands on this tool. Not only can you monitor the performance of your different architecture changes and hyperparameters, but you can also debug some of the problems with your training. For example, one time I was pulling my hair out trying to understand why my training was so slow, and just by looking at the system dashboard, I realized that CUDA had failed for some reason and I was training on the CPU. The system dashboard is also very helpful for finding the right batch size to make use of the last MBs of your VRAM, if you know what I mean ;). All the different plotting options and model/hyperparameter comparison capabilities give you a lot of freedom and power to efficiently train machine learning models.
I also appreciate the fact that the product is constantly evolving and adapting in step with the AI scene. Their blog posts are also a treasure trove of ML knowledge, which shows that some top-notch, serious ML people are working on the product.
All in all, go try it, it is fun and useful!
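The CUDA anecdote above is also easy to guard against in code; a minimal sketch that records the device in the run config so it shows up next to W&B's automatic system metrics (the project name is illustrative):

```python
import torch
import wandb

device = "cuda" if torch.cuda.is_available() else "cpu"
run = wandb.init(project="training-demo", config={"device": device})

# Fail fast instead of silently training on the CPU.
assert device == "cuda", "CUDA unavailable -- training would fall back to CPU"
```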
What do you dislike about the product?
The UI has a very small delay in updating the progress of your training, which you might find annoying if you are an impatient person. Also, I would have loved it if they could add other features like the estimated time to finish training, or even show the time scales of the training steps on the plots (maybe there is a way to activate it, but I did not find it).
What problems is the product solving and how is that benefiting you?
It takes away the need to write custom tools for monitoring your ML training and gives you the tools and capabilities to make your life a lot easier when you are a serious ML practitioner.
Wandb review
What do you like best about the product?
The simplicity of integrating wandb into our pipelines and experiments
What do you dislike about the product?
Nooooooooooooooooooooooooooooooooooothing
What problems is the product solving and how is that benefiting you?
It takes away a lot of work in regards to versioning artifacts, which makes our team a lot more productive
amazing product to accelerate the whole ML team
What do you like best about the product?
Weights and Biases makes our entire workflow easier. It biases new (and sometimes seasoned!) engineers toward best practices, and makes it easier to introspect, improve, store, and serve models - it makes the entire process better. Can't recommend highly enough!
What do you dislike about the product?
Really can't think of anything. Wish they'd ship even more new features faster; the ones they've added recently are spectacular!
What problems is the product solving and how is that benefiting you?
Weights and Biases makes training, introspecting & improving, storing, and serving models easier. It improves the workflow of our entire machine learning engineering team.