Reviews from AWS Marketplace
0 AWS reviews
External reviews
External reviews are not included in the AWS star rating for the product.
have recommended to others before, would again
What do you like best about the product?
I first learned about W&B five years ago in high school and used it for a few projects. I now use it every day and have convinced several colleagues, both in and out of the lab, to use W&B.
What do you dislike about the product?
I wish there were a way to view the graphs locally/offline, or at least a way to view them with some latency.
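For what it's worth, wandb does ship an offline mode, though it addresses the network side rather than local graph viewing. A minimal sketch, assuming the documented `WANDB_MODE` switch:

```python
import os

def offline_env():
    """Settings that make wandb log to local files only.

    With WANDB_MODE=offline, wandb.init() writes the run under ./wandb/
    instead of streaming it to the server; `wandb sync wandb/offline-run-*`
    uploads it later. Viewing those graphs still requires syncing (or a
    self-hosted server), so this only partially covers the request.
    """
    return {"WANDB_MODE": "offline"}

os.environ.update(offline_env())  # set before calling wandb.init()
```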
What problems is the product solving and how is that benefiting you?
Easy, system-agnostic logging, and comparing across runs.
wandb works great and is very easy to set up and use, supporting a wide variety of media types
What do you like best about the product?
Ease of setting it up; the dashboard is easy to use and lets you visualize a wide range of features at once. The integration is solid and works out of the box for any script I have worked on.
What do you dislike about the product?
I used to pre-train certain checkpoints sequentially, and sometimes my runs would break midway due to memory/connection issues. From there it was quite difficult to visualize all the previous runs as a single curve in the dashboard; setting the x-axis to wall time helped, but the curve was still not one continuous graph.
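The broken-run problem above can often be mitigated with wandb's resumption support: re-initialize with the same run id and `resume="allow"`, and new steps are appended to the existing run, giving one continuous curve. A minimal sketch (the run id and project name are made-up placeholders):

```python
def resume_init_kwargs(run_id: str, project: str) -> dict:
    """Arguments for wandb.init() that continue an interrupted run.

    Reusing the same id with resume="allow" appends new steps to the
    existing run instead of starting a fresh, disconnected curve.
    """
    return {"project": project, "id": run_id, "resume": "allow"}

# In the training script (requires `pip install wandb` and a login):
#   import wandb
#   run = wandb.init(**resume_init_kwargs("pretrain-ckpt-03", "my-project"))
```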
What problems is the product solving and how is that benefiting you?
In some of my experiments I wanted to visualize how the grad norm varies with training iteration, or how the learning rate scheduler affects the perplexity of the model I am working on; wandb makes these experiments much easier.
A no-brainer tool to assist in model training/evaluation/comparison
What do you like best about the product?
Its easy and seamless integration with PyTorch Lightning and simple API usage. The model (artifact) and log tracking also help me trace back a model that was trained months ago.
What do you dislike about the product?
Nothing as of now. Would appreciate more dark modes and API control to set experiment names, rather than having my experiment named 'Tasty-Aardvark'.
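Run naming is in fact exposed through the API: `wandb.init(name=...)` overrides the auto-generated adjective-animal names. A minimal sketch (the run name below is a made-up example):

```python
def named_init_kwargs(name: str, **config) -> dict:
    """Arguments for wandb.init() that set an explicit run name.

    Passing name= replaces the generated "tasty-aardvark"-style name in
    the dashboard; config ends up in the run's Config panel.
    """
    return {"name": name, "config": config}

# e.g.  wandb.init(**named_init_kwargs("resnet50-lr1e-3", lr=1e-3, bs=64))
```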
What problems is the product solving and how is that benefiting you?
Having all my training sessions accessible from a single site is the biggest benefit. Also, saving logs and training metadata is quite helpful.
Very quick and easy to start online logging
What do you like best about the product?
This tool is perfect for logging information from programs, especially during training. It allows you to see how your model is training from anywhere in the world. I like how you can just dump some raw data onto the platform, and then you can make your graphs and manipulate the data separately from your training loop.
What do you dislike about the product?
Some features are missing, but I am sure they would come if I filed a feature request.
What problems is the product solving and how is that benefiting you?
Logging data from machine learning training loops that may be running headless.
Easy-to-use but super helpful tool for logging machine learning experiments
What do you like best about the product?
That it is super easy to log all metrics, loss curves, and all kinds of data, and to get that data visualized in an interpretable manner. I really like how it is integrated into other frameworks, e.g. PyTorch Lightning.
I use W&B almost daily, at least I haven't started a single training run without using W&B ever since I subscribed.
What do you dislike about the product?
1) I would like to customize the plots and visualizations of the metrics even further, ideally via a script that can be reused across multiple projects; e.g., I would like to set the color of all lines in a plot from my Python script, and I don't know how to do that.
It would just be useful for communication with team leaders, bosses, etc., if they knew that, e.g., accuracy is always plotted as a red dotted line and recall as a thick blue line, or something like that.
2) I have a hard time figuring out how to navigate all the artifacts and how to use them. E.g., in the Ultralytics integration, W&B creates an artifact for each epoch, which quickly fills up my storage. However, that is just a minor thing, as I have created a standalone script to delete artifacts that aren't tagged with "best", etc.
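A cleanup script like the one the reviewer mentions can be built on wandb's public API (`Api.runs`, `Run.logged_artifacts`, `Artifact.delete`); the project path and the set of protected aliases below are assumptions:

```python
def keep_artifact(aliases, protected=("best",)):
    """Keep an artifact iff it carries at least one protected alias.

    wandb exposes tags such as "best" or "latest" via artifact.aliases.
    """
    return any(alias in protected for alias in aliases)

def cleanup_project(path="my-entity/my-project"):
    # Hypothetical entity/project path; needs `pip install wandb` + login.
    import wandb
    api = wandb.Api()
    for run in api.runs(path):
        for artifact in run.logged_artifacts():
            if not keep_artifact(artifact.aliases):
                artifact.delete(delete_aliases=True)
```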
What problems is the product solving and how is that benefiting you?
Before using W&B I would either use TensorBoard (hard to set up, and it won't log the same amount of data) or rely on the automatic logging from AWS SageMaker (which is crap). Both of these older methods took a really long time to set up, which delayed every project; and that's before we even talk about how hard it was to manage and remember "these files are from this run, those files are from that other run", as was needed before. Using W&B increases the frequency with which I can test new ideas by saving me many hours each time I start on something new.
Great platform - saves me many hours of work for tasks that I've previously coded manually
What do you like best about the product?
Easy to use, already incorporated into major libraries, but still powerful.
What do you dislike about the product?
The only thing I wish was different is pricing per "tracked hour". For my workflow, this number seems very inflated - I have a few powerful GPUs, and run multiple experiments at a time on each one. This results in "tracked hours" of many multiples of realtime, for each GPU, which doesn't seem right. This is OK for me now as an academic, on the personal plan with unlimited tracked hours, but discourages me from using this for commercial projects in the future, where cost would quickly become prohibitive.
What problems is the product solving and how is that benefiting you?
Experiment tracking is hard, important, and wandb makes it almost trivial.
Weights & Biases
What do you like best about the product?
Much easier than tensorboard. Way easier to get started and then a lot more functionality once you're more experienced.
Easy monitoring of gpu use.
Very easy to compare runs.
Easy to upload tables and images.
Also easy to compare just the runs you want and to save working experiments in a nice format (reports)
What do you dislike about the product?
The only downside, which I hope will be fixed at some point, is that there is no easy way of deleting just one run.
Would be nice if you could restart a run from the step you left it at as well.
But in the day to day use they're pretty minor and the positives outweigh the downsides.
What problems is the product solving and how is that benefiting you?
I'm a researcher, not a business, so it's mainly helping me keep track of my research.
I like the ease of setup, I know no viable alternative, I hate the slowness and numerous bugs
What do you like best about the product?
It is easy, I can live-preview the results, and all the plots are generated automatically and smartly. It is a great time-saver.
What do you dislike about the product?
The user interface is slow, but that is acceptable. Retrieving run data from wandb using wandb.Api() takes forever (e.g., 30 hours for around 30,000 hyperparameter runs across several environments). I would like to be able to download all the data for a set of runs, selected with filters, in a single API call. Since it represents less than 100 MB of data, it should be feasible in a few minutes at most, right? The documentation is not great.
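Bulk retrieval can at least be narrowed server-side: `wandb.Api().runs()` accepts MongoDB-style filters, which avoids paging through runs you don't need (it won't make 30,000 runs instant, but it helps). A sketch under those assumptions; the path and filter values are made up:

```python
def run_filters(state="finished", tags=None):
    """Build a MongoDB-style filter dict for wandb.Api().runs()."""
    filters = {"state": state}
    if tags:
        filters["tags"] = {"$in": list(tags)}
    return filters

def download_histories(path, filters):
    # Hypothetical entity/project path; needs `pip install wandb` + login.
    import wandb
    api = wandb.Api()
    histories = {}
    for run in api.runs(path, filters=filters):
        # history() returns sampled rows; run.scan_history() streams all.
        histories[run.id] = run.history(samples=500)
    return histories

# e.g.  download_histories("my-entity/my-sweep", run_filters(tags=["v2"]))
```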
What problems is the product solving and how is that benefiting you?
Logging and visualization during development. (Since I am using wandb in research, I still have to re-download all the data using wandb.Api() at the end, because the wandb plots are not professional enough: bitmaps instead of vectors.)
It is saving me an enormous amount of time.
Easy-to-setup model logging product
What do you like best about the product?
It is very quick to get started with logging models and performance to wandb, implementation and integration are readily intuitive and straightforward.
There are some useful available features such as model sweeping and other filtering/grouping mechanisms with runs logged in a given project.
Whenever I need to keep track of ML model performance, I use wandb.
What do you dislike about the product?
The number of concurrent runs is somewhat too limited if one launches jobs to a cluster.
Most of the time it is hard to find the relevant information you are looking for in the documentation, so help comes from issues worked out by users on other platforms (GitHub, Stack Overflow, etc.).
What problems is the product solving and how is that benefiting you?
- Logging performance of machine learning models
- Helping the optimization of model hyperparameters
It represents a large time saving compared to manual logging and optimization.
great tool for tracking experiments
What do you like best about the product?
I use it as my single point of knowledge for all my experiment results, including model weights, configs, failure analysis, etc.
What do you dislike about the product?
Many specific use cases, which are not that specific IMO, I had to implement myself.
What problems is the product solving and how is that benefiting you?
easy experiment tracking