Experiment tracking has transformed model comparisons and now supports faster, clearer insights
What is our primary use case?
My main use case for Comet is experiment tracking and performance analysis; I initially adopted it as a model tracking tool. As a data analyst, I use it to monitor metrics, compare different model runs, track changes, and analyze results in a structured way. It helps me identify trends, validate model performance, and share clear insights with the data science teams for better decision-making.
One recent example of how I have used Comet for experiment tracking is when we were testing different versions of a prediction model. I used Comet to track each experiment's parameters, accuracy, and loss values, and then compared the runs side by side. I could clearly see how changing features and hyperparameters affected performance, which helped us identify the best-performing model and confidently share the results with the team.
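As a rough illustration of that workflow, here is a minimal sketch using Comet's Python SDK (comet_ml) to log one run's parameters and metrics. The project and workspace names, hyperparameters, and metric values are placeholders, not our actual configuration:

```python
from comet_ml import Experiment  # pip install comet_ml; API key comes from env/config

# Hypothetical workspace and project names; replace with your own.
experiment = Experiment(project_name="prediction-model", workspace="my-team")

# Log the hyperparameters for this run so runs can be compared later.
experiment.log_parameters({
    "learning_rate": 0.001,   # placeholder values
    "max_depth": 8,
    "feature_set": "v2",
})

# During or after training, log the metrics you care about.
for epoch in range(10):
    # train_one_epoch() would be your own training step; dummy numbers here.
    accuracy, loss = 0.80 + epoch * 0.01, 1.0 - epoch * 0.05
    experiment.log_metrics({"accuracy": accuracy, "loss": loss}, epoch=epoch)

experiment.end()  # flush everything to Comet
```

Once runs are logged this way, the side-by-side comparison happens in the Comet UI with no extra work.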
On a day-to-day basis, I use Comet mainly to keep experiments organized and easy to review. Whenever a new model run is completed, I check the logged metrics, add notes, and tag the experiment so it is easy to find later. During discussions, I quickly pull up the comparisons in Comet instead of creating manual reports, which saves time and helps me explain performance clearly to both technical and non-technical team members.
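A minimal sketch of that naming-and-tagging habit, again with the Python SDK; the run name, tags, and the use of log_other as a simple notes field are illustrative conventions, not anything Comet requires:

```python
from comet_ml import Experiment

experiment = Experiment(project_name="prediction-model")  # placeholder name

# A short, consistent naming and tagging convention makes runs easy to find later.
experiment.set_name("xgb-depth8-features-v2")
experiment.add_tags(["baseline", "features-v2", "weekly-review"])

# Free-form context for reviewers, stored alongside the run.
experiment.log_other("notes", "Re-ran with cleaned training data; watch recall.")

experiment.end()
```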
What is most valuable?
My favorite features of Comet are experiment tracking and the ability to visually compare multiple runs side by side. Experiment tracking lets me automatically log metrics, parameters, and results in one place, so nothing gets lost or mixed up. The dashboards and visualizations make it easy to analyze trends without building custom plots. The collaboration tools are also very helpful: notes, tags, and shared views make it easier to communicate findings with my teams. Metric history is another strong feature, since seeing historical experiments and performance over time helps with decision-making and reproducibility.
Experiment tracking is one of the most important features for me because it gives me a clear view of how different model runs perform under different parameters. Instead of checking results manually, I can quickly compare metrics such as accuracy or loss and understand what actually improved performance. This saves time and makes my analysis more reliable.
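Most of this comparison happens in the Comet UI, but the same data can also be pulled programmatically. The sketch below assumes Comet's read-only Python API client (comet_ml.api.API) with placeholder workspace and project names; treat the exact return fields as an assumption to verify against the documentation:

```python
from comet_ml.api import API  # read-only client shipped with comet_ml

api = API()  # reads the API key from COMET_API_KEY or the Comet config file

# "my-team" and "prediction-model" are placeholder names.
experiments = api.get_experiments("my-team", project_name="prediction-model")

# Print each run's last logged accuracy for a quick side-by-side look.
for exp in experiments:
    history = exp.get_metrics("accuracy")  # full metric history for this run
    if history:
        print(f"{exp.name}: accuracy={history[-1]['metricValue']}")
```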
The overall impact of Comet on my organization has been positive. It has brought structure and transparency to how we track and analyze experiments. Before Comet, results were scattered across files and spreadsheets, which caused confusion; now, our team spends less time tracking results and more time analyzing them. Collaboration has improved, reviews are faster, and decisions are more data-driven because everyone is looking at the same reliable information.
One clear outcome that highlights Comet's positive impact is the reduction in time spent analyzing experiments. It used to take a few hours to collect results and prepare comparisons; with Comet, this takes minutes. We also see fewer mistakes because metrics and parameters are logged automatically. As a result, model reviews have become faster, and the team can move to the next iteration more quickly and confidently. It has saved 40 to 50 percent of our time.
What needs improvement?
I would like to see more flexibility around content and reporting in Comet. Overall, the features are very strong, but more customizable dashboards and easier ways to create shareable summaries for non-technical stakeholders would make it easier to turn experiment results into clear insights without exporting data to other tools.
Beyond that, some advanced features have a learning curve, so slightly simpler onboarding or guided tips would make it easier for new users. These are minor points that do not affect my regular day-to-day usage, but they would help newcomers get started.
For how long have I used the solution?
I have been using Comet for six to 12 months.
What do I think about the stability of the solution?
In my experience, Comet is very stable. I have not encountered significant downtime, data loss, or performance issues when tracking experiments or comparing runs. It works consistently, even when multiple team members are using it, which gives us confidence in relying on it for our day-to-day work.
What do I think about the scalability of the solution?
Comet has scaled very well for our continuous use. As our number of experiments and team members grows, it handles the increased load without issue. We can track multiple models, large data sets, and numerous runs simultaneously, which makes it easy to scale up projects without worrying about infrastructure, performance, or storage limitations.
How are customer service and support?
In my experience, Comet's customer support is responsive and helpful. Whenever we had questions about setup or features, the support team provided clear guidance. They also have good documentation and resources, which makes it easy to resolve common issues quickly. Overall, the support has been reliable and adds confidence when using the platform.
Which solution did I use previously and why did I switch?
Before Comet, we were using manual tracking with spreadsheets and shared documents to log experiments and results. It worked for small projects, but it was time-consuming, prone to errors, and hard to collaborate on. Switching to Comet made tracking more structured, reliable, and easy to review, especially for multiple experiments and team collaboration.
How was the initial setup?
We have deployed Comet in our organization on a public cloud. We access it as a cloud-based platform, so there is no heavy infrastructure to manage on our side. This makes it easy for the team to access from different locations and helps with scalability and maintenance.
What about the implementation team?
We deployed Comet on AWS. Our organization purchased it through the AWS Marketplace, which makes it easy to manage the subscription, integrate with our existing AWS infrastructure, and ensure secure, cloud-based access for our team.
What was our ROI?
We have seen a clear return on investment with Comet by reducing the time spent manually tracking experiments and preparing reports. Our team can focus more on analysis and improving models. For example, tasks that previously took hours now take minutes, which has improved productivity and accelerated project timelines. While I cannot share exact financial figures, the time and efficiency savings have been significant and have clearly added value to our operations.
What's my experience with pricing, setup cost, and licensing?
My experience with pricing and licensing for Comet was very straightforward. Since we purchased it through the AWS Marketplace, the subscription and billing are clear and easy to manage. Additionally, the setup costs were minimal because it is a cloud-based platform, so we did not have to invest in hardware or complicated infrastructure. Overall, the licensing model is flexible for our team size, and scaling up is easy when needed.
Which other solutions did I evaluate?
We were using Weights & Biases and MLflow before choosing Comet. While they all offer experiment tracking, we chose Comet because it had a cleaner UI, better run comparison, and easier collaboration for our team. It also integrated smoothly with our existing Python workflow, which made adoption faster and simpler.
What other advice do I have?
I would definitely advise new users to take advantage of Comet's experiment tracking, run comparison, and dashboards from the beginning. Make sure to tag and annotate runs consistently; this will save time later and help your team get the most value. Additionally, explore the collaboration and insights features, as they can speed up analysis and decision-making. I would rate my overall experience with this product as an 8 out of 10.
Which deployment model are you using for this solution?
Public Cloud
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Amazon Web Services (AWS)