My main business use case for Control-M at NBCUniversal involves setting up schedules, running batches, fixing jobs, and everything else on the Control-M side. The main business uses for Control-M revolve around bringing data together for every application that NBC uses. NBC is a media and entertainment company, known for Universal Studios, CNBC, MSNBC, and quite a few other brands. We utilize Control-M across everything that NBC does, such as warehousing, databases, and timekeeping. There are quite a few other areas where we use Control-M, but those are the main ones. Additionally, we run backups to ensure the data is kept intact, so in case of any reporting issues, we always have something to restore from and present as required.
Control-M supports our DataOps and DevOps initiatives by allowing us to set up job schedules based on requests from teams. These schedules outline what is needed, the sources used, and how the data is transformed to meet their requirements. For DevOps, the process is similar: teams provide what they need, and we input that into Control-M. We test it first to ensure it does what it's supposed to do before going live, so nothing disrupts live production or information. We do use Control-M to orchestrate workloads across multiple environments. Currently, we're looking at possibly integrating Control-M with Datadog, which we use a lot. We also plan to bring together Control-M and AWS for certain applications, as there's a transformation occurring with some applications using AWS alongside Control-M. Although we have the old IBM AS/400 system, where schedules have been put on hold, we still utilize it with Control-M. Control-M effectively integrates everything, including running a schedule on AS/400 that processes data and sends it back for further manipulation.
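To give a concrete flavor of what setting up one of these schedules looks like, below is a minimal sketch of a job-flow definition in the style the Control-M Automation API accepts, built as a Python dict and serialized to the JSON that commands such as `ctm build` and `ctm run` take. The folder, server, job, script, and run-as names are all hypothetical, and the exact fields can vary by Control-M version; this is an illustration, not our actual production definition.

```python
import json

def build_folder():
    """Sketch a Control-M Automation API style folder with two command
    jobs and a flow that forces the load to run after the extract.
    All names below are hypothetical placeholders."""
    return {
        "NBC_DemoFolder": {                       # hypothetical folder name
            "Type": "Folder",
            "ControlmServer": "ctmserver",        # hypothetical server name
            "ExtractJob": {
                "Type": "Job:Command",
                "RunAs": "batchuser",             # hypothetical run-as user
                "Command": "extract_warehouse.sh",
                # Weekday schedule starting at 02:00
                "When": {"WeekDays": ["MON", "TUE", "WED", "THU", "FRI"],
                         "FromTime": "0200"},
            },
            "LoadJob": {
                "Type": "Job:Command",
                "RunAs": "batchuser",
                "Command": "load_warehouse.sh",
            },
            # Dependency: LoadJob runs only after ExtractJob completes
            "ExtractThenLoad": {"Type": "Flow",
                                "Sequence": ["ExtractJob", "LoadJob"]},
        }
    }

if __name__ == "__main__":
    print(json.dumps(build_folder(), indent=2))
```

In practice, a definition like this would be validated and deployed through the Automation API rather than hand-edited in production, which is what lets us test a team's request before it goes live.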
The features of Control-M that I like the most include how easily different platforms can be brought into Control-M. For instance, AWS, mainframe, TWS, and something running on AutoSys can all be brought into Control-M, converted to the way Control-M runs it, and then the batch can be executed. This centralizes various applications in Control-M, which doesn't just handle batch processes; it also handles other tasks, such as reporting on required data. I find this functionality very useful and the setup impressive, with more advancements yet to come.
With Control-M, my company has achieved several measurable improvements since I started. The metrics indicate that the number of failures has dropped, and we have addressed the excessive false alerts I encountered when I joined. Previously, we received an overwhelming number of alerts daily, but now we keep that at a normalized level, perhaps around five to fifteen alerts, depending on which core batches are running and how they are set up.
I think Control-M can be improved by enhancing its integration capabilities. I would like to see an integration for OpenTelemetry, which we're looking into as we move into early 2026. I also believe the product should be made easier so that even a novice user can grasp how the GUI works and how everything connects together. Additionally, more integration with other platforms would be beneficial. I know there are over one hundred integrations already, and many more are in the works, but certain integrations that we would use could probably be brought in sooner.
I have been using Control-M for roughly the last twenty-four years.
The biggest return on investment I see when using Control-M stems from how it simplifies processes. Control-M makes everything easy to use and approachable, allowing multiple sources to consolidate data effortlessly. You can have numerous sources feeding into one job on Control-M, which will process that data whether it takes seconds or several hours. Upon completion, this data is ready for users, and any issues can be traced back to rectify the situation.
Regarding pricing, setup cost, and licensing for Control-M, this aspect can be challenging. Licensing constantly evolves as needs change. We are considering a transition to Helix in early 2026, with ongoing discussions between BMC and senior management regarding pricing structures. Once an agreement is reached, the transition will proceed.
I have been working in my current field for more than five years. Approximately five thousand to ten thousand users interact with Control-M in my company, and their main role is view-only, meaning they cannot actually do anything with the jobs. They can see what's running, and if something fails, they come to us and we take action on the backend to resolve the issue. They have full visibility of the data running on their applications, but the actual batch and job side is controlled by us.
Control-M integrates fairly well with new or changing technologies within our DataOps or DevOps stack. Depending on the application and the team, some may choose not to use Control-M, opting for external solutions instead. However, from my perspective, Control-M can handle any DataOps-related tasks or platform-related processes without a problem. If something is not already integrated into Control-M, reaching out to BMC enables them to create that module and facilitate data needs effectively. I have used the Control-M client, and we are currently utilizing the API for transitions with another platform. My current project involves a Datadog integration with Control-M, exploring how both platforms can work together for alerting, synthetic tests, and more. This integration is a work in progress, but BMC staff have assured me that it should be quite easy to integrate the two platforms.
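As an illustration of the kind of Datadog alerting integration being explored, here is a minimal Python sketch that builds a payload for Datadog's v1 Events API when a job ends NOTOK and posts it over HTTP. The job, folder, and server names are hypothetical, and this is just one possible approach under my own assumptions, not the integration BMC is delivering.

```python
import json
from urllib import request

# Datadog v1 Events API endpoint (US site); other Datadog sites use a
# different base domain.
DATADOG_EVENTS_URL = "https://api.datadoghq.com/api/v1/events"

def job_failure_event(job_name, folder, server):
    """Build a Datadog event payload describing a failed Control-M job.
    The tag names are our own convention, not a Datadog standard."""
    return {
        "title": f"Control-M job failed: {job_name}",
        "text": f"Job {job_name} in folder {folder} ended NOTOK on {server}.",
        "alert_type": "error",
        "tags": [f"ctm_folder:{folder}",
                 f"ctm_server:{server}",
                 "source:control-m"],
    }

def post_event(payload, api_key):
    """Send the event to Datadog; requires a valid DD-API-KEY."""
    req = request.Request(
        DATADOG_EVENTS_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "DD-API-KEY": api_key},
    )
    with request.urlopen(req) as resp:
        return resp.status
```

A forwarder like this could be triggered from a Control-M on-failure action, so a NOTOK job shows up in Datadog alongside the synthetic tests the application teams already watch.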
From my experience, Control-M has enabled new capabilities and business processes that were not possible before. The advancements in integration allow us to gather data from various applications and platforms seamlessly. Control-M can handle everything required, ensuring data is fed to clients or application teams according to their needs. Overall, based on my years of experience with Control-M, the improvements have been significant, even though issues can arise that require support from BMC.
In assessing the creation and automation of data pipelines across on-premises and cloud technologies with Control-M, I have found that while on-premises can typically address most needs, certain issues may arise. If my team cannot resolve an issue with a pipeline, we can reach out to BMC for full support. Moving forward with SaaS, I believe many of these issues have been addressed and the integrations look promising, although we have not fully transitioned yet.
For building, scheduling, managing, and monitoring workflows, Control-M offers a straightforward approach. If you are new to Control-M, monitoring is typically your first step, which leads to planning. This requires a bit more involvement to understand the dependencies and actions required for certain batches to feed others. A first-time user can become comfortable with Control-M in a week or two, especially with some guidance. Control-M operates efficiently and learning resources such as videos and online documentation are available for support.
I rate this product nine point five out of ten.