I have been using Control-M for more than six years. Initially, I mostly just monitored jobs, but now I also handle some troubleshooting around them.
My main use case for Control-M these days involves multiple jobs running in our contact center systems. We have multiple nodes to begin with, and some of them are responsible for maintaining the predictive dialer calling list for records sourced from multiple platforms. We also have certain jobs deployed for reporting purposes, where our databases synchronize with other Genesys databases. Additionally, we have multiple log archiving jobs deployed as well. We have some ServiceNow jobs that track and manage employee profiles, and some speech-related Nuance jobs scheduled as well.
One of the major use cases we have for Control-M is our log archival process. This process integrates file movements with job scheduling and enables secure file transfer using both FTP and SFTP. It triggers the job when the file arrives, and it validates file completion and size before actual processing. In the contact center cluster, one of the jobs we have is the Informat job, which extracts caller data from Informat and transfers it to various downstreams such as BIH or Connect:Direct. Apart from this, we also have various SQL stored procedure purging jobs in Genesys, and there is one important Cassandra job that runs on the Cassandra nodes selected for incremental backups. The Pulse housekeeping job, which cleans the ECP snapshots every 30 minutes, is one of the most significant jobs we use. Along with this, we also have a cyclic job that runs every 15 minutes on each of the MCP nodes to resync audio files from one of the applications to a given directory. This way, the most recently uploaded file is put onto all the MCP boxes every five seconds, and the right announcement gets picked up for the user to hear.
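Control-M's file watchers implement this arrival-triggered pattern natively, but the idea can be sketched as a simple polling loop: wait until the file exists and its size stops changing (the transfer is complete) before handing it to the job. The path, polling interval, and stability window below are hypothetical, not values from our setup.

```python
import os
import time

def wait_for_stable_file(path, poll_seconds=5, stable_checks=2, timeout=300):
    """Wait until `path` exists and its size stops changing, i.e. the
    transfer has completed, before allowing the job to process it."""
    deadline = time.time() + timeout
    last_size, stable = -1, 0
    while time.time() < deadline:
        if os.path.exists(path):
            size = os.path.getsize(path)
            if size == last_size and size > 0:
                # Size unchanged since the last poll: count toward stability.
                stable += 1
                if stable >= stable_checks:
                    return size  # file is complete; safe to process
            else:
                stable = 0  # still growing (or empty); reset the counter
            last_size = size
        time.sleep(poll_seconds)
    raise TimeoutError(f"{path} did not stabilize within {timeout}s")
```

A job would call this before processing, e.g. `wait_for_stable_file("/inbound/caller_data.csv")`, so a partially transferred file is never picked up.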
The log archival job basically copies and archives all the Genesys log files for a given retention period. It moves the files from site one to one specific site location and from site two to another. This is not only in production; it covers all the environments we have, including Dev, SIT, and QA. We have also automated it so that all archived log files older than three days are gzipped and moved to a different archive location than the one they were initially sent to. It also ensures that masking is applied and that the schedules are followed for files that are not getting archived.
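The gzip-after-three-days step could be sketched roughly as below; the directory names, the 86,400-second day, and the remove-after-compress behavior are assumptions for illustration, since the real work is done by the scheduled Control-M job.

```python
import gzip
import os
import shutil
import time

def archive_old_logs(src_dir, archive_dir, max_age_days=3):
    """Gzip log files older than `max_age_days` in `src_dir` and move the
    compressed copies to `archive_dir`, mirroring the retention step."""
    cutoff = time.time() - max_age_days * 86400
    os.makedirs(archive_dir, exist_ok=True)
    moved = []
    for name in os.listdir(src_dir):
        src = os.path.join(src_dir, name)
        if not os.path.isfile(src) or os.path.getmtime(src) > cutoff:
            continue  # skip directories and files newer than the cutoff
        dst = os.path.join(archive_dir, name + ".gz")
        with open(src, "rb") as fin, gzip.open(dst, "wb") as fout:
            shutil.copyfileobj(fin, fout)
        os.remove(src)  # original is removed once the gzipped copy exists
        moved.append(dst)
    return moved
```

Files newer than the cutoff are left untouched in the source directory, so only logs past the retention window get compressed and relocated.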