
    Qlik (formerly Attunity) Replicate - Hourly

    Sold by: Qlik 
    Deployed on AWS
    Free Trial
    Qlik (formerly Attunity) Replicate enables organizations to simplify, automate, and accelerate universal data integration from on-premises sources (SAP, mainframes, Oracle, Microsoft SQL Server, MySQL, PostgreSQL, and more) to Amazon cloud targets.

    Overview

    Qlik (formerly Attunity) is a multi-competency AWS ISV Partner. We accelerate cloud analytics with the only end-to-end data integration and analytics solution for AWS, taking you from raw data to informed action. With Qlik, you can automate continuous delivery of real-time, analytics-ready data into AWS data warehouses or data lakes, and make it easily accessible through a governed catalog. Our modern data analytics platform empowers users at any skill level to freely explore all of your data and uncover hidden insights.

    The Qlik Data Integration Platform (formerly Attunity) efficiently delivers large volumes of real-time, analytics-ready data into the AWS platform. There is no need for manual ETL scripting: fully automated raw-to-analytics-ready data pipelines cover RDS, S3, Kinesis, EMR, and Redshift, as well as Snowflake and Databricks running on AWS.

    Continuous data ingestion and migration. Qlik's fully automated change data capture (CDC) enables continuous, real-time data ingestion with an agentless, log-based approach, so your data is always current without impacting source systems.

    Real-time change data capture with Qlik Replicate. Qlik Replicate moves data in real time from source to target, all managed through a simple graphical interface that completely automates end-to-end replication. With streamlined, agentless configuration, data engineers can easily set up, control, and monitor data pipelines based on the leading change data capture (CDC) technology.

    Validated by AWS for the Database Migration Competency, Amazon RDS Service Ready, and Amazon Redshift Ready designations.

    Highlights

    • Enterprise-class change data capture technology: Advanced change data capture (CDC) technology addressing transactional, streaming, and batch architectures
    • Zero-footprint architecture: Reduces impact on IT operations with log-based capture and delivery of transaction data that does not require Qlik Replicate to be installed on each source and target database
    • Cloud-optimized: Accelerated performance with secure, guaranteed delivery of data.

    Details

    Sold by
    Qlik

    Delivery method
    Amazon Machine Image (AMI)

    Delivery option
    64-bit (x86) Amazon Machine Image (AMI)

    Latest version

    Operating system
    Windows Server 2019

    Deployed on AWS

    Features and programs

    Financing for AWS Marketplace purchases

    AWS Marketplace now accepts line of credit payments through the PNC Vendor Finance program. This program is available to select AWS customers in the US, excluding NV, NC, ND, TN, & VT.

    Pricing

    Free trial

    Try this product free for 5 days under the free trial terms set by the vendor. Usage-based pricing applies to usage beyond the free trial terms. Your free trial is automatically converted to a paid subscription when the trial ends, but you may cancel any time before then.

    Qlik (formerly Attunity) Replicate - Hourly

    Pricing is based on actual usage, with charges varying according to how much you consume. Subscriptions have no end date and may be canceled any time. Alternatively, you can pay upfront for a contract, which typically covers your anticipated usage for the contract duration. Any usage beyond contract will incur additional usage-based costs.
    Additional AWS infrastructure costs may apply. Use the AWS Pricing Calculator to estimate your infrastructure costs.

    Usage costs (6)

    Dimension                   Cost/hour
    m5a.2xlarge (Recommended)   $2.99
    m5.xlarge                   $1.49
    m5.2xlarge                  $2.99
    m5a.4xlarge                 $5.99
    m5a.xlarge                  $1.49
    m5.4xlarge                  $5.99

    Vendor refund policy

    Refunds are not provided, but you can cancel at any time.

    How can we make this page better?

    We'd like to hear your feedback and ideas on how to improve this page.

    Legal

    Vendor terms and conditions

    Upon subscribing to this product, you must acknowledge and agree to the terms and conditions outlined in the vendor's End User License Agreement (EULA).

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.

    Usage information


    Delivery details

    64-bit (x86) Amazon Machine Image (AMI)

    Amazon Machine Image (AMI)

    An AMI is a virtual image that provides the information required to launch an instance. Amazon EC2 (Elastic Compute Cloud) instances are virtual servers on which you can run your applications and workloads, offering varying combinations of CPU, memory, storage, and networking resources. You can launch as many instances from as many different AMIs as you need.

    Additional details

    Usage instructions

    Below are the instructions to launch the Qlik (formerly Attunity) Replicate - Hourly application on an AWS EC2 instance from the AWS Marketplace listing.

    From the AWS Marketplace listing for "Qlik (formerly Attunity) Replicate - Hourly": click the "Continue to Subscribe" button, then "Continue to Configuration", then "Continue to Launch". On the page with the heading "Launch this software", make sure to choose the options below in accordance with your company's internal network and security requirements:

    • Choose Action
    • EC2 Instance Type
    • VPC Settings
    • Subnet Settings
    • Security Group Settings
    • Key Pair Settings

    Once all of the above details are filled in, click the "Launch" button.

    This creates an EC2 instance with the Qlik Replicate application installed.

    How to open and access the Replicate application: On the Windows machine launched from the AWS Marketplace image, click Start, point to Qlik Replicate under All Programs, and select Qlik Replicate Console. When you connect to the Qlik Replicate Console, your browser prompts you for a username and password. The credentials you need depend on whether Replicate Server is installed on Windows or Linux. Qlik Replicate Server on Windows: your domain username and password.

    More information: Qlik provides free online support through a monitored community forum, as well as access to detailed documentation, quick start guides, FAQs, and instructional videos.
    Qlik Community - https://community.qlik.com/t5/Release-Notes/Qlik-Replicate-Release-Notes-May-2024-Initial-Release/ta-p/2436682
    Qlik Replicate online help - https://help.qlik.com/en-US/replicate/May2024/Content/Replicate/Main/Introduction/Home.htm
    Qlik Replicate support matrix (supported platforms and endpoints) - https://help.qlik.com/en-US/replicate/May2024/Content/Replicate/Main/Support%20Matrix/supported_platforms.htm

    Tutorial: https://help.qlik.com/en-US/replicate/September2020-and-earlier/Content/Replicate/6.4/PDF/Attunity_Replicate_EC2_Instance_Tutorial.pdf

    Support

    Vendor support


    More Information - Qlik provides free online support through a monitored community forum, as well as access to detailed documentation, quick start guides, FAQs, and instructional videos.

    Qlik Community https://community.qlik.com/t5/Release-Notes/Qlik-Replicate-Release-Notes-May-2024-Initial-Release/ta-p/2436682 

    Qlik Replicate Online help https://help.qlik.com/en-US/replicate/May2024/Content/Replicate/Main/Introduction/Home.htm 

    Qlik Replicate Support Matrix - Supported Platforms and Endpoints https://help.qlik.com/en-US/replicate/May2024/Content/Replicate/Main/Support%20Matrix/supported_platforms.htm 

    AWS infrastructure support

    AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.


    Accolades

    Top 10 in Data Integration, Databases & Analytics Platforms, Migration
    Top 50 in Data Warehouses, ELT/ETL
    Top 25 in Data Warehouses, ELT/ETL


    Overview

    AI generated from product descriptions

    Change Data Capture: Advanced log-based technology addressing transactional, streaming, and batch data architectures
    Data Replication Architecture: Zero-footprint agentless approach that minimizes impact on source and target database systems
    Multi-Source Integration: Supports data integration from diverse sources including SAP, mainframes, Oracle, Microsoft SQL Server, MySQL, and PostgreSQL
    Real-Time Data Movement: Continuous data ingestion and migration with automated change data capture technology
    Cloud Target Compatibility: Seamless data pipeline delivery to AWS platforms including RDS, S3, Kinesis, EMR, Redshift, Snowflake, and Databricks
    Data Integration Methodology: Supports both ETL and ELT data integration patterns with a codeless visual development interface
    Cloud and On-Premises Connectivity: Enables connection to hundreds of cloud and on-premises data sources including AWS services, enterprise applications, and databases
    Parallel Data Processing: Utilizes a highly scalable parallel data integration architecture for optimized data loading and processing
    Connector Ecosystem: Provides multi-tier connectors supporting file systems, databases, cloud storage, and enterprise applications across Tier B, C, and D categories
    FedRAMP Compliance: Offers FedRAMP-compliant integration services with specific security and regulatory requirements for government cloud environments
    Data Integration Capabilities: Supports efficient ELT processes with SQL pushdown, Python scripting, and custom functions for building scalable data pipelines
    Cloud Platform Compatibility: Native integration with cloud data platforms including AWS Redshift, Snowflake, and Databricks with seamless data movement capabilities
    AI-Powered Transformation: Incorporates AI functionality through AWS Bedrock integration, enabling data enrichment, classification, and quality checks using Large Language Model components
    Security and Access Management: Implements enterprise-grade security with encryption, integration with AWS IAM, role-based access controls, and compliance with standards like SOC 2, GDPR, and HIPAA
    Workflow Automation: Provides a no-code interface with real-time alerts, monitoring, and orchestration features for streamlined data pipeline management

    Security credentials

    Validated by AWS Marketplace
    FedRAMP
    GDPR
    HIPAA
    ISO/IEC 27001
    PCI DSS
    SOC 2 Type 2

    Contract

    Standard contract: No

    Customer reviews

    Ratings and reviews

    3.7 out of 5 stars (11 ratings)

    5 star: 45%
    4 star: 18%
    3 star: 9%
    2 star: 9%
    1 star: 18%

    11 AWS reviews | 115 external reviews
    Star ratings include only reviews from verified AWS customers. External reviews can also include a star rating, but star ratings from external reviews are not averaged in with the AWS customer star ratings.
    Shraddha Waghulde

    Real-time data synchronization faces challenges with monitoring and affordability

    Reviewed on Jun 06, 2025
    Review from a verified AWS customer

    What is our primary use case?

    My customer's main use case for Qlik Replicate is with one of the payment banks in India. We implemented this Replicate project for their source data migration to cloud solutions. They wanted a CDC approach, so it was designed to capture their change data on a cloud solution.

    What is most valuable?

    The user-friendly interface in Qlik Replicate simplifies configuration and management of replication tasks because it is GUI-based, and users don't have to write any scripts. It is entirely drag-and-drop functionality. Even with zero knowledge of scripting, users can effectively use Qlik Replicate.

    The most valuable feature of Qlik Replicate is their change data capture feature. As soon as data is entered into the source, it gets captured at target locations. This occurs in near real-time, with replication happening within seconds at the target location. Additionally, whatever operations are performed at the source, whether adding data, deleting data, or updating anything, it gets replicated at target locations. It functions as data injection from source to target, ensuring all operations performed on the source system are replicated on target systems.

    What needs improvement?

    Qlik Replicate could be improved in the next release by incorporating more monitoring options to monitor the logs. Currently, log monitoring is not easily accessible, so there should be improvements in this area.

    Regarding additional improvements, since it is not open source, there is dependency on Qlik support. The system could be scaled to include more sources and functions. Currently, there are limited transformations available in Qlik Replicate which could be expanded. Additionally, support response times could be improved as there are sometimes delays in receiving replies to support cases.

    What do I think about the scalability of the solution?

    Since Qlik Replicate is not open source, there is dependency on Qlik support for scalability. The system could be scaled to include more sources and functions. Currently, there are limited transformations available which could be expanded.

    How are customer service and support?

    Support response times could be improved as there are sometimes delays in receiving replies to support cases.

    How was the initial setup?

    The setup of Qlik Replicate is straightforward. The Qlik help site provides comprehensive guidance. Users just need to follow the steps provided on the Qlik help site to set up Qlik Replicate.

    What was our ROI?

    Customers have seen ROI with Qlik Replicate because they get their data for analysis faster, enabling quicker decision-making compared to traditional data sourcing methods. With data replicated faster in target systems, it is readily available for analysis, allowing for expedited decision-making based on the analysis.

    What's my experience with pricing, setup cost, and licensing?

    For Qlik Replicate, the setup cost includes the requirement of a server, which represents the hardware cost that must be covered.

    Which other solutions did I evaluate?

    The cost of Qlik Replicate is comparatively high, as the amount is determined by the sources and targets included with Replicate. It is more expensive than Oracle GoldenGate.

    What other advice do I have?

    Regarding the effectiveness of automated schema and metadata management in reducing manual intervention in Qlik Replicate, while automated workflows haven't been implemented, tasks created in Replicate can be managed with APIs.

    The inbuilt monitoring and alerting capabilities of Qlik Replicate allow users to schedule alerts for task completion or errors through email, which is part of the administrator functionality.

    Qlik Replicate helps resolve issues by providing notifications for various scenarios. For instance, if a target becomes unavailable during job execution, users receive notifications. Alerts can also be set for high CPU or RAM usage due to large data volume transfers.

    The biggest benefit of Qlik Replicate as a tool is its GUI-based interface, drag and drop features, and its excellence in capturing changes in source systems, along with usual data transformation and replication capabilities.

    For those interested in using Qlik Replicate, it is recommended because it minimizes manpower effort. A person without IT programming knowledge can effectively use Qlik Replicate, which is an advantage over other tools. Those particularly interested in CDC should definitely consider Qlik Replicate.

    Which deployment model are you using for this solution?

    Private Cloud


    SivanandamKrishnan

    Data has been efficiently replicated across multiple platforms

    Reviewed on Jan 28, 2025
    Review from a verified AWS customer

    What is our primary use case?

    An example is the banking project I am currently working on for a public sector bank in India. Primarily, they are using a database, and their architecture is meant to take a backup of the database and restore it. This process requires downtime, usually done at midnight, resulting in around two hours of inactivity. This leads to inefficient analytics, as they cannot perform real-time analysis.

    Since it's a complete backup and restore process, there is a cost associated with this. To address this, I started replicating the data. The transaction is stored in one database, and the information is replicated to another database, which is suitable for analytics. This avoids any load on the transaction system while enabling analytics.

    What is most valuable?

    The system is basically user-driven, mostly operating as a local platform. Certain configurations and prerequisites are needed. Collaborative work with the system network and the database administrator to a large extent is necessary to make the most of this tool. 

    It supports multiple endpoints. Data retrieved from the system can be pushed to multiple places, supporting various divisions such as marketing, loans, and others. There's no additional cost, and this helps in calculating the ROI of marketing campaigns.

    What needs improvement?

    There is complexity involved in the licensing part of this system. It is a core-based licensing, which, especially in the banking industry, results in the system capacity being utilized up to a maximum of 60%. The remaining 40% acts as a buffer for safety purposes.

    For how long have I used the solution?

    I have been working on Qlik Replicate for around three years.

    How are customer service and support?

    I am not very comfortable with Qlik support as it often goes in loops. Even priority tickets, which should be resolved in minutes, can take days. 

    Support engineers should be fully aware of the Qlik ecosystem to provide immediate, qualified responses, rather than checking, learning, and then responding.

    How would you rate customer service and support?

    Neutral

    What was our ROI?

    I conducted a cost comparison with the AWS service provider, and this option is much cheaper than the Kinesis service offered by AWS. From a cost perspective, I am quite comfortable with this solution.

    What's my experience with pricing, setup cost, and licensing?

    Licensing is calculated based on the machine's total capacity rather than actual usage. From my point of view, it feels like paying for unused services. This aspect should be considered.

    Which other solutions did I evaluate?

    I evaluated Python and Kinesis, as well as other Microsoft services. The cost is under control with this solution, unlike those other services. If there is a peak in transactions or growth in data volume, the costs could become unmanageable with those solutions, which I saw as a threat.

    What other advice do I have?

    My current company is Aurai, where I am a director of data analytics. The product rating is eight out of ten. 

    If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

    Amazon Web Services (AWS)
    JohnZacharkan

    Efficient data replication with exceptional data mapping and recovery capabilities

    Reviewed on Nov 20, 2024
    Review provided by PeerSpot

    What is our primary use case?

    The primary use case is using Qlik Replicate to interface with DB2, allowing for efficient data replication from various sources into a common one or other destinations. It is particularly beneficial for dealing with legacy systems.

    What is most valuable?

    Qlik Replicate's ability to correctly map and replicate data, especially when converting complex fields like packed decimal fields into integers, stands out. Its unparalleled capability to handle replication with DB2 is a key feature. Additionally, its change data capture capability and its ability to recover data when it is not available enhance the data integration process.

    What needs improvement?

    Some features on the graphical user interface are clunky. Accessing certain functions like replicating from a table after a specific date requires obscure key combinations, which could be made more intuitive and visible.

    For how long have I used the solution?

    I have been working with the tool for six or seven years.

    What do I think about the stability of the solution?

    Qlik Replicate has shown to handle issues and recover seamlessly, ensuring data is up to date after interruptions.

    How are customer service and support?

    The support was exceptional, especially when dealing with a performance issue requiring special libraries on the DB2 box for iSeries. The support team was highly experienced and provided insights without having to look up answers.

    How would you rate customer service and support?

    Positive

    Which solution did I use previously and why did I switch?

    Replicate used to be called Attunity.

    How was the initial setup?

    The initial setup was straightforward, with some considerations for specific libraries on DB2, which enhanced performance significantly. With experience, the setup could be completed in about 30 minutes to an hour.

    What about the implementation team?

    One person can handle the setup. If the source is complex with multiple journals, a team might be needed for the grunt work.

    What was our ROI?

    The product significantly reduces maintenance time and operational costs, securing data replication without missing any data, which is critical for financial reporting.

    What's my experience with pricing, setup cost, and licensing?

    While I am not sure about the exact details, the pricing seems reasonable. The cost might increase with multiple data sources, as it likely involves per-source licensing.

    What other advice do I have?

    I would rate the overall solution a nine out of ten as it is substantially better than other solutions.

    KrishnaBaddam

    Lightweight tool that ensures data is replicated across different systems and simplifies complex tasks such as defining relationships

    Reviewed on Aug 28, 2024
    Review provided by PeerSpot

    What is our primary use case?

    Qlik Compose is basically an integration tool that Qlik acquired from an Israeli IT company so that Qlik could become a leader in, or at least jump into, the integration space.

    So, there are two tools. One is Qlik Replicate, which replicates the whole of the data. Qlik Compose is primarily designed for what comes after replication, because Qlik Replicate on its own just replicates the data. For example, if users have ERP data in entity relationships, they can offload reporting load by pulling the data from the replicated system instead of building ETL jobs over the source system. But users are not building a data warehouse from that alone.

    So, Compose comes into the picture from the Qlik stack point of view, helping users automate this quickly. Users need to define the relationships between the different tables present in the OLTP system. Based on that, it will automatically design the dimensions, the facts, and the relationships between them, and it will create the tables, just like in Erwin, where users define the relationships and an SQL query is generated. Here, though, it is about 60% automation: users define the relationships, and the tool automatically identifies the dimensions and their attributes and generates the code for them. That is the data modeling part. So, the advantage is that data modeling is automated.

    Then users have things like slowly changing dimensions, late-arriving dimensions, and similar pattern jobs in the dimension and fact ETL processes that they would otherwise need to develop. That, again, is time-consuming, so even that can usually be automated in Compose.

    So overall, what we claim for Qlik Compose is that 60% of the process can be automated. In data modeling, users manually define relationships and put in some effort, and then the rest is automated. Even in the ETL process, users manually define some connections, map the attributes together, and specify what they need; after that, the rest is automated. So that is where the 60% of the time savings comes from.

    How has it helped my organization?

    Qlik Compose plays a key role after users have purchased Qlik Replicate. Once replication is done from one system to another but the data warehouse isn't in place, that's when users start using Qlik Compose. It pulls in tables from the source system and allows users to define relationships between those tables. Once this is done, Compose will automatically create dimensions; this is part of the data modeling process.

    After the tables are created, the next step is data integration. Data integration can involve developing jobs, and even data modeling is considered a part of data integration. Essentially, data integration is done from the replicated ERP system, which is an entity-relationship (ER) model. Once the data is replicated, users can decide how to compose the dimensions.

    From there, users define the relationships, specify details, and set up ETL jobs. Users might deal with data-driven dimensions, slowly changing dimensions (SCD) types 1, 2, or 3. Users simply drag and drop the source and target tables, and Qlik Compose will automatically generate the required code and complete the integration process.

    I haven't implemented this fully myself, but I’ve learned it from a pre-sales perspective and demonstrated it once or twice. If I had implemented it over a year, I would have hands-on experience with everything, but I understand the automation of data modeling and ETL jobs well enough to explain it to customers.

    What is most valuable?

    Qlik Compose is something that will automate users' overall data modernization. Here, data modernization includes data modeling, ETL jobs, and so on.

    The advantage is that users can automate the overall process of data engineering and data modeling through Qlik Compose. I think it's very useful that users can have around 60% of the workload automated. That's fantastic.

    Replicate does not have a great AI capability. AI capabilities are present in Qlik Sense. 

    Qlik Replicate is a very light tool. It is only meant to capture data from the log files, get the data, transfer it, read the table structure, create the table structure, and transfer the data whenever there is a change. It basically integrates with the kernel of the operating system.

    The way it works is that the Replicate tool integrates with the kernel of the operating system and accesses the redo log files of the database. The redo log also reflects the structure of the schema. Using that technique, the tool reads all the data structures, creates a similar structure, and replicates it into the target schema, table, and database. After that is done, it starts tracing the transactions as they happen.

    For example, if data is inserted into a table, an insert statement is fired on that table. That particular insert is captured, and based on that insert statement, the tool pulls the SQL query and says, "Okay, there is an insert. I need to get that data." It gets the data from the redo log itself rather than going to the database. Then it just passes that transaction to the target system, where the data is inserted.

    And this happens instantaneously, within a microsecond. If there is an insert, an update, or a delete, everything is transferred immediately. It is picked up from the redo log: the change lands in the redo log, the redo log is read by Qlik Replicate, and Replicate applies it to the target system.
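    The log-based mechanism described above can be illustrated with a minimal sketch: change events are read from a replayed slice of the redo log and applied to the target, without ever querying the source tables. The event format and table layout here are invented for illustration and are not Qlik Replicate's actual internals.

    ```python
    # Minimal sketch of log-based change data capture (CDC): inserts, updates,
    # and deletes are replayed from the source's redo log onto a target store.
    # The event dictionaries below are a hypothetical format, not Replicate's.

    def apply_change(target: dict, event: dict) -> None:
        """Apply one logged change (insert/update/delete) to a target table."""
        table = target.setdefault(event["table"], {})
        op, key = event["op"], event["key"]
        if op in ("insert", "update"):
            table[key] = event["row"]
        elif op == "delete":
            table.pop(key, None)

    # A replayed slice of the redo log:
    redo_log = [
        {"op": "insert", "table": "orders", "key": 1, "row": {"amount": 100}},
        {"op": "update", "table": "orders", "key": 1, "row": {"amount": 120}},
        {"op": "insert", "table": "orders", "key": 2, "row": {"amount": 50}},
        {"op": "delete", "table": "orders", "key": 2},
    ]

    target_db: dict = {}
    for event in redo_log:
        apply_change(target_db, event)

    print(target_db)  # the orders table ends up with key 1 only, amount 120
    ```

    The key property the reviewer describes is visible here: the target is kept in sync purely from the ordered change stream, so the source system is never queried and bears no extra load.
    
    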

    What needs improvement?

    The disadvantage is, I think, that people are not going for this license because it is not marketed properly.

    Qlik was not promoting it because Talend was acquired around the same time, so Talend became their primary product and Compose was not being sold much. The other reason is that there is a tool called Qlik Cloud, which is a combination of Compose and Qlik Replicate. So, Compose was not promoted much by Qlik, and I started to concentrate more on Talend. This went on for around two or three months.

    For how long have I used the solution?

    I used it for a year, from 2023 to 2024. I practiced it, but I have not implemented it for any customer as a project.

    What do I think about the stability of the solution?

    I have not come across any latency concerns raised by any customer, though I have not implemented it myself. From what I understand, it is pretty smooth.

    Overall, the load and the volume it can handle were pretty high. Qlik Replicate and Compose have handled even a very large number of transactions within a few fractions of a second without any glitches. It was good. So, latency is never an issue.

    What do I think about the scalability of the solution?

    Qlik tools can be scaled based on the source load. When a data transfer happens, the data resides in the RAM of the tool. So, what users need to do is increase the RAM for the tool so that data does not reside in RAM for more than a few seconds; it just moves fast.

    Once it is on the disk, the performance will definitely be slower. If it stays only in the cache and in memory, the performance is better; if it goes straight through to the database, it is best. However, when a data transfer happens, some movement of data has to be done through memory, and if that memory is not enough, it will spill to disk. So scaling out that memory is not a challenge.

    So, scaling is possible. It requires some effort to scale the solution.
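The memory-versus-disk trade-off described above can be illustrated with a toy bounded buffer. This is purely illustrative (not Qlik's internals): records that fit in memory take the fast path, and anything beyond capacity takes the slow "spill to disk" path, which is why sizing RAM to the workload matters.

```python
# Illustrative sketch: a bounded in-memory buffer that "spills" when full.
# Hypothetical structure, not Qlik Replicate's implementation.
from collections import deque

class SpillBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.memory = deque()   # fast path: records held in RAM
        self.spilled = 0        # slow path: records that would go to disk

    def push(self, record):
        if len(self.memory) < self.capacity:
            self.memory.append(record)
        else:
            self.spilled += 1   # in a real tool, this write hits disk

    def drain(self):
        while self.memory:
            yield self.memory.popleft()

buf = SpillBuffer(capacity=3)
for r in range(5):
    buf.push(r)
print(list(buf.drain()), buf.spilled)  # [0, 1, 2] 2
```

With a larger `capacity` (more RAM), `spilled` stays at zero and every record takes the fast path, which is the scaling approach the review describes.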

    How are customer service and support?

    There is a proper product support team. I have heard that with the newer tools like Qlik AutoML, there is a delay in technical support. We raised some AutoML issues with the product support team because things were not working as expected in Qlik Sense. The response was not great, so we had to escalate to someone else to get it resolved.

    The support is not that great when it comes to Qlik AutoML; I'm not sure about Replicate. My experience with the AutoML issue was not good. The support is not as mature as what you get from Informatica or Oracle.

    How would you rate customer service and support?

    Neutral

    Which solution did I use previously and why did I switch?

    I used to use Informatica, Databricks, data modeling, and other ETL tools while I was working as an architect and project manager. Over the last year, I moved into a different role as a presales engineer and presales manager. So I did not use those tools hands-on, but I used data integration tools like Qlik Talend and Qlik from a solution architect and presales manager point of view.

    Qlik Compose, Qlik Replicate, Qlik Talend: I have worked with all of these tools to demonstrate and present them to customers and to propose them for modernization projects.

    So, I used not only Qlik Compose, but also Qlik Sense, Qlik AutoML, Talend, and several components of Talend. I’ve worked with AWS and Azure, though not extensively. I learned enough to demonstrate how these tools work because developing a product end-to-end for a customer is a different experience.

    For example, when doing a data integration with Compose, there might be issues with system readiness, data types, integration failures, and so on. These challenges provide a much deeper learning experience. However, in my demonstrations, I worked with pre-configured demo systems with sample data, so I didn’t encounter any errors. As a result, my learning from implementation is minimal. My focus has been on demonstrating how it works to customers.

    How was the initial setup?

    I’ve not implemented it, but I have used it to present to customers and provide solutions across landscapes from a presales point of view.

    For the trial version, we were able to deploy it easily. I had my DBA deploy it; I have not deployed it personally.

    It would definitely need maintenance in the sense of upgrades. For example, if I deploy the product in a customer environment where everything is automated and ready, and Compose goes through upgrades, those upgrades have to be applied wherever it is deployed.

    From a routine maintenance point of view, though, that is a very good point about the product: it does not need maintenance. Everything has been automated. Unless your source system is changing, it doesn't need any maintenance.

    If your source system has changed, meaning your tables or structures have changed, the question is how the tool picks that up. If it can automatically recognize the changed tables or structures, absorb them, and replicate those changes into the target system from a data model and data integration point of view, including the ETL process, that is a strong point.

    So there are two maintenance aspects. If the product itself changes, upgrades have to happen. And if the product has not changed but the source tables have, the tool has to pick that up. Replicate does this automatically, as I remember.

    What's my experience with pricing, setup cost, and licensing?

    Licensing is core-based for Qlik Replicate. Based on the cores in the source and target systems, you identify the license you need. If the source system has many cores, the load Replicate has to handle to pull data from that source would be high, so licensing is decided based on cores.

    Generally, the advantage and purpose of Compose come into the picture because, in the complete Qlik integration stack, Replicate does just one job: replicating data from an existing OLTP system to another OLTP system. Compose then has to create an OLAP system, a data warehouse, from that OLTP system. Together they make up the complete Qlik integration stack, which is advantageous for the customer.

    Customers see that this tool takes the load off their source system. It does not load a core banking system that has hundreds or thousands of queries per second running on it. Transactions are not affected, and sales or banking operations are not impacted even if the data warehouse has to be loaded in real time. Instead, the data is replicated using the system's log files, pulled without affecting the database at all. After that, Compose creates the data warehouse automatically. This is very interesting and very advantageous to the customer.

    Which other solutions did I evaluate?

    Informatica is a totally different tool altogether. It is a data integration tool that can handle huge volumes of data and transform huge volumes of data. 

    Talend is a similar tool that can handle huge volumes of data, but it is more of a multi-skilled, multi-layer tool that covers not only RDBMS integration but also big data integration. It can run on-premises, multi-cloud, or hybrid, and it has many advantages. So Informatica and Talend are both huge integration tools that can handle large volumes of transformation.

    Replicate is a very light tool that is only used to transfer data from one system to another. It cannot handle heavy transformations; it can only do minor ones, like additions or subtractions, a few calculations applied in flight before the data reaches the target. Or you can add a new column, populate it with some data, and then transfer it. But it is not used for transformation; Compose is used for transformation. That is where Qlik Compose comes into the picture, whereas Qlik Replicate is very light. The purpose of each tool is different.
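The kind of lightweight in-flight transformation described above, adding a derived column while moving rows, can be sketched as follows. This is purely illustrative; the function and column names are hypothetical and not Qlik syntax.

```python
# Illustrative only: add a derived column to each row in flight,
# the kind of minor transformation a replication tool can do.
def add_column(rows, name, fn):
    """Yield copies of each row with an extra computed column."""
    for row in rows:
        out = dict(row)
        out[name] = fn(row)
        yield out

source = [{"amount": 10}, {"amount": 25}]
target = list(add_column(source, "doubled", lambda r: r["amount"] * 2))
print(target)  # [{'amount': 10, 'doubled': 20}, {'amount': 25, 'doubled': 50}]
```

Anything beyond simple per-row calculations like this, such as joins or aggregations for a warehouse model, is the job of a transformation tool such as Compose, per the review.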

    Informatica and Talend are integration tools. Qlik Compose and Qlik Replicate are replication tools that replicate data from one place to another to reduce the load on the core system. Informatica connects to the core system and increases the load by firing queries on it, whereas Replicate takes that load away: instead of firing queries, it goes to the log and gets the data. It is a very light, agentless tool, so we cannot compare the two directly. In terms of support, I did not have much experience because Qlik Replicate and Qlik AutoML are still evolving tools.

    What other advice do I have?

    Overall, I would rate it an eight out of ten. 

    reviewer2321628

    Has CDC and trigger features and offers proactive support

    Reviewed on May 29, 2024
    Review provided by PeerSpot

    What is our primary use case?

    We use the tool as a plugin for CDC. 

    What is most valuable?

    The most valuable features of Qlik Replicate are its CDC performance and trigger functions. The CDC feature is important to the financial industry.

    What needs improvement?

    The product should improve its licensing limitations.

    For how long have I used the solution?

    I have been using the product for six months. 

    What do I think about the stability of the solution?

    I rate the tool's stability a nine out of ten. 

    How are customer service and support?

    The solution's support is proactive and closes out queries. However, responses sometimes lag, so I rate support a seven out of ten.

    How would you rate customer service and support?

    Positive

    What's my experience with pricing, setup cost, and licensing?

    Qlik Replicate is mainly suited for large companies. However, it is too costly for small businesses. Its pricing is high. 

    What other advice do I have?

    I rate the overall solution an eight out of ten.

    If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

    Other