AWS Database Blog
Improve performance of real-time analytics and mixed workloads using the Database In-Memory option on Amazon RDS for Oracle
In this post, I demonstrate how to improve the performance of real-time analytics and mixed workloads without impacting Online Transaction Processing (OLTP) using the Oracle Database In-Memory option for workloads running on Amazon Relational Database Service (Amazon RDS) for Oracle.
The demand for real-time analytics requires analytic queries to run in real time concurrently with Online Transaction Processing (OLTP) on the same database, a database that has been designed for transaction processing. Enterprise OLTP systems such as packaged ERP, CRM, and HCM applications (for example, Siebel, PeopleSoft, and JD Edwards) are typical examples. Additional objects like indexes, materialized views, and OLAP cubes are often created as a reactive measure to shore up the deteriorating performance of analytical workloads, thereby compromising OLTP performance. The more indexes there are, the more overhead is incurred updating them every time the underlying tables are modified. This results in increased memory contention and longer transaction processing times, which becomes a bottleneck in an extremely busy OLTP system.
What is Oracle Database In-Memory?
Oracle databases store rows in data blocks, and within each block the data for each row contains all of that row's column values. This format is optimized for OLTP because updating all columns in a small number of rows results in modifying a small number of blocks. Analytics, on the other hand, accesses fewer columns but across a large number of rows, often scanning the entire dataset. If we use the row format to scan multi-TB heap-organized tables with no specific ordering, the result set may be spread across blocks over the entire disk, and the I/O required to fetch those blocks into the cache can be overwhelming. This can be avoided using a columnar format, which accesses only the required columns and avoids scanning inessential data.
Oracle introduced the Oracle Database In-Memory feature to accelerate analytic queries by orders of magnitude without sacrificing OLTP performance. It uses a separate columnar-format memory area within the System Global Area (SGA), called the In-Memory Area, which exists alongside the traditional row-format database buffer cache. The database can therefore store data in both row and columnar format at the same time, and the Oracle Cost Based Optimizer (CBO) is aware of both memory formats. Customers running their Oracle workloads on Amazon RDS for Oracle can benefit from this feature, which is available for Enterprise Edition Bring Your Own License (BYOL) only, as an extra-cost option requiring separate licensing.
Benefits of Oracle Database In-Memory
Oracle Database In-Memory (IM) has the following benefits:
- Faster scanning of a large number of rows and applying filters that use operators such as =, <, >, and IN
- Faster querying of a subset of columns in a table; for example, selecting 7 out of 200 columns
- Enhanced join performance using In-Memory join groups, converting predicates on small dimension tables into filters on larger fact tables, and using Bloom filters
- Efficient In-Memory aggregation using VECTOR GROUP BY transformations
- Reductions in indexes, materialized views, and OLAP cubes, and therefore reduced storage and bandwidth demands
- Reduced storage costs and accelerated scans with In-Memory hybrid compression, because column data can be scanned in compressed format
- Data pruning using In-Memory storage indexes, plus faster processing and In-Memory arithmetic using the SIMD (Single Instruction, Multiple Data) vector capabilities of modern processors
The IM column store maintains copies of tables, partitions, and individual columns in a compressed columnar format that is optimized for rapid scans.
The IM column store stores the data for each table or view by column rather than by row. Each column is divided into separate row subsets. A container called an In-Memory Compression Unit (IMCU) stores all columns for a subset of rows in a table segment.
The IM column store resides in the In-Memory Area, which is an optional portion of the system global area (SGA). The IM column store doesn't replace row-based storage or the database buffer cache, but rather supplements it. The database keeps data in memory in both row and columnar formats, providing the best of both worlds. The IM column store provides an additional transaction-consistent copy of table data that is independent of the disk format.
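Once the In-Memory Area is enabled (covered later in this post), you can observe its layout directly. A minimal sketch:

```sql
-- The In-Memory Area is split into a 1MB pool (columnar IMCU data)
-- and a 64KB pool (metadata about the stored objects).
SELECT pool, alloc_bytes, used_bytes, populate_status
FROM   v$inmemory_area;
```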
Identifying a suitable workload
You can leverage Amazon RDS Performance Insights for your RDS for Oracle DB instance to understand the database workload profile and its performance. When investigating your database workload for slowness or performance issues, you may find complex SQL queries, such as SELECTs accessing multiple tables or views to perform aggregations, filters, and sorts across large volumes of data using multiple join methods, in the Top SQL section of your Performance Insights dashboard. The execution plans for these SQLs are often complex and contain join methods like nested loops or hash joins. Alongside them, you would see DML statements (inserts, updates, and deletes) spanning no more than a few lines, whose execution plans often contain access methods like index unique scan, index fast full scan, index skip scan, and table access by index ROWID to achieve lower cardinality or access fewer rows. This is how a mixed workload presents itself.
Investigating further, you need to identify whether the database system is I/O bound, which can be either physical or logical. You would often find spiked values under the operating system counters for Disk IO metrics, for example os.diskIO.<devicename>.readIOsPS, os.diskIO.<devicename>.writeIOsPS, and os.diskIO.<devicename>.avgQueueLen.
Also noticeable will be higher values of RDS for Oracle native counters, for example db.SQL.sorts (disk), db.Cache.physical reads, and db.Cache.db block gets from cache.
You may also notice an increase in per-second SQL statistics, for example db.sql.stats.elapsed_time_per_sec, db.sql.stats.rows_processed_per_sec, db.sql.stats.buffer_gets_per_sec, and db.sql.stats.physical_read_requests_per_sec.
It is important to note that the per-call statistics for some of these SQL statements (often referred to as Online Transaction Processing, or OLTP) will show:
- A high number of executions per second (db.sql_tokenized.stats.executions_per_sec)
- Sub-second elapsed time per execution (db.sql.stats.elapsed_time_per_exec)
- Fewer rows processed per execution (db.sql.stats.rows_processed_per_exec)
This is in contrast to other SQLs with relatively fewer executions per second but larger elapsed time per execution and more rows processed per execution (often referred to as Online Analytical Processing, or OLAP). When you compare the tables or views accessed by both types of SQL statements, you would notice overlap. Often you would find the OLAP queries performing slowly and, in the process, impacting the OLTP workload. You can get the tables and access methods from the execution plans, which can be viewed in the Database load section of the Performance Insights dashboard, where you can slice the Average Active Sessions (AAS) chart by Plans. The underlying tables that are part of the OLAP queries are ideal candidates for population into the In-Memory column store.
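You can also surface the same per-call profile from inside the database. Here is a minimal sketch using V$SQL (ELAPSED_TIME is reported in microseconds):

```sql
-- Top SQL by total elapsed time, with per-execution statistics that help
-- separate OLTP (many executions, sub-second, few rows) from OLAP
-- (few executions, long-running, many rows).
SELECT sql_id,
       executions,
       ROUND(elapsed_time / 1E6 / NULLIF(executions, 0), 3) AS elapsed_sec_per_exec,
       ROUND(rows_processed / NULLIF(executions, 0))        AS rows_per_exec
FROM   v$sql
ORDER  BY elapsed_time DESC
FETCH  FIRST 10 ROWS ONLY;
```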
It is recommended to capture these metrics again when the same workload is re-run after the identified tables are loaded into the In-Memory column store. A comparison of the pre and post metrics will reveal whether the In-Memory option is beneficial for your workload.
Enabling In-Memory for 19c and above on Amazon RDS for Oracle
The basic criterion is to set INMEMORY_SIZE to a value greater than zero, which enables the In-Memory column store. The In-Memory Area is a separate memory chunk carved out of the SGA allocated to the instance. Edit your custom parameter group to set INMEMORY_SIZE to the required size, with a minimum of 100 MB. The COMPATIBLE initialization parameter must be set to 12.1.0 or higher.
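Because Amazon RDS manages the instance, you make this change through your custom DB parameter group rather than ALTER SYSTEM. A minimal AWS CLI sketch (the parameter group name is a placeholder; the value is in bytes, here 2 GiB, and because INMEMORY_SIZE is a static parameter the change takes effect after a reboot):

```bash
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-oracle-ee-pg \
    --parameters "ParameterName=inmemory_size,ParameterValue=2147483648,ApplyMethod=pending-reboot"
```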
In-Memory Area sizing guide
The size of the In-Memory Area required depends on the database objects stored in it and the compression method applied to each object. For the highest compression, choose FOR CAPACITY HIGH or FOR CAPACITY LOW; however, these require additional CPU for decompression. For the best query performance, it's recommended to choose the FOR QUERY HIGH or FOR QUERY LOW compression methods. Although they consume more memory than the other two options, scanning and filtering of data can happen without decompression. You can use the Compression Advisor (the DBMS_COMPRESSION interface) to estimate compression ratios, but the advised ratio may not be accurate when you run the advisor on objects that are already compressed on disk. In such a case, load the object in memory with the desired compression and query V$IM_SEGMENTS to find the actual compression ratio. You can use a query along the following lines for that purpose (a minimal sketch, assuming the standard V$IM_SEGMENTS columns):
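```sql
-- Approximate In-Memory compression ratio per populated segment:
-- (BYTES - BYTES_NOT_POPULATED) is the on-disk size of the populated
-- portion; INMEMORY_SIZE is its footprint in the IM column store.
SELECT owner,
       segment_name,
       inmemory_compression,
       ROUND((bytes - bytes_not_populated) / inmemory_size, 2) AS compression_ratio
FROM   v$im_segments
ORDER  BY owner, segment_name;
```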
Let's run a query that finds the order that generated the minimum revenue, using the view orders_summary_v that we created earlier. This forces the optimizer to use join filters, sorting, and aggregation, making it much more of an OLAP-type query. This is where Oracle Database In-Memory shines.
With Oracle Database In-Memory enabled
First, we run the query with the Oracle Database In-Memory feature enabled. A sketch of such a query follows (the order_id and revenue columns on orders_summary_v are assumptions for illustration):
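```sql
-- Hypothetical sketch: assumes orders_summary_v exposes order_id and revenue.
-- Finds the order that generated the minimum revenue.
SELECT order_id, SUM(revenue) AS total_revenue
FROM   orders_summary_v
GROUP  BY order_id
ORDER  BY total_revenue
FETCH  FIRST 1 ROW ONLY;
```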
With Oracle Database In-Memory disabled
To compare the performance of the same query without In-Memory, we could set the ORDERS_TAB table to NO INMEMORY. This disables In-Memory usage for the table, but it also removes the table from the IM column store. To re-enable In-Memory for the table, it would need to be populated into the IM column store again, which can be challenging in very busy enterprise systems, especially with very large tables. Therefore, it's recommended to use the INMEMORY_QUERY parameter to disable In-Memory at the session level for workload testing purposes. Now, let's run the same query with In-Memory disabled. A minimal sketch of this session-level toggle, reusing the hypothetical query from above:
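```sql
-- Disable In-Memory query execution for this session only;
-- the table remains populated in the IM column store.
ALTER SESSION SET inmemory_query = DISABLE;

-- Re-run the same query; the optimizer now scans the row-format data.
SELECT order_id, SUM(revenue) AS total_revenue
FROM   orders_summary_v
GROUP  BY order_id
ORDER  BY total_revenue
FETCH  FIRST 1 ROW ONLY;

-- Restore the default afterward.
ALTER SESSION SET inmemory_query = ENABLE;
```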
The query ran within a second with the In-Memory feature enabled, compared to 2 minutes, 49 seconds without it. This is an approximate 99.94% improvement! Much of this gain comes from improved join performance through the use of Bloom filters when In-Memory is enabled.
Benchmarking using enterprise workloads
To further understand the benefits of Oracle Database In-Memory, let's generate a real-world application workload. We use Benchmark Factory to simulate an enterprise application load by running TPC-C and TPC-H benchmarking tests. We have designed a workload based on a mixture of OLTP (TPC-C, scale 40) and OLAP (TPC-H, scale 4) transactions, where scale defines the schema size. The OLTP and OLAP schemas are each around 4.5 GB. OLTP tables are smaller, so a higher scale value was needed, whereas OLAP tables are larger fact tables and therefore need a smaller scale. The scale factor has been adjusted based on the INMEMORY_SIZE defined on my RDS instance. The database used for the following tests is the same as in the previous test case.
The following screenshot shows the size of the database after the objects required for the mixed workload testing were loaded into the database.
The following screenshot shows details of the mixed workload being run as part of this benchmarking.
The OLTP and OLAP workload is distributed equally, as you can see in the following screenshot. TPC-C is used for the OLTP workload, whereas TPC-H is used for data warehouse and analytics workloads.
The seven transactions that will be run as part of this testing are a mix of OLTP and OLAP.
Real-world enterprise workload testing with In-Memory disabled
The tables have In-Memory disabled. This includes some partitioned tables, whose In-Memory status is reflected in the dba_tab_partitions view, as seen in the following screenshot.
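A minimal sketch of such a check (the TPCH schema name is a placeholder for the benchmark schema):

```sql
-- Inspect the In-Memory attributes of each partition.
SELECT table_name,
       partition_name,
       inmemory,
       inmemory_priority,
       inmemory_compression
FROM   dba_tab_partitions
WHERE  table_owner = 'TPCH'   -- placeholder schema name
ORDER  BY table_name, partition_name;
```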
We initiate the test with In-Memory disabled and 100 concurrent users, with the OLTP and OLAP workloads running concurrently and randomly.
Response time
The following graph (time scale in seconds) shows that the response time peaked at 9.48 seconds, whereas the average value was around 2.32 seconds.
The following graph, with the time scale in milliseconds, shows 0.88 transactions per second, with an average response time for each transaction of around 1,999 milliseconds (approximately 2 seconds).
Transaction time
The following graph (time scale in seconds) shows the breakdown of OLTP and OLAP transactions. The transaction time per run for an OLAP query like the National Market Share query (Q8) is 8.42 seconds, whereas it is 0.40 seconds for the Promotion Effect query (Q14). The OLTP transactions have sub-second transaction times.
Real-world enterprise workload testing with In-Memory enabled
The following tables have been populated into the column store with PRIORITY CRITICAL.
The following screenshot shows the non-partitioned tables.
The following screenshot shows the partitioned tables.
Subsequently, the tables have been loaded into the IM column store and fully populated. We haven't used any compression as part of the IM population; therefore, the table sizes are comparable. You can choose to use MEMCOMPRESS options either through the INMEMORY_CLAUSE_DEFAULT initialization parameter or for specific tables as required.
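As a sketch, the per-table statements look like the following (orders and lineitem are placeholders for the benchmark tables):

```sql
-- Populate with the highest priority and no In-Memory compression
-- (matching what we used for this test).
ALTER TABLE orders INMEMORY NO MEMCOMPRESS PRIORITY CRITICAL;

-- Alternatively, reduce the memory footprint with a query-optimized
-- compression level that can still be scanned without decompression.
ALTER TABLE lineitem INMEMORY MEMCOMPRESS FOR QUERY HIGH PRIORITY CRITICAL;
```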
The same tests were run again with exactly the same parameters.
Response time
The following graph, with the time scale in milliseconds, shows that the response time peaked at 1.28 seconds, whereas the average value was around 0.46 seconds.
The following graph, also with the time scale in milliseconds, shows 1.02 transactions per second, with an average response time for each transaction of around 563 milliseconds (approximately 0.56 seconds).
Transaction time
The following graph (time scale in milliseconds) shows the breakdown of OLTP and OLAP transactions. The transaction time per run for an OLAP query like the National Market Share query (Q8) is 1.8 seconds, whereas it is 0.39 seconds for the Promotion Effect query (Q14). Notably, the OLTP transactions remain unaffected, with a sub-second response time of 0.001 seconds (1 millisecond).
Summary of results
The OLAP queries were 78.62% faster, whereas the OLTP transactions remained unaffected and still delivered sub second transaction times. The response time improved by 86.49% with an average of 1.02 transactions per second, as compared to 0.88 transactions per second when In-Memory is disabled.
The real benefit lies in the fact that enabling In-Memory doesn’t need any application-level changes. It’s as simple as modifying some memory parameters and a few table attributes. It doesn’t impact OLTP and improves OLAP performance drastically.
Conclusion
In this post, we covered the challenges of real-time analytics and how to remediate them using the Oracle Database In-Memory option on Amazon RDS for Oracle. We demonstrated how to implement the feature, ran enterprise-level mixed workloads with concurrent users, and highlighted the benefits.
We encourage you to evaluate your workload and identify whether it is a suitable candidate for the benefits the Oracle Database In-Memory option provides. Leave a comment on this post to ask questions or share your feedback.
About the Author
Ravi Kiran is a Senior Database Specialist at AWS who primarily focuses on the Oracle and PostgreSQL database engines. He works with enterprise customers to help them run optimized workloads on AWS, and guides them with best practices for designing and architecting their databases, helping them optimize costs and providing technical consultation.