AWS Database Blog

Use Oracle Real Application Testing features with Amazon RDS for Oracle

February 2024: This post was reviewed and updated to include the Amazon Elastic File System (Amazon EFS) integration in RDS for Oracle. The updates show how to use RDS for Oracle as a source for Real Application Testing (RAT).

Customers running databases in a self-managed environment face many challenges, such as scalability, performance, maintenance overhead, and reliability. To help increase resiliency, performance, and scalability, many companies are moving these workloads to AWS. AWS offers the widest variety of databases that are purpose-built for different types of applications, so you can choose the right tool for the job to get the best cost and performance. For instance, Amazon Relational Database Service (Amazon RDS) for Oracle is a fully managed commercial database service that makes it easy to set up, operate, and scale Oracle deployments in the cloud.

To help assess the performance of the target platform before migrating your critical database workloads, you can use Oracle Real Application Testing (Oracle RAT) to evaluate the effect of database system changes on real-world applications in a test environment before you deploy the change in production. With Oracle RAT, you can replay a workload captured on a production database system against an RDS for Oracle instance to help analyze the differences between the source and target platforms. Oracle RAT, which is a separately licensed option of Oracle Database Enterprise Edition, includes two main features to test the effect of system changes on real-world applications:

  • Database Replay allows you to capture workloads on a production system and replay them in your target environment while maintaining the unique characteristics of the workload. Essentially, it allows you to record your workload in real time and then play it back in another database, which can be a different version, operating system, or hardware configuration. It enables you to test a variety of system changes, such as hardware and software migration, operating system and database upgrades, patches, and other configuration changes.
  • SQL Performance Analyzer (SPA) enables you to assess the impact of system changes on SQL performance by identifying any difference in the SQL run plans and statistics resulting from the change.

In this post, we describe how to use Oracle RAT features with Amazon RDS for Oracle to help you determine when to migrate your Oracle database to a fully managed database environment, and also to assess the impact of various changes by testing them in lower environments before applying the changes to RDS for Oracle production instances. Some steps in this solution, such as the compare period report, use Oracle Automatic Workload Repository (AWR) features, which are licensed under the Oracle Diagnostics and Tuning packs.
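Because Oracle RAT is separately licensed, it's worth confirming that the Real Application Testing option is enabled on both the source and target instances before you begin. The following is a quick check; on an Enterprise Edition database, the value should be TRUE:

select parameter, value from v$option where parameter = 'Real Application Testing';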

Solution overview

This post discusses two scenarios of using RAT features in Amazon RDS for Oracle.

In the first scenario, when migrating to Amazon RDS for Oracle, you can capture the workload from the source database system for a certain period, preprocess the captured files, and copy them to the RDS for Oracle instance where you replay the workload. In this scenario, the capture phase runs on an Oracle database instance that provides shell access to the operating system of the database host.

At a high level, this process involves the following steps:

  1. Capture the workload from the source Oracle database.
  2. Capture the SQL workload from AWR into a SQL tuning set for SPA analysis.
  3. Preprocess the captured files.
  4. Copy the preprocessed files and SQL tuning set to the target RDS for Oracle instance.
  5. Import the tuning set to the target RDS for Oracle instance and run SPA.
  6. Replay the workload on the RDS for Oracle instance.
  7. Perform a detailed analysis of the captured and replayed workload using reports generated by database replay and SPA.

The following diagram illustrates this workflow.

The following components are involved in capturing the workload from a self-managed instance and replaying it on an RDS for Oracle instance:

  • Source database – This is the source Oracle database, which runs outside AWS, on Amazon Elastic Compute Cloud (Amazon EC2), or on Amazon RDS Custom for Oracle, any of which provides shell access to the operating system of the database host. We capture the workload for a specified period from this system.
  • Target RDS for Oracle instance – This is the target RDS for Oracle instance where we replay the captured workload.
  • Preprocessing system – This is the system we use for preprocessing the capture files. This can be the same server that hosts the source database. However, you must perform preprocessing using the same database version as the target RDS for Oracle instance.
  • Replay client system – This is the system that hosts the replay client software, which is the same version as the target RDS for Oracle instance. You can use the same system for preprocessing and hosting replay clients.

In the second scenario, we capture the workload from an RDS for Oracle instance and replay it on another RDS for Oracle instance. Because RDS for Oracle provides file system access only through DIRECTORY objects, you can't directly access capture files created in subdirectories on the instance. Therefore, Amazon Elastic File System (Amazon EFS) is required when using RDS for Oracle as a source for workload capture. When the DIRECTORY object is created on Amazon EFS, the files can be accessed from an EC2 instance that mounts the same EFS file system for preprocessing and replaying the captured files.

To capture the workload from an RDS for Oracle instance and replay it on another RDS for Oracle instance, you can use the EFS integration as shown in the following diagram.

Capture the workload

The first step in using Database Replay is to capture a workload, which you replay in the target RDS for Oracle instance. You need to perform this step at the source database instance. When you enable the capture process in an Oracle database, it records all external client requests made to the database in binary files called capture files, which are platform independent. These capture files contain all relevant information required to replay the workload in the target RDS for Oracle instance.

It’s important that the capture period covers a representative production workload. It may be ideal to choose a peak workload period for the capture. You can start testing the capture process with short windows (10–15 minutes) to estimate the space required to hold capture files. Based on the testing, you can run your final capture for a longer period (1–2 hours) to make sure the capture covers your peak workload.
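To estimate the space required, you can check the size of a completed test capture from the data dictionary. The following query is a minimal sketch, assuming the test capture has finished; DBA_WORKLOAD_CAPTURES reports each capture's size in bytes:

select name, status, duration_secs, round(capture_size/1024/1024) size_mb
from dba_workload_captures
order by id;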

Before starting a workload capture, you should have a strategy in place to restore the source database to the RDS for Oracle instance. The success of the replay process on the target RDS for Oracle instance depends on the consistency between the source and target systems in terms of the logical state of the application data. If the source and target systems aren't identical in terms of application data, replay transactions can fail.

Create a directory object and capture the workload

If the source database is an RDS for Oracle instance, the directory must be created on EFS. Refer to Integrate Amazon RDS for Oracle with Amazon EFS for steps to integrate an EFS file system with RDS for Oracle.

To create a directory object and capture the workload for a specific period of time, use the following code:

-- Create a directory object in the database pointing to a filesystem directory that has sufficient free space

sqlplus "/ as sysdba"
SQL>create directory CAPTURE_DIR as '/opt/oracle/capture';

-- If the source database is RDS for Oracle, then create the directory as shown below.

BEGIN
  rdsadmin.rdsadmin_util.create_directory_efs(
    p_directory_name => 'CAPTURE_DIR', 
    p_path_on_efs    => '/rdsefs-fs-<id>/rat');
END;
/

-- Note the time or SCN for restoring the database to the Amazon RDS for Oracle instance, to maintain the same logical state prior to running the replay in the target environment.

SQL>select systimestamp,current_scn from v$database;

--  Create an AWR snapshot for the starting point of the capture.
SQL>exec dbms_workload_repository.create_snapshot;

-- Start the capture process

SQL>
begin
dbms_workload_capture.start_capture(
name => 'capture_aws',
dir => 'CAPTURE_DIR',
duration => null
);
end;
/

--- Allow the application workload to run for a representative period of time.

-- Stop the capture process

SQL>
begin
dbms_workload_capture.finish_capture;
end;
/

--  Create an AWR snapshot for the end point of the capture.
SQL>exec dbms_workload_repository.create_snapshot;

-- Identify snap_ids for the AWR snapshots created during the capture period.
SQL> select snap_id,begin_interval_time from dba_hist_snapshot order by snap_id;

Refer to the Oracle documentation for further details on capturing a workload and different parameters of the dbms_workload_capture procedure. The preceding example shows how to capture a workload using API calls. Alternatively, you can use Oracle Enterprise Manager to do the same with the convenience of using a GUI-based interface.
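For example, before starting the capture you can add a workload filter to exclude sessions you don't want to replay, such as those from a monitoring tool. The following sketch uses dbms_workload_capture.add_filter; the filter name and program value are illustrative, and with the default start_capture settings the filter acts as an exclusion:

begin
dbms_workload_capture.add_filter(
fname => 'exclude_monitoring',
fattribute => 'PROGRAM',
fvalue => 'emagent%'
);
end;
/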

Generate the workload capture report

Workload capture reports contain captured workload statistics, information about the top session activities that were captured, and any workload filters used during the capture process. You can use the following code to generate a report using API calls:

-- This generates the capture report in HTML format in the CAPTURE_DIR directory.

set serveroutput on size 1000000
declare
l_capture_dir varchar2(100) := 'CAPTURE_DIR';
l_capture_id number;
l_report clob;
l_output_file utl_file.file_type;
l_output_offset number := 1;
l_output_length number;
begin
select dbms_workload_capture.get_capture_info(l_capture_dir) into l_capture_id from dual;
l_report := dbms_workload_capture.report(
capture_id => l_capture_id,
format => dbms_workload_capture.type_html
);

l_output_length := dbms_lob.getlength(l_report);
l_output_file := utl_file.fopen(l_capture_dir, 'capture_'||l_capture_id||'_report.html', 'w');
begin
while (l_output_offset < l_output_length) loop
utl_file.put(l_output_file, dbms_lob.substr(l_report, 32767, l_output_offset));
utl_file.fflush(l_output_file);
l_output_offset := l_output_offset + 32767;
end loop;
utl_file.new_line(l_output_file);
utl_file.fclose(l_output_file);
dbms_output.put_line('output in file ' || l_capture_dir || ':capture_' || l_capture_id || '_report.html');
exception
when others then
utl_file.fclose(l_output_file);
raise;
end;
end;
/

When capturing the workload from RDS for Oracle, the workload capture report can be created in the EFS directory, which can be accessed from an EC2 instance.
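To confirm that the report file was written, you can also list the directory contents from the database. The following query assumes the CAPTURE_DIR directory object created earlier:

select filename, type, round(filesize/1024) file_kb, mtime
from table(rdsadmin.rds_file_util.listdir(p_directory => 'CAPTURE_DIR'))
order by filename;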

The following is a sample workload capture report.

Create a SQL tuning set for SPA

SQL Performance Analyzer (SPA) is one of the two main features of Oracle RAT. The steps to run SPA include creating a SQL tuning set for the captured workload, as explained in the previous steps. You then export the SQL tuning set from the source database server and import it into the target RDS for Oracle instance for performance assessment.

To create a SQL tuning set for your captured workload, use the snapshot IDs in the AWR that correspond to the time the capture was run. The timestamp information is available in the capture report generated in the previous step. The following is a snippet of the commands to run on the source production database server:

---- To create SQL tuning set

BEGIN
DBMS_SQLTUNE.CREATE_SQLSET(
sqlset_name => 'SPA-AWS',
description => 'RAT SPA for on-premises to RDS');
END;
/

---- Load the SQL tuning set with the SQL statements that were executed during the capture window
---- When prompted, input the AWR snapshot IDs corresponding to the time the capture was executed.

DECLARE
l_cursor dbms_sqltune.sqlset_cursor;
BEGIN
OPEN l_cursor FOR
SELECT VALUE(p)
FROM TABLE (dbms_sqltune.select_workload_repository (
&1, 
&2)) p; 
dbms_sqltune.load_sqlset (sqlset_name =>'SPA-AWS', populate_cursor => l_cursor);
END;
/
-- Refer to Oracle support note ORA-13768 Creating a SQL Tuning Set in Multitenant (Doc ID 2982010.1) if you are using multitenant architecture

---- Verify that SQL statements were loaded into the SQL tuning set

select count(*) from TABLE(DBMS_SQLTUNE.SELECT_SQLSET('SPA-AWS'));

---- Create a staging table for the SQL Tuning Set so that it can be exported
---- In this example, an existing schema SH is used to host this table. 

BEGIN
DBMS_SQLTUNE.CREATE_STGTAB_SQLSET (
table_name => 'RAT_STS_TABLE',
schema_name=>'SH' );
END;
/

---- Pack the table with the SQLs from SQL tuning set that was created

BEGIN
DBMS_SQLTUNE.PACK_STGTAB_SQLSET(
sqlset_name => 'SPA-AWS',
staging_table_name =>'RAT_STS_TABLE',
staging_schema_owner => 'SH');
END;
/

Export the SQL tuning set table

In the previous step, you created the SQL tuning set table, which has all the SQLs that ran during the workload capture window. You now need to export this table from the source database instance to the test RDS for Oracle instance to evaluate performance.

To export the table, use the Oracle Data Pump utility:

expdp dumpfile=sts.dmp logfile=SQLSET_DUMP.log directory=CAPTURE_DIR tables=SH.RAT_STS_TABLE

The preceding command creates the file sts.dmp in the capture directory.

When capturing the workload from RDS for Oracle, you can use DBMS_DATAPUMP to export the tuning set table to a DIRECTORY created on the EFS file system, which can be accessed from an EC2 instance, as shown in the following sketch.
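This call assumes the staging table SH.RAT_STS_TABLE created earlier and a CAPTURE_DIR directory object on the EFS file system; the log file name is illustrative:

DECLARE
  v_hdnl NUMBER;
BEGIN
  v_hdnl := DBMS_DATAPUMP.OPEN(
    operation => 'EXPORT',
    job_mode  => 'TABLE',
    job_name  => null);
  DBMS_DATAPUMP.ADD_FILE(
    handle    => v_hdnl,
    filename  => 'sts.dmp',
    directory => 'CAPTURE_DIR',
    filetype  => dbms_datapump.ku$_file_type_dump_file);
  DBMS_DATAPUMP.ADD_FILE(
    handle    => v_hdnl,
    filename  => 'sts_exp.log',
    directory => 'CAPTURE_DIR',
    filetype  => dbms_datapump.ku$_file_type_log_file);
  -- Export only the staging table packed earlier
  DBMS_DATAPUMP.METADATA_FILTER(v_hdnl, 'SCHEMA_EXPR', q'[IN ('SH')]');
  DBMS_DATAPUMP.METADATA_FILTER(v_hdnl, 'NAME_EXPR', q'[IN ('RAT_STS_TABLE')]');
  DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
/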

Run the workload analyzer

Before you process the files, it's important to run the workload analyzer, which reports what percentage of the workload was captured and whether there are any findings and recommendations that you can implement.

To run the workload capture analyzer, run the following command on any server that has access to the captured files and source database and has Java and Oracle binaries installed:

java -classpath \
$ORACLE_HOME/jdbc/lib/ojdbc8.jar:$ORACLE_HOME/rdbms/jlib/dbrparser.jar:$ORACLE_HOME/rdbms/jlib/dbranalyzer.jar \
oracle.dbreplay.workload.checker.CaptureChecker \
<capture directory> \
jdbc:oracle:thin:@<ip address>:<port>/<db_service_name>

Preprocess the capture files

Preprocessing a captured workload creates all the necessary metadata to help you replay the workload repeatedly in the target system. Because preprocessing capture files is a resource-intensive process, we recommend running this step in a non-production system with the following restrictions:

  • Preprocessing must be done on the same version or patchset as the target RDS for Oracle instance.
  • To run preprocessing on an RDS for Oracle instance, you need EFS integration and an EC2 instance to access the preprocessed files.

Copy the contents of the directory where the capture files are created into the system that runs the preprocessing task, and make a directory object in the database to point to the directory where the capture files are stored.

You can preprocess the captured files using APIs as shown in the following code:

-- Create a directory object in the database pointing to the filesystem directory that contains the capture files on the preprocessing system.

sqlplus "/ as sysdba"
SQL>create directory PREPROC_DIR as '/opt/oracle/preprocess';

-- For RDS for Oracle, use the following command to create the directory. This example assumes that the captured files have been copied to the preproc directory on the EFS filesystem.

BEGIN
  rdsadmin.rdsadmin_util.create_directory_efs(
    p_directory_name => 'PREPROC_DIR',
    p_path_on_efs    => '/rdsefs-fs-<EFS id>/preproc');
END;
/


-- Preprocess the workload.
SQL>
begin
dbms_workload_replay.process_capture('PREPROC_DIR');
end;
/

You can also use Enterprise Manager to preprocess capture files.

Copy processed files to the target platform

After the capture files are preprocessed, you transfer them to the target system where the workload can be replayed repeatedly. To replay the workload in the target RDS for Oracle instance, you need to make the files available to the system (a self-managed instance) where the replay client is hosted as well as to the RDS for Oracle instance.

Copy the directory containing the preprocessed files to the system hosting the replay client

You can copy the entire directory structure to the replay client system using standard tools because you have full access to the self-managed instance hosting the replay client. You can use an Amazon Simple Storage Service (Amazon S3) bucket as a staging area if your replay client system is an EC2 instance. You can also use EFS to store the preprocessed files, which can be accessed directly by the replay client and the target RDS for Oracle instance. However, to avoid a performance bottleneck when the replay client and database sessions access the preprocessed files on EFS, we recommend keeping the preprocessed files in RDS storage during the replay phase.

See the following code:

cd <dir where files are copied in to the replay client system>
-- You should see files as shown below. Note that file names may vary depending on the version

/opt/oracle/preprocess> find . -type f

./cap/wcr_scapture.wmd
./cap/wcr_fcapture.wmd
./cap/wcr_cr.html
./cap/wcr_cr.text
./cap/wcr_cr.xml
./capfiles/inst1/aa/wcr_0nftph0000000.rec
./capfiles/inst1/aa/wcr_0nfubh0000004.rec
./capfiles/inst1/aa/wcr_0nfvyh0000005.rec
./capfiles/inst1/aa/wcr_0nfy0h0000006.rec
./capfiles/inst1/aa/wcr_0nfykh0000007.rec
./capfiles/inst1/aa/wcr_0nfz5h0000009.rec
./capfiles/inst1/aa/wcr_0nfzzh000000a.rec
./capfiles/inst1/aa/wcr_0nfz5h0000008.rec
./capfiles/inst1/aa/wcr_0ng0dh000000c.rec
./capfiles/inst1/aa/wcr_0ng05h000000b.rec
./capfiles/inst1/aa/wcr_0ng2wh000000d.rec
./capfiles/inst1/aa/wcr_0nftrh0000003.rec
./capfiles/inst1/aa/wcr_0nftqh0000001.rec
./capfiles/inst1/aa/wcr_0nftqh0000002.rec
./capture_53_report.html
./pp19.11.0.0.0/wcr_login.pp
./pp19.11.0.0.0/wcr_login.extb
./pp19.11.0.0.0/wcr_seq_data.extb
./pp19.11.0.0.0/wcr_conn_data.extb
./pp19.11.0.0.0/wcr_data.extb
./pp19.11.0.0.0/wcr_dep_graph.extb
./pp19.11.0.0.0/wcr_commits.extb
./pp19.11.0.0.0/wcr_references.extb
./pp19.11.0.0.0/wcr_xa.extb
./pp19.11.0.0.0/wcr_scn_order.extb
./pp19.11.0.0.0/wcr_sqltext.extb
./pp19.11.0.0.0/wcr_schema_info.extb
./pp19.11.0.0.0/capfiles/inst1/aa/wcr_0nftqh0000001.cur
./pp19.11.0.0.0/capfiles/inst1/aa/wcr_0nftqh0000002.cur
./pp19.11.0.0.0/capfiles/inst1/aa/wcr_0nftrh0000003.cur
./pp19.11.0.0.0/capfiles/inst1/aa/wcr_0nfvyh0000005.cur
./pp19.11.0.0.0/wcr_process.wmd
./pp19.11.0.0.0/wcr_calibrate.xml

Copy required files to the RDS for Oracle instance

You need to copy the following files, which are located under the preprocessed directory, to the RDS for Oracle instance to replay the workload:

  • cap/*.wmd
  • pp19.11.0.0.0/* (depending on the version, this could be pp12.1.0.2.0/ or pp12.2.0.2.0/)
  • sts.dmp (the SQL tuning set export taken earlier)

You can either copy these files selectively or copy all the files in the preprocessed directory. Due to the restriction in accessing subdirectories in an RDS for Oracle instance, all files (from subdirectories) must be copied to the parent directory object in the RDS for Oracle instance. However, this restriction does not apply when the DIRECTORY is pointing to an EFS filesystem.

You have multiple options for copying files to an RDS for Oracle instance, such as Amazon S3 integration or DBMS_FILE_TRANSFER using database links. If you have direct connectivity between the source database instance and the target RDS for Oracle instance, you can copy files using DBMS_FILE_TRANSFER (see the sketch following this paragraph). You can also use the EFS integration to make the preprocessed files on EFS available to the RDS instance. However, for best performance, we recommend keeping the preprocessed files in RDS storage during the replay phase.
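The following is a minimal sketch of copying one preprocessed file over a database link with DBMS_FILE_TRANSFER. It assumes a REPLAY_DIR directory object already exists on the target instance; the link name TARGET_RDS_LINK is illustrative, you repeat the call for each required file, and note that DBMS_FILE_TRANSFER requires file sizes to be multiples of 512 bytes:

BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'PREPROC_DIR',
    source_file_name             => 'wcr_process.wmd',
    destination_directory_object => 'REPLAY_DIR',
    destination_file_name        => 'wcr_process.wmd',
    destination_database         => 'TARGET_RDS_LINK');
END;
/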

For this post, we use the Amazon S3 integration to copy the processed files to the target RDS instance.

Create an S3 bucket to stage the files and follow the steps in the Amazon RDS User Guide to set up Amazon S3 integration for the RDS for Oracle instance:

-- Copy all preprocessed files to an S3 bucket (without subdirectories)
-- Set up RDS-S3-Integration between the bucket and the Amazon RDS for Oracle instance.


-- Create a directory object in the Amazon RDS for Oracle instance to store the files for replay.

exec rdsadmin.rdsadmin_util.create_directory('REPLAY_DIR');

-- Download files from the S3 bucket to the target instance

SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
p_bucket_name => '<your bucket>',
p_s3_prefix => '',
p_directory_name => 'REPLAY_DIR')
AS TASK_ID FROM DUAL;

--  List files in REPLAY_DIR

SQL>col filename for a20
select filename, type, round(filesize/1024) file_kb, mtime
from table(rdsadmin.rds_file_util.listdir(p_directory => 'REPLAY_DIR'))
order by filename;

Prepare the target RDS for Oracle instance for replay

To prepare your target instance for replay, you first copy the source database, then you take a snapshot of the instance.

Copy data from the source database to the RDS for Oracle instance

Before a workload can be replayed, the logical state of the application data on the replay system should be similar to that of the capture system when replay begins. To achieve this, you can copy the source database to the RDS for Oracle instance as of the SCN or timestamp of capture start time using tools like Oracle Data Pump, AWS Database Migration Service (AWS DMS) (see Migrating Oracle databases with near-zero downtime using AWS DMS for instructions), or Oracle GoldenGate. If the source database is an RDS for Oracle instance, you can use snapshot restore or point-in-time recovery (PITR) to prepare the target instance from the source database.
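If you use Oracle Data Pump, you can export the application data as of the SCN you noted at the start of the capture so the target matches the capture starting point. The following sketch assumes the schema SH, the CAPTURE_DIR directory object, and an illustrative SCN value:

DECLARE
  v_hdnl NUMBER;
BEGIN
  v_hdnl := DBMS_DATAPUMP.OPEN(
    operation => 'EXPORT',
    job_mode  => 'SCHEMA',
    job_name  => null);
  DBMS_DATAPUMP.ADD_FILE(
    handle    => v_hdnl,
    filename  => 'app_as_of_scn.dmp',
    directory => 'CAPTURE_DIR',
    filetype  => dbms_datapump.ku$_file_type_dump_file);
  DBMS_DATAPUMP.METADATA_FILTER(v_hdnl, 'SCHEMA_EXPR', q'[IN ('SH')]');
  -- Export data as of the SCN noted from v$database before the capture started (illustrative value)
  DBMS_DATAPUMP.SET_PARAMETER(v_hdnl, 'FLASHBACK_SCN', 1234567);
  DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
/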

Take a snapshot of the RDS instance before starting the replay

Take a snapshot of the RDS for Oracle instance after the required data has been copied from the source database; this helps you repeat replaying the same workload. At the end of a replay process, you can restore the RDS for Oracle instance to a new instance and perform repeated testing. Restoring from a snapshot backup also helps you bring the target RDS for Oracle instance into the same state after running SQL Performance Analyzer before starting replay of the captured workload.

Amazon RDS for Oracle currently provides limited ability to manage files in directory objects on the instance. One such limitation is that subdirectories created by the replay aren’t directly accessible and can’t be deleted to free up space. Replaying the workload on a snapshot copy of the original RDS for Oracle instance can help you perform a complete cleanup by deleting the instance that ran the replay.

Run SQL Performance Analyzer on the RDS for Oracle instance

It's a good practice to run SQL Performance Analyzer (SPA) before running database replay to assess the impact on SQL performance in the target platform. However, this is optional and can be skipped if you want to assess the performance of the workload using the database replay feature alone. If you choose to run both, make sure that the target RDS for Oracle instance is restored to a healthy state after running SPA and before replaying the workload, to ensure the state of the database is consistent with the captured workload. See the following code:

-- Import the SQL tuning set that contains the captured SQL statements into the target Amazon RDS for Oracle instance.
-- sts.dmp should exist in the directory object REPLAY_DIR, as explained in the earlier section

--  List files in REPLAY_DIR

SQL>col filename for a20
select filename, type, round(filesize/1024) file_kb, mtime
from table(rdsadmin.rds_file_util.listdir(p_directory => 'REPLAY_DIR'))
order by filename;

-- Import sts.dmp. This API call assumes that the source schema that contains the STS table RAT_STS_TABLE exists in the target as well. You can use REMAP_SCHEMA if you want the table imported into another schema.


DECLARE
  v_hdnl NUMBER;
BEGIN
  v_hdnl := DBMS_DATAPUMP.OPEN( 
    operation => 'IMPORT', 
    job_mode  => 'TABLE', 
    job_name  => null);
  DBMS_DATAPUMP.ADD_FILE( 
    handle    => v_hdnl, 
    filename  => 'sts.dmp', 
    directory => 'REPLAY_DIR', 
    filetype  => dbms_datapump.ku$_file_type_dump_file);
  DBMS_DATAPUMP.ADD_FILE( 
    handle    => v_hdnl, 
    filename  => 'sqlset.log', 
    directory => 'REPLAY_DIR', 
    filetype  => dbms_datapump.ku$_file_type_log_file);
  DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
/      

-- Verify that RAT_STS_TABLE has been imported successfully. You can verify the logfile sqlset.log by pushing it to an S3 bucket using the RDS-S3 integration, or by reading it directly as shown below.
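-- A minimal sketch of reading the Data Pump log directly from the directory object,
-- assuming the sqlset.log file name used in the import above:
select * from table(rdsadmin.rds_file_util.read_text_file(p_directory => 'REPLAY_DIR', p_filename => 'sqlset.log'));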


-- Unpack the SQL tuning set

BEGIN
DBMS_SQLTUNE.unpack_stgtab_sqlset(
sqlset_name => 'SPA-AWS',
sqlset_owner => 'SYS',
replace => TRUE,
staging_table_name => 'RAT_STS_TABLE',
staging_schema_owner => 'SH');
END;
/

-- Load execution plans from the SQL tuning set as SQL plan baselines. SQL plan baselines enable the optimizer to avoid performance regressions by using execution plans with known performance characteristics, preventing the optimizer from picking a new, regressed plan.

set serveroutput on
declare
my_int pls_integer;
begin
my_int := dbms_spm.load_plans_from_sqlset (
sqlset_name => 'SPA-AWS',
sqlset_owner => 'SYS',
fixed => 'YES',
enabled => 'YES'
);
DBMS_OUTPUT.PUT_line(my_int);
end;
/

-- Create the SQL analysis task that is used to run the SQL workload on the test instance.

DECLARE
v_task VARCHAR2(64) := '';
BEGIN
v_task := DBMS_SQLPA.create_analysis_task(
sqlset_name => 'SPA-AWS', task_name=>'SPA-AWS',sqlset_owner=>'SYS');
END;
/

-- Create the baseline trial for the analysis task by converting the execution statistics stored in the SQL tuning set. This trial represents the performance of the workload as captured on the source system and is the reference point for the comparison.

begin
dbms_sqlpa.execute_analysis_task(
task_name => 'SPA-AWS',
execution_type => 'CONVERT SQLSET',
execution_name => 'BASELINE_RAT_CAPTURE',
execution_desc => 'Creating Baseline Trial',
execution_params => dbms_advisor.arglist('sqlset_name', 'SPA-AWS','sqlset_owner','SYS')
);
end;
/

-- Using the test execute method, the SQL tuning set is executed on the test Amazon RDS for Oracle instance. The test runs each of the SQL statements contained in the workload to completion. During execution, SQL Performance Analyzer generates execution plans and computes execution statistics for each SQL statement in the workload. Each SQL statement in the SQL tuning set is executed separately from other SQL statements, without preserving their initial order of execution or concurrency. This is done at least twice for each SQL statement, for as many times as possible until the execution times out (up to a maximum of 10 times). To execute the first trial of the SQL statements, run the following command.

BEGIN
DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'SPA-AWS',
execution_type => 'TEST EXECUTE',
execution_name => 'FIRST_TRIAL_RAT',
execution_desc => 'First trial on new environment',
execution_params => dbms_advisor.arglist('sqlset_name', 'SPA-AWS','sqlset_owner','SYS')
);
END;
/

-- Now that the baseline trial is created and the SQL tuning set has been executed, performance data can be compared between the two trials using SQL Performance Analyzer. To run the comparison, execute the following command on the Amazon RDS for Oracle instance

Begin
DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'SPA-AWS',
execution_type => 'compare performance',
execution_name => 'SPA_Comparison_Capture',
execution_params => dbms_advisor.arglist('comparison_metric', 'ELAPSED_TIME',
'execution_name1', 'BASELINE_RAT_CAPTURE',
'execution_name2', 'FIRST_TRIAL_RAT',
'TIME_LIMIT', 'UNLIMITED')
);
end;
/

-- After the comparison analysis is completed, you can generate a report to identify the SQL statements that have improved, remained unchanged, or regressed due to the system change. The following command generates the performance report.


declare
l_replay_dir varchar2(30) := 'REPLAY_DIR';
l_report clob;
l_output_file utl_file.file_type;
l_output_offset number := 1;
l_output_length number;
begin
l_report := DBMS_SQLPA.REPORT_ANALYSIS_TASK('SPA-AWS', 'HTML','ALL', 'ALL');
-- write the file to disk (may be too large to output in sqlplus)
l_output_length := dbms_lob.getlength(l_report);
l_output_file := utl_file.fopen(l_replay_dir, 'rat_spa.html', 'w');
begin
while (l_output_offset < l_output_length) loop
utl_file.put(l_output_file, dbms_lob.substr(l_report, 32767, l_output_offset));
utl_file.fflush(l_output_file);
l_output_offset := l_output_offset + 32767;
end loop;
utl_file.new_line(l_output_file);
utl_file.fclose(l_output_file);
dbms_output.put_line('output in file ' || l_replay_dir || ':rat_spa.html');
exception
when others then
utl_file.fclose(l_output_file);
raise;
end;
end;
/

This generates the file rat_spa.html in REPLAY_DIR, which you can push to an S3 bucket using the Amazon S3 integration for Amazon RDS for Oracle, as shown in the following sketch. If you are using the EFS integration, this file can be accessed from an EC2 instance.
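The following is a sketch of that upload, using the same S3 integration and placeholder bucket name used earlier:

SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
      p_bucket_name    => '<your bucket>',
      p_prefix         => 'rat_spa.html',
      p_s3_prefix      => '',
      p_directory_name => 'REPLAY_DIR')
   AS TASK_ID FROM DUAL;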

Replay the workload in the RDS for Oracle instance

If you have already run SPA to analyze the performance of the SQL workload, restore the RDS for Oracle instance from the snapshot prior to replaying the workload. This brings the instance back to a logical state consistent with the capture and avoids transaction failures during the replay.

You can initiate replay on the target RDS for Oracle instance after completing the following prerequisites.

Prepare a self-managed system for hosting the replay client

The replay client is a multi-threaded program (an executable named wrc located in the $ORACLE_HOME/bin directory). You can use any system with Oracle client or server software installed and connectivity to the target RDS for Oracle instance as the replay client system. The version of the replay client should be the same as the version of the database used to preprocess the capture files and the version of the target RDS for Oracle instance.

You can use an Oracle Instant Client installation (Basic and Tools packages) on an EC2 instance as a replay client if you don’t have an existing system with the required software installed.

You need to make the preprocessed files available to the replay client, as explained in the previous section.

The optimal number of database replay clients required to replay the workload is linked to the workload you captured. Fortunately, you can know in advance how many replay clients you may need using the calibrate mode of the wrc workload replay client (detailed steps are shown in the next section).

The following are a few examples of recommendations made by wrc in calibrate mode.

Recommendation 1

Consider using at least 1 client divided among 1 CPU(s)

You will need at least 3 MB of memory per client process.

This workload needs one replay client with a total of 1 CPU and 3 MB of memory to efficiently replay the workload.

Recommendation 2

Consider using at least 3 clients divided among 1 CPU(s)

You will need at least 352 MB of memory per client process.

If your machine(s) cannot match that number, consider using more clients.

This workload needs three replay clients with a total of 1 CPU and 1,056 MB (3×352 MB) of memory to efficiently replay the workload.

Recommendation 3

Consider using at least 14 clients divided among 4 CPU(s).

You will need at least 153 MB of memory per client process. If your machine(s) cannot match that number, consider using more clients.

This workload needs 14 replay clients with a total of 4 CPUs and 2,142 MB (14×153 MB) of memory to efficiently replay the workload. You can split the replay clients across multiple systems to meet these resource requirements.

During the first replay attempt, you may need to monitor resource utilization on your replay client systems so you can adjust resource allocations for subsequent replay attempts and ensure the replay activity isn't throttled by resource constraints on the replay client systems. If you're using an EC2 instance to host your replay clients, you can use Amazon CloudWatch to monitor your instances to make sure that you're not exceeding the capacity of the instance or storage in terms of CPU, memory, IOPS, or throughput.

Prepare the target RDS for Oracle instance with the required processed files copied to a directory object

Prepare the target RDS for Oracle instance with a copy of the source database to make it logically consistent for replaying the workload and take a snapshot of the instance. You need to copy the required preprocessed files to a directory object, as explained in the previous section. See the following code:

-- Connect to the target Amazon RDS for Oracle instance as admin user to initialize and prepare replay.
-- Disable the advisor jobs so that they won't interfere with the workload replay and cause divergence.

BEGIN DBMS_AUTO_TASK_ADMIN.DISABLE( 
client_name => 'sql tuning advisor', 
operation => NULL, 
window_name => NULL); 
END; 
/

-- Initialize the replay

SQL>
begin
dbms_workload_replay.initialize_replay(
replay_name => 'replay1',
replay_dir => 'REPLAY_DIR'
);
end;
/

-- Prepare for the replay

begin
dbms_workload_replay.prepare_replay(synchronization => true);
end;
/

-- Invoke the replay client in calibrate mode to understand the number of replay clients needed to replay the workload. Set environment variables to point to the ORACLE_HOME containing the replay client

$>wrc admin_user/<password>@<rds_instance> mode=calibrate replaydir=<directory where processed files are located>  debug=on connection_override=true

-- From the output of the above command, you can estimate the number of wrc clients required to replay the workload.

-- Start replay client(s) (more than one if calibrate mode suggests so) in replay mode.
$>wrc admin_user/<password>@<rds_instance> mode=replay replaydir=<directory where processed files are located> debug=on connection_override=true

-- This displays a message like "Wait for the replay to start"

-- Connect to the Amazon RDS for Oracle instance as the admin user, create an AWR snapshot, and start the replay.
SQL>exec dbms_workload_repository.create_snapshot
SQL>
begin
dbms_workload_replay.start_replay;
end;
/



-- The replay client session (wrc) shows output like the following once the replay is initiated and finished.

Wait for the replay to start (03:28:37)
Replay client 1 started (03:29:33)
Replay client 1 finished (03:34:32)

-- Collect another AWR snapshot as soon as the replay is over.
SQL>exec dbms_workload_repository.create_snapshot

-- Identify snap_ids for the AWR snapshots created during the replay period.
SQL> select snap_id,begin_interval_time from dba_hist_snapshot order by snap_id;

Generate a workload replay report

You can assess performance by generating database replay reports, which give insight into database utilization, divergence level, top waits, top SQL statements, and other statistical information.

To generate the workload replay report, you can run the following commands on the RDS for Oracle instance:

select id, start_time, end_time, error_message from dba_workload_replays order by start_time;

SQL>
declare
l_replay_id number := <id from above query output>;
l_replay_dir varchar2(30) := 'REPLAY_DIR';
l_report clob;
l_output_file utl_file.file_type;
l_output_offset number := 1;
l_output_length number;
begin
l_report := dbms_workload_replay.report(
replay_id => l_replay_id,
format => 'HTML'
);
-- write the file to disk (may be too large to output in sqlplus)
l_output_length := dbms_lob.getlength(l_report);
l_output_file := utl_file.fopen(l_replay_dir, 'replay_'||l_replay_id||'_report.html', 'w');
begin
while (l_output_offset < l_output_length) loop
utl_file.put(l_output_file, dbms_lob.substr(l_report, 32767, l_output_offset));
utl_file.fflush(l_output_file);
l_output_offset := l_output_offset + 32767;
end loop;
utl_file.new_line(l_output_file);
utl_file.fclose(l_output_file);
dbms_output.put_line('output in file ' || l_replay_dir || ':replay_' || l_replay_id || '_report.html');
exception
when others then
utl_file.fclose(l_output_file);
raise;
end;
end;
/

-- You can read the output HTML file directly from REPLAY_DIR or push it to an S3 bucket using the RDS-S3 integration

select * from table(rdsadmin.rds_file_util.read_text_file(p_directory => 'REPLAY_DIR', p_filename =>
'replay_1_report.html'));

The following is a sample report.

Use an AWR comparison report for capture vs. replay period

Amazon RDS for Oracle doesn’t support importing AWR snapshots for the capture period into the target instance, so you can’t use compare period reports of the database replay to compare the performance of the source and target systems. However, you can export AWR snapshots from an RDS for Oracle instance, which you can import to a self-managed instance. In this section, we export AWR snapshots corresponding to a replay period from the RDS for Oracle instance, import them to the source database where the workload was captured from, and generate an AWR comparison report to compare the performance of the source and target systems. You can also choose a third self-managed instance to import AWR snapshots from both source and target for comparison. See the following code:

-- Export AWR snapshots corresponding to the replay period using awrextr.sql
-- Since you cannot access the ORACLE_HOME of an Amazon RDS for Oracle instance, you can copy awrextr.sql from any self-managed Oracle installation.
-- Connect to the Amazon RDS for Oracle instance and run awrextr.sql
SQL>@/tmp/awrextr.sql

Enter the DBID, number of days, snap_ids, directory, and file name; this generates the dump file (for example, rds_awr_exp.dmp) in the DIRECTORY object.

-- Transfer the dumpfile to the instance where AWR will be imported to, using RDS-S3 integration or dbms_file_transfer.

SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
      p_bucket_name    =>  '<your bucket>', 
      p_prefix         =>  'rds_awr_exp.dmp', 
      p_s3_prefix      =>  '', 
      p_directory_name =>  'REPLAY_DIR') 
   AS TASK_ID FROM DUAL;
 

-- Assuming you will be importing this to the source database where the workload was captured, make the dump file available in a directory object and run awrload.sql to import the snapshots into the database.

SQL>@$ORACLE_HOME/rdbms/admin/awrload.sql

Input details like dumpfile name and DIRECTORY name.

-- Finally run the AWR comparison report
SQL>@$ORACLE_HOME/rdbms/admin/awrddrpi.sql

Input the DBIDs and snap_ids for both the source and target systems to generate a comparison report.

The following screenshot shows a few sections from an AWR compare period report.

Considerations when using Oracle RAT with Amazon RDS for Oracle

  • You need to use an EFS file system along with an EC2 instance to capture the workload from an RDS for Oracle instance or to preprocess captured files on one.
  • Assess the performance impact of using EFS as the target for storing capture files when capturing a workload from an RDS for Oracle instance. For heavy workloads that are performance sensitive, capturing to an EFS directory can impact the performance of the workload. Refer to Amazon EFS performance to learn about the performance characteristics of EFS file system types.
  • To avoid a performance bottleneck when the replay client and database sessions access the preprocessed files on EFS, we recommend moving the preprocessed files to RDS storage during the replay phase.
  • You can run the replay client (${ORACLE_HOME}/bin/wrc) on a self-managed instance with an Oracle database software client or server installation of the same version as the target RDS for Oracle instance.
  • We recommend replaying the workload on a test instance created from a snapshot backup, which you can delete at the end of the replay activity.
  • You can export the AWR snapshots from the RDS for Oracle instance for the replay period and import them to the self-managed instance or RDS Custom for Oracle instance and see the capture vs. replay performance using an AWR compare period report.

Clean up

When you're done using this solution, you can delete the capture files from the source self-managed instance and the preprocessed files from the preprocessing system and replay client system. You should also stop or terminate any EC2 instances that you provisioned exclusively for hosting components of the Database Replay feature, such as the replay client. If you used an S3 bucket or EFS to stage or transfer files, delete those resources if they're no longer needed.

You can’t delete subdirectories created by the replay process in an RDS for Oracle instance. Instead, delete the entire RDS for Oracle instance that was created from the snapshot backup.

Summary

As enterprises move their workloads to AWS, many choose Amazon RDS for Oracle to deploy and run their Oracle database workloads. Oracle RAT can help you evaluate when to migrate from a self-managed instance to a fully managed database environment such as Amazon RDS for Oracle, and also assess the impact of various changes by testing them in lower environments before applying the changes to RDS for Oracle production instances.

This post described the steps for using Oracle RAT on Amazon RDS for Oracle. Refer to Use Oracle Real Application Testing features with Amazon EC2 for information about using Oracle RAT for migrating database workloads to self-managed instances on Amazon EC2. If you have any comments or questions, please leave them in the comments section.


About the Authors

John Lonappan is a Senior Database Specialist Consultant / Solutions Architect at Amazon Web Services (AWS) with a focus on relational databases. Prior to AWS, John worked as a database architect for large data center providers across the globe. Outside of work, he is passionate about long drives, EV conversion, playing chess, and traveling.

Saumya Mula is a Database Consultant with the Professional Services team at Amazon Web Services. She works as a database migration consultant to provide Amazon customers with technical guidance to migrate their on-premises databases to AWS.

Jobin Joseph is a Senior Database Specialist Solution Architect based in Dubai.
With a focus on relational database engines, he assists customers in migrating and modernizing their database workloads to AWS. He is an Oracle Certified Master with 20 years of experience with Oracle databases.


Audit History

Last reviewed and updated in February 2024 by Jobin Joseph | Senior Database Specialist Solution Architect