In this section, we briefly discuss the storage layout on Amazon EC2 for Oracle databases that use LVM for storage management. We then discuss how Oracle database storage modifications, such as increasing the provisioned storage or changing the provisioned IOPS, were performed before elastic volumes were introduced, and we cover the associated challenges. Finally, we use an example to demonstrate how elastic volumes address some of these challenges.
For larger databases that require multiple EBS volumes for database storage, you can use an LVM to manage the storage. In this scenario, you create volume groups and add the EBS volumes to the volume groups. Then you create logical volumes from the volume groups and create your file systems on top of the logical volumes. The following diagram shows this database storage layout using LVM.
To increase the storage or IOPS provisioned for systems using multiple EBS volumes and LVM for storage management, you create new EBS volumes, attach them to the instance, initialize them as LVM physical volumes, and add them to the volume group, as sketched following.
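A minimal sketch of that pre-elastic-volumes workflow might look like the following. The volume and instance IDs, Availability Zone, device name, and volume group name (data-vg) are placeholders, not values from this walkthrough.
# Create and attach another EBS volume (all IDs and names are placeholders)
aws ec2 create-volume --region us-east-1 --availability-zone us-east-1a \
    --size 100 --volume-type io1 --iops 3000
aws ec2 attach-volume --region us-east-1 --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/sdi
# Fold the new device into the existing volume group
pvcreate /dev/sdi                 # initialize the new device for LVM
vgextend data-vg /dev/sdi         # add it to the volume group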
Although this increases the total storage capacity and IOPS available due to the addition of the new EBS volume, the data is not evenly distributed across your EBS volumes. This unevenness can result in hot spots. You have to manually rebalance or redistribute your data (Oracle data files) across the EBS volumes.
With elastic volumes, when you modify the size or IOPS of an EBS volume, the data is automatically spread across multiple backend devices. This approach helps avoid hot spots and ensures that you get the provisioned IOPS.
In this section, we demonstrate how to increase the storage provisioned for an Oracle database that uses LVM for storage management, without any downtime. For this demonstration, we use an Oracle 12c database running on Amazon Linux that uses LVM for storage management. Two EBS volumes, each with 100 GiB of storage, are presented as physical volumes to the LVM. From these volumes, a logical volume of 200 GiB is created for Oracle database storage. During this demonstration, we increase the storage provisioned to the database from 200 GiB to 400 GiB without downtime.
To demonstrate that the resize is performed without any database downtime, we created a database stored procedure called evtestproc. This procedure inserts records into a table called evtesttab at 10-second intervals. We run this procedure while we perform the resize operation. We can confirm that the resize was done without any database downtime by verifying that records were inserted into the evtesttab table at 10-second intervals without any gap. For the definition of the evtestproc stored procedure, see part 1 of this blog series.
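For orientation, a rough sketch of what such a procedure might look like follows. This is not the exact definition from part 1: the single DATE column of evtesttab and the bounded loop are assumptions of the sketch, and DBMS_LOCK.SLEEP requires EXECUTE privilege on DBMS_LOCK.
CREATE OR REPLACE PROCEDURE evtestproc AS
BEGIN
  FOR i IN 1 .. 360 LOOP            -- about an hour of insert activity
    INSERT INTO evtesttab VALUES (SYSDATE);
    COMMIT;
    DBMS_LOCK.SLEEP(10);            -- pause 10 seconds between inserts
  END LOOP;
END;
/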
Step 1: Setting up the database storage
From the AWS Management Console, verify the size of the EBS volumes. Currently, each volume is 100 GiB, as seen in the following screenshot. You can also verify the attached block devices at the OS level using the lsblk command.
[ec2-user@ip-172-31-55-62 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 20G 0 disk
└─xvda1 202:1 0 20G 0 part /
xvdf 202:80 0 65G 0 disk
└─dvg-lvol0 253:2 0 65G 0 lvm /u01
xvdg 202:96 0 100G 0 disk
xvdh 202:112 0 100G 0 disk
To continue, we need the lvm2 package. You can verify whether it is already installed using the following command.
[root@ip-172-31-55-62 ~]# rpm -qa |grep -i lvm
lvm2-libs-2.02.115-3.26.amzn1.x86_64
lvm2-2.02.115-3.26.amzn1.x86_64
If lvm2 is not installed, you can install it using the yum command as shown following.
[root@ip-172-31-55-62 ~]# yum install lvm2*
Now we create the physical volumes using LVM.
[root@ip-172-31-55-62 ec2-user]# pvcreate /dev/xvdg
Physical volume "/dev/sdg" successfully created
[root@ip-172-31-55-62 ec2-user]# pvcreate /dev/xvdh
Physical volume "/dev/sdh" successfully created
Note: Be aware of some device name considerations. In this example, /dev/xvdg and /dev/sdg refer to the same device, and /dev/xvdg is a symbolic link for /dev/sdg. Depending on the block device driver of the kernel, your device might be attached with a different name than you specified. For more information, see the Amazon EC2 User Guide for Linux Instances.
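An optional check (not part of the original steps) is to list both names and see how they relate on your particular instance:
# Show which of the two names is the symbolic link on this instance
ls -l /dev/sdg /dev/xvdg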
Then we create the volume group using LVM.
[root@ip-172-31-55-62 ec2-user]# vgcreate EV-Datafile-VG /dev/sdg /dev/sdh
Volume group "EV-Datafile-VG" successfully created
Next, we create the logical volume using LVM.
[root@ip-172-31-55-62 ec2-user]# lvcreate -l +100%FREE -n EV-Data-LV EV-Datafile-VG
Logical volume "EV-Data-LV" created.
We can verify the configuration using the pvdisplay, vgdisplay, and lvdisplay commands as shown.
[root@ip-172-31-55-62 ec2-user]# pvdisplay
--- Physical volume ---
PV Name /dev/sdg
VG Name EV-Datafile-VG
PV Size 100.00 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 25599
Free PE 0
Allocated PE 25599
PV UUID NTnpgJ-ZwLR-Dqx9-J7cY-zCRH-EbTz-cEv3gt
--- Physical volume ---
PV Name /dev/sdh
VG Name EV-Datafile-VG
PV Size 100.00 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 25599
Free PE 0
Allocated PE 25599
PV UUID Sq92JP-hLO9-YdfQ-0a1c-Iew1-fugN-P3JqQ3
[root@ip-172-31-55-62 ec2-user]# vgdisplay
--- Volume group ---
VG Name EV-Datafile-VG
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 199.99 GiB
PE Size 4.00 MiB
Total PE 51198
Alloc PE / Size 0 / 0
Free PE / Size 51198 / 199.99 GiB
VG UUID GJw617-s5hP-eVnp-7MCv-VHNS-g0vs-wEjT3u
[root@ip-172-31-55-62 ~]# lvdisplay EV-Datafile-VG
--- Logical volume ---
LV Path /dev/EV-Datafile-VG/EV-Data-LV
LV Name EV-Data-LV
VG Name EV-Datafile-VG
LV UUID KhYS7n-fj5i-oEU3-2EPD-zSi6-2Tta-2XqPwe
LV Write Access read/write
LV Creation host, time ip-172-31-55-62, 2018-06-13 18:22:09 +0000
LV Status available
# open 0
LV Size 199.99 GiB
Current LE 51198
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
For storing the data files, we create a directory called customdf as shown following.
[root@ip-172-31-55-62 ~]# mkdir -p /u01/app/oracle/oradata/cdb1/pdb1/customdf
We create a file system on the logical volume and mount it at /u01/app/oracle/oradata/cdb1/pdb1/customdf/ as shown following.
[root@ip-172-31-55-62 ec2-user]# mkfs.ext4 /dev/EV-Datafile-VG/EV-Data-LV
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 52426752 4k blocks and 13107200 inodes
Filesystem UUID: 028021c7-090e-44e3-bad7-f38733aca2c7
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@ip-172-31-55-62 ec2-user]# mount /dev/EV-Datafile-VG/EV-Data-LV /u01/app/oracle/oradata/cdb1/pdb1/customdf/
Note: Both ext4 and XFS support online resizing, so the file system can be grown while it is mounted. For this example, we use ext4.
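If you want the mount to survive a reboot, you can also add an /etc/fstab entry. This step is not part of the original walkthrough, and the mount options shown are just one reasonable choice.
# Optionally persist the mount across reboots (options are one reasonable choice)
echo '/dev/EV-Datafile-VG/EV-Data-LV /u01/app/oracle/oradata/cdb1/pdb1/customdf ext4 defaults,nofail 0 2' >> /etc/fstab
mount -a    # re-reads /etc/fstab; reports an error if the entry is malformed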
Now, we create a bigfile tablespace called EVTestTablespace with a 180-GB data file called EVTESTDF.dbf, using SQL*Plus as shown following.
SQL> CREATE BIGFILE TABLESPACE EVTestTablespace DATAFILE '/u01/app/oracle/oradata/cdb1/pdb1/customdf/EVTESTDF.dbf' SIZE 180G;
Tablespace created.
You can verify the disk utilization at the OS level using the df command as shown following.
[oracle@ip-172-31-55-62 ec2-user]$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.9G 76K 7.9G 1% /dev
tmpfs 7.9G 0 7.9G 0% /dev/shm
/dev/xvda1 20G 1.3G 19G 7% /
/dev/mapper/dvg-lvol0 64G 17G 45G 28% /u01
/dev/mapper/EV--Datafile--VG-EV--Data--LV 197G 181G 6.7G 97% /u01/app/oracle/oradata/cdb1/pdb1/customdf
Now, we verify the size and data file location of the newly created tablespace as shown in the following screenshot.
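Equivalently, a data dictionary query (our own formulation, not from the original post) shows the data file and size from SQL*Plus; the uppercase name reflects Oracle's default handling of unquoted identifiers.
SQL> SELECT file_name, bytes/1024/1024/1024 AS size_gb FROM dba_data_files WHERE tablespace_name = 'EVTESTTABLESPACE';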

Step 2: Setting up the stored procedure
To insert records into the evtesttab table while we increase the storage provisioned to the database, start the evtestproc stored procedure.
begin
  evtestproc(); -- PL/SQL procedure to insert records into the EVTESTTAB table at 10-second intervals
end;
/
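Because the block does not return until the procedure finishes, one way (our own suggestion) to launch it without tying up a session is to run it through SQL*Plus in the background. The credentials and connect string below are placeholders.
# Credentials and connect string are placeholders for this sketch
nohup sqlplus -s evtest/evtest@//localhost:1521/pdb1 <<'EOF' &
begin
  evtestproc();
end;
/
EOF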
We query the table from SQL Workbench to verify that records are being inserted.

Step 3: Resizing the EBS volume
We now increase the size of each EBS volume from 100 GiB to 200 GiB using the AWS CLI.
$ aws ec2 modify-volume --region us-east-1 --volume-id vol-0a6cc1b8519d43a69 --size 200
{
    "VolumeModification": {
        "TargetSize": 200,
        "TargetVolumeType": "io1",
        "ModificationState": "modifying",
        "VolumeId": "vol-0a6cc1b8519d43a69",
        "TargetIops": 3000,
        "StartTime": "2018-06-13T21:48:45.000Z",
        "Progress": 0,
        "OriginalVolumeType": "io1",
        "OriginalIops": 3000,
        "OriginalSize": 100
    }
}
$ aws ec2 modify-volume --region us-east-1 --volume-id vol-04d8f75ea8f27c9ee --size 200
{
    "VolumeModification": {
        "TargetSize": 200,
        "TargetVolumeType": "io1",
        "ModificationState": "modifying",
        "VolumeId": "vol-04d8f75ea8f27c9ee",
        "TargetIops": 3000,
        "StartTime": "2018-06-13T21:51:41.000Z",
        "Progress": 0,
        "OriginalVolumeType": "io1",
        "OriginalIops": 3000,
        "OriginalSize": 100
    }
}
After a short while, we check the status of the volume modification requests using the AWS CLI. You can see that the volumes have entered the optimizing state. The AWS Management Console shows the new size, as shown in the following screenshot.
$ aws ec2 describe-volumes-modifications --region us-east-1 --volume-id vol-0a6cc1b8519d43a69
{
    "VolumesModifications": [
        {
            "TargetSize": 200,
            "TargetVolumeType": "io1",
            "ModificationState": "optimizing",
            "VolumeId": "vol-0a6cc1b8519d43a69",
            "TargetIops": 3000,
            "StartTime": "2018-06-13T21:48:45.000Z",
            "Progress": 0,
            "OriginalVolumeType": "io1",
            "OriginalIops": 3000,
            "OriginalSize": 100
        }
    ]
}
$ aws ec2 describe-volumes-modifications --region us-east-1 --volume-id vol-04d8f75ea8f27c9ee
{
    "VolumesModifications": [
        {
            "TargetSize": 200,
            "TargetVolumeType": "io1",
            "ModificationState": "optimizing",
            "VolumeId": "vol-04d8f75ea8f27c9ee",
            "TargetIops": 3000,
            "StartTime": "2018-06-13T21:51:41.000Z",
            "Progress": 0,
            "OriginalVolumeType": "io1",
            "OriginalIops": 3000,
            "OriginalSize": 100
        }
    ]
}
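Rather than rerunning the describe call by hand, you can poll the modification state from a shell. The JMESPath expression below follows the output structure shown above; the new size is usable once the volume leaves the modifying state.
# Poll until the volume modification leaves the "modifying" state
while [ "$(aws ec2 describe-volumes-modifications --region us-east-1 \
    --volume-id vol-0a6cc1b8519d43a69 \
    --query 'VolumesModifications[0].ModificationState' \
    --output text)" = "modifying" ]; do
    sleep 10
done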


Step 4: Resizing the physical volumes, logical volumes, and file system
To resize the physical volumes, use the pvresize command.
[root@ip-172-31-55-62 ~]# pvresize /dev/sdg
Physical volume "/dev/sdg" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
[root@ip-172-31-55-62 ~]# pvresize /dev/sdh
Physical volume "/dev/sdh" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
Verify that the new size is reflected using the pvdisplay command.
[root@ip-172-31-55-62 ~]# pvdisplay /dev/sdg
--- Physical volume ---
PV Name /dev/sdg
VG Name EV-Datafile-VG
PV Size 200.00 GiB / not usable 3.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 51199
Free PE 25600
Allocated PE 25599
PV UUID NTnpgJ-ZwLR-Dqx9-J7cY-zCRH-EbTz-cEv3gt
[root@ip-172-31-55-62 ~]# pvdisplay /dev/sdh
--- Physical volume ---
PV Name /dev/sdh
VG Name EV-Datafile-VG
PV Size 200.00 GiB / not usable 3.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 51199
Free PE 25600
Allocated PE 25599
PV UUID Sq92JP-hLO9-YdfQ-0a1c-Iew1-fugN-P3JqQ3
We now resize the logical volume using the lvresize command. The -l +100%FREE option of the lvresize command allocates all the newly added space to the logical volume.
[root@ip-172-31-55-62 ~]# lvresize -l +100%FREE /dev/EV-Datafile-VG/EV-Data-LV
Size of logical volume EV-Datafile-VG/EV-Data-LV changed from 199.99 GiB (51198 extents) to 399.99 GiB (102398 extents).
Logical volume EV-Data-LV successfully resized
Verify that the new volume size now appears using the lvdisplay command.
[root@ip-172-31-55-62 ~]# lvdisplay EV-Datafile-VG
--- Logical volume ---
LV Path /dev/EV-Datafile-VG/EV-Data-LV
LV Name EV-Data-LV
VG Name EV-Datafile-VG
LV UUID KhYS7n-fj5i-oEU3-2EPD-zSi6-2Tta-2XqPwe
LV Write Access read/write
LV Creation host, time ip-172-31-55-62, 2018-06-13 18:22:09 +0000
LV Status available
# open 1
LV Size 399.99 GiB
Current LE 102398
Segments 3
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
Finally, we resize the file system (ext4) using the resize2fs command.
[root@ip-172-31-55-62 ~]# resize2fs /dev/EV-Datafile-VG/EV-Data-LV
resize2fs 1.42.12 (29-Aug-2014)
Filesystem at /dev/EV-Datafile-VG/EV-Data-LV is mounted on /u01/app/oracle/oradata/cdb1/pdb1/customdf; on-line resizing required
old_desc_blocks = 13, new_desc_blocks = 25
The filesystem on /dev/EV-Datafile-VG/EV-Data-LV is now 104855552 (4k) blocks long.
Verify that the file system has been resized using the df command.
[root@ip-172-31-55-62 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.9G 76K 7.9G 1% /dev
tmpfs 7.9G 0 7.9G 0% /dev/shm
/dev/xvda1 20G 1.3G 19G 7% /
/dev/mapper/dvg-lvol0 64G 17G 45G 28% /u01
/dev/mapper/EV--Datafile--VG-EV--Data--LV 394G 361G 16G 96% /u01/app/oracle/oradata/cdb1/pdb1/customdf
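As noted in step 1, XFS also supports online growth. Had the file system been created as XFS instead of ext4, this last step would use xfs_growfs on the mount point rather than resize2fs; the command below is hypothetical for this setup.
# Hypothetical XFS equivalent of the resize2fs step above
xfs_growfs /u01/app/oracle/oradata/cdb1/pdb1/customdf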
Step 5: Increasing the database storage
We now increase the database storage available by resizing the bigfile tablespace to 360 GB, using SQL*Plus as shown following.
SQL> ALTER TABLESPACE EVTestTablespace RESIZE 360G;
Tablespace altered.
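As an alternative (not used in this walkthrough), a bigfile tablespace can be left to grow on demand instead of being resized explicitly, for example:
SQL> ALTER TABLESPACE EVTestTablespace AUTOEXTEND ON NEXT 1G MAXSIZE 360G;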
The database storage is now increased to 360 GB, as shown in the following screenshot.

Step 6: Verifying that there was no database downtime while the storage was resized
To verify that the PL/SQL procedure execution was uninterrupted, we query the evtesttab table. This query also verifies that records were inserted at 10-second intervals without any gaps, as shown in the following screenshot.

In this example, we demonstrated how to increase the storage allocated to an Oracle database that uses LVM for storage management, without any impact on database availability. Using elastic volumes, you can also change the provisioned IOPS or the EBS volume type (for example, from io1 to gp2) without any impact on database availability or performance.
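For example, a volume type change uses the same modify-volume call shown earlier (volume ID reused from this walkthrough):
aws ec2 modify-volume --region us-east-1 --volume-id vol-0a6cc1b8519d43a69 --volume-type gp2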
In the next post, we discuss the storage layout on Amazon EC2 for Oracle databases that use Oracle Automatic Storage Management (Oracle ASM). We also demonstrate how you can scale the database storage without impacting database availability.
About the Authors
Ashok Shanmuga Sundaram is a partner solutions architect with the Global System Integrator (GSI) team at Amazon Web Services. He works with the GSIs to provide guidance on enterprise cloud adoption, migration and strategy.
Ejaz Sayyed is a partner solutions architect with the Global System Integrator (GSI) team at Amazon Web Services. His focus areas include AWS database services and database and data warehouse migrations on AWS.
Nael Haddad is a senior product manager on the Amazon Elastic Block Store (Amazon EBS) team at Amazon Web Services. He drives new features associated with multiple EBS product offerings.