Increase the /u01 file system in an Exadata Virtual Machine (OVM)
In this article, I am going to explain how to increase the default 20 GB /u01 partition on an Exadata X7 machine to a size sufficient for your requirements.

For this, we use the storage capacity of the DB (compute) node. On X7 we had around 3 TB of space available on our compute node:
/dev/mapper/VGExaDb 3.2T 260G 3.0T 8% /EXAVMIMAGES
The current mount points and file system sizes are as below:
[root@dbmpclu3adm02 ~]# df -kh
Filesystem                    Size  Used Avail Use% Mounted on
devtmpfs                       12G     0   12G   0% /dev
tmpfs                          24G     0   24G   0% /dev/shm
tmpfs                          12G  1.4M   12G   1% /run
tmpfs                          12G     0   12G   0% /sys/fs/cgroup
/dev/mapper/VGExaDb-LVDbSys1   24G  6.0G   17G  27% /
/dev/xvda1                    488M   52M  411M  12% /boot
/dev/mapper/VGExaDb-LVDbOra1   20G  4.1G   15G  22% /u01
/dev/xvdc                      50G   12G   36G  24% /u01/app/oracle/product/12.1.0.2/dbhome_1
/dev/xvdb                      50G   10G   37G  22% /u01/app/19.0.0.0/grid
tmpfs                         2.4G     0  2.4G   0% /run/user/0
In our case, we will grow /u01 to just over 300 GB.
On the dom0 (physical) server, go to the following location. There will be a folder for every VM hosted on the server. Choose the VM whose /u01 you want to increase and go to its folder.
[root@dbmpdbadm03 ~]# cd /EXAVMIMAGES/GuestImages/dbmpclu3adm02.domain/

List the contents of the folder to check whether any physical volume is already present:

[root@dbmpdbadm03 dbmpclu3adm02.domain]# ls -ltr
total 67951622
-rw-r----- 1 root root        2353 Aug 21 10:50 vm.cfg
-rw-r----- 1 root root 53687091200 Sep  9 03:01 db12.1.0.2.190716-3.img
-rw-r----- 1 root root 26843545600 Sep  9 03:01 System.img
-rw-r----- 1 root root 66571993088 Sep  9 03:01 pv1_vgexadb.img
-rw-r----- 1 root root 53687091200 Sep  9 03:01 grid19.4.0.0.190716.img

We can see that a ~62 GB physical volume (pv1_vgexadb.img) is already present, which is exposed to the VM as the virtual disk drive xvdd.
Create a new physical volume image of 300 GB on the dom0 using qemu-img. qemu-img lets you create, convert, and modify hard drive images; we will expose this one to the corresponding VM as a virtual drive.
[root@dbmpdbadm03 dbmp]# qemu-img create pv2_vgexadb.img 300G
Formatting 'pv2_vgexadb.img', fmt=raw size=322122547200
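If you want to double-check the image before exposing it to the VM, qemu-img info reports its format and virtual size. The output below is an illustrative sketch, not captured from this system:

[root@dbmpdbadm03 dbmp]# qemu-img info pv2_vgexadb.img
image: pv2_vgexadb.img
file format: raw
virtual size: 300G (322122547200 bytes)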
Check that the physical volume image has been created:
[root@dbmpdbadm03 dbmpclu3adm02.domain]# ls -ltr
total 67951622
-rw-r--r-- 1 root root 322122547200 Sep  9 03:02 pv2_vgexadb.img
-rw-r----- 1 root root  53687091200 Sep  9 03:02 db12.1.0.2.190716-3.img
-rw-r----- 1 root root  53687091200 Sep  9 03:03 grid19.4.0.0.190716.img
-rw-r----- 1 root root  66571993088 Sep  9 03:03 pv1_vgexadb.img
-rw-r----- 1 root root  26843545600 Sep  9 03:03 System.img

The 300 GB PV has been created successfully.
Now, attach the created PV to the VM as a block device using xm block-attach.
[root@dbmpdbadm03 ]# xm block-attach dbmpclu3adm02.domain file:/EXAVMIMAGES/GuestImages/dbmpclu3adm02.domain/pv2_vgexadb.img /dev/xvde w
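To confirm the attach took effect, you can optionally list the domain's block devices from the dom0 (output omitted here):

[root@dbmpdbadm03 ]# xm block-list dbmpclu3adm02.domain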
Get the existing UUID from vm.cfg to locate the correct repository folder for linking the created PV to the virtual machine:
[root@dbmpdbadm03 dbmpclu3adm02.domain]# grep ^uuid vm.cfg
uuid = '9ba30a0efb0c4e5b87c45453fbb960f4'
Generate a new UUID for the newly created physical disk:
[root@dbmpdbadm03 dbmpclu3adm02.domain]# uuidgen | tr -d '-'
7097742cd77e4929b2ec1d441b1c1cf4
Go to /OVS/Repositories/ and create a soft link between the generated UUID and the PV.
[root@dbmpdbadm03 dbmpclu3adm02.domain]# cd /OVS/Repositories/
[root@dbmpdbadm03 Repositories]# ls -ltr
total 24
drwxr----- 3 root root 4096 Aug 20 14:58 4ed1c7993ad94314af44de82f77be2cc
drwxr----- 3 root root 4096 Aug 21 10:15 183eddaf4557487b8f1d8880be8689ec
drwxr----- 3 root root 4096 Aug 21 10:50 9ba30a0efb0c4e5b87c45453fbb960f4
[root@dbmpdbadm03 Repositories]# cd 9ba30a0efb0c4e5b87c45453fbb960f4
[root@dbmpdbadm03 9ba30a0efb0c4e5b87c45453fbb960f4]# ls -ltr
total 4
drwxr----- 2 root root 4096 Aug 21 10:49 VirtualDisks
lrwxrwxrwx 1 root root   58 Aug 21 10:50 vm.cfg -> /EXAVMIMAGES/GuestImages/dbmpclu3adm02.domain/vm.cfg

Create the soft link:

[root@dbmpdbadm03 VirtualDisks]# ln -s /EXAVMIMAGES/GuestImages/dbmpclu3adm02.domain/pv2_vgexadb.img /OVS/Repositories/9ba30a0efb0c4e5b87c45453fbb960f4/VirtualDisks/7097742cd77e4929b2ec1d441b1c1cf4.img

Check the link:

[root@dbmpdbadm03 VirtualDisks]# ls -ltr
total 20
lrwxrwxrwx 1 root root 62 Aug 21 10:46 6e70061b7b2a46ce94856f67f0fd6c8f.img -> /EXAVMIMAGES/GuestImages/dbmpclu3adm02.domain/System.img
lrwxrwxrwx 1 root root 75 Aug 21 10:47 62d56c28ff024969b9f6de6034b5dd2e.img -> /EXAVMIMAGES/GuestImages/dbmpclu3adm02.domain/grid19.4.0.0.190716.img
lrwxrwxrwx 1 root root 75 Aug 21 10:47 a259c25d130c400da6ae66c2e4035cab.img -> /EXAVMIMAGES/GuestImages/dbmpclu3adm02.domain/db12.1.0.2.190716-3.img
lrwxrwxrwx 1 root root 67 Aug 21 10:49 6c5b78a312074d0fa6435955270a68bb.img -> /EXAVMIMAGES/GuestImages/dbmpclu3adm02.domain/pv1_vgexadb.img
lrwxrwxrwx 1 root root 67 Sep  9 03:35 7097742cd77e4929b2ec1d441b1c1cf4.img -> /EXAVMIMAGES/GuestImages/dbmpclu3adm02.domain/pv2_vgexadb.img
Now, edit the vm.cfg file and add the location of the new virtual disk.
[root@dbmpdbadm03 dbmpclu3adm02.domain]# vi vm.cfg

Add the following entry to the disk field:

'file:/OVS/Repositories/9ba30a0efb0c4e5b87c45453fbb960f4/VirtualDisks/7097742cd77e4929b2ec1d441b1c1cf4.img,xvde,w'
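For reference, after the edit the disk list in vm.cfg should look roughly like the sketch below. The first entry is illustrative (your existing UUID file names will differ); only the last line is the one we added:

disk = ['file:/OVS/Repositories/9ba30a0efb0c4e5b87c45453fbb960f4/VirtualDisks/6e70061b7b2a46ce94856f67f0fd6c8f.img,xvda,w',
        # ... existing xvdb, xvdc and xvdd entries stay as they are ...
        'file:/OVS/Repositories/9ba30a0efb0c4e5b87c45453fbb960f4/VirtualDisks/7097742cd77e4929b2ec1d441b1c1cf4.img,xvde,w',
       ]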
Now log out of the dom0 and log in to the domU (VM) to check whether the disk created on the physical server is visible there.
[root@dbmpclu3adm01 ~]# ssh dbmpclu3adm02
Last login: Mon Sep  9 02:56:56 CDT 2019
Last login: Mon Sep  9 03:42:19 2019 from dbmpclu3adm01.domain

/u01 is still 20 GB, because the new disk must first be partitioned and added to LVM before /u01 can grow:

[root@dbmpclu3adm02 ~]# df -kh
Filesystem                    Size  Used Avail Use% Mounted on
devtmpfs                       12G     0   12G   0% /dev
tmpfs                          24G     0   24G   0% /dev/shm
tmpfs                          12G  1.4M   12G   1% /run
tmpfs                          12G     0   12G   0% /sys/fs/cgroup
/dev/mapper/VGExaDb-LVDbSys1   24G  6.0G   17G  27% /
/dev/xvda1                    488M   52M  411M  12% /boot
/dev/mapper/VGExaDb-LVDbOra1   20G  4.1G   15G  22% /u01
Check whether the disk is showing up in the VM:
[root@dbmpclu3adm02 ~]# lsblk -id
NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdl    8:176  0  128M  0 disk
sdm    8:192  0  128M  0 disk
sdn    8:208  0  128M  0 disk
sdo    8:224  0  128M  0 disk
sdp    8:240  0  128M  0 disk
xvda 202:0    0   25G  0 disk
xvdb 202:16   0   50G  0 disk /u01/app/19.0.0.0/grid
xvdc 202:32   0   50G  0 disk /u01/app/oracle/product/12.1.0.2/dbhome_1
xvdd 202:48   0   62G  0 disk
xvde 202:64   0  300G  0 disk

So the 300 GB xvde drive that we created is now available.
List all the LVM logical volumes present:
[root@dbmpclu3adm02 ~]# lvs -o lv_name,lv_path,vg_name,lv_size
  LV                        Path                                    VG      LSize
  LVDbOra1                  /dev/VGExaDb/LVDbOra1                   VGExaDb  20.00g
  LVDbSwap1                 /dev/VGExaDb/LVDbSwap1                  VGExaDb  16.00g
  LVDbSys1                  /dev/VGExaDb/LVDbSys1                   VGExaDb  24.00g
  LVDbSys2                  /dev/VGExaDb/LVDbSys2                   VGExaDb  24.00g
  LVDbVddbmpCLU3ADM02DATAC3 /dev/VGExaDb/LVDbVddbmpCLU3ADM02DATAC3 VGExaDb 128.00m
  LVDbVddbmpCLU3ADM02RECOC3 /dev/VGExaDb/LVDbVddbmpCLU3ADM02RECOC3 VGExaDb 128.00m
  LVDoNotRemoveOrUse        /dev/VGExaDb/LVDoNotRemoveOrUse        VGExaDb   1.00g
Label the 300 GB disk and create a partition on it using the parted utility:
[root@dbmpclu3adm02 ~]# parted /dev/xvde mklabel gpt
Information: You may need to update /etc/fstab.
[root@dbmpclu3adm02 ~]# parted -s /dev/xvde mkpart primary 0 100%
Warning: The resulting partition is not properly aligned for best performance.
[root@dbmpclu3adm02 ~]# parted -s /dev/xvde set 1 lvm on
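The alignment warning is harmless here, but it can be avoided by starting the partition at a 1 MiB boundary instead of sector 0. A hedged alternative to the mkpart call above (not what was run on this system):

[root@dbmpclu3adm02 ~]# parted -s /dev/xvde -a optimal mkpart primary 1MiB 100%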
Create a physical volume on the new partition:
[root@dbmpclu3adm02 ~]# pvcreate /dev/xvde1
  Physical volume "/dev/xvde1" successfully created.
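Optionally confirm the new PV before adding it to the volume group; pvs should report it at roughly 300 GB with no VG assigned yet:

[root@dbmpclu3adm02 ~]# pvs /dev/xvde1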
Check the volume group's free space and other details:
[root@dbmpclu3adm02 ~]# vgs
  VG      #PV #LV #SN Attr   VSize  VFree
  VGExaDb   2   7   0 wz--n- 86.49g 1.24g
The VG is about 86 GB with only about 1.24 GB free, so add the new PV to the volume group:
[root@dbmpclu3adm02 ~]# vgextend VGExaDb /dev/xvde1
  Volume group "VGExaDb" successfully extended
Check the volume group size now
[root@dbmpclu3adm02 ~]# vgs
  VG      #PV #LV #SN Attr   VSize    VFree
  VGExaDb   3   7   0 wz--n- <386.49g <301.24g
[root@dbmpclu3adm02 ~]# vgdisplay -s
  "VGExaDb" <386.49 GiB [85.25 GiB used / <301.24 GiB free]
[root@dbmpclu3adm02 ~]# vgdisplay VGExaDb -s
  "VGExaDb" <386.49 GiB [85.25 GiB used / <301.24 GiB free]
Find the LVM logical volume on which /u01 is mounted:
[root@dbmpclu3adm02 ~]# df -kh
Filesystem                    Size  Used Avail Use% Mounted on
devtmpfs                       12G     0   12G   0% /dev
tmpfs                          24G     0   24G   0% /dev/shm
tmpfs                          12G  1.4M   12G   1% /run
tmpfs                          12G     0   12G   0% /sys/fs/cgroup
/dev/mapper/VGExaDb-LVDbSys1   24G  6.0G   17G  27% /
/dev/xvda1                    488M   52M  411M  12% /boot
/dev/mapper/VGExaDb-LVDbOra1   20G  4.1G   15G  22% /u01

/u01 is mounted on the VGExaDb-LVDbOra1 logical volume.
Check all available logical volumes
[root@dbmpclu3adm02 ~]# lvs
  LV                        VG      Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  LVDbOra1                  VGExaDb -wi-ao----  20.00g
  LVDbSwap1                 VGExaDb -wi-ao----  16.00g
  LVDbSys1                  VGExaDb -wi-ao----  24.00g
  LVDbSys2                  VGExaDb -wi-a-----  24.00g
  LVDbVddbmpCLU3ADM02DATAC3 VGExaDb -wi-ao---- 128.00m
  LVDbVddbmpCLU3ADM02RECOC3 VGExaDb -wi-ao---- 128.00m
  LVDoNotRemoveOrUse        VGExaDb -wi-a-----   1.00g

The LV (LVDbOra1) is still 20 GB; let's extend it using the space now available in the volume group.
[root@dbmpclu3adm02 ~]# lvextend -L +300G /dev/mapper/VGExaDb-LVDbOra1
  Size of logical volume VGExaDb/LVDbOra1 changed from 20.29 GiB (5195 extents) to 320.29 GiB (81995 extents).
  Logical volume VGExaDb/LVDbOra1 successfully resized.
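As an aside, lvextend can also take all remaining free space and resize the file system in the same step via its -r (--resizefs) flag. A hedged one-liner equivalent of this step plus the resize below (it consumes the whole VG free space rather than exactly 300 GB):

[root@dbmpclu3adm02 ~]# lvextend -r -l +100%FREE /dev/mapper/VGExaDb-LVDbOra1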
Resize the file system so that the change is reflected:
[root@dbmpclu3adm02 ~]# resize2fs /dev/mapper/VGExaDb-LVDbOra1
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/mapper/VGExaDb-LVDbOra1 is mounted on /u01; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 21
The filesystem on /dev/mapper/VGExaDb-LVDbOra1 is now 83962880 blocks long.
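Note that resize2fs only handles ext2/3/4 file systems. If /u01 were formatted as XFS instead, the equivalent online grow would be (hypothetical for this system):

[root@dbmpclu3adm02 ~]# xfs_growfs /u01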
Check the /u01 size now
[root@dbmpclu3adm02 ~]# df -kh
Filesystem                    Size  Used Avail Use% Mounted on
devtmpfs                       12G     0   12G   0% /dev
tmpfs                          24G     0   24G   0% /dev/shm
tmpfs                          12G  1.5M   12G   1% /run
tmpfs                          12G     0   12G   0% /sys/fs/cgroup
/dev/mapper/VGExaDb-LVDbSys1   24G  6.0G   17G  27% /
/dev/xvda1                    488M   52M  411M  12% /boot
/dev/mapper/VGExaDb-LVDbOra1  316G  4.1G  299G   2% /u01

/u01 is now more than 300 GB :)