Exadata Quarter Rack Patching
In today's blog I want to share my recent experience of patching an entire quarter rack Exadata X4-2 machine. I believe it will be useful for anyone involved in similar tasks.
The first crucial step in this process is careful planning and documentation. For those who are new to this, I highly recommend referring to the Exadata master MOS note 888828.1, which provides detailed information on folder structure and patch locations.
Oracle releases quarterly (Jan, Apr, Jul, Oct) patches for the entire Exadata full stack, covering both physical and Elastic configurations. These patches are known as the Quarterly Full Stack Download Patch (QFSDP), as outlined in the MOS doc.
The patch covers the following components (latest releases as listed in the MOS document):
Component | Latest Release |
---|---|
Exadata Storage Server and Database Server. Storage servers: all system firmware and software is automatically maintained by the Exadata update process; do NOT manually update firmware or software unless directed by Oracle Support. Database servers: all system firmware and software originally installed is automatically maintained by the Exadata update process; do not update or customize the Oracle Linux kernel or related software unless directed by Oracle Support. Other software may be installed, updated, or customized; however, the Exadata update may not carry newer version dependencies of customized components, so you may be required to remove and subsequently reapply site-specific changes for a successful Exadata update. Refer to the Database Server Operating System section of the MOS document. | Exadata with Intel x86-64 based database servers: Exadata 23.1.6.0.0 (Document 2964919.1); Exadata 22.1.15.0.0 (Document 2964918.1) |
Oracle Grid Infrastructure and Database. Note: Oracle Database and Grid Infrastructure software patches in addition to those listed may be applied as required. If OPatch reports a conflict, do not force-apply your patch; contact Oracle Support for assistance in resolving the conflict. Use Database and Grid Infrastructure software, updates, and patches for platform Linux x86-64. | Oracle Database 21c (Innovation Release): Jul 2023 GI Release Update 21.11.0.0.230718. Oracle Database 19c (Long Term Release): Jul 2023 GI Release Update 19.20.0.0.230718; Aug 2023 GI MRP 19.20.0.0.230815; Aug 2023 GI MRP 19.19.0.0.230815 |
Fabric and Management Switches. Note: RDMA Fabric switch and InfiniBand switch software updates are delivered as Exadata software patches; do NOT apply switch software updates obtained from other sources unless directed by Oracle Support. | Ethernet Network Fabric Switch: Cisco NX-OS 10.2(4) (supplied with Exadata 23.1.0.0.0, 22.1.10.0.0, 21.2.23.0.0). InfiniBand Network Fabric Switch: InfiniBand Switch software version 2.2.16-7 (supplied with Exadata 23.1.2.0.0, 22.1.11.0.0, 21.2.24.0.0). Ethernet Management Switch (X7 and later): Cisco NX-OS 10.2(4) (supplied with Exadata 23.1.0.0.0, 22.1.10.0.0, 21.2.23.0.0) |
Additional components: PDU | Oracle Power Distribution Unit (PDU): Patch 33900587 – Metering Unit firmware and HTML interface v2.12 (for Enhanced PDUs); Patch 16523441 – Metering Unit firmware and HTML interface v1.06 (for original PDUs) |
Quarterly Full Stack Download Patch (QFSDP)
As a convenience for downloading Exadata software updates, Oracle provides the Quarterly Full Stack Download Patch (QFSDP). The QFSDP contains the complete collection of current software patches, released quarterly, aligned with the database Critical Patch Update (CPU) program.
Typically the QFSDP is available for download after the quarterly CPU release date. See the “Post Release Patches” section in the Patch Availability Document for availability dates.
Updates supplied in the QFSDP are installed by following the component-specific README files. The updates are also available for individual download.
QFSDP releases contain the latest software for the following components:
- Infrastructure
  - Exadata database and storage server
  - Fabric Switch
  - Power Distribution Unit
- Database
  - Oracle Database and Grid Infrastructure
  - Oracle JavaVM
- Systems Management
  - EM Agent
  - EM OMS
  - EM Plug-ins
The QFSDP contains patches for every component of the machine. The Oracle Exadata Full Stack download patch includes patches for the following components: Infrastructure, Database, and Systems Management. The Infrastructure folder contains the patches for ExadataStorageServer (cell node), DBServerPatch (database node), ExadataDBNodeUpdate (DB Node Update Utility), and SunRackIIPDUMeteringUnitFirmware. The Database folder contains all database-related patches, including DB home patches, Grid home patches, and Oracle JavaVM component patches.

Download the patch from MOS (see 888828.1 for the patch number). The Quarterly Full Stack Download Patch for Oracle Exadata is an approximately 13 GB tar image when fully extracted; due to its size it has been split into multiple zip files for downloading from My Oracle Support. To assemble the image, follow this procedure:

1. Download all the ARU zip files available under bug 30463800 to a directory of your choice.
2. Unzip each downloaded file to get the pieces of the tar image:

   unzip p30463800_189000_Linux.x86-64_1of10.zip
   inflating: 30463800.tar.splitaa
   inflating: README.html
   inflating: README.txt

   The above creates 10 files named 30463800.tar.splita[a-j].
3. Assemble the split files into one full image and extract it:

   cat 30463800.tar.splita* | tar -xvf -

Once the patch is extracted in your destination directory, follow the steps below to patch the full rack. A consolidated sketch of this download-and-assemble flow is shown right after this paragraph.
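The unzip-and-assemble steps above can also be scripted. The following is a minimal sketch, assuming the split zip files for bug 30463800 have already been downloaded into /u01/patches/QFSDP (the staging directory name is only an example and must be adjusted to your environment):

```bash
#!/bin/bash
# Minimal sketch: unzip the QFSDP split files and assemble the tar image.
# Assumes the p30463800_*_1of10 .. 10of10 zips are already in $STAGE_DIR.
set -euo pipefail

STAGE_DIR=/u01/patches/QFSDP      # example staging directory, adjust as needed
cd "$STAGE_DIR"

# Extract every downloaded zip; each one contributes a 30463800.tar.splita? piece.
for z in p30463800_*_Linux.x86-64_*of*.zip; do
    unzip -o "$z"
done

# Concatenate the split pieces in order and extract the combined tar image.
cat 30463800.tar.splita* | tar -xvf -

echo "QFSDP extracted under $STAGE_DIR"
```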
Pre-checks:

1. Put the target Exadata rack in blackout in OEM.
2. Disable all backups for the databases in the rack.
3. Stop the EM agent on the target Exadata boxes.
5. Shut down the databases with the srvctl stop command as the oracle user on each node.
6. Disable and stop the cluster on the database nodes:

   dcli -g /root/dbs_group -l root '/u01/app/11.2.0.4/grid/bin/crsctl disable crs'
   dcli -g /root/dbs_group -l root '/u01/app/11.2.0.4/grid/bin/crsctl stop crs -f'

   Confirm all processes are stopped:

   dcli -g /root/dbs_group -l root "ps -ef | grep grid"

7. Shut down all cell services on all cells as the root user:

   dcli -g /root/cell_group -l root 'cellcli -e alter cell shutdown services all'

   Manually check that all services are stopped on every cell:

   dcli -g /root/cell_group -l root 'ps -ef | grep cellsrv'

   or

   dcli -g /root/cell_group -l root 'cellcli -e list cell detail'

8. Confirm we have SSH equivalence from exa02dbadm01 (not strictly required):

   dcli -g /root/cell_group -l root 'hostname -i'

A consolidated sketch of this shutdown sequence (steps 5-7) is shown below.
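The shutdown sequence can be wrapped in a small script. This is only a sketch using the same dcli group files and Grid home path as above; the database name BIDWT is an assumption inferred from the instance names later in this post and must be adapted to your environment:

```bash
#!/bin/bash
# Sketch of the pre-patch shutdown sequence (run as root on the driving node).
set -uo pipefail

GRID_HOME=/u01/app/11.2.0.4/grid
DBS_GROUP=/root/dbs_group
CELL_GROUP=/root/cell_group

# 5. Stop the databases cleanly (database name BIDWT is an assumption).
su - oracle -c 'export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1; $ORACLE_HOME/bin/srvctl stop database -d BIDWT'

# 6. Disable and stop Grid Infrastructure on all database nodes.
dcli -g "$DBS_GROUP" -l root "$GRID_HOME/bin/crsctl disable crs"
dcli -g "$DBS_GROUP" -l root "$GRID_HOME/bin/crsctl stop crs -f"
dcli -g "$DBS_GROUP" -l root "ps -ef | grep [g]rid" || true   # confirm nothing is left running

# 7. Shut down all cell services and verify.
dcli -g "$CELL_GROUP" -l root "cellcli -e alter cell shutdown services all"
dcli -g "$CELL_GROUP" -l root "ps -ef | grep [c]ellsrv" || true
```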
CELL PATCHING STARTS
For the pre-checks and patch installation, go to the following directory:
[oracle@parwezexa01 ~]$ cd /u01/patches/QFSDP_APR_PATCH/22738416/Infrastructure/12.1.2.3.1/ExadataStorageServer_InfiniBandSwitch/patch_12.1.2.3.1.160718

NOTE: Before patching the cell nodes and database nodes, check the image version currently installed:

   [root@parwezexa01 ~]# imageinfo -ver
   12.1.2.2.1.160119
   [root@parwezexacel01 ~]# imageinfo -ver
   12.1.2.2.1.160119

9. Reset the cells to a known state using the following command (copy the cell_group file to the current directory first):

   ./patchmgr -cells /root/cell_group -reset_force

10. Clean up any previous patchmgr runs:

   ./patchmgr -cells /root/cell_group -cleanup

11. Verify that the cells meet the prerequisite checks:

   ./patchmgr -cells /root/cell_group -patch_check_prereq

12. Start a VNC server as the root user on exa02dbadm01 (run vncserver) and connect to the VNC session with the correct display ID.

13. In an xterm window inside the VNC session, execute the following as root:

   ./patchmgr -cells /root/cell_group -patch

14. Monitor the logs and the ILOM console while patching:

   ssh root@drexa02celadm01-ilom
   start /SP/console   (press Y)

15. Once patching is over, check the image status and history using the imageinfo and imagehistory commands on each cell:

   imageinfo
   imagehistory
   dcli -g /root/cell_group -l root 'cellcli -e list cell'
   dcli -g /root/cell_group -l root 'cellcli -e list celldisk'
   dcli -g /root/cell_group -l root 'cellcli -e list griddisk'
   dcli -g /root/cell_group -l root 'cellcli -e list cell detail'

   Make sure the status of all grid disks is online.

A consolidated sketch of the patchmgr flow (steps 9-13) follows below.
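The patchmgr sequence can be driven from one script so the patch step only runs if the prerequisite check passes. This is a sketch only, using the same patch directory and cell_group file as above; run it from the VNC (or a screen) session so it survives a disconnect:

```bash
#!/bin/bash
# Sketch of the cell patching flow with patchmgr (steps 9-13 above).
# Run as root from the patch_12.1.2.3.1.160718 directory inside a VNC/screen session.
set -uo pipefail

CELL_GROUP=/root/cell_group

# 9-10. Reset the cells to a known state and clean up earlier patchmgr runs.
./patchmgr -cells "$CELL_GROUP" -reset_force
./patchmgr -cells "$CELL_GROUP" -cleanup

# 11. Prerequisite check: abort if it fails.
if ! ./patchmgr -cells "$CELL_GROUP" -patch_check_prereq; then
    echo "Prerequisite check failed - fix the reported issues before patching." >&2
    exit 1
fi

# 13. Patch all cells (non-rolling here, since the whole stack is already down).
./patchmgr -cells "$CELL_GROUP" -patch

# 15. Quick post-patch verification.
dcli -g "$CELL_GROUP" -l root 'imageinfo -ver'
dcli -g "$CELL_GROUP" -l root 'cellcli -e list griddisk attributes name,status'
```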
DB NODE (OS) PATCHING
Now we need to patch the Exadata compute (database) nodes. Copy the patch to /u01/ExadataDatabaseServer/p22954700_121231_Linux-x86-64.zip:

   [oracle@parwezexa01 ~]$ cd /u01/patches/QFSDP_APR_PATCH/22738416/Infrastructure/12.1.2.3.1/ExadataDatabaseServer_OL6
   [oracle@parwezexa01 ExadataDatabaseServer_OL6]$ cp p22954700_121231_Linux-x86-64.zip /u01/ExadataDatabaseServer/

NOTE: The dbnodeupdate.sh script located in /u01/patches/QFSDP_APR_PATCH/22738416/Infrastructure/ExadataDBNodeUpdate/5.160426 is the latest DB node update utility. Go to that directory in the VNC session as root and execute the following on both exa02dbadm01 and exa02dbadm02:

   ./dbnodeupdate.sh -u -l /u01/ExadataDatabaseServer/p22954700_121231_Linux-x86-64.zip -v    (prerequisite check only)
   ./dbnodeupdate.sh -u -l /u01/ExadataDatabaseServer/p22954700_121231_Linux-x86-64.zip       (apply the patch)

Then follow whatever the script advises. After the node reboots, run ./dbnodeupdate.sh -c to complete the one-time post-update steps. A consolidated sketch of this flow is shown after step 18.

18. Start and enable CRS:

   dcli -g /root/dbs_group -l root '/u01/app/11.2.0.4/grid/bin/crsctl enable crs'
   dcli -g /root/dbs_group -l root '/u01/app/11.2.0.4/grid/bin/crsctl start crs'

   Check:

   dcli -g /root/dbs_group -l root '/u01/app/11.2.0.4/grid/bin/crs_stat -t'
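The per-node flow with dbnodeupdate.sh can be captured in a small wrapper. This is only a sketch, assuming the script is run locally as root on each database node from the ExadataDBNodeUpdate directory shown above; the node reboots during the -u step, so the -c completion step has to be run manually after the node comes back:

```bash
#!/bin/bash
# Sketch of the DB node OS update on one compute node (run as root on each node).
set -uo pipefail

ISO=/u01/ExadataDatabaseServer/p22954700_121231_Linux-x86-64.zip
cd /u01/patches/QFSDP_APR_PATCH/22738416/Infrastructure/ExadataDBNodeUpdate/5.160426

# Prerequisite check only (-v): review the output before continuing.
./dbnodeupdate.sh -u -l "$ISO" -v

read -rp "Prereq output reviewed - apply the update now? [y/N] " answer
if [[ "$answer" == "y" ]]; then
    # Apply the update; the node reboots as part of this step.
    ./dbnodeupdate.sh -u -l "$ISO"
fi

# After the node has rebooted, log back in and finish the update:
#   ./dbnodeupdate.sh -c
```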
Patch cluster homes (Grid + database homes)
On parwezexa01:

   export PATH=$PATH:/u01/app/11.2.0.4/grid/OPatch
   opatch auto /u01/patches/QFSDP_APR_PATCH/22738416/Database/11.2.0.4.0/11.2.0.4.160419_QDPE_Apr2016/22899777

On parwezexa02:

   export PATH=$PATH:/u01/app/11.2.0.4/grid/OPatch
   opatch auto /u01/patches/QFSDP_APR_PATCH/22738416/Database/11.2.0.4.0/11.2.0.4.160419_QDPE_Apr2016/22899777

NOTE: The database and cluster homes must be down while the patch is applied: stop the cluster, apply the patch, then start the cluster and the Oracle Database home.

21. Check whether the homes are properly patched.

On parwezexa01:

   . oraenv   # +ASM1
   $ORACLE_HOME/OPatch/opatch lsinventory -oh /u01/app/11.2.0.4/grid
   . oraenv   # BIDWT1
   $ORACLE_HOME/OPatch/opatch lsinventory -oh /u01/app/oracle/product/11.2.0.4/dbhome_1

On parwezexa02:

   . oraenv   # +ASM2
   $ORACLE_HOME/OPatch/opatch lsinventory -oh /u01/app/11.2.0.4/grid
   . oraenv   # BIDWT2
   $ORACLE_HOME/OPatch/opatch lsinventory -oh /u01/app/oracle/product/11.2.0.4/dbhome_1

Start the cluster on both nodes as root:

   . oraenv   # +ASM1
   dcli -g /root/dbs_group -l root '/u01/app/11.2.0.4/grid/bin/crsctl start crs'
   /u01/app/11.2.0.4/grid/bin/crsctl status resource -t -init

22. Start up all the databases with the srvctl start home command as the oracle user on each node.

On parwezexa01:

   $ srvctl start home -o /u01/app/oracle/product/11.2.0.4/dbhome_1 -s /u01/app/oracle/product/11.2.0.4/dbhome_1/stat_file_hostname -n hostname

On parwezexa02:

   $ srvctl start home -o /u01/app/oracle/product/11.2.0.4/dbhome_1 -s /u01/app/oracle/product/11.2.0.4/dbhome_1/stat_file_hostname -n hostname

(Here hostname is the node name, and the state file path matches the one used when the home was stopped.) A consolidated sketch of this Grid/DB home patching cycle follows below.
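The whole Grid/database home patching cycle on one node can be sketched as follows. This is not the exact procedure from this post, just an illustration under the assumption that opatch auto is run as root and that the state file was produced earlier by a matching srvctl stop home:

```bash
#!/bin/bash
# Sketch of patching GI + DB homes on one node and restarting services (run as root).
set -uo pipefail

GRID_HOME=/u01/app/11.2.0.4/grid
DB_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1
PATCH_DIR=/u01/patches/QFSDP_APR_PATCH/22738416/Database/11.2.0.4.0/11.2.0.4.160419_QDPE_Apr2016/22899777
STATE_FILE=$DB_HOME/stat_file_$(hostname -s)   # assumed to exist from an earlier 'srvctl stop home -s ...'

# Apply the QDPE patch to both the grid home and the database home.
export PATH=$PATH:$GRID_HOME/OPatch
opatch auto "$PATCH_DIR"

# Verify the inventory of both homes.
"$GRID_HOME"/OPatch/opatch lsinventory -oh "$GRID_HOME"
"$DB_HOME"/OPatch/opatch lsinventory -oh "$DB_HOME"

# Restart the clusterware and the resources that were running out of the DB home.
"$GRID_HOME"/bin/crsctl start crs
"$GRID_HOME"/bin/crsctl status resource -t -init
su - oracle -c "export ORACLE_HOME=$DB_HOME; $DB_HOME/bin/srvctl start home -o $DB_HOME -s $STATE_FILE -n $(hostname -s)"
```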
23. Run the post-patch SQL on the databases. It must be run on every database:

   . oraenv
   cd $ORACLE_HOME
   sqlplus "/ as sysdba"
   SQL> @rdbms/admin/catbundle.sql exa apply

   Check for errors:

   cd /u01/app/oracle/cfgtoollogs/catbundle
   grep ^ORA /u01/app/oracle/cfgtoollogs/catbundle/catbundle_EXA_BIDWS_APPLY_2015Jul18_08_52_43.log | sort -u

   If there are errors, refer to the "Known Issues" section.

   Check that the bundle is applied:

   col comp_name for a40
   col version for a10
   col status for a10
   set lines 180
   select comp_name, version, status from dba_registry;

   set lines 180
   set pages 100
   col action_time for a30
   col namespace for a10
   col action for a10
   col comments for a20
   col bundle_series for a5
   select * from dba_registry_history;

A sketch of a small post-patch verification script is shown below.
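The post-patch checks above can also be scripted per database. A minimal sketch, assuming ORACLE_SID and ORACLE_HOME are already set for the target database and that the catbundle logs land in the default /u01/app/oracle/cfgtoollogs/catbundle directory:

```bash
#!/bin/bash
# Sketch: apply catbundle and run post-patch verification for one database.
set -uo pipefail

LOG_DIR=/u01/app/oracle/cfgtoollogs/catbundle

# Apply the Exadata bundle post-patch SQL and query the registry.
sqlplus -s / as sysdba <<'EOF'
@?/rdbms/admin/catbundle.sql exa apply
set lines 180 pages 100
col comp_name for a40
col version for a12
col status for a12
select comp_name, version, status from dba_registry;
select action_time, action, version, bundle_series, comments
  from dba_registry_history
 order by action_time;
EOF

# Scan the most recent catbundle apply log for ORA- errors.
latest_log=$(ls -t "$LOG_DIR"/catbundle_EXA_*_APPLY_*.log 2>/dev/null | head -1)
if [[ -n "$latest_log" ]]; then
    echo "Checking $latest_log for errors:"
    grep '^ORA' "$latest_log" | sort -u || echo "No ORA- errors found."
fi
```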