Monday 11 November 2013

opatch auto - 11gR2


The OPatch utility automates patch application for both the Oracle Grid Infrastructure (GI) home and the Oracle Database (RDBMS) home.

On an 11gR2 RAC installation, if no database is associated with the RDBMS home, then applying a patch with the "opatch auto" command patches the Grid Infrastructure home but not the RDBMS home.
If one or more databases are associated with the RDBMS home, OPatch patches both the Grid Infrastructure home and the RDBMS home.

opatch auto retrieves the database home information from the configured databases. So, if no database is configured, "opatch auto" skips that RDBMS home while patching.
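
You can see in advance what opatch auto will find by listing the databases registered with Clusterware from the RDBMS home; the database name ORCL below is only a placeholder for this example:

srvctl config database                   # lists the databases registered with Clusterware
srvctl config database -d ORCL           # shows the configuration, including the Oracle home, of one database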

In order to patch an RDBMS home that has no database configured, use the "-oh" option with opatch auto. For example:
opatch auto < Patch Location > -oh /ora/oh1,/ora/oh2,/ora/oh3
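
As a fuller illustration (the paths and patch location below are placeholders for my environment, not fixed values), opatch auto is run as root and the result can then be checked from the patched home with opatch lsinventory:

# run as root
/u01/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch auto /stage/patches -oh /u01/app/oracle/product/11.2.0/dbhome_1

# then verify as the oracle user from the same home
/u01/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch lsinventory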

Exadata Storage Server Patching - Some details


● The Exadata Storage Server patch is applied to all cell nodes.
● Patching is launched from compute node 1 and uses dcli/ssh to patch each cell node remotely (see the dcli check after this list).

● The Exadata Storage Server patch zip also contains the Database Minimal Pack or Database Convenience Pack, which is applied to all compute nodes. This pack is copied to each compute node and run locally.

● Applying the storage server software on the cell nodes also changes the Linux version, whereas applying the Database Minimal Pack on the compute nodes does NOT change the Linux version.
To upgrade Linux on the compute nodes, follow MOS Note 1284070.1.
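
Before launching patchmgr from compute node 1, it helps to confirm that password-less ssh as root works from there to every cell. A quick check using dcli against the cell_group file (the -k step is only needed if the ssh keys have not been pushed already):

dcli -g cell_group -l root -k            # push the local root ssh key to every cell (one-time setup)
dcli -g cell_group -l root hostname      # every cell should answer with its hostname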

A non-rolling patch apply is much faster, since the patch is applied on all the cell nodes simultaneously, and there is no exposure to a single disk failure during patching. Please note that this requires a full outage.

With a rolling patch apply, no database downtime is required, but the patch application time is much longer. The major risk is exposure to a disk failure while each cell's grid disks are offline; ASM high redundancy reduces that exposure.
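
Before a rolling patch it is worth confirming the disk group redundancy and the disk_repair_time attribute, since those are what protect you while a cell's disks are offline. A minimal sketch, run as the grid infrastructure owner on a compute node with the environment set for the ASM instance (the attribute rows only appear when compatible.asm is 11.1 or higher):

sqlplus -S / as sysasm <<'EOF'
select dg.name, dg.type, a.value as disk_repair_time
  from v$asm_diskgroup dg, v$asm_attribute a
 where dg.group_number = a.group_number
   and a.name = 'disk_repair_time';
EOF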

Grid disks offline >>> Patch Cel01 >>> Grid disks online
Grid disks offline >>> Patch Cel02 >>> Grid disks online
Grid disks offline >>> Patch Cel..n >>> Grid disks online
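
Before taking a cell's grid disks offline, and again before moving on to the next cell, the standard check is that ASM can tolerate the deactivation and that the disks have come back online. For example, from compute node 1:

dcli -g cell_group -l root "cellcli -e list griddisk attributes name,asmmodestatus,asmdeactivationoutcome"

Proceed only when asmdeactivationoutcome shows "Yes" for all grid disks of the cell you are about to patch, and wait for asmmodestatus to return to ONLINE before starting the next cell.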

Rolling patch application can be a risky affair, so please be apprised of the following:
Do not use the -rolling option to patchmgr for a rolling update or rollback without first applying the required fixes on the database hosts.
./patchmgr -cells cell_group -patch_check_prereq -rolling >>> Make sure this is successful and review the spool output carefully.
./patchmgr -cells cell_group -patch -rolling

Non-rolling Patching Command:
./patchmgr -cells cell_group -patch_check_prereq
./patchmgr -cells cell_group -patch
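
patchmgr also provides rollback and cleanup operations in case a cell fails to patch or the patch has to be backed out. The exact options should be confirmed against the README shipped with the specific storage server patch, but in the releases I have worked with they look like this:

./patchmgr -cells cell_group -rollback_check_prereq
./patchmgr -cells cell_group -rollback
./patchmgr -cells cell_group -cleanup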


How to Verify a Cell Node Is Patched Successfully

# imageinfo

The output of this command gives some useful information, including the kernel minor version.

Active Image Version: 11.2.2.3.1.110429.1
Active Image Status: Success

If you get anything other than success in "Active Image Status", you need to look at validations.log and vldrun*.log. The image status is marked as failure when a failure is reported in one or more validations.
Check the /var/log/cellos/validations.log and /var/log/cellos/vldrun*.log files for any failures.

If a specific validation failed, then the log will indicate where to look for the additional logs for that validation.
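
Rather than logging in to each cell, the same checks can be run for all cells at once from compute node 1 with dcli (cell_group here is the same file used by patchmgr):

dcli -g cell_group -l root imageinfo | grep -i "active image"
dcli -g cell_group -l root "grep -il fail /var/log/cellos/validations.log"

The second command simply prints the name of the validations.log file on any cell where the word "fail" appears, which is a quick way to spot the cells that need a closer look.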

Sunday 10 November 2013

How to find the Cell Group in Exadata Storage Servers


cd /opt/oracle.SupportTools/onecommand
cat cell_group

#cat cell_group
xxxxxcel01
xxxxxcel02
xxxxxcel03
#

This means that when you start the patching, the patch is applied on cell node xxxxxcel01 first, then on xxxxxcel02, and finally on xxxxxcel03.
As I have an Exadata Quarter Rack, there are only 3 Storage Servers (cell nodes), and all of them get patched during cell node patching.

From the number of cells you can determine whether a Quarter, Half, or Full Rack Exadata is in place.
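
The same cell_group file is what dcli expects with its -g option, so it can be used to run any command across all the cells in one go. For example, to check that every cell is online:

dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root "cellcli -e list cell attributes name,status"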



Exadata and Exalogic - Roadmap

Recently I have been working on Exadata/Exalogic related topics, so I will be putting brief details here.

I was involved in a PoC two years back; I will try to add practical details from that as well, since I had not started this blog at that time.

Let me take you through the high-level architecture of a Full Rack Exadata:

In a Full Rack Exadata Database Machine there are 8 database servers, 14 storage servers (7 cells at the bottom and 7 cells at the top), 3 InfiniBand switches, 1 Cisco Ethernet switch, a KVM, and 2 PDUs.

There are 3 InfiniBand switches in an Exadata Full Rack system; the lower one is known as the spine switch.
Most Exadata Full Racks have 2 leaf switches and 1 spine switch, while Quarter Racks and Half Racks may or may not have a spine switch.
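
If you want to see the InfiniBand switches from a compute node, the standard InfiniBand diagnostic tools (assuming they are installed, as they normally are on Exadata) will list them:

ibswitches      # prints one line per InfiniBand switch visible on the fabric
ibhosts         # prints one line per host channel adapter (compute and cell nodes)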

There is one 48-port Ethernet switch between the 2 leaf InfiniBand switches. It serves as the management network: all the Exadata components are plugged into it for management purposes, i.e. remote administration and monitoring of the Database Machine.

Storage Servers are also known as cell nodes, and database servers are also known as compute nodes.