Oracle upgrade plan: GI and DB to 12.1.0.2

I have recently been working on a project to upgrade to 12c.



All of our UAT and PRODUCTION systems are RAC, so we need to upgrade both GI and the database.

We are currently at version 11.2.0.2 with a few extra patches installed.  Premier Support for this release ends in January 2015; see the lifetime support policy:
http://www.oracle.com/us/support/library/lifetime-support-technology-069183.pdf

It's time to get my skates on, get development to test 12c, and plan an upgrade path.

Upgrade the 11g GI installation
First, validate the existing 11g GI installation with cluvfy:
runcluvfy.sh stage -pre crsinst -upgrade [-rolling] -src_crshome src_Gridhome -dest_crshome dest_Gridhome -dest_version dest_release
[-fixup][-method {sudo|root} [-location dir_path] [-user user_name]] [-verbose]
Once the validation was complete, I ran the OUI from the Oracle 12.1.0.2 GI software.
If the ORA_CRS_HOME environment variable is set, you may need to unset it.
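As a concrete sketch of the validation call: the two CRS homes below are the ones used later in this upgrade, but the staging path /tmp/grid is an assumption — substitute wherever you unzipped the 12.1.0.2 GI software.

```shell
# Avoid the old variable interfering with the new installation
unset ORA_CRS_HOME

# Rolling pre-upgrade check from the staged 12.1.0.2 media (hypothetical path)
/tmp/grid/runcluvfy.sh stage -pre crsinst -upgrade -rolling \
  -src_crshome /opt/oracle/app/11.2.0.2/grid \
  -dest_crshome /u01/app/12.1.0.2/grid \
  -dest_version 12.1.0.2.0 -fixup -verbose
```

The -fixup flag generates a fixup script for any failed OS prerequisite checks, which saves a second validation pass.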

  • Step 1 - Upgrade Oracle Grid Infrastructure or Oracle Automatic Storage Management
  • Step 2 - Select the language 
  • Step 3 - Select the nodes to upgrade
  • Step 4 - I unchecked Register with Enterprise Manager (EM) Cloud Control
  • Step 5 - Select the Privileged Operating System Groups
  • Step 6 - Select the ORACLE_BASE and ORACLE_HOME
  • Step 7 - You can have the installer run the configuration scripts by entering the root password here.  I didn't select this; in 12.1.0.1 this option didn't work
  • Step 8 - Prerequisite Checks
  • Step 9 - Install the software and upgrade the current GI

Once the install completes, run rootupgrade.sh as root; the output looked like this:
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0.2/grid/crs/install/crsconfig_params
2014/12/30 13:45:55 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2014/12/30 13:47:01 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2014/12/30 13:47:04 CLSRSC-464: Starting retrieval of the cluster configuration data
bash: /root/.bashrc: Permission denied
2014/12/30 13:47:16 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2014/12/30 13:47:32 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2014/12/30 13:47:32 CLSRSC-482: Running command: '/u01/app/12.1.0.2/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /opt/oracle/app/11.2.0.2/grid -oldCRSVersion 11.2.0.2.0 -nodeNumber 1 -firstNode true -startRolling true'
bash: /root/.bashrc: Permission denied
ASM configuration upgraded in local node successfully.
2014/12/30 13:47:41 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2014/12/30 13:47:41 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2014/12/30 13:48:59 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
OLR initialization - successful
2014/12/30 13:52:28 CLSRSC-329: Replacing Clusterware entries in file '/etc/inittab'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2014/12/30 13:56:32 CLSRSC-472: Attempting to export the OCR
2014/12/30 13:56:32 CLSRSC-482: Running command: 'ocrconfig -upgrade oracle dba'
2014/12/30 13:56:45 CLSRSC-473: Successfully exported the OCR
2014/12/30 13:56:51 CLSRSC-486:
At this stage of upgrade, the OCR has changed.
 Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2014/12/30 13:56:51 CLSRSC-541:
 To downgrade the cluster:
 1. All nodes that have been upgraded must be downgraded.
2014/12/30 13:56:51 CLSRSC-542:
 2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.
2014/12/30 13:56:51 CLSRSC-543:
 3. The downgrade command must be run on the node test05 with the '-lastnode' option to restore global configuration data.
2014/12/30 13:57:22 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2014/12/30 13:57:43 CLSRSC-474: Initiating upgrade of resource types
2014/12/30 13:58:10 CLSRSC-482: Running command: 'upgrade model  -s 11.2.0.2.0 -d 12.1.0.2.0 -p first'
2014/12/30 13:58:10 CLSRSC-475: Upgrade of resource types successfully initiated.
bash: /root/.bashrc: Permission denied
bash: /root/.bashrc: Permission denied

2014/12/30 13:58:16 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
After rootupgrade.sh has completed on both nodes, the GI upgrade is complete.
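To sanity-check the result, crsctl can report the software and active versions from the new home (run on each node; these commands need a working cluster, so this is a sketch rather than output from my run):

```shell
# Both should report 12.1.0.2.0 once rootupgrade.sh has finished on all nodes;
# activeversion only moves to the new release after the last node is upgraded
/u01/app/12.1.0.2/grid/bin/crsctl query crs softwareversion
/u01/app/12.1.0.2/grid/bin/crsctl query crs activeversion
```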


Install the Oracle 12c Database Software
I then installed the 12.1.0.2 database software.
  • Step 1 - Configure Security Updates
  • Step 2 - Choose the install type.  I chose Install database software only
  • Step 3 - Choose Oracle Real Application Clusters database installation
  • Step 4 - Select the nodes to install the software on
  • Step 5 - Select the language
  • Step 6 - Select the edition (Enterprise)
  • Step 7 - Installation location.  I found ORACLE_BASE greyed out and needed to change it; once I changed the Software Location, ORACLE_BASE was no longer greyed out and I could change it
  • Step 8 - Change the operating system groups as appropriate
  • Step 9 - Prerequisite checks.  This failed on the Maximum locked memory check, so I edited /etc/security/limits.conf and added:
oracle  hard  memlock  <value>
  • Step 10 - Install the software
  • Step 11 - Run root.sh from the 12.1.0.2 installation
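The locked-memory limit that trips the Step 9 prerequisite check can be confirmed from a shell before and after editing limits.conf; note the change only takes effect in a new login session. A minimal check, run as the oracle user:

```shell
# Show the current max locked-memory limit for this session, in KB.
# After appending the memlock line to /etc/security/limits.conf and
# logging in again, this should report the new value.
ulimit -l
```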


Upgrade the existing 11g Oracle database(s)
  • Step 1 - Select Upgrade Oracle Database
  • Step 2 - Select the 11g database to upgrade
  • Step 3 - Prerequisite checks; these took a while
  • Step 4 - Upgrade options.  This screen lets you set recompilation parallelism, update the timezone data, gather statistics before the upgrade, set user tablespaces to read-only during the upgrade, and set the diagnostic and audit file destinations
  • Step 5 - Configuration with Enterprise Manager (EM) Database Express
  • Step 6 - Select a backup option
  • Step 7 - Summary
  • Step 8 - Upgrade the database; this step is slow
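Once the DBUA finishes, a quick sanity check I find useful (not part of the original run; assumes ORACLE_HOME and ORACLE_SID point at the upgraded 12.1.0.2 database):

```shell
# List registry component versions/status and confirm the database is open.
# Every component should show 12.1.0.2.0 and VALID after a clean upgrade.
sqlplus -s "/ as sysdba" <<'EOF'
SET LINESIZE 200
SELECT comp_name, version, status FROM dba_registry;
SELECT name, open_mode FROM v$database;
EOF
```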


Problems along the way
On my first run I hit the following error:

Bug 18453812 : REGISTRATION OF 11.2 DATABASE FAILS AGAINST 12.1 CRS STACK.  Either the patch for the fix or the workaround needs to be applied, but not both; if both are present, revert the workaround and retry the upgrade.

This is not actually a bug: the fix or the workaround should be used, but never both.  Since the code fix was installed, the workaround had to be reverted.


The workaround:
sudo /opt/gi/12101/bin/crsctl modify type ora.database.type -attr "ATTRIBUTE=TYPE_VERSION,DEFAULT_VALUE=3.3" 
sudo /opt/gi/12101/bin/crsctl modify type ora.service.type -attr "ATTRIBUTE=TYPE_VERSION,DEFAULT_VALUE=3.2" 

or
Apply patch 13460353
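Before deciding which route to take, it may help to check whether the workaround is already in place. A sketch using crsctl status type (the GI home path matches the workaround commands above; adjust to your environment):

```shell
# Show the current TYPE_VERSION defaults; if the workaround lowered them
# and the patch is also installed, revert the workaround first
sudo /opt/gi/12101/bin/crsctl status type ora.database.type -f | grep TYPE_VERSION
sudo /opt/gi/12101/bin/crsctl status type ora.service.type -f | grep TYPE_VERSION
```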

I had several problems with this upgrade.  I first tried applying the upgrade and it failed.  I had backed up the database and needed to restore it.  The restoration effort was substantial: it involved removing the database from the cluster, restoring it, and adding it back to the cluster.  In the end I had to install the patch, which fixed the problems.  A couple of lessons:

  • Apply either the workaround or the patch, never both.
  • Back up the database before the upgrade.


Oracle upgrade reference doc:
https://docs.oracle.com/database/121/CWHPX/procstop.htm#CWHPX10146
