“The Cluster Upgrade State is [ROLLING PATCH]” with Correct Patch Level in all Nodes

I was performing a spot health check in a client environment when I encountered this:

[oracle@dbserver1 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.2.0]
[oracle@dbserver1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [26717470].
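
A [ROLLING PATCH] state is expected only while a rolling patch is actually in progress. A quick first check is to compare each node's software patch level against the cluster active patch level, which crsctl query crs softwarepatch reports (run it on every node):

[oracle@dbserver1 ~]$ crsctl query crs softwarepatch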

Since no patching was underway, I went on to check the applied patches on the three nodes, and found they matched:

+ASM1@dbserver1 > kfod op=patches

List of Patches
===============
20243804
20415006
20594149
20788771
20950328
21125181
21359749
21436941
21527488
21694919
21949015
22806133
23144544
24007012
24340679
24732088
24846605
25397136
25869760
26392164
26392192
26609798
26717470

+ASM2@dbserver2 > kfod op=patches

List of Patches
===============
20243804
20415006
20594149
20788771
20950328
21125181
21359749
21436941
21527488
21694919
21949015
22806133
23144544
24007012
24340679
24732088
24846605
25397136
25869760
26392164
26392192
26609798
26717470

+ASM3@dbserver3 > kfod op=patches

List of Patches
===============
20243804
20415006
20594149
20788771
20950328
21125181
21359749
21436941
21527488
21694919
21949015
22806133
23144544
24007012
24340679
24732088
24846605
25397136
25869760
26392164
26392192
26609798
26717470
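
As an additional cross-check, kfod can also report the computed patch level itself (op=patchlvl on this version, if I recall the option name correctly); the value it returns should match the cluster active patch level shown by crsctl:

+ASM1@dbserver1 > kfod op=patchlvl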

It was most likely a patch operation that had not completed cleanly. However, checking dba_registry and dba_registry_sqlpatch, everything looked valid and successfully applied.
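
For reference, the checks were along these lines (a minimal sketch; the exact columns vary slightly between versions):

SQL> select comp_id, version, status from dba_registry;
SQL> select patch_id, action, status from dba_registry_sqlpatch;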

Based on that assessment, here is the quick fix I applied:

# Refresh the patch information recorded in the OCR
$GI_HOME/bin/clscfg -patch

# Take the cluster out of rolling patch mode (the state returns to [NORMAL])
$GI_HOME/bin/crsctl stop rollingpatch

Once done, the issue was fixed!
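
You can confirm by re-running the query from the beginning; the cluster upgrade state should now be reported as [NORMAL]:

[oracle@dbserver1 ~]$ crsctl query crs activeversion -f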

I hope this helps you. If you have any questions or thoughts, please leave them in the comments.
