I have an AdminManaged Database in my (2-node) RAC Cluster.
How do I convert it to PolicyManaged?
(Yes, let me admit: it makes little sense to use PolicyManaged on a 2-node Cluster. But since I can't create an 8- or 16-node Cluster (with multiple databases to boot), let me demonstrate with a 2-node Cluster. The principle remains the same.)
First, I show the current configuration of the database in the Cluster:
[oracle@collabn1 ~]$ srvctl status database -d RAC
Instance RAC1 is running on node collabn1
Instance RAC2 is running on node collabn2
[oracle@collabn1 ~]$ srvctl config database -d RAC
Database unique name: RAC
Database name: RAC
Oracle home: /u01/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/spfileRAC1.ora
Password file: +DATA/RAC/PASSWORD/pwdrac.277.931824933
Domain: racattack
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: FRA,DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group:
Database instances: RAC1,RAC2
Configured nodes: collabn1,collabn2
Database is administrator managed
[oracle@collabn1 ~]$
This is the current definition of the server pool(s):
[oracle@collabn1 ~]$ srvctl config srvpool
Server pool name: Free
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names:
Server pool name: Generic
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names: collabn1,collabn2
[oracle@collabn1 ~]$
So we see that I don't have any custom Server Pools defined. Only the defaults, Free and Generic (the latter hosting AdminManaged database(s)), exist.
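(As a cross-check, Clusterware can also report which pool each server is currently assigned to. A small sketch I haven't captured output for here, assuming the ACTIVE_POOLS attribute shown by the full server status listing:

[oracle@collabn1 ~]$ crsctl status server -f | grep -E 'NAME=|ACTIVE_POOLS='

On an AdminManaged-only cluster like this one, both servers should currently appear in the Generic pool.)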
I now stop the database and remove it from the cluster configuration.
[oracle@collabn1 ~]$ srvctl stop database -d RAC
[oracle@collabn1 ~]$ srvctl remove database -d RAC
Remove the database RAC? (y/[n]) y
[oracle@collabn1 ~]$
I now create a new (custom) Server Pool called "MyPool".
[oracle@collabn1 ~]$ srvctl add srvpool -serverpool MyPool -importance 100 -min 1 -max 2 \
> -servers "collabn1,collabn2" -verbose
[oracle@collabn1 ~]$ srvctl config srvpool -serverpool MyPool
Server pool name: MyPool
Importance: 100, Min: 1, Max: 2
Category:
Candidate server names: collabn1,collabn2
[oracle@collabn1 ~]$
So, now with an "up to 2 nodes" Server Pool (Min: 1, Max: 2), I add my database to it.
[oracle@collabn1 ~]$ srvctl add database -d RAC -oraclehome /u01/app/oracle/product/12.1.0/dbhome_1 \
> -serverpool MyPool -verbose
[oracle@collabn1 ~]$ srvctl config database -d RAC
Database unique name: RAC
Database name:
Oracle home: /u01/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile:
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: MyPool
Disk Groups:
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group:
Database instances:
Configured nodes:
Database is policy managed
[oracle@collabn1 ~]$
This shows that RAC is now a PolicyManaged database in the "MyPool" Server Pool! Note, though, that the Spfile field is now blank, because I didn't specify it when re-adding the database.
Can I now start the database and check on the instance(s)?
[oracle@collabn1 ~]$ srvctl start database -d RAC
PRCR-1079 : Failed to start resource ora.rac.db
CRS-5017: The resource action "ora.rac.db start" encountered the following error:
ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/12.1.0/dbhome_1/dbs/initRAC_2.ora'
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/collabn2/crs/trace/crsd_oraagent_oracle.trc".
CRS-2674: Start of 'ora.rac.db' on 'collabn2' failed
CRS-5017: The resource action "ora.rac.db start" encountered the following error:
ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/12.1.0/dbhome_1/dbs/initRAC_1.ora'
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/collabn1/crs/trace/crsd_oraagent_oracle.trc".
CRS-2674: Start of 'ora.rac.db' on 'collabn1' failed
CRS-2632: There are no more servers to try to place resource 'ora.rac.db' on that would satisfy its placement policy
[oracle@collabn1 ~]$
Yes, of course, but first I need to create initRAC_1.ora and initRAC_2.ora for the new instance names.
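What goes into these files? Each is just a one-line stub in $ORACLE_HOME/dbs on its node, pointing the instance at the shared SPFILE. A minimal sketch, assuming the SPFILE location shown in the original configuration above (+DATA/spfileRAC1.ora):

# /u01/app/oracle/product/12.1.0/dbhome_1/dbs/initRAC_1.ora on collabn1
# (and the same content in initRAC_2.ora on collabn2)
SPFILE='+DATA/spfileRAC1.ora'

(Alternatively, the SPFILE could be registered in the configuration itself, e.g. with "srvctl modify database -d RAC -p +DATA/spfileRAC1.ora", avoiding the stub pfiles altogether; I use the pfile approach here.)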
After creating the new parameter files (pointing to the SPFILE in the ASM Diskgroup), I try again.
[oracle@collabn1 ~]$ srvctl start database -d RAC
[oracle@collabn1 ~]$ srvctl status database -d RAC
Instance RAC_1 is running on node collabn1
Instance RAC_2 is running on node collabn2
[oracle@collabn1 ~]$ ps -ef |grep smon
oracle 3447 1 0 11:37 ? 00:00:00 asm_smon_+ASM1
root 3605 1 0 11:37 ? 00:00:11 /u01/app/12.1.0/grid/bin/osysmond.bin
oracle 4203 1 0 11:38 ? 00:00:00 mdb_smon_-MGMTDB
oracle 22882 1 0 12:08 ? 00:00:00 ora_smon_RAC_1
oracle 23422 12657 0 12:10 pts/0 00:00:00 grep smon
[oracle@collabn1 ~]$

[oracle@collabn2 ~]$ ps -ef |grep smon
oracle 3495 1 0 11:41 ? 00:00:00 asm_smon_+ASM2
root 3593 1 0 11:41 ? 00:00:09 /u01/app/12.1.0/grid/bin/osysmond.bin
oracle 15973 1 0 12:08 ? 00:00:00 ora_smon_RAC_2
oracle 16647 4582 0 12:10 pts/0 00:00:00 grep smon
[oracle@collabn2 ~]$

[oracle@collabn1 ~]$ srvctl config database -d RAC
Database unique name: RAC
Database name:
Oracle home: /u01/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile:
Password file:
Domain: racattack
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: MyPool
Disk Groups: DATA,FRA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group:
Database instances:
Configured nodes:
Database is policy managed
[oracle@collabn1 ~]$
Yes, I now have instances RAC_1 and RAC_2 (instead of RAC1 and RAC2) running. If I had more nodes (4 or more) and a Server Pool configuration to match, there would be no guarantee that RAC_1 starts on collabn1 (and RAC_2 on collabn2). These are "floating" instances that can start on any node in the Cluster, as the sketch below illustrates.
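To see the "floating" behaviour in action, one could resize the pool and watch Grid Infrastructure react. A sketch of commands I haven't run in this demo (srvctl modify srvpool is the only command not already shown above):

[oracle@collabn1 ~]$ # shrink MyPool to a single server; Clusterware should stop one of the two instances
[oracle@collabn1 ~]$ srvctl modify srvpool -serverpool MyPool -max 1
[oracle@collabn1 ~]$ srvctl status database -d RAC
[oracle@collabn1 ~]$ # restore the original size; an instance can then start on whichever server rejoins the pool
[oracle@collabn1 ~]$ srvctl modify srvpool -serverpool MyPool -max 2

The instance count follows the pool definition, not a per-node mapping, which is the whole point of PolicyManaged databases.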
(UPDATE: It seems that when I shut down a node, a PolicyManaged Instance gets a SHUTDOWN ABORT, unlike an AdminManaged Instance, which gets a SHUTDOWN NORMAL.)
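(One way to verify the shutdown mode, as a sketch assuming the standard ADR alert log location for this ORACLE_BASE, is to check the instance's alert log after stopping the node:

[oracle@collabn1 ~]$ grep -i 'shutting down instance' /u01/app/oracle/diag/rdbms/rac/RAC_1/trace/alert_RAC_1.log

The alert log records the mode used, for example "Shutting down instance (abort)" versus "Shutting down instance (immediate)".)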