27 July, 2014

GI Commands : 1 -- Monitoring Status of Resources

In 11gR2

Listing the Status of Resources

[root@node1 ~]# su - grid
-sh-3.2$ crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
ora.DATA1.dg
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
ora.DATA2.dg
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
ora.FRA.dg
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
ora.asm
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
ora.gsd
               OFFLINE OFFLINE      node1                                        
               OFFLINE OFFLINE      node2                                        
ora.net1.network
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
ora.ons
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
ora.registry.acfs
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node2                                        
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       node1                                        
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       node1                                        
ora.cvu
      1        ONLINE  ONLINE       node1                                        
ora.gns
      1        ONLINE  ONLINE       node1                                        
ora.gns.vip
      1        ONLINE  ONLINE       node1                                        
ora.node1.vip
      1        ONLINE  ONLINE       node1                                        
ora.node2.vip
      1        ONLINE  ONLINE       node2                                        
ora.oc4j
      1        ONLINE  ONLINE       node1                                        
ora.racdb.db
      1        ONLINE  ONLINE       node1                    Open                
      2        ONLINE  ONLINE       node2                    Open                
ora.racdb.new_svc.svc
      1        ONLINE  ONLINE       node1                                        
      2        ONLINE  ONLINE       node2                                        
ora.scan1.vip
      1        ONLINE  ONLINE       node2                                        
ora.scan2.vip
      1        ONLINE  ONLINE       node1                                        
ora.scan3.vip
      1        ONLINE  ONLINE       node1                                        
-sh-3.2$ 

So we see that :
a) The Cluster consists of two nodes, node1 and node2
b) There are 4 ASM DiskGroups : DATA, DATA1, DATA2 and FRA
c) GSD is offline as expected -- it is required only for 9i Databases
d) There is a database racdb and a service new_svc  (see my previous post)
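
As an aside, if the full resource listing is more than you need, the health of the clusterware stack itself can be checked as below. This is only a quick sketch (not part of the session above); run it as the grid owner or as root, and substitute your own resource names in the filter.

# verify the CRS, CSS and EVM daemons on the local node
crsctl check crs
# verify the stack on all nodes of the cluster
crsctl check cluster -all
# restrict the tabular listing to database resources only
crsctl status resource -t -w "TYPE = ora.database.type"
# show the full attribute listing of a single resource
crsctl status resource ora.racdb.db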


Listing the status of SCAN Listeners

-sh-3.2$ id
uid=500(grid) gid=1001(oinstall) groups=1001(oinstall),1011(asmdba)
-sh-3.2$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node node2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node node1
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node node1
-sh-3.2$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node node2
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node node1
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node node1
-sh-3.2$ 

So we see that :
a) There are 3 SCAN Listeners
b) Since this is a 2-node cluster, two of the three SCAN Listeners run on the same node, node1
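
The SCAN configuration itself (SCAN name, network and ports), as opposed to the running status shown above, can also be displayed. A quick sketch, again as the grid owner :

# show the SCAN name and the SCAN VIP addresses
srvctl config scan
# show the SCAN Listeners and the port they listen on
srvctl config scan_listener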


Listing the status of the OCR

-sh-3.2$ id
uid=500(grid) gid=1001(oinstall) groups=1001(oinstall),1011(asmdba)
-sh-3.2$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3668
         Available space (kbytes) :     258452
         ID                       :  605940771
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
         Device/File Name         : /fra/ocrfile
                                    Device/File integrity check succeeded
         Device/File Name         :       +FRA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user

-sh-3.2$ su root
Password: 
[root@node1 grid]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3668
         Available space (kbytes) :     258452
         ID                       :  605940771
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
         Device/File Name         : /fra/ocrfile
                                    Device/File integrity check succeeded
         Device/File Name         :       +FRA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@node1 grid]# 

So we see that :
a) The OCR is in 3 locations : +DATA, +FRA and the NFS filesystem /fra/ocrfile
b) A logical corruption check of the OCR can only be done by root, not by the grid user
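
For completeness, the OCR locations themselves are maintained with ocrconfig as root. This is a sketch for illustration only -- the +NEWDG disk group name is just a placeholder and I have not made these changes to my cluster -- so do not run it blindly :

# list the automatic OCR backups taken by the cluster
ocrconfig -showbackup
# take an on-demand backup of the OCR
ocrconfig -manualbackup
# add another OCR location (placeholder disk group, not part of my cluster)
ocrconfig -add +NEWDG
# drop an existing OCR location
ocrconfig -delete /fra/ocrfile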


Listing the status of the Vote Disk

-sh-3.2$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   0e94545ce44f4fb1bf6906dc6889aaff (/fra/votedisk.3) []
 2. ONLINE   0d13305520f84f3fbf6c2008a6f79829 (/data1/votedisk.1) []
Located 2 voting disk(s).
-sh-3.2$ 

So we see that :
a) There are 2 votedisk copies  (yes, two -- not the recommended three !)
b) Both are on filesystems, not in ASM (note the empty Disk group column)
How do I happen to have 2 votedisk copies ?  I actually had 3 but removed one. DON'T TRY THIS ON YOUR PRODUCTION CLUSTER.  I am adding the third one back now :
-sh-3.2$ crsctl add css votedisk /data2/votedisk.2
Now formatting voting disk: /data2/votedisk.2.
CRS-4603: Successful addition of voting disk /data2/votedisk.2.
-sh-3.2$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   0e94545ce44f4fb1bf6906dc6889aaff (/fra/votedisk.3) []
 2. ONLINE   0d13305520f84f3fbf6c2008a6f79829 (/data1/votedisk.1) []
 3. ONLINE   41c24037b51c4f97bf4cb7002649aee4 (/data2/votedisk.2) []
Located 3 voting disk(s).
-sh-3.2$ 

There, I am now back to 3 votedisk copies.
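
If the votedisks had to be moved rather than added back, 11gR2 can also drop a single copy or replace the whole set. This is a sketch only, not something I ran in the session above (moving the votedisks into ASM would change the layout shown here); the File Universal Id is taken from the query output above.

# drop a single votedisk copy, identified by its File Universal Id
crsctl delete css votedisk 41c24037b51c4f97bf4cb7002649aee4
# or move all the votedisk copies into an ASM Disk Group in one step
crsctl replace votedisk +DATA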
.
.
.

15 July, 2014

RAC Commands : 2 -- Updating Configuration for Services

In 11gR2

NOTE : This is in a Policy Managed configuration

Adding a database service

[root@node1 ~]# su - oracle
-sh-3.2$ srvctl add service -d RACDB -s NEW_SVC -g RACSP -c SINGLETON
-sh-3.2$ srvctl config service -d RACDB -s NEW_SVC
Service name: NEW_SVC
Service is enabled
Server pool: RACSP
Cardinality: SINGLETON
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition: 
Service is enabled on nodes: 
Service is disabled on nodes: 
-sh-3.2$ 
-sh-3.2$ srvctl start service -d RACDB -s NEW_SVC
-sh-3.2$ 

Since this is a Policy Managed database in the RACSP Server Pool, I added the service with the appropriate parameters. The SINGLETON cardinality means that it will run on only one instance.  (See the previous post for the service MY_RAC_SVC with the cardinality UNIFORM).
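
Since a SINGLETON service runs on only one of the instances, it is worth checking where it actually came up. A quick sketch, run as the oracle user :

# show which instance the service is currently running on
srvctl status service -d RACDB -s NEW_SVC
# show all services of the database and their state
srvctl status service -d RACDB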

Let's verify the alert.log entry.
-sh-3.2$ cd /u01/app/oracle/diag/rdbms/racdb/RACDB_1
-sh-3.2$ cd trace
-sh-3.2$ tail alert_RACDB_1.log
Begin automatic SQL Tuning Advisor run for special tuning task  "SYS_AUTO_SQL_TUNING_TASK"
Tue Jul 15 22:01:20 2014
End automatic SQL Tuning Advisor run for special tuning task  "SYS_AUTO_SQL_TUNING_TASK"
Tue Jul 15 22:06:45 2014
db_recovery_file_dest_size of 4000 MB is 22.25% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Tue Jul 15 22:11:10 2014
ALTER SYSTEM SET service_names='MY_RAC_SVC','NEW_SVC' SCOPE=MEMORY SID='RACDB_1';
-sh-3.2$ 

The SID='RACDB_1' clause of the ALTER SYSTEM limits the change to this instance, and SCOPE=MEMORY means it is not written to the spfile. (Note : MY_RAC_SVC had already been added to RACDB_2 earlier).
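
If you want to find all such entries rather than relying on tail, a simple grep against the alert log works. A sketch, assuming you are still in the trace directory shown above :

# list every service_names change recorded in this instance's alert log
grep -n "SET service_names" alert_RACDB_1.log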


Removing a database service

Now, let's remove a database service

-sh-3.2$ srvctl config service -d RACDB -s MY_RAC_SVC
Service name: MY_RAC_SVC
Service is enabled
Server pool: RACSP
Cardinality: UNIFORM
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition: 
Service is enabled on nodes: 
Service is disabled on nodes: 
-sh-3.2$ srvctl remove service -d RACDB -s MY_RAC_SVC
PRCR-1025 : Resource ora.racdb.my_rac_svc.svc is still running

I need to first stop the service.

-sh-3.2$ srvctl stop service -d RACDB -s MY_RAC_SVC
-sh-3.2$ 
-sh-3.2$ srvctl config service -d RACDB -s MY_RAC_SVC
Service name: MY_RAC_SVC
Service is enabled
Server pool: RACSP
Cardinality: UNIFORM
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition: 
Service is enabled on nodes: 
Service is disabled on nodes: 
-sh-3.2$ tail alert_RACDB_1.log
End automatic SQL Tuning Advisor run for special tuning task  "SYS_AUTO_SQL_TUNING_TASK"
Tue Jul 15 22:06:45 2014
db_recovery_file_dest_size of 4000 MB is 22.25% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Tue Jul 15 22:11:10 2014
ALTER SYSTEM SET service_names='MY_RAC_SVC','NEW_SVC' SCOPE=MEMORY SID='RACDB_1';
Tue Jul 15 22:17:48 2014
ALTER SYSTEM SET service_names='NEW_SVC' SCOPE=MEMORY SID='RACDB_1';
-sh-3.2$ 

I can now remove the service.

-sh-3.2$ srvctl remove service -d RACDB -s MY_RAC_SVC
-sh-3.2$ srvctl config service -d RACDB
Service name: NEW_SVC
Service is enabled
Server pool: RACSP
Cardinality: SINGLETON
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition: 
Service is enabled on nodes: 
Service is disabled on nodes: 
-sh-3.2$ 

Now, only the new service is listed.
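
Note that removing a service is not the only option. If I only wanted to take a service out of use temporarily, I could disable it and enable it again later. A sketch only -- MY_RAC_SVC has already been removed above, so I show the commands against the remaining service NEW_SVC and have not actually run them here :

# prevent the service from being started (manually or by CRS) until it is enabled again
srvctl disable service -d RACDB -s NEW_SVC
# allow it to be started again
srvctl enable service -d RACDB -s NEW_SVC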

Modifying the cardinality of a service

-sh-3.2$ srvctl modify service -d RACDB -s NEW_SVC -c UNIFORM
-sh-3.2$ srvctl config service -d RACDB -s NEW_SVC
Service name: NEW_SVC
Service is enabled
Server pool: RACSP
Cardinality: UNIFORM
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition: 
Service is enabled on nodes: 
Service is disabled on nodes: 
-sh-3.2$ 
-sh-3.2$ srvctl start service -d RACDB -s NEW_SVC

The service has been modified from SINGLETON (running on a single instance) to UNIFORM (running on all instances in the server pool).
Verifying this on node 2.

-sh-3.2$ pwd
/u01/app/oracle/diag/rdbms/racdb/RACDB_2/trace
-sh-3.2$ tail -2 alert_RACDB_2.log
Tue Jul 15 22:27:36 2014
ALTER SYSTEM SET service_names='NEW_SVC' SCOPE=MEMORY SID='RACDB_2';
-sh-3.2$ 

The service is now running on instance RACDB_2 as well.
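
The same srvctl modify service command can also change the TAF and load balancing attributes that show as NONE in the configuration listings above. This is a sketch only, with values chosen purely for illustration (I have not applied them to this service); the single-letter options are the 11gR2 srvctl flags for failover type, failover method, retries, delay and the two load balancing goals :

# enable basic TAF with SELECT failover, 180 retries at 5-second intervals,
# SHORT connection load balancing and SERVICE_TIME runtime load balancing
srvctl modify service -d RACDB -s NEW_SVC -e SELECT -m BASIC -z 180 -w 5 -j SHORT -B SERVICE_TIME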

.
.
.

13 July, 2014

RAC Commands : 1 -- Viewing Configuration

In 11gR2

Viewing the configuration of a RAC database

[root@node1 ~]# su - oracle
-sh-3.2$ srvctl config database -d RACDB
Database unique name: RACDB
Database name: RACDB
Oracle home: /u01/app/oracle/rdbms/11.2.0
Oracle user: oracle
Spfile: +DATA1/RACDB/spfileRACDB.ora
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: RACSP
Database instances: 
Disk Groups: DATA1,FRA,DATA2
Mount point paths: 
Services: MY_RAC_SVC
Type: RAC
Database is policy managed
-sh-3.2$ 

So, we see that :
a) The database name is RACDB
b) It is a Policy Managed database (not Administrator Managed)
c) It is dependent on 3 ASM Disk Groups DATA1, DATA2, FRA
d) There is one service called MY_RAC_SVC configured
e) The database is in the  RACSP server pool
f) The database is configured to be Auto-started when Grid Infrastructure starts
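
The running state, as opposed to the configuration shown above, is reported by a separate set of commands. A quick sketch, run as the oracle user :

# show which instances of the database are running and on which nodes
srvctl status database -d RACDB
# show the instance running on a particular node (useful for Policy Managed databases,
# where instance numbers are not tied to specific nodes)
srvctl status instance -d RACDB -n node1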


Viewing the configuration of a RAC service

-sh-3.2$ srvctl config service -d RACDB -s MY_RAC_SVC
Service name: MY_RAC_SVC
Service is enabled
Server pool: RACSP
Cardinality: UNIFORM
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition: 
Service is enabled on nodes: 
Service is disabled on nodes: 
-sh-3.2$ 

So, we see that :
a) The service name is MY_RAC_SVC
b) The UNIFORM cardinality means that it is to run on all active nodes in the server pool
c) The server-side connection load balancing goal is LONG (for long running sessions)
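
Clients would normally connect to this service through the SCAN. A sketch only -- the SCAN name, port and account below are placeholders, not taken from this cluster :

# EZConnect string : //<scan-name>:<port>/<service-name>
sqlplus system@//my-cluster-scan.example.com:1521/MY_RAC_SVC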


Viewing the configuration of Server Pools

-sh-3.2$ srvctl config srvpool
Server pool name: Free
Importance: 0, Min: 0, Max: -1
Candidate server names: 
Server pool name: Generic
Importance: 0, Min: 0, Max: -1
Candidate server names: 
Server pool name: RACSP
Importance: 0, Min: 0, Max: 2
Candidate server names: 
-sh-3.2$ 

So we see that :
a) RACSP is the only user-created (named) server pool -- Free and Generic are built-in pools
b) This server pool has a max of 2 nodes
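
Server pools can themselves be queried and resized with srvctl. A quick sketch; the new limits below are for illustration only and I have not changed RACSP :

# show which servers are currently assigned to each pool
srvctl status srvpool -a
# change the importance and the min/max size of the pool (illustrative values)
srvctl modify srvpool -g RACSP -i 10 -l 1 -u 2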

10 July, 2014

Installing OEL 6 and Database 12c

Here is a collection of posts on installing (a) Virtual Box (b) Oracle Enterprise Linux 6 (c) 12c Grid Infrastructure (Standalone, non-Clustered) and ASM (d) 12c Database with CDB and PDB.
.
.
.

08 July, 2014

Passed the 11g RAC and Grid Expert Exam

I passed the 11g RAC and Grid Expert Exam yesterday.
.
For those who are interested :

You must absolutely read the documentation on ASM, Grid Infrastructure and RAC. 

I also recommend 3 books 
1) Pro Oracle Database 11g RAC on Linux -- by Steve Shaw and Martin Bach [Apress Publishing]
2) Oracle 11g R1/R2 Real Application Clusters Essentials -- by Ben Prusinski and Syed Jaffar Hussain [Packt Publishing] 
OR 
2) Oracle 11g R1/R2 Real Application Clusters Handbook -- by Ben Prusinski, Guenadi Jilevski and Syed Jaffar Hussain [Packt Publishing] 
3) Oracle Database 11g Release 2 High Availability -- by Scott Jesse, Bill Burton and Bryan Vongray [Oracle Press] 

The 11gR2 Grid and RAC Accelerated training at Oracle University is also recommended but expensive.
.
.
.