08 May, 2022

PDB Recovery Isolation in 21c

 In my previous blog post, where I demonstrated PDB Flashback, I had alluded to the "PDB Recovery Isolation" feature of 21c.

Here is a demonstration.

On the Primary, I issue only two commands to create a new PDB from an *existing, active* PDB (a PDB that is in Read-Write mode and is not PDB$SEED).  The source PDBHKC is active, and you will notice the Archive Log Sequence# incrementing significantly as transactions remain in-flight and continue while PDBNEW is being CREATEd and OPENed.


SQL> create pluggable database pdbnew from pdbhkc;

Pluggable database created.

SQL> alter pluggable database pdbnew open;

Pluggable database altered.

SQL> quit


These are the relevant Primary Database Alert Log entries (not every entry) :

2022-05-08T22:32:56.582810+08:00
create pluggable database pdbnew from pdbhkc

2022-05-08T22:33:02.783531+08:00
PDBNEW(6):Endian type of dictionary set to little

2022-05-08T22:33:03.103173+08:00
****************************************************************
Pluggable Database PDBNEW with pdb id - 6 is created as UNUSABLE.
If any errors are encountered before the pdb is marked as NEW,
then the pdb must be dropped
local undo-1, localundoscn-0x000000000000011d
****************************************************************
PDBNEW(6):Media Recovery Start
2022-05-08T22:33:03.293420+08:00
PDBNEW(6):Serial Media Recovery started
PDBNEW(6):max_pdb is 6
2022-05-08T22:33:03.393288+08:00
PDBNEW(6):Recovery of Online Redo Log: Thread 1 Group 1 Seq 73 Reading mem 0
PDBNEW(6):  Mem# 0: /u02/oradata/DB213H1/redo01.log
2022-05-08T22:33:03.541053+08:00
Thread 1 advanced to log sequence 75 (LGWR switch),  current SCN: 4931897
  Current log# 3 seq# 75 mem# 0: /u02/oradata/DB213H1/redo03.log
2022-05-08T22:33:03.541508+08:00
PDBNEW(6):Recovery of Online Redo Log: Thread 1 Group 2 Seq 74 Reading mem 0
PDBNEW(6):  Mem# 0: /u02/oradata/DB213H1/redo02.log
2022-05-08T22:33:03.598468+08:00
PDBNEW(6):Incomplete Recovery applied until change 4931866 time 05/08/2022 22:33:02
PDBNEW(6):NET  (PID:16256): Media Recovery Complete [dbsdrv.c:15048]
2022-05-08T22:33:03.615889+08:00
NET  (PID:16388): Archived Log entry 119 added for B-1102494684.T-1.S-74 ID 0x7fbfe6abae9c LAD:1 [krse.c:4933]
2022-05-08T22:33:03.786205+08:00
PDBNEW(6):Pluggable database PDBNEW pseudo opening
PDBNEW(6):SUPLOG: Initialize PDB SUPLOG SGA, old value 0x0, new value 0x18
PDBNEW(6):Autotune of undo retention is turned on.
PDBNEW(6):Undo initialization recovery: Parallel FPTR failed: start:15704441 end:15704446 diff:5 ms (0.0 seconds)
PDBNEW(6):Undo initialization recovery: err:0 start: 15704439 end: 15704511 diff: 72 ms (0.1 seconds)
PDBNEW(6):[16256] Successfully onlined Undo Tablespace 2.
PDBNEW(6):Undo initialization online undo segments: err:0 start: 15704512 end: 15704530 diff: 18 ms (0.0 seconds)
PDBNEW(6):Undo initialization finished serial:0 start:15704439 end:15704538 diff:99 ms (0.1 seconds)
PDBNEW(6):Database Characterset for PDBNEW is AL32UTF8
PDBNEW(6):Pluggable database PDBNEW pseudo closing
PDBNEW(6):JIT: pid 16256 requesting stop
PDBNEW(6):Closing sequence subsystem (15704602605).
PDBNEW(6):Buffer Cache flush started: 6
PDBNEW(6):Buffer Cache flush finished: 6
Completed: create pluggable database pdbnew from pdbhkc


2022-05-08T22:33:18.209272+08:00
alter pluggable database pdbnew open
2022-05-08T22:33:18.210926+08:00
PDBNEW(6):Pluggable database PDBNEW opening in read write
PDBNEW(6):SUPLOG: Initialize PDB SUPLOG SGA, old value 0x0, new value 0x18
PDBNEW(6):Autotune of undo retention is turned on.
PDBNEW(6):Endian type of dictionary set to little
PDBNEW(6):Undo initialization recovery: Parallel FPTR complete: start:15718778 end:15718780 diff:2 ms (0.0 seconds)
PDBNEW(6):Undo initialization recovery: err:0 start: 15718778 end: 15718780 diff: 2 ms (0.0 seconds)
PDBNEW(6):[16256] Successfully onlined Undo Tablespace 2.
PDBNEW(6):Undo initialization online undo segments: err:0 start: 15718780 end: 15718873 diff: 93 ms (0.1 seconds)
PDBNEW(6):Undo initialization finished serial:0 start:15718778 end:15718875 diff:97 ms (0.1 seconds)
PDBNEW(6):Database Characterset for PDBNEW is AL32UTF8
PDBNEW(6):Deleting old file#13 from file$
PDBNEW(6):Deleting old file#14 from file$
PDBNEW(6):Deleting old file#15 from file$
PDBNEW(6):Adding new file#22 to file$(old file#13).             fopr-1, newblks-35840, oldblks-19200
PDBNEW(6):Adding new file#23 to file$(old file#14).             fopr-1, newblks-47360, oldblks-15360
PDBNEW(6):Adding new file#24 to file$(old file#15).             fopr-1, newblks-12800, oldblks-12800
PDBNEW(6):Successfully created internal service PDBNEW at open
****************************************************************
Post plug operations are now complete.
Pluggable database PDBNEW with pdb id - 6 is now marked as NEW.
****************************************************************
PDBNEW(6):SUPLOG: Set PDB SUPLOG SGA at PDB OPEN, old 0x18, new 0x0 (no suplog)


2022-05-08T22:33:21.760166+08:00
Thread 1 advanced to log sequence 81 (LGWR switch),  current SCN: 4933127
  Current log# 3 seq# 81 mem# 0: /u02/oradata/DB213H1/redo03.log


2022-05-08T22:33:23.162867+08:00
PDBNEW(6):Opening pdb with no Resource Manager plan active
Pluggable database PDBNEW opened read write
Completed: alter pluggable database pdbnew open


(The "Deleting old file#" and "Adding new file#" entries are simply about updating the controlfile)
Note that Log Sequence# has moved from 72 to 81 during the course of the CREATE and OPEN of the new PDB.
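If you want to track the Log Sequence# yourself while running such a test, a quick check against V$LOG on the Primary is enough.  This is only a sketch -- ARCHIVE LOG LIST as SYSDBA gives much the same information :

-- run on the Primary as SYSDBA : shows each Online Redo Log group and which one is CURRENT
select group#, thread#, sequence#, status
from v$log
order by thread#, sequence#;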


These are the Standby Alert Log entries :


2022-05-08T22:32:57.237583+08:00
ARC0 (PID:1354): Archived Log entry 55 added for B-1102494684.T-1.S-73 ID 0x7f7ae6abae9c LAD:1 [krse.c:4933]
2022-05-08T22:32:57.237902+08:00
 rfs (PID:12549): Opened LNO:5 for DBID:3870017436 B-1102494684.T-1.S-74 [krsr.c:18143]
2022-05-08T22:32:57.386776+08:00
PR00 (PID:1438): Media Recovery Waiting for T-1.S-74 (in transit) [krsm.c:6185]
2022-05-08T22:32:57.388789+08:00
Recovery of Online Redo Log: Thread 1 Group 5 Seq 74 Reading mem 0
  Mem# 0: /u02/oradata/DB213H1_STDBY/onlinelog/o1_mf_5_k5yt8yjw_.log
2022-05-08T22:33:02.900342+08:00
Recovery created pluggable database PDBNEW
PDBNEW(6):Tablespace-SYSTEM during PDB create offlined since source is in r/w mode or this is a refresh clone
PDBNEW(6):File #22 added to control file as '/u02/oradata/DB213H1_STDBY/DE81F2476C7F3F80E0538338A8C0063E/datafile/o1_mf_system_k7ho4s6r_.dbf'.
PDBNEW(6):Tablespace-SYSAUX during PDB create offlined since source is in r/w mode or this is a refresh clone
PDBNEW(6):File #23 added to control file as '/u02/oradata/DB213H1_STDBY/DE81F2476C7F3F80E0538338A8C0063E/datafile/o1_mf_sysaux_k7ho4s6s_.dbf'.
PDBNEW(6):Tablespace-UNDOTBS1 during PDB create offlined since source is in r/w mode or this is a refresh clone
PDBNEW(6):File #24 added to control file as '/u02/oradata/DB213H1_STDBY/DE81F2476C7F3F80E0538338A8C0063E/datafile/o1_mf_undotbs1_k7ho4s6t_.dbf'.
PDBNEW(6):Tablespace-TEMP during PDB create offlined since source is in r/w mode or this is a refresh clone
2022-05-08T22:33:03.709388+08:00
 rfs (PID:12549): Opened LNO:4 for DBID:3870017436 B-1102494684.T-1.S-75 [krsr.c:18143]
2022-05-08T22:33:03.724813+08:00
ARC2 (PID:1362): Archived Log entry 56 added for B-1102494684.T-1.S-74 ID 0x7fbbe6abae9c LAD:1 [krse.c:4933]
2022-05-08T22:33:04.051241+08:00
PR00 (PID:1438): Media Recovery Waiting for T-1.S-75 (in transit) [krsm.c:6185]
2022-05-08T22:33:04.052945+08:00
Recovery of Online Redo Log: Thread 1 Group 4 Seq 75 Reading mem 0
  Mem# 0: /u02/oradata/DB213H1_STDBY/onlinelog/o1_mf_4_k5yt8y5f_.log
2022-05-08T22:33:07.786948+08:00
 rfs (PID:12549): Opened LNO:5 for DBID:3870017436 B-1102494684.T-1.S-76 [krsr.c:18143]
2022-05-08T22:33:07.811804+08:00
ARC0 (PID:1354): Archived Log entry 57 added for B-1102494684.T-1.S-75 ID 0x7f7ae6abae9c LAD:1 [krse.c:4933]
2022-05-08T22:33:07.926136+08:00
PR00 (PID:1438): Media Recovery Waiting for T-1.S-76 (in transit) [krsm.c:6185]
2022-05-08T22:33:07.927698+08:00
Recovery of Online Redo Log: Thread 1 Group 5 Seq 76 Reading mem 0
  Mem# 0: /u02/oradata/DB213H1_STDBY/onlinelog/o1_mf_5_k5yt8yjw_.log
2022-05-08T22:33:08.525489+08:00
PDBNEW(6):Full restore complete of datafile 24 /u02/oradata/DB213H1_STDBY/DE81F2476C7F3F80E0538338A8C0063E/datafile/o1_mf_undotbs1_k7ho53ny_.dbf.  Elapsed time: 0:00:01
PDBNEW(6):  checkpoint is 4932003
PDBNEW(6):  last deallocation scn is 3928601
PDBNEW(6):Datafile #24 has been copied to the standby.
2022-05-08T22:33:11.424337+08:00
 rfs (PID:12549): Opened LNO:4 for DBID:3870017436 B-1102494684.T-1.S-77 [krsr.c:18143]
2022-05-08T22:33:11.492712+08:00
PR00 (PID:1438): Media Recovery Waiting for T-1.S-77 (in transit) [krsm.c:6185]
2022-05-08T22:33:11.494117+08:00
ARC2 (PID:1362): Archived Log entry 58 added for B-1102494684.T-1.S-76 ID 0x7fbbe6abae9c LAD:1 [krse.c:4933]
2022-05-08T22:33:11.552303+08:00
Recovery of Online Redo Log: Thread 1 Group 4 Seq 77 Reading mem 0
  Mem# 0: /u02/oradata/DB213H1_STDBY/onlinelog/o1_mf_4_k5yt8y5f_.log
2022-05-08T22:33:12.363978+08:00
PDBNEW(6):Full restore complete of datafile 23 /u02/oradata/DB213H1_STDBY/DE81F2476C7F3F80E0538338A8C0063E/datafile/o1_mf_sysaux_k7ho54o6_.dbf.  Elapsed time: 0:00:04
PDBNEW(6):  checkpoint is 4932003
PDBNEW(6):  last deallocation scn is 3862144
PDBNEW(6):Datafile #23 has been copied to the standby.
2022-05-08T22:33:14.572554+08:00
 rfs (PID:12549): Opened LNO:5 for DBID:3870017436 B-1102494684.T-1.S-78 [krsr.c:18143]
2022-05-08T22:33:14.572781+08:00
Trying to expand controlfile section 11 for Oracle Managed Files
Expanded controlfile section 11 from 56 to 112 records
Requested to grow by 56 records; added 2 blocks of records
2022-05-08T22:33:14.585789+08:00
ARC0 (PID:1354): Archived Log entry 59 added for B-1102494684.T-1.S-77 ID 0x7f7ae6abae9c LAD:1 [krse.c:4933]
2022-05-08T22:33:14.650210+08:00
PR00 (PID:1438): Media Recovery Waiting for T-1.S-78 (in transit) [krsm.c:6185]
2022-05-08T22:33:14.698805+08:00
Recovery of Online Redo Log: Thread 1 Group 5 Seq 78 Reading mem 0
  Mem# 0: /u02/oradata/DB213H1_STDBY/onlinelog/o1_mf_5_k5yt8yjw_.log
2022-05-08T22:33:15.936076+08:00
PDBNEW(6):Full restore complete of datafile 22 /u02/oradata/DB213H1_STDBY/DE81F2476C7F3F80E0538338A8C0063E/datafile/o1_mf_system_k7ho58jz_.dbf.  Elapsed time: 0:00:03
PDBNEW(6):  checkpoint is 4932003
PDBNEW(6):  last deallocation scn is 3860354
PDBNEW(6):Datafile #22 has been copied to the standby.
2022-05-08T22:33:15.948328+08:00
PDBNEW(6):PDB Side Media Recovery started for pdbid(6)
PDBNEW(6):.... (PID:12667): Managed Recovery starting Real Time Apply [krsm.c:15865]
PDBNEW(6):max_pdb is 6
2022-05-08T22:33:16.031099+08:00
PDBNEW(6):Media Recovery Log /u01/app/oracle/DB213H1_STDBY/archivelog/2022_05_08/o1_mf_1_75_k7ho53pg_.arc
2022-05-08T22:33:16.045816+08:00
PDBNEW(6):Media Recovery Log /u01/app/oracle/DB213H1_STDBY/archivelog/2022_05_08/o1_mf_1_76_k7ho57fz_.arc
2022-05-08T22:33:16.076227+08:00
PDBNEW(6):Media Recovery Log /u01/app/oracle/DB213H1_STDBY/archivelog/2022_05_08/o1_mf_1_77_k7ho5bgp_.arc
PDBNEW(6):.... (PID:12667): Media Recovery Waiting for T-1.S-78 (in transit) [krsm.c:6185]
2022-05-08T22:33:16.106119+08:00
PDBNEW(6):Media Recovery of Online Log [Thread=1, Seq=78]
2022-05-08T22:33:16.106586+08:00
PDBNEW(6):Recovery of Online Redo Log: Thread 1 Group 5 Seq 78 Reading mem 0
PDBNEW(6):  Mem# 0: /u02/oradata/DB213H1_STDBY/onlinelog/o1_mf_5_k5yt8yjw_.log
PDBNEW(6):PSR[6]:influxScn=4932081,nxtCkptScn=4932081,minScn=4932004,krdsini=0,krsmrs=9082,nof=0,kcvrlc=1102494684,kcvrls=2601843
PDBNEW(6):The merge request has been submitted by side recovery for pdbid 6
2022-05-08T22:33:16.114934+08:00
PR00 (PID:1438): Background Media Recovery cancelled with status 16037 [krd.c:27039]
PSR: Background Media Recovery will be reactivated in an attempt to take over PDB side recovery sessions
2022-05-08T22:33:16.119431+08:00
Errors in file /u01/app/oracle/diag/rdbms/db213h1_stdby/DB213H1/trace/DB213H1_pr00_1438.trc:
ORA-16037: user requested cancel of managed recovery operation
PR00 (PID:1438): Managed Recovery not using Real Time Apply [krsm.c:15876]
Recovery interrupted!
Recovered data files to a consistent state at change 4932082
Stopping change tracking
2022-05-08T22:33:16.347544+08:00
Errors in file /u01/app/oracle/diag/rdbms/db213h1_stdby/DB213H1/trace/DB213H1_pr00_1438.trc:
ORA-16037: user requested cancel of managed recovery operation
2022-05-08T22:33:17.205906+08:00
PDBNEW(6):.... (PID:12667): Side Recovery Complete [krds.c:1584]
2022-05-08T22:33:17.250805+08:00
ARC3 (PID:1364): Archived Log entry 60 added for B-1102494684.T-1.S-78 ID 0x7fd0e6abae9c LAD:1 [krse.c:4933]
2022-05-08T22:33:17.251185+08:00
 rfs (PID:12549): Opened LNO:4 for DBID:3870017436 B-1102494684.T-1.S-79 [krsr.c:18143]
2022-05-08T22:33:20.411643+08:00
ARC1 (PID:1360): Archived Log entry 61 added for B-1102494684.T-1.S-79 ID 0x7fd1e6abae9c LAD:1 [krse.c:4933]
2022-05-08T22:33:20.411942+08:00
 rfs (PID:12549): Opened LNO:5 for DBID:3870017436 B-1102494684.T-1.S-80 [krsr.c:18143]
2022-05-08T22:33:21.944250+08:00
 rfs (PID:12549): Opened LNO:4 for DBID:3870017436 B-1102494684.T-1.S-81 [krsr.c:18143]
2022-05-08T22:33:21.944465+08:00
ARC2 (PID:1362): Archived Log entry 62 added for B-1102494684.T-1.S-80 ID 0x7fbbe6abae9c LAD:1 [krse.c:4933]


I have extracted below the important entries on the Standby that show the behaviour of PDB Recovery Isolation --- Oracle at the Standby handles Recovery of the new PDB separately.



2022-05-08T22:33:02.900342+08:00
Recovery created pluggable database PDBNEW
PDBNEW(6):Tablespace-SYSTEM during PDB create offlined since source is in r/w mode or this is a refresh clone
PDBNEW(6):File #22 added to control file as '/u02/oradata/DB213H1_STDBY/DE81F2476C7F3F80E0538338A8C0063E/datafile/o1_mf_system_k7ho4s6r_.dbf'.
PDBNEW(6):Tablespace-SYSAUX during PDB create offlined since source is in r/w mode or this is a refresh clone
PDBNEW(6):File #23 added to control file as '/u02/oradata/DB213H1_STDBY/DE81F2476C7F3F80E0538338A8C0063E/datafile/o1_mf_sysaux_k7ho4s6s_.dbf'.
PDBNEW(6):Tablespace-UNDOTBS1 during PDB create offlined since source is in r/w mode or this is a refresh clone
PDBNEW(6):File #24 added to control file as '/u02/oradata/DB213H1_STDBY/DE81F2476C7F3F80E0538338A8C0063E/datafile/o1_mf_undotbs1_k7ho4s6t_.dbf'.
PDBNEW(6):Tablespace-TEMP during PDB create offlined since source is in r/w mode or this is a refresh clone

PDBNEW(6):PDB Side Media Recovery started for pdbid(6)
PDBNEW(6):.... (PID:12667): Managed Recovery starting Real Time Apply [krsm.c:15865]
PDBNEW(6):max_pdb is 6

PDBNEW(6):Recovery of Online Redo Log: Thread 1 Group 5 Seq 78 Reading mem 0
PDBNEW(6):  Mem# 0: /u02/oradata/DB213H1_STDBY/onlinelog/o1_mf_5_k5yt8yjw_.log
PDBNEW(6):PSR[6]:influxScn=4932081,nxtCkptScn=4932081,minScn=4932004,krdsini=0,krsmrs=9082,nof=0,kcvrlc=1102494684,kcvrls=2601843
PDBNEW(6):The merge request has been submitted by side recovery for pdbid 6
2022-05-08T22:33:16.114934+08:00
PR00 (PID:1438): Background Media Recovery cancelled with status 16037 [krd.c:27039]
PSR: Background Media Recovery will be reactivated in an attempt to take over PDB side recovery sessions
2022-05-08T22:33:16.119431+08:00
Errors in file /u01/app/oracle/diag/rdbms/db213h1_stdby/DB213H1/trace/DB213H1_pr00_1438.trc:
ORA-16037: user requested cancel of managed recovery operation
PR00 (PID:1438): Managed Recovery not using Real Time Apply [krsm.c:15876]
Recovery interrupted!
Recovered data files to a consistent state at change 4932082
Stopping change tracking
2022-05-08T22:33:16.347544+08:00
Errors in file /u01/app/oracle/diag/rdbms/db213h1_stdby/DB213H1/trace/DB213H1_pr00_1438.trc:
ORA-16037: user requested cancel of managed recovery operation
2022-05-08T22:33:17.205906+08:00
PDBNEW(6):.... (PID:12667): Side Recovery Complete [krds.c:1584]

2022-05-08T22:33:17.250805+08:00
ARC3 (PID:1364): Archived Log entry 60 added for B-1102494684.T-1.S-78 ID 0x7fd0e6abae9c LAD:1 [krse.c:4933]
2022-05-08T22:33:17.251185+08:00
 rfs (PID:12549): Opened LNO:4 for DBID:3870017436 B-1102494684.T-1.S-79 [krsr.c:18143]
2022-05-08T22:33:20.411643+08:00
ARC1 (PID:1360): Archived Log entry 61 added for B-1102494684.T-1.S-79 ID 0x7fd1e6abae9c LAD:1 [krse.c:4933]
2022-05-08T22:33:20.411942+08:00
 rfs (PID:12549): Opened LNO:5 for DBID:3870017436 B-1102494684.T-1.S-80 [krsr.c:18143]
2022-05-08T22:33:21.944250+08:00
 rfs (PID:12549): Opened LNO:4 for DBID:3870017436 B-1102494684.T-1.S-81 [krsr.c:18143]
2022-05-08T22:33:21.944465+08:00
ARC2 (PID:1362): Archived Log entry 62 added for B-1102494684.T-1.S-80 ID 0x7fbbe6abae9c LAD:1 [krse.c:4933]
2022-05-08T22:33:37.537823+08:00
all data files of pdbid 6 are brought online.
 Started logmerger process
2022-05-08T22:33:37.592170+08:00

IM on ADG: Start of Empty Journal

IM on ADG: End of Empty Journal
PR00 (PID:12697): Managed Recovery starting Real Time Apply [krsm.c:15865]
max_pdb is 6
2022-05-08T22:33:37.736778+08:00
Parallel Media Recovery started with 2 slaves
2022-05-08T22:33:37.858641+08:00
Stopping change tracking
2022-05-08T22:33:37.914220+08:00
PR00 (PID:12697): Media Recovery Log /u01/app/oracle/DB213H1_STDBY/archivelog/2022_05_08/o1_mf_1_78_k7ho5f5y_.arc [krd.c:9408]
2022-05-08T22:33:38.024387+08:00
PR00 (PID:12697): Media Recovery Log /u01/app/oracle/DB213H1_STDBY/archivelog/2022_05_08/o1_mf_1_79_k7ho5jc9_.arc [krd.c:9408]
2022-05-08T22:33:38.548017+08:00
PR00 (PID:12697): Media Recovery Log /u01/app/oracle/DB213H1_STDBY/archivelog/2022_05_08/o1_mf_1_80_k7ho5kvg_.arc [krd.c:9408]
2022-05-08T22:33:38.764339+08:00
PR00 (PID:12697): Media Recovery Waiting for T-1.S-81 (in transit) [krsm.c:6185]
2022-05-08T22:33:38.764667+08:00
Recovery of Online Redo Log: Thread 1 Group 4 Seq 81 Reading mem 0




First, it identifies that the Source PDB is in Read-Write mode.
Then it begins a "Side Media Recovery" for the new PDB PDBNEW(6).
Later, it temporarily halts recovery of the full Standby CDB to merge the "Side Recovery".
Finally, it resumes "normal" Recovery of the Standby from ArchiveLog Sequence 78 onwards.
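If you want to watch these recovery sessions while the CREATE and OPEN are in progress, you can poll the recovery-related processes on the Standby.  A minimal sketch (V$MANAGED_STANDBY is deprecated in favour of V$DATAGUARD_PROCESS in recent releases, but it is the simpler view; I have not verified how the PDB "side recovery" session itself is reported here) :

-- run on the Standby : the MRP and RFS processes show up here while redo is being shipped and applied
select process, status, thread#, sequence#, block#
from v$managed_standby
order by process;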


03 May, 2022

PDB Flashback in 21c

 Oracle 21c allows you to Flashback a Pluggable Database to a past point-in-time / SCN.


Here is a quick demo  (also with alert log messages from the Primary and Standby) :


On the Primary :


SQL> create pluggable database pdbhkc admin user hemant identified by hemant;

Pluggable database created.

SQL> alter pluggable database pdbhkc open;

Pluggable database altered.

SQL>
SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    3688309

SQL>
SQL> alter session set container=PDBHKC;

Session altered.

SQL> grant dba to hemant;  -- grant is done *after* SCN 3688309

Grant succeeded.

SQL> connect hemant/hemant@pdbhkc
Connected.
SQL>
SQL> create table obj_list as select * from user_objects;

Table created.

SQL> select count(*) from obj_list;  

  COUNT(*)
----------
         0

SQL>
SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    3688878

SQL>
SQL> insert into obj_list select * from user_objects;

1 row created.

SQL> commit;

Commit complete.

SQL> insert into obj_list select * from obj_list;

1 row created.

SQL> commit;

Commit complete.

SQL> insert into obj_list select * from obj_list;

2 rows created.

SQL> commit;

Commit complete.

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    3689045

SQL>
SQL> quit
Disconnected from Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0


Now I will attempt a FLASHBACK to SCN 3688309

[oracle@primary admin]$ rman target /

Recovery Manager: Release 21.0.0.0.0 - Production on Tue May 3 14:18:15 2022
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DB213H1 (DBID=3870017436)

RMAN> alter pluggable database pdbhkc close;

Statement processed

RMAN>
RMAN> flashback pluggable database pdbhkc to scn 3688309;

Starting flashback at 03-MAY-22
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=152 device type=DISK


starting media recovery
media recovery complete, elapsed time: 00:00:03

Finished flashback at 03-MAY-22

RMAN>
RMAN> alter pluggable database pdbhkc open resetlogs;

Statement processed

RMAN>
RMAN> quit


Recovery Manager complete.
[oracle@primary admin]$ sqlplus hemant/hemant@pdbhkc

SQL*Plus: Release 21.0.0.0.0 - Production on Tue May 3 14:20:01 2022
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.


Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0

SQL> select count(*) from user_objects;

  COUNT(*)
----------
         0

SQL>
SQL> select current_scn from v$database;   -- this fails because DBA grant is not yet present
select current_scn from v$database
                        *
ERROR at line 1:
ORA-00942: table or view does not exist


SQL> connect / as sysdba
Connected.
SQL> alter session set container=PDBHKC;

Session altered.

SQL> grant dba to hemant;

Grant succeeded.

SQL> connect hemant/hemant@pdbhkc;
Connected.
SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    3691628

SQL>
SQL> connect / as sysdba
Connected.
SQL> alter system archive log current;

System altered.

SQL>
SQL> col name format a8
SQL> get pdb_incarnations.sql
  1  select pi.con_id, p.name,  pi.pdb_incarnation#,
  2  pi.incarnation_scn, pi.prior_pdb_incarnation#, pi.status
  3  from v$pdb_incarnation pi, v$pdbs p
  4  where pi.con_id=p.con_id
  5* order by 1,2,3,4
SQL> /

    CON_ID NAME     PDB_INCARNATION# INCARNATION_SCN PRIOR_PDB_INCARNATION#                   STATUS
---------- -------- ---------------- --------------- ---------------------------------------- -------
         2 PDB$SEED                0               1                                          PARENT
         2 PDB$SEED                0         2601843 0                                        CURRENT
         3 PDB1                    0               1                                          PARENT
         3 PDB1                    0         2601843 0                                        CURRENT
         4 PDBHKC                  0         2601843                                          PARENT
         4 PDBHKC                  1         3688310 0                                        CURRENT

6 rows selected.

SQL>


Note how the "GRANT DBA" was after the Flashback SCN and had to be regranted. After the FLASHBACK, the custom table "obj_list" does not exist.


Primary Alert Log Entries :

Primary Instance alert log :
==============================
2022-05-03T14:18:28.808263+08:00
alter pluggable database pdbhkc close
2022-05-03T14:18:28.815690+08:00
PDBHKC(4):Pluggable database PDBHKC closing
PDBHKC(4):JIT: pid 7263 requesting stop
PDBHKC(4):Closing sequence subsystem (5915010500).
PDBHKC(4):Buffer Cache flush started: 4
PDBHKC(4):Buffer Cache flush finished: 4
Pluggable database PDBHKC closed
Completed: alter pluggable database pdbhkc close
2022-05-03T14:19:12.776065+08:00
RMAN flashback PDB to scn 3688309
Flashback Restore Start
Restore Flashback Pluggable Database PDBHKC (4) until change 3686342
Flashback Restore Complete
Flashback Media Recovery Start
2022-05-03T14:19:13.675674+08:00
Serial Media Recovery started
max_pdb is 4
2022-05-03T14:19:13.769641+08:00
Recovery of Online Redo Log: Thread 1 Group 1 Seq 28 Reading mem 0
  Mem# 0: /u02/oradata/DB213H1/redo01.log
2022-05-03T14:19:13.970865+08:00
Recovery of Online Redo Log: Thread 1 Group 2 Seq 29 Reading mem 0
  Mem# 0: /u02/oradata/DB213H1/redo02.log
2022-05-03T14:19:14.621235+08:00
Incomplete Recovery applied until change 3688310 time 05/03/2022 14:12:57
Flashback Media Recovery Complete
Flashback Pluggable Database PDBHKC (4) recovered until change 3688310
Completed: RMAN flashback PDB to scn 3688309
2022-05-03T14:19:42.382165+08:00
alter pluggable database pdbhkc open resetlogs
2022-05-03T14:19:42.461760+08:00
Online datafile 15
Online datafile 14
Online datafile 13
PDBHKC(4):Pluggable database PDBHKC pseudo opening
PDBHKC(4):SUPLOG: Initialize PDB SUPLOG SGA, old value 0x0, new value 0x18
PDBHKC(4):Autotune of undo retention is turned on.
PDBHKC(4):Endian type of dictionary set to little
PDBHKC(4):Undo initialization recovery: Parallel FPTR failed: start:5988531 end:5988533 diff:2 ms (0.0 seconds)
PDBHKC(4):Undo initialization recovery: err:0 start: 5988531 end: 5988547 diff: 16 ms (0.0 seconds)
PDBHKC(4):[7263] Successfully onlined Undo Tablespace 2.
PDBHKC(4):Undo initialization online undo segments: err:0 start: 5988547 end: 5988555 diff: 8 ms (0.0 seconds)
PDBHKC(4):Undo initialization finished serial:0 start:5988531 end:5988556 diff:25 ms (0.0 seconds)
PDBHKC(4):Database Characterset for PDBHKC is AL32UTF8
PDBHKC(4):Pluggable database PDBHKC pseudo closing
PDBHKC(4):JIT: pid 7263 requesting stop
PDBHKC(4):Closing sequence subsystem (5988605782).
PDBHKC(4):Buffer Cache flush started: 4
PDBHKC(4):Buffer Cache flush finished: 4
PDBHKC(4):Pluggable database PDBHKC opening in read write
PDBHKC(4):SUPLOG: Initialize PDB SUPLOG SGA, old value 0x0, new value 0x18
PDBHKC(4):Autotune of undo retention is turned on.
PDBHKC(4):Endian type of dictionary set to little
PDBHKC(4):Undo initialization recovery: Parallel FPTR complete: start:5989023 end:5989027 diff:4 ms (0.0 seconds)
PDBHKC(4):Undo initialization recovery: err:0 start: 5989023 end: 5989027 diff: 4 ms (0.0 seconds)
PDBHKC(4):[7263] Successfully onlined Undo Tablespace 2.
PDBHKC(4):Undo initialization online undo segments: err:0 start: 5989027 end: 5989188 diff: 161 ms (0.2 seconds)
PDBHKC(4):Undo initialization finished serial:0 start:5989023 end:5989194 diff:171 ms (0.2 seconds)
PDBHKC(4):Pluggable database PDBHKC dictionary check beginning
2022-05-03T14:19:43.379867+08:00
PDBHKC(4):Pluggable Database PDBHKC Dictionary check complete
PDBHKC(4):Database Characterset for PDBHKC is AL32UTF8
2022-05-03T14:19:43.935738+08:00
PDBHKC(4):SUPLOG: Set PDB SUPLOG SGA at PDB OPEN, old 0x18, new 0x0 (no suplog)
2022-05-03T14:19:44.385306+08:00
PDBHKC(4):Opening pdb with no Resource Manager plan active
2022-05-03T14:19:44.967205+08:00
Control autobackup written to DISK device

handle '/u01/app/oracle/DB213H1/autobackup/2022_05_03/o1_mf_s_1103725184_k71ld0sw_.bkp'

Pluggable database PDBHKC closed
Completed: alter pluggable database pdbhkc open resetlogs
2022-05-03T14:21:30.693825+08:00
2022-05-03T14:21:30.693825+08:00
PDBHKC(4):TABLE AUDSYS.AUD$UNIFIED: ADDED INTERVAL PARTITION SYS_P328 (2864) VALUES LESS THAN (TIMESTAMP' 2022-05-04 00:00:00')
2022-05-03T14:23:50.678383+08:00
ALTER SYSTEM ARCHIVE LOG
2022-05-03T14:23:50.737171+08:00
Thread 1 advanced to log sequence 30 (LGWR switch),  current SCN: 3691895
  Current log# 3 seq# 30 mem# 0: /u02/oradata/DB213H1/redo03.log
2022-05-03T14:23:50.840999+08:00
Deleted Oracle managed file /u01/app/oracle/DB213H1/archivelog/2022_04_20/o1_mf_1_9_k5yopcl1_.arc
2022-05-03T14:23:50.843496+08:00
NET  (PID:7571): Archived Log entry 29 added for B-1102494684.T-1.S-29 ID 0x7f6ee6abae9c LAD:1 [krse.c:4933]
2022-05-03T14:23:50.879882+08:00
Deleted Oracle managed file /u01/app/oracle/DB213H1/archivelog/2022_04_20/o1_mf_1_10_k5yos1cc_.arc


Note the messages 
"RMAN flashback PDB to scn 3688309" 
"Restore Flashback Pluggable Database PDBHKC (4) until change 3686342" 
"Incomplete Recovery applied until change 3688310" 
"Completed: RMAN flashback PDB to scn 3688309"



Standby Alert Log Entries :

Standby Instance alert log :
============================
2022-05-03T14:19:42.855709+08:00
PDBHKC(4):PDB Side Media Recovery started for pdbid(4)
PDBHKC(4):.... (PID:6103): Managed Recovery starting Real Time Apply [krsm.c:15865]
PDBHKC(4):max_pdb is 4
PDBHKC(4):Datafile 15 (ckpscn 3690771) is orphaned on PDB incarnation#=1
PDBHKC(4):Datafile 14 (ckpscn 3690771) is orphaned on PDB incarnation#=1
PDBHKC(4):Datafile 13 (ckpscn 3690771) is orphaned on PDB incarnation#=1
PDBHKC(4):PSR[4]: the incarnation of data files don't match, so we are trying to fetch data files again from the primary database.
2022-05-03T14:19:42.962411+08:00
PDBHKC(4):Media Recovery failed with error 65138
PDBHKC(4):ORA-65138 is signaled while preparing the media recovery.
2022-05-03T14:19:43.506242+08:00
PDBHKC(4):Applying tablespace dictionary check redo for tablespace #0
PDBHKC(4):Applying tablespace dictionary check redo for tablespace #1
PDBHKC(4):Applying tablespace dictionary check redo for tablespace #2
PDBHKC(4):Applying tablespace dictionary check redo for tablespace #3
PDBHKC(4):Applying tablespace dictionary check redo for tablespace #4
PDBHKC(4):Applying datafile dictionary check redo for datafile #13
PDBHKC(4):Applying datafile dictionary check redo for datafile #14
PDBHKC(4):Applying datafile dictionary check redo for datafile #15
2022-05-03T14:19:45.892494+08:00
PDBHKC(4):Full restore complete of datafile 15 /u02/oradata/DB213H1_STDBY/DE155F0301020A30E0538338A8C07AFA/datafile/o1_mf_undotbs1_k71kvzq4_.dbf.  Elapsed time: 0:00:00
PDBHKC(4):  checkpoint is 3691257
PDBHKC(4):  last deallocation scn is 3
PDBHKC(4):Datafile #15 has been copied to the standby.
2022-05-03T14:19:49.845284+08:00
PDBHKC(4):Full restore complete of datafile 14 /u02/oradata/DB213H1_STDBY/DE155F0301020A30E0538338A8C07AFA/datafile/o1_mf_sysaux_k71kvzq3_.dbf.  Elapsed time: 0:00:03
PDBHKC(4):  checkpoint is 3691259
PDBHKC(4):  last deallocation scn is 1071312
PDBHKC(4):Datafile #14 has been copied to the standby.
2022-05-03T14:19:53.340847+08:00
PDBHKC(4):Full restore complete of datafile 13 /u02/oradata/DB213H1_STDBY/DE155F0301020A30E0538338A8C07AFA/datafile/o1_mf_system_k71kvzpm_.dbf.  Elapsed time: 0:00:03
PDBHKC(4):  checkpoint is 3691268
PDBHKC(4):  last deallocation scn is 2645444
PDBHKC(4):Datafile #13 has been copied to the standby.
2022-05-03T14:19:53.354185+08:00
PDBHKC(4):PDB Side Media Recovery started for pdbid(4)
PDBHKC(4):.... (PID:6103): Managed Recovery starting Real Time Apply [krsm.c:15865]
PDBHKC(4):max_pdb is 4
PDBHKC(4):.... (PID:6103): Media Recovery Waiting for T-1.S-29 (in transit) [krsm.c:6185]
2022-05-03T14:19:53.419125+08:00
PDBHKC(4):Media Recovery of Online Log [Thread=1, Seq=29]
2022-05-03T14:19:53.419332+08:00
PDBHKC(4):Recovery of Online Redo Log: Thread 1 Group 5 Seq 29 Reading mem 0
PDBHKC(4):  Mem# 0: /u02/oradata/DB213H1_STDBY/onlinelog/o1_mf_5_k5yt8yjw_.log
PDBHKC(4):PSR[4]:influxScn=3691275,nxtCkptScn=3691275,minScn=3691268,krdsini=0,krsmrs=9082,nof=0,kcvrlc=1102494684,kcvrls=2601843
PDBHKC(4):The merge request has been submitted by side recovery for pdbid 4
2022-05-03T14:19:53.462873+08:00
PR00 (PID:1896): Background Media Recovery cancelled with status 16037 [krd.c:27039]
PSR: Background Media Recovery will be reactivated in an attempt to take over PDB side recovery sessions
2022-05-03T14:19:53.466115+08:00
Errors in file /u01/app/oracle/diag/rdbms/db213h1_stdby/DB213H1/trace/DB213H1_pr00_1896.trc:
ORA-16037: user requested cancel of managed recovery operation
PR00 (PID:1896): Managed Recovery not using Real Time Apply [krsm.c:15876]
Recovery interrupted!
Recovered data files to a consistent state at change 3691276
Stopping change tracking
2022-05-03T14:19:53.679424+08:00
Errors in file /u01/app/oracle/diag/rdbms/db213h1_stdby/DB213H1/trace/DB213H1_pr00_1896.trc:
ORA-16037: user requested cancel of managed recovery operation
2022-05-03T14:19:54.506794+08:00
PDBHKC(4):.... (PID:6103): Side Recovery Complete [krds.c:1584]
2022-05-03T14:20:14.680123+08:00
all data files of pdbid 4 are brought online.
 Started logmerger process
2022-05-03T14:20:14.739054+08:00

IM on ADG: Start of Empty Journal

IM on ADG: End of Empty Journal
PR00 (PID:6144): Managed Recovery starting Real Time Apply [krsm.c:15865]
max_pdb is 4
2022-05-03T14:20:14.902709+08:00
Parallel Media Recovery started with 4 slaves
2022-05-03T14:20:15.022317+08:00
Stopping change tracking
PR00 (PID:6144): Media Recovery Waiting for T-1.S-29 (in transit) [krsm.c:6185]
2022-05-03T14:20:15.207572+08:00
Recovery of Online Redo Log: Thread 1 Group 5 Seq 29 Reading mem 0
  Mem# 0: /u02/oradata/DB213H1_STDBY/onlinelog/o1_mf_5_k5yt8yjw_.log
2022-05-03T14:23:50.977661+08:00
 rfs (PID:2029): Opened LNO:4 for DBID:3870017436 B-1102494684.T-1.S-30 [krsr.c:18143]
2022-05-03T14:23:50.978472+08:00
ARC1 (PID:1786): Archived Log entry 11 added for B-1102494684.T-1.S-29 ID 0x7f21e6abae9c LAD:1 [krse.c:4933]
2022-05-03T14:23:51.046002+08:00
PR00 (PID:6144): Media Recovery Waiting for T-1.S-30 (in transit) [krsm.c:6185]
2022-05-03T14:23:51.046688+08:00
Recovery of Online Redo Log: Thread 1 Group 4 Seq 30 Reading mem 0
  Mem# 0: /u02/oradata/DB213H1_STDBY/onlinelog/o1_mf_4_k5yt8y5f_.log


Note the messages 
"PDBHKC(4):PSR[4]: the incarnation of data files don't match, so we are trying to fetch data files again from the primary database."
"PDBHKC(4):Media Recovery failed with error 65138"
"PDBHKC(4):ORA-65138 is signaled while preparing the media recovery."
"PDBHKC(4):Full restore complete of datafile ..<fileid>" for each datafile in the PDB
"PDBHKC(4):Datafile #<fileid> has been copied to the standby." for each datafile in the PDB
"PDBHKC(4):PDB Side Media Recovery started for pdbid(4)"
"PDBHKC(4):.... (PID:6103): Managed Recovery starting Real Time Apply [krsm.c:15865]"
"PDBHKC(4):PSR[4]:influxScn=3691275,nxtCkptScn=3691275,minScn=3691268,krdsini=0,krsmrs=9082,nof=0,kcvrlc=1102494684,kcvrls=2601843"
"PDBHKC(4):The merge request has been submitted by side recovery for pdbid 4"
"PR00 (PID:1896): Background Media Recovery cancelled with status 16037 [krd.c:27039]"
"PSR: Background Media Recovery will be reactivated in an attempt to take over PDB side recovery sessions"
"ORA-16037: user requested cancel of managed recovery operation"
"PR00 (PID:1896): Managed Recovery not using Real Time Apply [krsm.c:15876]"
"PDBHKC(4):.... (PID:6103): Side Recovery Complete [krds.c:1584]"
"all data files of pdbid 4 are brought online."
" Started logmerger process" 
"PR00 (PID:6144): Managed Recovery starting Real Time Apply [krsm.c:15865]"

These messages relate to the new 21c feature "PDB Recovery Isolation", whereby Oracle DataGuard detects that one PDB has to be recovered separately -- i.e. using a separate background session (this appears to be called "side recovery" in the messages).

Here are the ORA error descriptions :


65138, 00000, "Data file %s of pluggable database %s belongs to an orphan PDB incarnation."
// *Cause: Either the specified data file was restored from a backup that was
//         taken during a period of time that was  discarded by
//         a RESETLOGS operation, or Oracle could not identify which pluggable
//         database incarnation the file belongs to. One would also encounter
//         this error if a pluggable database (PDB) flashback or restore was
//         performed to an orphan PDB incarnation followed by complete
//         recovery of the PDB. The alert log contains more information.
// *Action: Restore a backup of this file that belonged to either the current
//          or a prior incarnation of the pluggable database.  If you are using
//          RMAN to restore, RMAN will automatically select a correct backup.
//          If you are using Flashback, then perform flashback to a point
//          prior to the start of the orphan incarnation.


16037, 00000, "user requested cancel of managed recovery operation"
// *Cause:  The managed standby database recovery operation has been
//          canceled per user request.
// *Action: No action is required.
//

ORA-16037 here is a standard message, raised because Oracle temporarily stops media recovery of the whole Standby while the side recovery is merged.

However, this behaviour indicates that Oracle actually attempts to copy the entire PDB from the Primary to the Standby.  It would be interesting to see how it works with an available backup of the PDB at the Standby -- particularly if the PDB is very large and copying it over the network would take time.

15 April, 2022

A Fresh Look at AWR Reports --- 4 : SQL*Net Waits

 After the first three posts on CPU Usage, Redo Size and I/O Profile, I present my interpretation of SQL*Net waits.

From the same AWR Report as the first three reviews, I have extracted the specific SQL*Net waits from the Foreground Wait Events table:

SQL*Net Waits from the Foreground Waits Table

"SQL*Net more data from client" waits occur when the server process hasn't received the required data (e.g. an INSERT statement data) within 1 network packet and has to wait to receive more more.  "SQL*Net more data to client" is the reverse -- the client has requested, say a large row or dataset that doesn't fit into 1 network packet.  The network packet size is determined essentially by two parameters "Session Data Unit" (aka "SDU") and "Transport Data Unit" (aka "TDU").  When sending rows out of the server, it can automatically also do compression also. ARRAYSIZE (or Fetch Size, depending on your client code, in java) can also achieve some compression. TDU is obsolete in all recent versions but SDU can still be tuned.  For guidance on tuning SDU,  section 14.2 of the Tuning Guide.

"SQL*Net message to client" doesn't really show time taken to send the message (e.g. sql select statement result or execution of commit) to the client.  It is actually the time taken by the Server Process to put the results on to the Network Layer on the DB Server.  Beyond that, the Oracle Server Process does *not* how long it takes for the data to reach the client (whether a user desktop or an application server / middle-tier).  The actual time would depend on network latency and bandwidth.  So, you can interpret the number of occurrences of the Wait Event but NOT the time.

"SQL*Net break/reset" would be presented when an error occurs or an unhandled exception is raised. A high number of such waits indicate repeated occurrences of errors in SQL calls.

"SQL*Net message from client" may or may not be a problem (and that is why Oracle does not include it in % DB time).  There are different manifestations of this event :

1. The database server has sent a row or a set of rows (depending on the arraysize or fetchsize) and is waiting for the client / application server to acknowledge and request the next set of rows.  The client could be a user who is running, say, SQL*Developer, and has seen the first 50 rows of a 1000 row result set but has walked away from his desk to get some coffee and has not yet requested the next set of 50 rows.  (Yes, it is true that SQL*Developer and most clients do NOT fetch and present all the rows to the user immediately; this is done in batches -- the batch size is configurable in most clients or java programs).  The user may come back 5 minutes or 30 minutes later and ask for the next 50 rows.  Till then, the Oracle Server Process is "waiting" on this event.

2. It could also be an application server / middle tier that is formatting / sorting the rows or is writing the results to disk or is joining this set of rows with another data set (even from another source -- e.g. an ODBC connection, an Excel file etc) and is taking some milliseconds to do so.  Till then, the Oracle Server Process may be "waiting" on this event.

3. The database server has sent the results of a query (or the success of execution of a DML statement) to the client / application server and is waiting for the next SQL call.  The next SQL call could be a COMMIT (or ROLLBACK) or any INSERT/UPDATE/DELETE or other DML statement.  It may come 1 second later, or 30 minutes later, or 10 hours later.  As long as the client / application server has maintained a session on the Database, the Oracle Server Process will be "waiting" on this event.



Other than the above set of "Wait" Events relating to a client / application server, exactly the same Events can occur in relation to a Database Link :  "SQL*Net more data from/to dblink", "SQL*Net message from/to dblink" and "SQL*Net break/reset to dblink".  They don't appear in my AWR report because my example load does not include Database Links to / from other databases, but you need to interpret them as well.  For example, if you have Database Links for periodic refreshes of Materialized Views between Databases, you may well see such "SQL*Net message" and "SQL*Net more data" waits relating to "dblink"s.
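If you want to see these waits for one specific session rather than instance-wide in an AWR report, V$SESSION_EVENT gives the same events per session.  A small sketch ('&sid' is just a placeholder for the SID you are interested in) :

-- per-session totals for all SQL*Net related waits (client and dblink variants)
select event, total_waits, time_waited_micro/1000000 as seconds_waited
from v$session_event
where sid = &sid
and event like 'SQL*Net%'
order by time_waited_micro desc;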


02 April, 2022

A Fresh Look at AWR Reports -- 3 : I/O Profile

 After viewing CPU Usage and Redo Size statistics, here's a look at the I/O Profile.

Here's the header which indicates that this AWR report is for a duration of 2 hours on 19c on Windows :

AWR Header


Instead of the Load Profile, this time I will skip directly to the I/O Profile section :

I/O Profile

In the above table, "Database" seems to include both "Datafile" and "Tempfile".

Also, for some co-relation, the IOStat section :

IOStat by Filetype Summary

As I've already mentioned in my previous 2 posts, the load that I generated was more I/O Intensive than CPU Intensive.
Here we can see a break-down of the I/O Profile.  I am more interested in the *Writes*.

From the previous post, we've already seen that 20GB of writes were for Duplexed Online Redo Logs and 10GB for ArchiveLogs.
But what about the rate per second ?

Online Redo Log file writes were 2.7MB/second.

Datafile + Tempfile writes were 3.3MB/second.  Note that Tempfile I/O was higher than Datafile I/O.

Although Datafile requests per second were higher, Datafile writes by DBWR may well be in smaller batches (i.e. fewer blocks at each write) --- more I/O calls (85.59) but fewer total bytes written (1.388M).  This is in the nature of DBWR behaviour, as DBWR periodically "collects" dirty buffers to flush to disk.

Writes to the Tempfile for Sort/GroupBy operations (10.59 calls for 1.89M) can be more blocks per call.  This is because such operations can consist of a large number of blocks in each session.  These writes are from the PGA, not the Buffer Cache in the SGA.

So, what could have been happening here, that Tempfile writes were so high, relative to Datafile writes ?  Sort Operations, Group By Queries, Create / Rebuild Index operations.

Note : Create / Rebuild Index operations are unlikely to all be visible in the SQL Statements section of the AWR report because, as DDL statements, they get invalidated and aged out of the Shared Pool quickly, before each AWR Snapshot is taken.  Yet, their I/O profile is visible in the I/O sections of the AWR Report.
Create/Rebuild Index operations do not generate Redo for their Tempfile writes but *do* generate Redo for their Datafile writes if they are not executed as NoLogging operations.
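Incidentally, the per-filetype breakdown that the "IOStat by Filetype Summary" presents is also available live, cumulative since instance startup, from V$IOSTAT_FILE.  A sketch (AWR, of course, reports the delta between two snapshots) :

-- cumulative I/O since instance startup, broken down by file type
select filetype_name,
       sum(small_read_megabytes + large_read_megabytes)   as read_mb,
       sum(small_write_megabytes + large_write_megabytes) as write_mb
from v$iostat_file
group by filetype_name
order by write_mb desc;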









23 March, 2022

A Fresh Look at AWR Reports -- 2 : Redo Size

 In my previous blog post, I showed how to interpret CPU Usage from an AWR report.

Here, I take the same 2-hour AWR report and help interpret Redo Size -- i.e. the volume of Redo Generated and Written to Disk.

Here's the header which indicates that this AWR report is for a duration of 2 hours on 19c on Windows :

AWR Header


These are the first few lines of the Load Profile :

First few lines of Load Profile

This indicates that the Redo Size (total Redo Generated) is just about 10GB (multiply the Redo Size bytes per second by the total elapsed time of 120.71 minutes, converted to seconds).
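The "Redo Size" figure is derived from the 'redo size' statistic; a sketch of checking it directly (this is cumulative since instance startup, so you would difference two readings to match an AWR interval) :

-- cumulative redo generated since instance startup
select name, round(value/1024/1024/1024, 2) as approx_gb
from v$sysstat
where name = 'redo size';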

I can confirm that from the Key Instance Activity Stats section :

Instance Activity Statistics

So, did my database write about 10GB (about 9.8GB actually, if you divide by 1024s rather than 1000s) of Redo to disk ?   Is that true ?

Let me go to the IOStat(s) section.  (Please ignore the "average service time" in this table because I've run this test on a PC with a standard Hard-Disk -- not an Enterprise SAN !)

IO Statistics by FileType

What do I see here ?  The total volume written to Log Files (i.e. Online Redo Logs) is 19GB !

Why is that so ?

Because, in my database, every Redo Log Group is duplexed -- it has 2 members.  So, Redo of approximately 9.8GB actually translates to 19GB (or close to 20GB) of Physical Writes to Online Redo Log files.
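You can confirm the duplexing from V$LOG and V$LOGFILE; a minimal sketch :

-- MEMBERS = 2 means every write to a Redo Log group is physically performed twice
select l.group#, l.members, round(l.bytes/1024/1024) as mb, f.member
from v$log l join v$logfile f on l.group# = f.group#
order by l.group#, f.member;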

Note that there are 10GB of Reads from the Online Redo Log files.  That is the Log Archiver Process(es) reading the Online Redo Logs and writing them out as Archive Logs -- which you can see as the 10GB of Writes to Archive Log !

Thus, the initial "Redo Size" of 10GB is actually 30GB of Physical Writes to Disk (20GB for Duplexed Online Redo Logs  PLUS 10GB of Archive Logs).

If I had defined an additional ArchiveLogDestination on my server (as I have done in previous implementations where dedicated disks or NFS mount points were assigned), there would have been another 10GB of Physical Writes.

For every Standby Database (note how, in previous blog posts here and here, I have demonstrated 3 separate Standby Databases for the Primary Database), 10GB of Redo has to be transferred over the Network and then written to the Standby Redo Logs at each Standby Site/Server.

So, my 10GB of Redo could effectively translate to anywhere from 30GB to 70GB of Physical Write of Redo  Plus 10GB to 30GB of transfer over the network !!

When you look at the SAN or underlying storage, you must consider how all those writes get through.  Does the SAN have enough Write-Through Cache ?  Are those Redo Logs and the one or more ArchiveLogDestination(s) on the Primary Server  actually on separate Disks  -- you might have separate FileSystem MountPoints or ASM DiskGroups but how do they translate down to the Disks in the SAN ?


20 March, 2022

A Fresh Look at AWR Reports -- 1 : CPU Usage

I have covered AWR Reports (from 10g to 12c, including MultiTenant) in the past.  You could search for "AWR" using the "Search this Blog" box in the panel on the right, or by specifying :
site://hemantoracledba.blogspot.com AWR
in the Google Search Window

In this post I present a comparison between OS and Oracle CPU Statistics.

AWR Header

So, I have an AWR Report for 19c on Windows for a duration of 120 minutes with 12 CPUs.  That is a total of 86,400 available CPU seconds (120 minutes X 12 CPUs X 60 seconds per minute).



Load Profile


The Load Profile shows 0.4 Oracle CPU seconds per Elapsed Second.  This means that, on average, I am consuming less than 1 CPU.


Listing of Waits and Wait Classes

Oracle Database CPU time is 13.5% of total Oracle Time.  This is 1,918 seconds.
Note very carefully that this does NOT mean that the Oracle Database Processes account for 13.5% of Server CPU Usage.



O.S. and Oracle Instance CPU
This shows that the Server reports 24% CPU Usage but Oracle used only 2.4% of the total CPU available (approximately 2.4% of 86,400 seconds is 2,073 seconds, close to Oracle's DB CPU figure).
This also translates to about 9% of the total actual CPU "busyness" on the Server.
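These figures are derived from V$OSSTAT and the time model; a rough sketch of pulling the raw, cumulative-since-startup numbers yourself (AWR reports the delta between two snapshots; V$OSSTAT times are in centi-seconds, V$SYS_TIME_MODEL values are in micro-seconds) :

-- host-level CPU accounting as seen by the instance
select stat_name, value
from v$osstat
where stat_name in ('NUM_CPUS', 'BUSY_TIME', 'USER_TIME', 'SYS_TIME', 'IDLE_TIME');

-- CPU actually consumed by this instance
select stat_name, value
from v$sys_time_model
where stat_name in ('DB CPU', 'background cpu time');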

So, if Oracle consumed only 9% of the actual CPU Usage on the server, I would need to look for other processes on the same server that would have been consuming CPU.  

This could be other Oracle Database Instances (my report is for database instance "CDB1" -- was there / were there one or more other Database Instances active on the same server ?).  This could also be non-Oracle programs, Application Server software,  other system / monitoring tools running (I've seen monitoring tools alone consuming 20% of the CPU on a server !).

If this server is hosting only this 1 Database Instance (CDB1), it is excessively provisioned (12 CPUs) for the actual load.  Of course, the load that I have generated here is I/O intensive, but it means that I could have actually reduced the number of CPUs to, say, 4 CPUs and paid much less in Hardware, Oracle Licensing and Oracle Support costs.  (4 CPUs would have allowed for the OS and the other non-Oracle processes that seem to be accounting for 24% user-time of 12 CPUs).




12 March, 2022

SQL Execution Statistics -- 4 : Adaptive Plans

In the previous 3 blog posts, I have shown how you can use the GATHER_PLAN_STATISTICS Hint or STATISTICS_LEVEL='ALL' to get SQL execution statistics.

This works with Adaptive Plans as well, as the demo below shows.



SQL> select /*+ gather_plan_statistics */ count(*)
  2  from tables_list t
  3  where (owner, table_name)
  4  not in
  5  (
  6   select owner, table_name
  7   from indexes_list i
  8  )
  9  /

  COUNT(*)
----------
       593

SQL> select * from table(dbms_xplan.display_cursor('',format=>'ALLSTATS LAST +COST +ADAPTIVE'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  g6q8m94krdvz7, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from tables_list t where
(owner, table_name) not in (  select owner, table_name  from
indexes_list i )

Plan hash value: 2460844421

---------------------------------------------------------------------------------------------------------------------------------------------------
|   Id  | Operation                | Name               | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------------
|     0 | SELECT STATEMENT         |                    |      1 |        |    11 (100)|      1 |00:00:00.01 |      37 |       |       |          |
|     1 |  SORT AGGREGATE          |                    |      1 |      1 |            |      1 |00:00:00.01 |      37 |       |       |          |
|  *  2 |   HASH JOIN ANTI         |                    |      1 |     22 |    11   (0)|    593 |00:00:00.01 |      37 |  1335K|  1335K| 1376K (0)|
|-    3 |    NESTED LOOPS ANTI     |                    |      1 |     22 |    11   (0)|   2202 |00:00:00.01 |      17 |       |       |          |
|-    4 |     STATISTICS COLLECTOR |                    |      1 |        |            |   2202 |00:00:00.01 |      17 |       |       |          |
|     5 |      INDEX FAST FULL SCAN| TABLES_LIST_NDX    |      1 |   2202 |     5   (0)|   2202 |00:00:00.01 |      17 |       |       |          |
|- *  6 |     INDEX RANGE SCAN     | INDEXES_LIST_NDX_1 |      0 |   2918 |     6   (0)|      0 |00:00:00.01 |       0 |       |       |          |
|     7 |    INDEX FAST FULL SCAN  | INDEXES_LIST_NDX_1 |      1 |   2918 |     6   (0)|   2918 |00:00:00.01 |      20 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OWNER"="OWNER" AND "TABLE_NAME"="TABLE_NAME")
   6 - access("OWNER"="OWNER" AND "TABLE_NAME"="TABLE_NAME")

Note
-----
   - this is an adaptive plan (rows marked '-' are inactive)


31 rows selected.

SQL>
SQL> alter session set statistics_level='ALL';

Session altered.

SQL> select count(*)
  2  from tables_list t
  3  where (owner, table_name)
  4  not in
  5  (
  6   select owner, table_name
  7   from indexes_list i
  8  )
  9  /

  COUNT(*)
----------
       593

SQL> select * from table(dbms_xplan.display_cursor('',format=>'ALLSTATS LAST +COST +ADAPTIVE'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  c1fdcdxhvjrat, child number 0
-------------------------------------
select count(*) from tables_list t where (owner, table_name) not in (
select owner, table_name  from indexes_list i )

Plan hash value: 2460844421

---------------------------------------------------------------------------------------------------------------------------------------------------
|   Id  | Operation                | Name               | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------------
|     0 | SELECT STATEMENT         |                    |      1 |        |    11 (100)|      1 |00:00:00.01 |      37 |       |       |          |
|     1 |  SORT AGGREGATE          |                    |      1 |      1 |            |      1 |00:00:00.01 |      37 |       |       |          |
|  *  2 |   HASH JOIN ANTI         |                    |      1 |     22 |    11   (0)|    593 |00:00:00.01 |      37 |  1335K|  1335K| 1381K (0)|
|-    3 |    NESTED LOOPS ANTI     |                    |      1 |     22 |    11   (0)|   2202 |00:00:00.01 |      17 |       |       |          |
|-    4 |     STATISTICS COLLECTOR |                    |      1 |        |            |   2202 |00:00:00.01 |      17 |       |       |          |
|     5 |      INDEX FAST FULL SCAN| TABLES_LIST_NDX    |      1 |   2202 |     5   (0)|   2202 |00:00:00.01 |      17 |       |       |          |
|- *  6 |     INDEX RANGE SCAN     | INDEXES_LIST_NDX_1 |      0 |   2918 |     6   (0)|      0 |00:00:00.01 |       0 |       |       |          |
|     7 |    INDEX FAST FULL SCAN  | INDEXES_LIST_NDX_1 |      1 |   2918 |     6   (0)|   2918 |00:00:00.01 |      20 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OWNER"="OWNER" AND "TABLE_NAME"="TABLE_NAME")
   6 - access("OWNER"="OWNER" AND "TABLE_NAME"="TABLE_NAME")

Note
-----
   - this is an adaptive plan (rows marked '-' are inactive)


30 rows selected.

SQL>


Note : The two SQL statements, although logically the same query, have different SQL_IDs because of the presence / absence of the GATHER_PLAN_STATISTICS Hint.
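You can confirm this from V$SQL -- both cursors should report the same Plan Hash Value and show that the adaptive plan has been resolved.  A small sketch using the two SQL_IDs from the listings above :

select sql_id, child_number, plan_hash_value, is_resolved_adaptive_plan
from v$sql
where sql_id in ('g6q8m94krdvz7', 'c1fdcdxhvjrat');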

06 March, 2022

SQL Execution Statistics -- 3 : Using STATISTICS_LEVEL

 In my previous posts here and here, I have shown how to use the GATHER_PLAN_STATISTICS Hint to collect additional information.  But what if you can't modify the SELECT query statement ?

You could set STATISTICS_LEVEL='ALL' at the session level, run the query and then reset STATISTICS_LEVEL to the default 'TYPICAL'.



SQL> alter session set tracefile_identifier='H2';

Session altered.

SQL> alter session set statistics_level='ALL';

Session altered.

SQL> exec DBMS_SESSION.SET_SQL_TRACE(sql_trace=>TRUE);  -- just to get the SQL_ID in the trace file

PL/SQL procedure successfully completed.

SQL>
SQL> select
  2  d.department_id, d.department_name, e.first_name, e.last_name
  3  from hr.departments d join hr.employees e
  4  on d.department_id=e.department_id
  5  order by 1,2,4,3
  6  /
....
....
....
106 rows selected.

SQL>
SQL> exec DBMS_SESSION.SET_SQL_TRACE(sql_trace=>FALSE);

PL/SQL procedure successfully completed.

SQL>
SQL> select * from table(dbms_xplan.display_cursor('4aa2k5b98ybfr',format=>'ALLSTATS LAST'));  -- SQL_ID identified from the trace file

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  4aa2k5b98ybfr, child number 0
-------------------------------------
select d.department_id, d.department_name, e.first_name, e.last_name
from hr.departments d join hr.employees e on
d.department_id=e.department_id order by 1,2,4,3

Plan hash value: 2209899241

-------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name              | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                   |      1 |        |    106 |00:00:00.10 |      10 |     14 |       |       |          |
|   1 |  SORT ORDER BY                |                   |      1 |    106 |    106 |00:00:00.10 |      10 |     14 | 11264 | 11264 |10240  (0)|
|   2 |   MERGE JOIN                  |                   |      1 |    106 |    106 |00:00:00.10 |      10 |     14 |       |       |          |
|   3 |    TABLE ACCESS BY INDEX ROWID| DEPARTMENTS       |      1 |     27 |     12 |00:00:00.03 |       2 |      2 |       |       |          |
|   4 |     INDEX FULL SCAN           | DEPT_ID_PK        |      1 |     27 |     12 |00:00:00.03 |       1 |      1 |       |       |          |
|*  5 |    SORT JOIN                  |                   |     12 |    107 |    106 |00:00:00.06 |       8 |     12 |  9216 |  9216 | 8192  (0)|
|   6 |     VIEW                      | index$_join$_002  |      1 |    107 |    106 |00:00:00.06 |       8 |     12 |       |       |          |
|*  7 |      HASH JOIN                |                   |      1 |        |    106 |00:00:00.06 |       8 |     12 |  1610K|  1610K| 1658K (0)|
|   8 |       INDEX FAST FULL SCAN    | EMP_DEPARTMENT_IX |      1 |    107 |    106 |00:00:00.06 |       4 |      6 |       |       |          |
|   9 |       INDEX FAST FULL SCAN    | EMP_NAME_IX       |      1 |    107 |    107 |00:00:00.01 |       4 |      6 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access("D"."DEPARTMENT_ID"="E"."DEPARTMENT_ID")
       filter("D"."DEPARTMENT_ID"="E"."DEPARTMENT_ID")
   7 - access(ROWID=ROWID)


30 rows selected.

SQL>


Thus, I was able to get similar execution statistics without the Hint by setting STATISTICS_LEVEL='ALL' at the session level.
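
(If you'd rather not enable SQL Trace just to capture the SQL_ID, a minimal sketch of an alternative is to look it up in V$SQL by a fragment of the statement text :

select sql_id, child_number, substr(sql_text, 1, 60) sql_text_fragment
from v$sql
where sql_text like '%hr.departments d join hr.employees e%'
order by last_active_time desc
/

The most recently active matching cursor should be the one you just ran.)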

(Note : The top level operation here is a MERGE JOIN while that in the first example of the first post of this series on 27-January-22 was a HASH JOIN. The result here is 106 rows, not the 107 rows in the first example. The first example was run against an HR schema in another database -- e.g. you would see DEPARTMENT_ID=280 'New Department' and Employee 'HEMANT CHITALE' in that listing.  These are two different HR schemas where I have modified some of the data.  Thus, you'll also see a difference in the Execution Plan between the first example and this one, although the query is the same -- because the databases, patch levels, possibly optimizer parameters, schema and data are different.)

Even a slight difference in data or parameters or patch levels can result in different execution plans !







22 February, 2022

SQL Execution Statistics -- 2 : SQL_ID from a Tracefile

In my previous blog post, I demonstrated SQL execution statistics for the last SQL statement of the current session.

However, you can trace SQL execution in the current session or in any other session and then review the information from the tracefile.

(For different methods of SQL Tracing, see this series of blog posts.)

For example, in an SQLPlus session :



SQL> alter session set tracefile_identifier='CUSTOMER_COUNT';

Session altered.

SQL>
SQL> select *
  2  from v$diag_info
  3  where name = 'Default Trace File';

   INST_ID NAME
---------- ----------------------------------------------------------------
VALUE
---------------------------------------------------------------------------------------
    CON_ID
----------
         1 Default Trace File
/opt/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/ORCLCDB_ora_9416_CUSTOMER_COUNT.trc
         0


SQL>
SQL> exec DBMS_SESSION.SET_SQL_TRACE(sql_trace=>TRUE);

PL/SQL procedure successfully completed.

SQL> select /*+ gather_plan_statistics */ count(*) from tpcc.orders;

  COUNT(*)
----------
    303697

SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0
oracle19c>

From the tracefile :
PARSING IN CURSOR #139722640816616 len=62 dep=0 uid=106 oct=3 lid=106 tim=20489021058 hv=3158106211 ad='777865e0' sqlid='du49mwuy3ts33'
select /*+ gather_plan_statistics */ count(*) from tpcc.orders
END OF STMT
PARSE #139722640816616:c=10950,e=11447,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=630573765,tim=20489021057
EXEC #139722640816616:c=52,e=51,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=630573765,tim=20489021253
FETCH #139722640816616:c=78319,e=464572,p=1504,cr=1508,cu=0,mis=0,r=1,dep=0,og=1,plh=630573765,tim=20489485885
STAT #139722640816616 id=1 cnt=1 pid=0 pos=1 obj=0 op='SORT AGGREGATE (cr=1508 pr=1504 pw=0 str=1 time=464570 us)'
STAT #139722640816616 id=2 cnt=303697 pid=1 pos=1 obj=80213 op='TABLE ACCESS FULL ORDERS (cr=1508 pr=1504 pw=0 str=1 time=566811 us cost=411 size=0 card=303697)'
FETCH #139722640816616:c=1,e=1,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=630573765,tim=20489486318


From a tkprof of the tracefile :
SQL ID: du49mwuy3ts33 Plan Hash: 630573765

select /*+ gather_plan_statistics */ count(*)
from
 tpcc.orders


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.07       0.46       1504       1508          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.08       0.47       1504       1508          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 106
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=1508 pr=1504 pw=0 time=464570 us starts=1)
    303697     303697     303697   TABLE ACCESS FULL ORDERS (cr=1508 pr=1504 pw=0 time=566811 us starts=1 cost=411 size=0 card=303697)
    
 
 SQL> select * from table(dbms_xplan.display_cursor('du49mwuy3ts33',format=>'ALLSTATS LAST +COST'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  du49mwuy3ts33, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from tpcc.orders

Plan hash value: 630573765

-------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name   | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |        |      1 |        |   411 (100)|      1 |00:00:00.38 |    1506 |   1505 |
|   1 |  SORT AGGREGATE    |        |      1 |      1 |            |      1 |00:00:00.38 |    1506 |   1505 |
|   2 |   TABLE ACCESS FULL| ORDERS |      1 |    303K|   411   (1)|    303K|00:00:00.07 |    1506 |   1505 |
-------------------------------------------------------------------------------------------------------------


14 rows selected.

SQL>


From the tracefile, I could identify the SQL_ID (du49mwuy3ts33) and then run dbms_xplan.display_cursor for that SQL_ID.  

There is a slight discrepancy between the tracefile and the dbms_xplan.display_cursor output on the Actual Time and Buffers ("cr" in the tracefile) and Physical Reads ("pr" in the tracefile).  Similarly, you might notice a discrepancy in the time reported in the raw trace file STAT lines for operation Ids 2 and 1.  When the timing is in microseconds, some discrepancies may arise.
Also note that GATHER_PLAN_STATISTICS itself adds some overhead to execution time, which is noticeable for SQL statements that complete in microseconds or milliseconds.


(For an explanation of reading the tracefile, see this post on the details and this post on the summary.)
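
(To trace a session other than your own, as mentioned earlier, a minimal sketch would be to use DBMS_MONITOR against the target session's SID and SERIAL# -- the values below are placeholders :

-- identify the target session first
select sid, serial#, username, program from v$session where username = 'HEMANT';

-- enable tracing for that session, then disable it when done
exec DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id=>123, serial_num=>4567, waits=>TRUE, binds=>FALSE);
exec DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id=>123, serial_num=>4567);

The resulting tracefile will be in the database's trace directory -- the 'Diag Trace' entry in V$DIAG_INFO.)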

27 January, 2022

SQL Execution Statistics -- 1 : A Quick Introduction

 A couple of quick examples of how to obtain SQL Execution Statistics using the GATHER_PLAN_STATISTICS Hint


Here's how you get the Statistics for an SQL executed from the same session :
1. Run the query with the /*+ gather_plan_statistics */ Hint
2. Get the execution statistics with dbms_xplan.display_cursor(format=>'ALLSTATS LAST')

For an SQL from another session, you will have to pass the SQL_ID (and, if necessary, the CHILD_NUMBER) as parameters to the DISPLAY_CURSOR function.
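
For example (a minimal sketch -- the SQL_ID and child number here are placeholders) :

select * from table(dbms_xplan.display_cursor('&sql_id', 0, 'ALLSTATS LAST'));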

I present a query on DEPARTMENTS and EMPLOYEES with three different Execution Plans.  The first is the one automatically generated by the Optimizer, using a Hash Join.  The next two use Nested Loops, but differ based on how I specify the LEADING and USE_NL Hints.


The first one using a HASH JOIN :


SQL> select /*+ gather_plan_statistics   */
  2  d.department_id, d.department_name, e.first_name, e.last_name
  3  from hr.departments d join hr.employees e
  4  on d.department_id=e.department_id
  5  order by 1,2,4,3
  6  /

DEPARTMENT_ID DEPARTMENT_NAME                FIRST_NAME           LAST_NAME
------------- ------------------------------ -------------------- -------------------------
           10 Administration                 Jennifer             Whalen
           20 Marketing                      Pat                  Fay
           20 Marketing                      Michael              Hartstein
           30 Purchasing                     Shelli               Baida
           30 Purchasing                     Karen                Colmenares
           30 Purchasing                     Guy                  Himuro
           30 Purchasing                     Alexander            Khoo
           30 Purchasing                     Den                  Raphaely
           30 Purchasing                     Sigal                Tobias
           40 Human Resources                Susan                Mavris
           50 Shipping                       Mozhe                Atkinson
....
....  Intermediate rows deleted from the output presented here
....
          100 Finance                        Ismael               Sciarra
          100 Finance                        Jose Manuel          Urman
          110 Accounting                     William              Gietz
          110 Accounting                     Shelley              Higgins
          280 New Department                 Hemant               Chitale

107 rows selected.

SQL>
SQL> @show_last_sql_statistics.sql
SQL> select * from table(dbms_xplan.display_cursor(format=>'ALLSTATS LAST'))
  2  /

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  59vduj8s28fhn, child number 0
-------------------------------------
select /*+ gather_plan_statistics   */ d.department_id,
d.department_name, e.first_name, e.last_name from hr.departments d join
hr.employees e on d.department_id=e.department_id order by 1,2,4,3

Plan hash value: 207737665

-----------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name              | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |                   |      1 |        |    107 |00:00:00.01 |      14 |       |       |          |
|   1 |  SORT ORDER BY           |                   |      1 |    107 |    107 |00:00:00.01 |      14 | 18432 | 18432 |16384  (0)|
|*  2 |   HASH JOIN              |                   |      1 |    107 |    107 |00:00:00.01 |      14 |  1572K|  1572K| 1652K (0)|
|   3 |    TABLE ACCESS FULL     | DEPARTMENTS       |      1 |     29 |     29 |00:00:00.01 |       6 |       |       |          |
|   4 |    VIEW                  | index$_join$_002  |      1 |    108 |    107 |00:00:00.01 |       8 |       |       |          |
|*  5 |     HASH JOIN            |                   |      1 |        |    107 |00:00:00.01 |       8 |  1610K|  1610K| 1681K (0)|
|   6 |      INDEX FAST FULL SCAN| EMP_DEPARTMENT_IX |      1 |    108 |    107 |00:00:00.01 |       4 |       |       |          |
|   7 |      INDEX FAST FULL SCAN| EMP_NAME_IX       |      1 |    108 |    108 |00:00:00.01 |       4 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("D"."DEPARTMENT_ID"="E"."DEPARTMENT_ID")
   5 - access(ROWID=ROWID)


27 rows selected.

SQL> SQL>




The second one with a simple NESTED LOOP using  DEPARTMENTS as the "Outer" table :

SQL> select /*+ gather_plan_statistics  leading(d) use_nl(e) */
  2  d.department_id, d.department_name, e.first_name, e.last_name
  3  from hr.departments d join hr.employees e
  4  on d.department_id=e.department_id
  5  order by 1,2,4,3
  6  /

DEPARTMENT_ID DEPARTMENT_NAME                FIRST_NAME           LAST_NAME
------------- ------------------------------ -------------------- -------------------------
           10 Administration                 Jennifer             Whalen
           20 Marketing                      Pat                  Fay
           20 Marketing                      Michael              Hartstein
           30 Purchasing                     Shelli               Baida
           30 Purchasing                     Karen                Colmenares
           30 Purchasing                     Guy                  Himuro
           30 Purchasing                     Alexander            Khoo
           30 Purchasing                     Den                  Raphaely
           30 Purchasing                     Sigal                Tobias
           40 Human Resources                Susan                Mavris
           50 Shipping                       Mozhe                Atkinson
....
....  Intermediate rows deleted from the output presented here
....
          100 Finance                        Ismael               Sciarra
          100 Finance                        Jose Manuel          Urman
          110 Accounting                     William              Gietz
          110 Accounting                     Shelley              Higgins
          280 New Department                 Hemant               Chitale

107 rows selected.


SQL>
SQL> @show_last_sql_statistics.sql
SQL> select * from table(dbms_xplan.display_cursor(format=>'ALLSTATS LAST'))
  2  /

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  8q08zbr367cgg, child number 0
-------------------------------------
select /*+ gather_plan_statistics  leading(d) use_nl(e) */
d.department_id, d.department_name, e.first_name, e.last_name from
hr.departments d join hr.employees e on d.department_id=e.department_id
order by 1,2,4,3

Plan hash value: 2178791499

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name              | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                   |      1 |        |    107 |00:00:00.01 |      19 |       |       |          |
|   1 |  SORT ORDER BY                |                   |      1 |    107 |    107 |00:00:00.01 |      19 | 22528 | 22528 |20480  (0)|
|   2 |   NESTED LOOPS                |                   |      1 |    107 |    107 |00:00:00.01 |      19 |       |       |          |
|   3 |    NESTED LOOPS               |                   |      1 |    261 |    107 |00:00:00.01 |       9 |       |       |          |
|   4 |     TABLE ACCESS FULL         | DEPARTMENTS       |      1 |     29 |     29 |00:00:00.01 |       6 |       |       |          |
|*  5 |     INDEX RANGE SCAN          | EMP_DEPARTMENT_IX |     29 |      9 |    107 |00:00:00.01 |       3 |       |       |          |
|   6 |    TABLE ACCESS BY INDEX ROWID| EMPLOYEES         |    107 |      4 |    107 |00:00:00.01 |      10 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access("D"."DEPARTMENT_ID"="E"."DEPARTMENT_ID")


26 rows selected.

SQL> SQL>


And the third, more complicated, Execution Plan for a slightly different SQL :

SQL> select /*+ gather_plan_statistics  leading(e) use_nl(d) */
  2  d.department_id, d.department_name, e.first_name, e.last_name
  3  from hr.departments d join hr.employees e
  4  on d.department_id=e.department_id
  5  order by 1,2,4,3
  6  /

DEPARTMENT_ID DEPARTMENT_NAME                FIRST_NAME           LAST_NAME
------------- ------------------------------ -------------------- -------------------------
           10 Administration                 Jennifer             Whalen
           20 Marketing                      Pat                  Fay
           20 Marketing                      Michael              Hartstein
           30 Purchasing                     Shelli               Baida
           30 Purchasing                     Karen                Colmenares
           30 Purchasing                     Guy                  Himuro
           30 Purchasing                     Alexander            Khoo
           30 Purchasing                     Den                  Raphaely
           30 Purchasing                     Sigal                Tobias
           40 Human Resources                Susan                Mavris
           50 Shipping                       Mozhe                Atkinson
....
....  Intermediate rows deleted from the output presented here
....
          100 Finance                        Ismael               Sciarra
          100 Finance                        Jose Manuel          Urman
          110 Accounting                     William              Gietz
          110 Accounting                     Shelley              Higgins
          280 New Department                 Hemant               Chitale

107 rows selected.

SQL>
SQL> @show_last_sql_statistics.sql
SQL> select * from table(dbms_xplan.display_cursor(format=>'ALLSTATS LAST'))
  2  /

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  2kbc7rcpjcu0w, child number 0
-------------------------------------
select /*+ gather_plan_statistics  leading(e) use_nl(d) */
d.department_id, d.department_name, e.first_name, e.last_name from
hr.departments d join hr.employees e on d.department_id=e.department_id
order by 1,2,4,3

Plan hash value: 2010582952

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name              | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                   |      1 |        |    107 |00:00:00.01 |     119 |       |       |          |
|   1 |  SORT ORDER BY                |                   |      1 |    107 |    107 |00:00:00.01 |     119 | 18432 | 18432 |16384  (0)|
|   2 |   NESTED LOOPS                |                   |      1 |    107 |    107 |00:00:00.01 |     119 |       |       |          |
|   3 |    NESTED LOOPS               |                   |      1 |    108 |    107 |00:00:00.01 |      12 |       |       |          |
|   4 |     VIEW                      | index$_join$_002  |      1 |    108 |    107 |00:00:00.01 |       8 |       |       |          |
|*  5 |      HASH JOIN                |                   |      1 |        |    107 |00:00:00.01 |       8 |  1610K|  1610K| 1539K (0)|
|   6 |       INDEX FAST FULL SCAN    | EMP_DEPARTMENT_IX |      1 |    108 |    107 |00:00:00.01 |       4 |       |       |          |
|   7 |       INDEX FAST FULL SCAN    | EMP_NAME_IX       |      1 |    108 |    108 |00:00:00.01 |       4 |       |       |          |
|*  8 |     INDEX UNIQUE SCAN         | DEPT_ID_PK        |    107 |      1 |    107 |00:00:00.01 |       4 |       |       |          |
|   9 |    TABLE ACCESS BY INDEX ROWID| DEPARTMENTS       |    107 |      1 |    107 |00:00:00.01 |     107 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access(ROWID=ROWID)
   8 - access("D"."DEPARTMENT_ID"="E"."DEPARTMENT_ID")


30 rows selected.

SQL> SQL>


All three executions return exactly the same Result Set.  What differ are the Execution Plans and the Logical Reads.
The first Execution Plan (using a HASH JOIN) required 14 Logical Reads (Buffer Gets).
Note how the second Execution required 19 Logical Reads, while the third Execution required 119 Logical Reads, having queried the DEPARTMENTS Table 107 times (reading 1 Block each time) via the Rows from the Nested Loops driven by the intermediate "view" index$_join$_002.

Since these are extremely small tables, the Actual Time is less than 1 second, but what matters in the comparison is the number of Logical Reads.
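
If you want to compare the three executions side by side after the fact, a minimal sketch is to pull the cumulative statistics for their SQL_IDs (taken from the outputs above) from V$SQL :

select sql_id, child_number, executions, buffer_gets, cpu_time, elapsed_time
from v$sql
where sql_id in ('59vduj8s28fhn', '8q08zbr367cgg', '2kbc7rcpjcu0w')
order by buffer_gets
/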

("Starts" is the number of times that particular Step (Id) was actually executed, 
"E-Rows" is the Estimated Rows for the Execution Plan,
"A-Rows" is the *Actual* number of Rows retrieved at Runtime,
"A-Time" is the *Actual* Time in Hours:Minutes:Seconds,
"0mem" is the "Estimated Optimal Memory Usage", 
"1mem" is the Estimated 1-pass (one write to temp) Memory Usage 
"Used-Mem" is the Actual Memory Used)


SQL> select table_name, blocks, num_rows
  2  from dba_tables
  3  where owner = 'HR'
  4  and table_name in ('DEPARTMENTS','EMPLOYEES')
  5  order by 1
  6  /

TABLE_NAME                         BLOCKS   NUM_ROWS
------------------------------ ---------- ----------
DEPARTMENTS                             5         29
EMPLOYEES                               5        108

SQL>


Given the sizes of the two tables, I am sure that all the blocks are cached in the Buffer Cache, so I am not interested in Physical I/O statistics and Wait Times.  The Logical Reads consume CPU time.

This is an example of how to read SQL Execution Statistics.  

You may extrapolate to tables with thousands of blocks and tens of thousands of rows.  

Also, consider Memory Usage for the operations when you may have dozens or hundreds of sessions concurrently doing HASH JOINs and SORTs, all limited by PGA_AGGREGATE_TARGET.  Some Memory operations may overflow to the TEMP Tablespace, noticeably increasing total execution time.
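
To get a feel for whether work areas on your system are completing in memory or spilling to TEMP, a minimal sketch (interpretation depends on your workload) is to look at V$SQL_WORKAREA_HISTOGRAM :

select low_optimal_size, high_optimal_size,
       optimal_executions, onepass_executions, multipasses_executions
from v$sql_workarea_histogram
where total_executions > 0
order by low_optimal_size
/

One-pass and multi-pass executions indicate operations that could not complete within their memory work area.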

23 January, 2022

Refreshable Clone PDB -- 4 Converting it to a Read Write PDB

 As demonstrated in my first post on Refreshable Clone PDBs, the Clone PDB can be opened only in READ ONLY mode.  This is akin to a Basic Materialized View or a Read Only Standby Database, either of which is updated from the source table(s) / database.

Such a Refreshable Clone PDB is useful as a Reporting Database where you can run queries / extracts -- with data AS OF the last Refresh -- without putting load on the running Production (Transactional, Read Write Database).

But if you want to (finally) open the Clone PDB in Read Write mode (and sever any links with the source, thus preventing further Refreshes), you can do so by changing the Refresh Mode.
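
In outline, the conversion (demonstrated in full below) is : close the Clone PDB, optionally do a final REFRESH to pick up the latest data, disable Refresh, and then open it Read Write.  A minimal sketch :

alter pluggable database ro_pdb close;
alter pluggable database ro_pdb refresh;            -- optional final refresh from the source
alter pluggable database ro_pdb refresh mode none;  -- disable further refreshes; the clone diverges from the source permanently
alter pluggable database ro_pdb open read write;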



SQL> alter pluggable database ro_pdb open read only;

Pluggable database altered.

SQL> connect hemant/newhemant@ro_pdb
Connected.
SQL> select count(*) from list_of_objects;

  COUNT(*)
----------
     73645

SQL> connect / as sysdba
Connected.

SQL> alter pluggable database ro_pdb close;  -- close it so that I can do a REFRESH

Pluggable database altered.

SQL>
SQL> alter pluggable database ro_pdb refresh;  -- execute the REFRESH

Pluggable database altered.

SQL> alter pluggable database ro_pdb open ;  -- attempt to OPEN, defaulting to READ WRITE mode
alter pluggable database ro_pdb open
*
ERROR at line 1:
ORA-65341: cannot open pluggable database in read/write mode


SQL> alter pluggable database ro_pdb open read write;
alter pluggable database ro_pdb open read write
*
ERROR at line 1:
ORA-65341: cannot open pluggable database in read/write mode


SQL>
SQL> alter pluggable database ro_pdb open read only;  -- open READ ONLY

Pluggable database altered.

SQL> connect hemant/newhemant@ro_pdb
Connected.
SQL> select count(*) from list_of_objects;  -- verify that the database has been refreshed (new rows visible in the source table)

  COUNT(*)
----------
     83645

SQL>
SQL> connect / as sysdba
Connected.
SQL> alter pluggable database ro_pdb close;  -- close it again

Pluggable database altered.

SQL>
SQL> alter pluggable database ro_pdb refresh mode none ;  -- ***DISABLE FURTHER REFRESH's***

Pluggable database altered.

SQL>
SQL> alter pluggable database ro_pdb open read write;  -- open in READ WRITE mode now !!

Pluggable database altered.

SQL>
SQL> connect hemant/newhemant@ro_pdb
Connected.
SQL> select count(*) from list_of_objects;

  COUNT(*)
----------
     83645

SQL> delete list_of_objects where owner = 'HEMANT';  -- proof that the database is now WRITABLE

32 rows deleted.

SQL> commit;

Commit complete.

SQL>
SQL> alter pluggable database ro_pdb close;

Pluggable database altered.

SQL> alter pluggable database ro_pdb refresh;  -- check if it can be REFRESH'ed from the source
alter pluggable database ro_pdb refresh
*
ERROR at line 1:
ORA-65261: pluggable database RO_PDB not enabled for refresh


SQL>


Thus, to make the PDB WRITABLE, REFRESH has to be disabled.
Once REFRESH is disabled, the PDB can no longer be refreshed from the source; it has diverged from the source.
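
You can confirm the change from the data dictionary; a minimal sketch using DBA_PDBS :

select pdb_name, refresh_mode, refresh_interval
from dba_pdbs
order by pdb_name
/

RO_PDB should now show REFRESH_MODE as NONE.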

These are the alert log messages when REFRESH is disabled :


2022-01-23T15:35:55.766486+08:00
alter pluggable database ro_pdb refresh mode none
2022-01-23T15:35:55.846041+08:00
RO_PDB(6):Pluggable database RO_PDB pseudo opening
RO_PDB(6):SUPLOG: Initialize PDB SUPLOG SGA, old value 0x0, new value 0x18
RO_PDB(6):Autotune of undo retention is turned on.
RO_PDB(6):Endian type of dictionary set to little
RO_PDB(6):Undo initialization recovery: Parallel FPTR failed: start:1511350 end:1511367 diff:17 ms (0.0 seconds)
RO_PDB(6):Undo initialization recovery: err:0 start: 1511329 end: 1511464 diff: 135 ms (0.1 seconds)
RO_PDB(6):[6305] Successfully onlined Undo Tablespace 2.
RO_PDB(6):Undo initialization online undo segments: err:0 start: 1511465 end: 1511507 diff: 42 ms (0.0 seconds)
RO_PDB(6):Undo initialization finished serial:0 start:1511329 end:1511509 diff:180 ms (0.2 seconds)
RO_PDB(6):Database Characterset for RO_PDB is AL32UTF8
2022-01-23T15:35:57.144146+08:00
RO_PDB(6):Pluggable database RO_PDB pseudo closing
RO_PDB(6):JIT: pid 6305 requesting stop
RO_PDB(6):Closing sequence subsystem (1512195729).
RO_PDB(6):Buffer Cache flush started: 6
RO_PDB(6):Buffer Cache flush finished: 6
Completed: alter pluggable database ro_pdb refresh mode none


And these are the messages when it is OPENed in READ WRITE mode :



2022-01-23T15:36:56.085938+08:00
alter pluggable database ro_pdb open read write
2022-01-23T15:36:56.088166+08:00
RO_PDB(6):Pluggable database RO_PDB opening in read write
RO_PDB(6):SUPLOG: Initialize PDB SUPLOG SGA, old value 0x0, new value 0x18
RO_PDB(6):Autotune of undo retention is turned on.
RO_PDB(6):Endian type of dictionary set to little
RO_PDB(6):Undo initialization recovery: Parallel FPTR complete: start:1571711 end:1571732 diff:21 ms (0.0 seconds)
RO_PDB(6):Undo initialization recovery: err:0 start: 1571710 end: 1571732 diff: 22 ms (0.0 seconds)
2022-01-23T15:36:58.249919+08:00
RO_PDB(6):[6305] Successfully onlined Undo Tablespace 2.
RO_PDB(6):Undo initialization online undo segments: err:0 start: 1571732 end: 1573115 diff: 1383 ms (1.4 seconds)
RO_PDB(6):Undo initialization finished serial:0 start:1571710 end:1573161 diff:1451 ms (1.5 seconds)
RO_PDB(6):Deleting old file#9 from file$
RO_PDB(6):Deleting old file#10 from file$
RO_PDB(6):Deleting old file#11 from file$
RO_PDB(6):Deleting old file#12 from file$
RO_PDB(6):Deleting old file#26 from file$
RO_PDB(6):Deleting old file#31 from file$
RO_PDB(6):Deleting old file#32 from file$
RO_PDB(6):Deleting old file#33 from file$
RO_PDB(6):Deleting old file#34 from file$
RO_PDB(6):Adding new file#50 to file$(old file#9).             fopr-1, newblks-48640, oldblks-19200
RO_PDB(6):Adding new file#51 to file$(old file#10).             fopr-1, newblks-61440, oldblks-15360
RO_PDB(6):Adding new file#52 to file$(old file#11).             fopr-1, newblks-58240, oldblks-12800
RO_PDB(6):Adding new file#53 to file$(old file#12).             fopr-1, newblks-46880, oldblks-640
RO_PDB(6):Adding new file#54 to file$(old file#26).             fopr-1, newblks-12800, oldblks-12800
RO_PDB(6):Adding new file#56 to file$(old file#31).             fopr-1, newblks-12800, oldblks-12800
RO_PDB(6):Adding new file#55 to file$(old file#32).             fopr-1, newblks-12800, oldblks-12800
RO_PDB(6):Adding new file#57 to file$(old file#34).             fopr-1, newblks-115200, oldblks-12800
RO_PDB(6):Successfully created internal service RO_PDB at open
2022-01-23T15:36:59.694482+08:00
****************************************************************
Post plug operations are now complete.
Pluggable database RO_PDB with pdb id - 6 is now marked as NEW.
****************************************************************
RO_PDB(6):Pluggable database RO_PDB dictionary check beginning
RO_PDB(6):Pluggable Database RO_PDB Dictionary check complete
RO_PDB(6):Database Characterset for RO_PDB is AL32UTF8
2022-01-23T15:37:05.752451+08:00
RO_PDB(6):JIT: pid 6305 requesting full stop
2022-01-23T15:37:11.910855+08:00
RO_PDB(6):SUPLOG: Set PDB SUPLOG SGA at PDB OPEN, old 0x18, new 0x0 (no suplog)
2022-01-23T15:37:20.655852+08:00
RO_PDB(6):Opening pdb with no Resource Manager plan active
RO_PDB(6):joxcsys_required_dirobj_exists: directory object exists with required path /opt/oracle/product/19c/dbhome_1/javavm/admin/, pid 6305 cid 6
2022-01-23T15:37:22.578807+08:00
Pluggable database RO_PDB opened read write
Completed: alter pluggable database ro_pdb open read write


Note : To understand FILE# values, see my previous post on Datafile Names for Refresh Clone PDB.
(Some of the FILE# values have changed since that post because I have since added a new tablespace in the source database and recreated RO_PDB as a Refreshable Clone PDB.)