Here are my two responses:
On the "maximum number of control files":
You have to consider how Oracle uses the control file.
Logically it is one single controlfile multiplexed 'N' times. Therefore, each physical copy has to be an exact mirror of every other copy (just as every member of a log group is a mirror of other members in the same group).
When does Oracle update the controlfile? Very frequently! When a log switch occurs, when ARCH needs to add information about an archived file, when datafiles are added, when direct path operations take place, when RMAN creates backup pieces, and so on.
Since the multiplexed controlfiles have to be mirrors, Oracle has to update every controlfile precisely. It has to lock the controlfile. If you have seen 'enq: CF - contention' waits, or read forum threads and bug reports relating to this, you will have seen the impact. The default CF enqueue timeout is 900 seconds. Database instances have been known to crash when the CF enqueue is held too long by one process while another background process is attempting to update the controlfile.
If you have multiplexed the controlfile 12 times, the enqueue has to be held until Oracle is sure that all 12 copies have been written to the OS.
Essentially: the controlfile is a point of serialisation. Too many copies and the serialisation can become a severe constraint.
Also, when there are multiple controlfile "copies", some of them might just happen to be on "slower" disks. In such a case, controlfile updates become as slow as the slowest disk!
So, rather than a "technical constraint", this is a real-world constraint.
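To see this for yourself, you can check how many copies your instance is maintaining and whether the serialisation is already visible as waits. A sketch using the V$ views (the wait event is reported as 'enq: CF - contention' in 10g and later; on an idle or well-configured instance the second query may return no rows or near-zero waits):

```sql
-- How many controlfile copies is this instance maintaining?
SELECT name FROM v$controlfile;

-- Cumulative waits on the CF enqueue since instance startup
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event = 'enq: CF - contention';
```

If TIME_WAITED on that event grows noticeably between samples, the controlfile serialisation described above is costing you real time.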
On the "limited size of the control file" :
Think in terms of how the controlfile can be backed up ...
Why did Oracle never allow user-managed backups of controlfiles using OS commands like "cp", "tar" or "cpio"? Datafiles can be backed up (after an ALTER DATABASE / TABLESPACE BEGIN BACKUP) using such commands even though these may well use a block size of 512 bytes, 1KB or 4KB -- a block size other than the database block size. The datafiles are "protected" from fractured-block scenarios by capturing block updates in the redo stream. Unfortunately, controlfile updates cannot be so protected. A controlfile copy has to be an "all or nothing" operation. That is why Oracle provided the ALTER DATABASE BACKUP CONTROLFILE TO 'filename' command -- it actually "locks" the controlfile while it is being backed up to the destination. An excessively large controlfile would take very long to back up and would be locked for that duration.
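To illustrate, here is a sketch of checking the current controlfile size and taking a consistent backup (the destination path is a hypothetical example; the size columns are available in V$CONTROLFILE from 10g onwards):

```sql
-- Size of the controlfile (one row per copy; all copies are identical)
SELECT name,
       block_size * file_size_blks AS size_bytes
FROM   v$controlfile;

-- The supported, consistent binary backup -- the controlfile is
-- "locked" for the duration of the copy
ALTER DATABASE BACKUP CONTROLFILE TO '/backup/control01.bkp';

-- Or a re-creation script, if a binary copy is not needed
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
```

The larger the size_bytes reported by the first query, the longer the controlfile stays locked during the BACKUP CONTROLFILE operation.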
3 comments:
Nice post Hemant, thanks for sharing
Hi,
I understood why we need to limit the size of the controlfile.
Can you tell me why Oracle has limited the maximum number of controlfiles to 8?
I had created an 11g database with 8 controlfiles.
When I tried to create the database with 9 controlfiles, I got this alert/error in the alert log:
" ORA-00208: number of control file names exceeds limit of 8
CKPT (ospid: 16830): terminating the instance due to error 208
System state dump requested by (instance=1, osid=16830 (CKPT)), summary=[abnormal instance termination]."
I think we should be able to increase this limit beyond 8.
Regards,
Preethi
Preethi,
I have explained why Oracle has a limit on the "maximum number of controlfiles". Please read my post *again*.
8 controlfiles to be synced very, very frequently? Do you really build an 8-way mirrored database on one storage?
That is WAY TOO EXCESSIVE.