Showing posts with label Architecture.

Thursday, December 20, 2012

Additional Redo Generation During Online Backup

In this article I'll describe the reason why additional (excessive) redo information is generated when we take an online database/tablespace backup.
Before going further, let me clarify one thing: during an online backup the database performs normal read/write operations on the datafiles and generates online redo logs as usual, but with some additional information.

When we put the database or a tablespace in backup mode with one of the commands below:
alter database begin backup;
or
alter tablespace test begin backup;

Oracle immediately flushes the earlier changes from the SGA (buffer cache) to the physical disk (Oracle blocks) and then checkpoints the tablespace.
Now the datafile headers of the tablespace are frozen, i.e. the checkpoint SCN is recorded and is not advanced by further updates to the datafiles. This helps Oracle determine when the tablespace's datafiles last held consistent data and which archived redo log files might be needed to fully recover those files.

When a block is changed for the FIRST time after backup mode begins, Oracle writes the ENTIRE BLOCK image into the redo log, instead of writing only the changed bytes as it normally does. Once the entire block image is in the redo log, the NEXT update to the same block generates only the usual change vector rather than another full block image. The reason for this is that DBWR and the backup's copying process may be working on the same block simultaneously.
Let's make this clear with an example.
Suppose the filesystem performs I/O in 4k units while Oracle's block size is 8k, so the backup program reads an 8k Oracle block in two 4k pieces. The OS has copied the first 4k of the block when DBWR is called to rewrite that same block; the entire 8k block is rewritten. The backup program then reads the last 4k of the now-changed block.
This is the situation where DBWR writes to a block at the same moment the backup process is copying it. In this case the backup copy of the database block is a FRACTURED BLOCK: half old version, half new.
Hence Oracle logs the ENTIRE block image in the redo logs. In the event of recovery, any fractured block restored from the backup is completely rewritten from the redo and becomes consistent.
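
A rough way to see this extra redo for yourself (a sketch, not a benchmark: the tablespace name test matches the example above, and the exact values of the cumulative redo size statistic will vary from system to system):

alter tablespace test begin backup;
-- run an update workload, then check how much redo has been generated
select name, value from v$sysstat where name = 'redo size';
alter tablespace test end backup;

Comparing the redo generated by the same workload with and without backup mode should show noticeably more redo in backup mode, because each first-time block change logs the full block image.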

Tuesday, October 30, 2012

Shared Pool Cache

It holds executable PL/SQL code and SQL statements, as well as information from the data dictionary tables.
Whenever a user submits a SQL statement through a session, Oracle first determines whether that SQL is already in the shared pool. If Oracle finds it there, it simply soft-parses the statement; if it does not, it has to hard-parse it.
The shared pool should also be sized properly: neither over-sized nor under-sized.
Both the I/O rates and the CPU usage diminish when the database uses its shared pool memory effectively.

Shared Pool components:

Library Cache:
All application code has to be parsed first and executed later. Oracle stores all compiled SQL statements in the library cache component of the shared pool, which is shared by all users of the database. Every time a SQL statement is issued, Oracle first checks the library cache to see if an already parsed, ready-to-execute form of the statement is there. If there is, Oracle performs a soft parse; otherwise it must perform a hard parse. Hard parsing consumes critical system resources, such as processing power, and internal Oracle structures, such as latches, so we should try to avoid it. High hard-parse counts lead to resource contention and a consequent slowdown in the database's response to user requests.
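
To get a rough feel for parse activity (a sketch: these statistic names come from v$sysstat, and the counts are cumulative since instance startup):

select name, value from v$sysstat
where name in ('parse count (total)', 'parse count (hard)');

A hard-parse count growing nearly as fast as the total parse count suggests that statements are not being reused from the library cache effectively.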

Data Dictionary Cache
It contains object definitions, usernames, roles, privileges, and other such information. When a SQL statement runs, Oracle first determines whether the user has the privilege to perform the planned operation, and for that it checks the data dictionary cache.
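
Dictionary cache efficiency can be inspected through the v$rowcache view (a sketch: a high ratio of getmisses to gets for the busy rows hints that the shared pool may be undersized):

select parameter, gets, getmisses from v$rowcache order by getmisses desc;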

Sunday, October 28, 2012

Database Buffer Cache

The database buffer cache consists of the memory buffers that Oracle uses to hold the data read from the datafiles on disk by the server process in response to user requests. When users modify data, those changes are made in the database buffer cache. Thus the buffer cache contains both the original blocks read from disk and the changed blocks that have to be written back to disk.
Note: a logical read, which finds the block already in the buffer cache, is much faster than a physical read from disk.

The database buffer cache can be grouped into three components:
Free buffers: These contain no usable data, so the database can use them to hold new data it reads from disk.
Dirty buffers: These contain data that was read from disk and then modified, but hasn't yet been written to the datafiles on disk.
Pinned buffers: These are data buffers currently in active use by user sessions.

When a user process requests data, Oracle first checks whether the data is already in the buffer cache. If it is, the server process reads the data directly from memory and sends it to the user. If the data is not found in the buffer cache, the server process reads the relevant data from the datafiles on disk and caches it in the database buffer cache. If the server process can't find a free buffer after searching through a threshold number of buffers, it asks the database writer process to write some of the dirty buffers to disk, thus freeing up buffers for the new data it wants to read into the cache.
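
A common, if rough, gauge of buffer cache effectiveness is the hit ratio, 1 - physical reads / (db block gets + consistent gets). The inputs come from v$sysstat (a sketch: counts are cumulative since instance startup, so compare deltas over an interval rather than raw totals):

select name, value from v$sysstat
where name in ('db block gets', 'consistent gets', 'physical reads');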

Saturday, October 27, 2012

Oracle's Background Processes

Database Writer (DBWR)
This process is responsible for writing the dirty (modified) buffers from the database buffer cache to the datafiles residing on physical disk.
It continuously monitors the database buffer cache for free space and, when free buffers run low, DBWR makes room by writing the least recently used (LRU) data in the buffer cache to the datafiles.
Note: even though dirty buffers may be written to the datafiles before the user commits, those changes are not permanent until the transaction commits; if it is rolled back, the blocks are restored from undo. DBWR writes them simply to free up buffer cache space when it is needed.
The database writer process writes dirty buffers to disk under the following conditions:
1. When the database issues a checkpoint
2. When a server process can’t find a clean reusable buffer after checking a threshold number of buffers
3. Every 3 seconds
We can have a maximum of 20 database writer processes (DBW0 through DBW9, and DBWa through DBWj).
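
The number of writer processes is controlled by the db_writer_processes initialization parameter (a sketch: this is a static parameter, so a change only takes effect after an instance restart, and 4 here is just an illustrative value):

show parameter db_writer_processes
alter system set db_writer_processes = 4 scope = spfile;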

Log Writer (LGWR)
This process is responsible for writing the contents of the redo log buffer to disk. Whenever a change is made to a database table (an insert, update, or delete), Oracle records both committed and uncommitted changes in the redo log buffer. LGWR writes these changes from the redo log buffer to the redo log files on disk, in particular whenever a user commits a transaction. After writing those redo buffer entries to the online redo log files, LGWR marks the corresponding redo log buffers as reusable.
When we multiplex the redo log, the log writer writes the contents of the redo log buffer to all members of the redo log group. If one or more members are damaged or otherwise become unavailable, the log writer writes only to the available members of the group. If it can't write to even one member of a redo log group, the log writer signals an error.
The log writer writes redo log buffer entries to the online redo logs under the following circumstances:
• When a user commits a transaction.
• Every 3 seconds.
• When the redo log buffer is one-third full.
• When the database writer signals that redo records need to be written to disk (redo is always written before the corresponding data blocks, the write-ahead protocol).

Process Monitor(PMON)
When user processes fail, the PMON cleans up after them, ensuring that the database frees up the resources that the dead processes were using. For example, when a user process dies while holding certain table locks, the PMON process releases those locks so other users can use the tables without any interference from the dead process.


System Monitor(SMON)
The SMON performs system-monitoring tasks for the Oracle instance, such as these:
• Upon restarting an instance that crashed, SMON determines whether the database is consistent.
• SMON cleans up unnecessary temporary segments.
• SMON coalesces free extents in dictionary-managed tablespaces, which makes larger contiguous free areas on disk available to database objects.

Redo Log Files

The redo log files record all the changes made to the database, and play an important role during the recovery of a database.
The set of redo log files that are currently being used to record the changes to the database are called online redo log files. These logs can be archived to a different location before being reused, and the saved logs are called archived redo logs.
Redo log files contain the following information about database changes made by transactions:
• Indicators specifying when the transaction started
• The name of the transaction
• The name of the data object that was being updated
• The “before image” of the transaction (the data as it was before the changes were made)
• The “after image” of the transaction (the data as it was after the transaction made the changes)
• Commit indicators that indicate whether and when the transaction completed.
So when a database crashes, the information in the redo log files is used to bring the datafiles on disk up to date: all redo log transactions that have both a begin and a commit entry are rolled forward, and all transactions that have a begin entry but no commit entry are rolled back. Committed transactions are thus re-created by applying the "after image" records in the redo log files to the database, and incomplete transactions are undone by using the "before image" records in the undo tablespace.
Oracle requires that every database have at least two redo log groups, each group consisting of at least one individual log file member.
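
The current online groups can be inspected through v$log, and a new group added when more are needed (a sketch: the group number, file path, and size below are illustrative, not a recommendation):

select group#, sequence#, members, status from v$log;
alter database add logfile group 4 ('/u01/oradata/orcl/redo04.log') size 100m;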

Control File

The control file is a file that Oracle maintains to manage the state of the database, and it is the single most important file in the Oracle database. Every database has one control file, but due to its importance, multiple identical copies should be maintained. When the database writes to the control file, all the multiplexed copies are written with the same information, i.e. they remain identical. The control file is critical to the functioning of the database, and recovery is difficult without access to an up-to-date control file.
The control file contains, in a binary format readable only by Oracle, the names and physical locations of the datafiles and redo log files, the current log sequence number, backup set details, and the all-important system change number (SCN), which indicates the most recent version of committed changes in the database.
Only Oracle can write information to the control file, and the Oracle server process continually updates the control file during the operation of the database.
The control file is important in verifying the integrity of the database and when recovering the database. 
When an Oracle instance starts, it consults the control file first to identify all the datafiles and redo log files that must be opened for database operations.
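
The locations of the multiplexed control file copies can be checked from the running database (a sketch):

select name from v$controlfile;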

Datafile


A datafile belongs to only one database, and one or more datafiles constitute a tablespace.
When the database instance needs to read table or index data, it reads that from the datafiles on disk, unless that data is already cached in Oracle’s memory.
Similarly, the database writes new table or index data or updates to existing data to the datafiles on disk for permanent storage.
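
The datafiles that make up each tablespace can be listed from dba_data_files (a sketch, with sizes shown in megabytes):

select file_name, tablespace_name, bytes/1024/1024 as mb
from dba_data_files
order by tablespace_name;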

Difference between Database and Instance.

An Oracle database is a physical structure of files: datafiles, control files, redo log files, the parameter file, the password file, and network files (tnsnames.ora, listener.ora). These files are useless unless we interact with them, and for this we need the operating system, which provides processing capabilities as well as resources, such as memory, to read and write the data on the disk drives.
So the combination of the specific set of processes Oracle creates on the server and the memory allocated to them by the operating system is known as an instance.