Oracle9i UNIX and Multiple Block Sizes Administration

Oracle UNIX/Linux Tips by Burleson Consulting

Oracle9i and multiple block sizes

The ability of Oracle to support multiple block sizes did not get a lot of fanfare during the publicity rollout of Oracle9i. Rather than being touted as an important tool to reduce disk I/O, the multiple block size feature was buried far down on the list of new features of the Oracle9i database. However, for the Oracle administrator, multiple block sizes are extremely important and exciting. For the first time, you can customize your data buffer sizes according to the specific needs of your database.

The ability to support multiple block sizes within Oracle9i opens up a whole new world of disk I/O management.  Prior to Oracle9i, your entire Oracle database had to have a single block size and this block size was determined at the time that the database was created. 

With the introduction of Oracle8i, we received the ability to segregate tables and index blocks into three separate data buffers, but all of the buffer caches had to be the same block size.  We had the KEEP pool to store frequently referenced table blocks, the RECYCLE pool to hold blocks from large-table full-table scans, and a DEFAULT pool for miscellaneous object blocks.
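
For reference, assignment of an object to the Oracle8i KEEP or RECYCLE pool is made through its storage clause. Here is a minimal sketch (customer is a hypothetical table name):

alter table
   customer
storage
   (buffer_pool keep);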

With Oracle9i, we can define tablespaces with block sizes of 2K, 4K, 8K, 16K and 32K, and assign tables and indexes to the best block size to minimize I/O and best manage wasted space in our data buffers.  When we combine the new data buffers for these block sizes, we get a total of seven separate and distinct data buffers to segregate our incoming table and index rows.

As we know, disk I/O is the single most expensive operation within an Oracle9i database, and multiple block sizes give us a powerful new tool to manage disk I/O with more power than ever before.

To fully appreciate the importance of multiple block sizes, we need to take a look at the basic nature of disk I/O.

Allocating multiple data buffer caches

Let’s see firsthand how the multiple data buffers work.  For example, we could define the following buffer cache allocations in our initialization file.

db_block_size=32768         -- This is the system-wide
                            -- default block size 

db_cache_size=3G            -- This allocates a total of 3 gigabytes
                            -- for all of the 32K data buffers 

db_keep_cache_size=1G       -- Here we use 1 gigabyte for the KEEP pool

db_recycle_cache_size=500M  -- Here is 500 meg for the RECYCLE pool
                            -- Hence, the DEFAULT pool is 1,500 meg 

-- *****************************************************************
-- The caches below are all additional RAM memory (total=3.1 gig)
-- that are above and beyond the allocation from db_cache_size
-- ***************************************************************** 

db_2k_cache_size=200M       -- This cache is reserved for random
                            -- block retrieval on tables that
                            -- have small rows. 

-- Note: the data dictionary resides in the SYSTEM tablespace, so its
-- blocks use the 32K default block size

db_4k_cache_size=500M       -- This 4K buffer will be reserved
                            -- exclusively for tables with a small average
                            -- row length and random access 

db_8k_cache_size=800M       -- This is a separate cache for
                            -- segregating I/O for specific tables 

db_16k_cache_size=1600M     -- This is a separate cache for
                            -- segregating I/O for specific tables

From this example, what is the total RAM allocated to the data buffer caches? The total RAM required is the sum of all the named buffer caches, plus db_cache_size, in this case, 6,100 megabytes, or 6.1 gigabytes. 
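
Spelling out the arithmetic (treating the 3 gigabyte db_cache_size as 3,000 megabytes):

   db_cache_size        3,000M  (includes the KEEP, RECYCLE and DEFAULT pools)
   db_2k_cache_size       200M
   db_4k_cache_size       500M
   db_8k_cache_size       800M
   db_16k_cache_size    1,600M
                        ------
   Total buffer RAM     6,100M  (6.1 gigabytes)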

Remember that db_keep_cache_size and db_recycle_cache_size are subtracted from the db_cache_size.  In the example above, the DEFAULT pool is 1.5 gigabytes, after subtracting the allocation for the KEEP and RECYCLE pools.

Also note that you cannot create a buffer of the same size as your db_block_size.  In our example the db_block_size is 32768, so we cannot allocate a db_32k_cache_size.

Here we have defined seven totally separate data buffers. Let's review the usage for each data buffer, computing the number of data blocks each buffer can hold (Table 1).

NAME            SIZE      BLOCK SIZE    BLOCK SPACE
KEEP pool       1,000M    32K            31,250 blocks
RECYCLE pool      500M    32K            15,625 blocks
DEFAULT pool    1,500M    32K            46,875 blocks
2K cache          200M     2K           100,000 blocks
4K cache          500M     4K           125,000 blocks
8K cache          800M     8K           100,000 blocks
16K cache       1,600M    16K           100,000 blocks

Table 1: Computing the size and capacity for multiple block sizes
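
As a sanity check, you can see the buffer pools Oracle has actually configured by querying the v$buffer_pool view (this sketch assumes you have access to the v$ views; current_size is reported in megabytes):

select
   name,
   block_size,
   current_size
from
   v$buffer_pool;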

Our next step is to create tablespaces using each of these block sizes.  Oracle will automatically load a tablespace's blocks into the data buffer of the appropriate block size.

For example, we talked about creating the db_2k_cache_size exclusively for tables with small row sizes that are always accessed randomly.  Hence, we could define a 2K tablespace as follows:

create tablespace
   ts_2k
datafile
   '/u01/oradata/mysid/2k_file.dbf'
size
   100M
blocksize
   2k
;

Once defined, Oracle will always load blocks from the ts_2k tablespace into the db_2k_cache_size data buffer.  Now, all we need to do is move all appropriate tables into the new tablespace using the Create Table As Select (CTAS) command:

-- First, disable all RI constraints

create table
   new_customer
tablespace
   ts_2k
as select
   *
from
   customer
;

rename customer to old_customer;
rename new_customer to customer;

-- finally, transfer all RI constraints and indexes
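
To verify the relocation, a quick look at the data dictionary should now show the customer table in the 2K tablespace (user_tables assumes you own the table; use dba_tables otherwise):

select
   table_name,
   tablespace_name
from
   user_tables
where
   table_name = 'CUSTOMER';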

Now that we see how to create tablespaces with different block sizes, let’s explore some other important considerations for determining the tablespace blocksize.

Large Blocks and Oracle Indexes

Prior to Oracle9i, many Oracle tuning experts recommended that a database be re-defined with a larger blocksize. Many people were mystified when a database with a 2K block size was increased to an 8K block size and the entire database ran faster. A common justification for resisting a block size increase was “This database randomly fetches small rows. I can’t see why moving to a larger block size would improve performance.” So, then, what explains the performance improvement with larger block sizes?

When choosing a block size, many DBAs forget about the index trees and how Oracle indexes are accessed sequentially during an index range scan.  An index range scan is commonly seen in nested loop joins, and the vast majority of row accesses involve indexes.

Because index range scans involve gathering sequential index nodes, placing the indexes in a larger block size reduces disk I/O and improves throughput for the whole database.

So then, why not create our entire Oracle database with large block sizes and forget about multiple block sizes?  The answer is not simple.  In order to fully utilize the RAM memory in the data buffers, you must segregate tables according to their distribution of related data.

* Small blocks - Tables with small rows that are accessed in a random fashion should be placed onto tablespaces with small block sizes. With random access and small block sizes, more of the RAM in the data buffer remains available to hold frequently referenced rows from other tables.

* Large blocks - Indexes, row-ordered tables, single-table clusters, and tables with frequent full-table scans should reside in tablespaces with large block sizes. This is because a single I/O will fetch many related rows, and subsequent requests for the "next" rows will already be in the data buffer.

The goal here is simple; we want to maximize the amount of available RAM memory for the data buffers by setting the block sizes according to the amount of I/O experienced by the table or index.  Random access of small rows suggests small block sizes, while sequential access of related rows suggests large block sizes.

For example, consider a query that accesses 100 random 80-byte rows from Oracle.  Since the accesses are random, we can assume that no two rows exist on the same block, and that 100 block reads are required to access the result set.

If we have 16K blocks, then we would need 1,600K (16K * 100), or roughly 1.6 meg, of RAM space in the db_16k_cache_size data buffer. If we use 2K blocks, then our 100 I/Os consume only 200K (2K * 100) in the data buffer.  For this query, we would have saved about 1.4 meg of RAM to hold other row data.

Waste not, want not

Until RAM memory becomes cheap enough that we can cache our whole database, we need to manage the RAM that we allocate to our data buffers. The allocation of tables and indexes according to block sizes is a balancing act.

If we allocate the data blocks too large, we waste valuable data buffer space holding row data that Oracle will never reference.  If we allocate the data blocks too small, Oracle will have to do more disk I/O to satisfy a request. Here are some general rules for allocating data block sizes:

* Segregate large-table full-table scans - Tables that experience large-table full-table scans will benefit from the largest supported block size and should be placed in a tablespace with your largest block size.

* Set db_recycle_cache_size carefully - If your db_block_size is not the largest supported block size for your server, you should not use the db_recycle_cache_size parameter.  Instead, create a db_32k_cache_size (or whatever your maximum is), and assign all tables that experience frequent large-table full-table scans to the largest buffer cache in your database.

* The Data Dictionary cache uses the default block size - You should ensure that the data dictionary (i.e. your SYSTEM tablespace) is always fully cached in a data buffer pool. Remember, the block size of the data dictionary is not as important as ensuring that the data buffer associated with the SYSTEM tablespace has enough RAM to fully cache all data dictionary blocks.

* Indexes want large block sizes - Indexes will always favor the largest supported block size. You want to be able to retrieve as many index nodes as possible in a single I/O, especially for SQL that performs index range scans.  Hence, all indexes should reside in tablespaces with a 32K block size (see the sketch following this list).

* Average row length - The block size for a table's tablespace should always be greater than the average row length for the table (dba_tables.avg_row_len).  If it is smaller than the average row length, row chaining occurs and excessive disk I/O is incurred.

* Use large blocks for data sorting - Your TEMP tablespace will benefit from the largest supported block size. This allows disk sorting to happen in large blocks with a minimum of disk I/O.
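
As an illustration of the index rule above, the sketch below lists indexes that are not yet in a 32K tablespace and rebuilds one of them into it. The names ts_32k and cust_last_name_idx are hypothetical; in the example configuration above, 32K is the database default block size, so a 32K tablespace needs no separate db_32k_cache_size.

select
   index_name,
   tablespace_name
from
   user_indexes
where
   tablespace_name <> 'TS_32K';

alter index
   cust_last_name_idx
rebuild
tablespace
   ts_32k;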

The intent of this section is to give you an idea of the impact of multiple block sizes and multiple RAM caches.  Once you are aware of the salient issues surrounding the use of block sizes, you can make intelligent decisions about the proper assignment of block sizes to your tables and indexes. 

However, it is important to note that your tuning changes are never permanent, and you can always move tables from one tablespace to another, experimenting with different block sizes.  For example, if you placed a table into a 2K tablespace and the I/O increases, you can simply move the table into a tablespace with a larger block size. Minimizing I/O by adjusting block sizes is a long, iterative process.
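
For example, a table can be relocated without CTAS by using the alter table ... move command. A minimal sketch follows (ts_16k is a hypothetical 16K tablespace); note that a move leaves the table's indexes unusable, so they must be rebuilt afterward:

alter table
   customer
move tablespace
   ts_16k;

alter index
   cust_last_name_idx
rebuild;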

Now that we know how to reduce UNIX disk I/O, how do we confirm the I/O reduction?  The next section will explore how to extend the UNIX iostat utility to monitor disk I/O.

 



