The Persistent SQL Philosophy
If your shop has relatively static tables and
indexes, you may want to adopt the persistent SQL philosophy, which
holds that there is one, and only one, optimal execution plan
for any SQL statement. Shops that subscribe to this philosophy are
characterized by stable applications that have been tuned to use
host variables (instead of literal values) in all SQL queries.
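For example, a query written with a host variable can be re-used
for any customer number, while the literal form generates a distinct
SQL statement for every value (the table and column names here are,
of course, illustrative):

    -- Re-usable form: the host variable allows one shared execution plan
    SELECT customer_name FROM customer WHERE cust_id = :b1;

    -- Literal form: each new value creates a brand-new SQL statement
    SELECT customer_name FROM customer WHERE cust_id = 123;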
Persistent shops also have tables and indexes
whose statistics rarely alter the execution plans for their SQL
queries, regardless of how often the statistics are re-computed.
Many persistent shops have all of their SQL embedded inside PL/SQL
packages, and the applications invoke their SQL using a standard PL/SQL
function or stored procedure call. This insulates the SQL from the
application programs, ensuring that all applications execute
identical SQL, and also ensuring that all of the SQL has been
properly tuned.
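As a minimal sketch of this technique (the package, table, and column
names are hypothetical), the tuned SQL lives inside a package body,
and every application calls the same function:

    CREATE OR REPLACE PACKAGE cust_pkg AS
       FUNCTION get_cust_name (p_cust_id IN NUMBER) RETURN VARCHAR2;
    END cust_pkg;
    /

    CREATE OR REPLACE PACKAGE BODY cust_pkg AS
       FUNCTION get_cust_name (p_cust_id IN NUMBER) RETURN VARCHAR2
       IS
          v_name customer.customer_name%TYPE;
       BEGIN
          -- The single tuned statement; every caller executes identical SQL
          SELECT customer_name
            INTO v_name
            FROM customer
           WHERE cust_id = p_cust_id;
          RETURN v_name;
       END get_cust_name;
    END cust_pkg;
    /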
Choosing this approach means that all tuned
SQL will utilize optimizer plan stability, and the CBO statistics
are only used for ad-hoc queries and those new queries that have not
yet been tuned. Of course, there is also a performance benefit to
using optimizer plan stability because the SQL statements are
pre-parsed and ready to run. This approach is generally used in
shops where experience has shown that the execution plans for SQL
rarely change after the CBO statistics have been re-analyzed.
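Optimizer plan stability is implemented with stored outlines. As a
brief sketch (the outline name, category, and query are illustrative),
an outline is created for a tuned statement and then enabled for the
session:

    -- Freeze the current execution plan for this statement
    CREATE OR REPLACE OUTLINE cust_by_id
    FOR CATEGORY tuned_sql
    ON SELECT customer_name FROM customer WHERE cust_id = :b1;

    -- Tell Oracle to apply outlines from this category
    ALTER SESSION SET use_stored_outlines = tuned_sql;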
The persistent SQL philosophy requires the
Remote DBA to write scripts to detect all SQL statements that do not
possess stored outlines, and to tune these queries on behalf of the
developers. We will discuss these techniques in detail in Chapter
13. The persistent SQL philosophy also requires less reliance on CBO
statistics, and the Remote DBA generally only analyzes tables when they are
first migrated into the production environment. Since optimizer plan
stability does not rely on statistics, the server overhead of
periodically re-computing statistics for the CBO is avoided.
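One simple sketch of such a detection script relies on the
outline_category column of v$sql, which is null when a cursor was
built without a stored outline:

    -- List cached SQL statements that executed without a stored outline
    SELECT sql_text,
           executions
      FROM v$sql
     WHERE outline_category IS NULL
     ORDER BY executions DESC;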
The Dynamic SQL Philosophy
Shops that follow the dynamic SQL philosophy subscribe to the
belief that their Oracle SQL should change execution plans in
accordance with changes to the CBO statistics. These shops are
characterized by highly volatile environments where tables and
indexes change radically and frequently. They frequently re-analyze
their CBO statistics and allow the CBO to choose the execution plan
based upon the current state of those statistics.
A good example of a shop that uses the
dynamic SQL philosophy would be one where tables grow over a
specified period of time, are then purged, and new data is
re-loaded. In these types of environments, the num_rows and
avg_row_len for the tables are constantly changing, as are the
distributions of index values. Decision-support environments and
scientific databases often adopt this philosophy because entirely new
subsets of data are loaded into tables, the data is analyzed, the
tables are truncated, and a wholly different set of data is loaded into
the table structures.
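These shifting statistics are easy to observe in the data dictionary.
A quick check (the schema name is illustrative) shows how the values
drift between loads:

    -- Watch num_rows and avg_row_len change as data is purged and re-loaded
    SELECT table_name,
           num_rows,
           avg_row_len,
           last_analyzed
      FROM dba_tables
     WHERE owner = 'SALES_OWNER';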
Another common characteristic of dynamic shops
is SQL that cannot be easily tuned. Oracle
databases that are accessed by casual users via ODBC and
third-party tools such as Crystal Reports or Microsoft Access are
often forced into the dynamic philosophy because the incoming SQL is
always different. However, it is very important to note
that the use of third-party application suites such as SAP and
PeopleSoft does not always require the adoption of the dynamic
philosophy. The SQL from these types of application suites can
be captured in the library cache, and optimizer plan stability can
be used to make the execution plans persistent.
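As a sketch of how this capture works (the category name is
illustrative), Oracle can be told to record an outline for every
statement that executes while the application workload runs:

    -- Record an outline for each statement as the workload runs
    ALTER SYSTEM SET create_stored_outlines = erp_sql;

    -- ... run the representative SAP or PeopleSoft workload ...

    -- Stop capturing and switch to using the saved plans
    ALTER SYSTEM SET create_stored_outlines = false;
    ALTER SYSTEM SET use_stored_outlines = erp_sql;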
These shops require a very different approach
to SQL tuning than persistent SQL shops. Each time new data is
loaded or the data changes, the affected tables and indexes must be
re-analyzed, and these shops often incorporate the dbms_stats
package directly into their load routines. In Oracle8i, the Remote DBA for
dynamic shops must always be vigilant for changes to the
distribution of index column values. When the column values for any
index become skewed, the Remote DBA must create column histograms for the
index so the optimizer can choose between a full-table scan and
an index range scan to service queries. Of course, these shops will
benefit greatly from Oracle9i, where the database can
automatically create column histograms for index columns with skewed
distributions.
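A minimal sketch of such a load-routine step (the schema and table
names are hypothetical) re-gathers statistics and builds histograms
on the indexed columns:

    BEGIN
       -- Re-analyze the freshly loaded table and its indexes,
       -- building histograms (10 buckets) on all indexed columns
       dbms_stats.gather_table_stats(
          ownname    => 'SALES_OWNER',
          tabname    => 'SALES',
          method_opt => 'FOR ALL INDEXED COLUMNS SIZE 10',
          cascade    => TRUE);
    END;
    /

In Oracle9i, the same call with method_opt => 'FOR ALL COLUMNS SIZE
SKEWONLY' lets the database build histograms only where the column
data is actually skewed.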
Many companies adopt one of these
philosophies without completely realizing the ramifications of their
chosen approach. In practice, most shops begin with the dynamic
philosophy and then migrate to the persistent approach
after experience shows that their execution plans rarely change
after a re-analysis of the tables and indexes.
Remember, the Oracle8i features of
cursor_sharing and optimizer plan stability are a godsend for many
Oracle developers, offering a proven method to improve the tuning
and persistence of SQL execution plans.
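As a quick sketch, cursor_sharing can be enabled dynamically, causing
Oracle to replace literal values with system-generated bind variables
so that otherwise-identical statements share a single cursor:

    -- Replace literals with bind variables at parse time (Oracle 8.1.6+)
    ALTER SYSTEM SET cursor_sharing = FORCE;

    -- With cursor_sharing=force, these two statements share one cursor:
    --   SELECT customer_name FROM customer WHERE cust_id = 123;
    --   SELECT customer_name FROM customer WHERE cust_id = 456;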
Conclusion
Before Oracle professionals undertake
serious Oracle SQL tuning, they must first master the basic optimizer
modes and understand how they function. The major points of this
chapter include:
- The Remote DBA must decide which default optimizer mode offers the best overall response time, and then tune the remaining queries.
- The Remote DBA must decide whether to re-analyze statistics or to use optimizer plan stability.
- The Remote DBA must understand the philosophical differences between first_rows and all_rows, and set their defaults accordingly.
Next, let’s move on and take a closer look at
SQL internal processing. Getting into the bowels of the SQL
optimizers will greatly aid you in your SQL tuning endeavors.
This is an excerpt from "Oracle High-Performance SQL Tuning" by
Donald K. Burleson, published by Oracle Press.