PK
%Aoa, mimetypeapplication/epub+zipPK %A iTunesMetadata.plistN
Oracle Database provides several tracing tools that can help you monitor and analyze applications running against an Oracle database.
End to End Application Tracing can identify the source of an excessive workload, such as a high load SQL statement, by client identifier, service, module, action, session, instance, or an entire database. This isolates the problem to a specific user, service, session, or application component.
Oracle Database provides the trcsess
command-line utility that consolidates tracing information based on specific criteria.
The SQL Trace facility and TKPROF
are two basic performance diagnostic tools that can help you monitor applications running against the Oracle database.
This chapter contains the following sections:
See Also: SQL*Plus User's Guide and Reference for information about the use of Autotrace to trace and tune SQL*Plus statements |
End to End Application Tracing simplifies the process of diagnosing performance problems in multitier environments. In these environments, a request from an end client is routed to different database sessions by the middle tier, making it difficult to track a client across database sessions. End to End Application Tracing uses a client ID to uniquely trace a specific end-client through all tiers to the database.
This feature could identify the source of an excessive workload, such as a high load SQL statement, and allow you to contact the specific user responsible. Also, a user having problems can contact you. You can then identify what this user's session is doing at the database level.
End to End Application Tracing also simplifies management of application workloads by tracking specific modules and actions in a service.
End to End Application Tracing can identify workload problems for:
Client identifier - specifies an end user based on the logon ID, such as HR.HR
Service - specifies a group of applications with common attributes, service level thresholds, and priorities; or a single application, such as ACCTG
for an accounting application
Module - specifies a functional block, such as Accounts Receivable or General Ledger, of an application
Action - specifies an action, such as an INSERT or UPDATE operation, in a module
Session - specifies a session based on a given database session identifier (SID), on the local instance
Instance - specifies a given instance based on the instance name
After tracing information is written to files, you can consolidate this information with the trcsess
utility and diagnose it with an analysis utility such as TKPROF
.
To create services on single-instance Oracle databases, use the DBMS_SERVICE.CREATE_SERVICE
procedure or set the SERVICE_NAMES
initialization parameter.
The module and action names are set by the application developer. For example, you would use the SET_MODULE
and SET_ACTION
procedures in the DBMS_APPICATION_INFO
package to set these values in a PL/SQL program.
The recommended interface for End to End Application Tracing is Oracle Enterprise Manager. Using Enterprise Manager, you can view the top consumers for each consumer type, and enable or disable statistics gathering and SQL tracing for specific consumers. Whenever possible, you should use Enterprise Manager to manage End to End Application Tracing, as described in Oracle Database 2 Day + Performance Tuning Guide. If Oracle Enterprise Manager is unavailable, then you can manage this feature using the DBMS_MONITOR
APIs, as described in the following sections:
Enabling and Disabling Statistic Gathering for End to End Tracing
Viewing Gathered Statistics for End to End Application Tracing
See Also:
|
To gather the appropriate statistics using PL/SQL, you need to enable statistics gathering for client identifier, service, module, or action using procedures in the DBMS_MONITOR
package.
You can gather statistics by the following criteria:
The default level is the session-level statistics gathering. Statistics gathering is global for the database and continues after an instance is restarted.
The procedure CLIENT_ID_STAT_ENABLE
enables statistic gathering for a given client identifier. For example, to enable statistics gathering for a specific client identifier:
EXECUTE DBMS_MONITOR.CLIENT_ID_STAT_ENABLE(client_id => 'OE.OE');
In the example, OE.OE
is the client identifier for which you want to collect statistics. You can view client identifiers in the CLIENT_IDENTIFIER
column in V$SESSION
.
The procedure CLIENT_ID_STAT_DISABLE
disables statistic gathering for a given client identifier. For example:
EXECUTE DBMS_MONITOR.CLIENT_ID_STAT_DISABLE(client_id => 'OE.OE');
The procedure SERV_MOD_ACT_STAT_ENABLE
enables statistic gathering for a combination of service, module, and action. For example:
EXECUTE DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE(service_name => 'ACCTG', module_name => 'PAYROLL'); EXECUTE DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE(service_name => 'ACCTG', module_name => 'GLEDGER', action_name => 'INSERT ITEM');
If both of the previous commands are executed, then statistics are gathered as follows:
For the ACCTG
service, because accumulation for each service name is the default
For all actions in the PAYROLL
module
For the INSERT
ITEM
action within the GLEDGER
module
The procedure SERV_MOD_ACT_STAT_DISABLE
disables statistic gathering for a combination of service, module, and action. For example:
EXECUTE DBMS_MONITOR.SERV_MOD_ACT_STAT_DISABLE(service_name => 'ACCTG', module_name => 'GLEDGER', action_name => 'INSERT ITEM');
Regarding statistics gathering, when you change the module or action using these procedures, the change takes effect when the next user call is executed in the session. For example, if a module is set to module1
in a session, and if the module is reset to module2
in a user call in the session, then the module remains module1
during this user call. The module is changed to module2
in the next user call in the session.
You can display the statistics that have been gathered with several dynamic views.
The accumulated global statistics for the currently enabled statistics can be displayed with the DBA_ENABLED_AGGREGATIONS
view.
The accumulated statistics for a specified client identifier can be displayed in the V$CLIENT_STATS
view.
The accumulated statistics for a specified service can be displayed in V$SERVICE_STATS
view.
The accumulated statistics for a combination of specified service, module, and action can be displayed in the V$SERV_MOD_ACT_STATS
view.
The accumulated statistics for elapsed time of database calls and for CPU use can be displayed in the V$SERVICEMETRIC
view.
To enable tracing for client identifier, service, module, action, session, instance or database, execute the appropriate procedures in the DBMS_MONITOR
package. You can enable tracing for specific diagnosis and workload management by the following criteria:
With the criteria that you provide, specific trace information is captured in a set of trace files and combined into a single output trace file.
The CLIENT_ID_TRACE_ENABLE
procedure enables tracing globally for the database for a given client identifier. For example:
EXECUTE DBMS_MONITOR.CLIENT_ID_TRACE_ENABLE(client_id => 'OE.OE', waits => TRUE, binds => FALSE);
In this example, OE.OE
is the client identifier for which SQL tracing is to be enabled. The TRUE
argument specifies that wait information will be present in the trace. The FALSE
argument specifies that bind information will not be present in the trace.
The CLIENT_ID_TRACE_DISABLE
procedure disables tracing globally for the database for a given client identifier. To disable tracing, for the previous example:
EXECUTE DBMS_MONITOR.CLIENT_ID_TRACE_DISABLE(client_id => 'OE.OE');
The SERV_MOD_ACT_TRACE_ENABLE
procedure enables SQL tracing for a given combination of service name, module, and action globally for a database, unless an instance name is specified in the procedure.
EXECUTE DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE(service_name => 'ACCTG', waits => TRUE, binds => FALSE, instance_name => 'inst1');
In this example, the service ACCTG
is specified. The module or action name is not specified. The TRUE
argument specifies that wait information will be present in the trace. The FALSE
argument specifies that bind information will not be present in the trace. The inst1
instance is specified to enable tracing only for that instance.
To enable tracing for all actions for a given combination of service and module:
EXECUTE DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE(service_name => 'ACCTG', module_name => 'PAYROLL', waits => TRUE, binds => FALSE, instance_name => 'inst1');
The SERV_MOD_ACT_TRACE_DISABLE
procedure disables the trace at all enabled instances for a given combination of service name, module, and action name globally. For example, the following disables tracing for the first example in this section:
EXECUTE DBMS_MONITOR.SERV_MOD_ACT_TRACE_DISABLE(service_name => 'ACCTG', instance_name => 'inst1');
This example disables tracing for the second example in this section:
EXECUTE DBMS_MONITOR.SERV_MOD_ACT_TRACE_DISABLE(service_name => 'ACCTG', module_name => 'PAYROLL', instance_name => 'inst1');
The SESSION_TRACE_ENABLE
procedure enables the trace for a given database session identifier (SID), on the local instance.
To enable tracing for a specific session ID and serial number, determine the values for the session to trace:
SELECT SID, SERIAL#, USERNAME FROM V$SESSION; SID SERIAL# USERNAME ---------- ---------- ------------------------------ 27 60 OE ...
Use the appropriate values to enable tracing for a specific session:
EXECUTE DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 27, serial_num => 60, waits => TRUE, binds => FALSE);
The TRUE
argument specifies that wait information will be present in the trace. The FALSE
argument specifies that bind information will not be present in the trace.
The SESSION_TRACE_DISABLE
procedure disables the trace for a given database session identifier (SID) and serial number. For example:
EXECUTE DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 27, serial_num => 60);
While the DBMS_MONITOR
package can only be invoked by a user with the DBA role, any user can also enable SQL tracing for their own session by using the DBMS_SESSION
package. A user can invoke the SESSION_TRACE_ENABLE
procedure to enable session-level SQL trace for the user's session. For example:
EXECUTE DBMS_SESSION.SESSION_TRACE_ENABLE(waits => TRUE, binds => FALSE);
The TRUE
argument specifies that wait information will be present in the trace. The FALSE
argument specifies that bind information will not be present in the trace.
The SESSION_TRACE_DISABLE
procedure disables the trace for the invoking session. For example:
EXECUTE DBMS_SESSION.SESSION_TRACE_DISABLE();
The DATABASE_TRACE_ENABLE
procedure enables SQL tracing for a given instance or an entire database. Tracing is enabled for all current and future sessions. For example:
EXECUTE DBMS_MONITOR.DATABASE_TRACE_ENABLE(waits => TRUE, binds => FALSE, instance_name => 'inst1');
In this example, the inst1
instance is specified to enable tracing for that instance. The TRUE
argument specifies that wait information will be present in the trace. The FALSE
argument specifies that bind information will not be present in the trace. This example results in SQL tracing of all SQL in the inst1
instance.
The DATABASE_TRACE_ENABLE
procedure overrides all other session-level traces, but will be complementary to the client identifier, service, module, and action traces. All new sessions will inherit the wait and bind information specified by this procedure until the DATABASE_TRACE_DISABLE
procedure is called. When this procedure is invoked with the instance_name
parameter specified, it will reset the session-level SQL trace for the named instance. If this procedure is invoked without the instance_name
parameter specified, then it will reset the session-level SQL trace for the entire database.
The DATABASE_TRACE_DISABLE
procedure disables the tracing for an entire instance or database. For example:
EXECUTE DBMS_MONITOR.DATABASE_TRACE_DISABLE(instance_name => 'inst1');
In this example, all session-level SQL tracing will be disabled for the inst1
instance. To disable the session-level SQL tracing for an entire database, invoke the DATABASE_TRACE_DISABLE
procedure without specifying the instance_name
parameter:
EXECUTE DBMS_MONITOR.DATABASE_TRACE_DISABLE();
An Oracle Enterprise Manager report or the DBA_ENABLED_TRACES
view can display outstanding traces. In the DBA_ENABLED_TRACES
view, you can determine detailed information about how a trace was enabled, including the trace type. The trace type specifies whether the trace is enabled for client identifier, session, service, database, or a combination of service, module, and action.
The trcsess
utility consolidates trace output from selected trace files based on several criteria:
Session ID
Client ID
Service name
Action name
Module name
After trcsess
merges the trace information into a single output file, the output file could be processed by TKPROF
.
trcsess
is useful for consolidating the tracing of a particular session for performance or debugging purposes. Tracing a specific session is usually not a problem in the dedicated server model as a single dedicated process serves a session during its lifetime. You can see the trace information for the session from the trace file belonging to the dedicated server serving it. However, in a shared server configuration a user session is serviced by different processes from time to time. The trace pertaining to the user session is scattered across different trace files belonging to different processes. This makes it difficult to get a complete picture of the life cycle of a session.
The syntax for the trcsess
utility is:
trcsess [output=output_file_name] [session=session_id] [clientid=client_id] [service=service_name] [action=action_name] [module=module_name] [trace_files]
where
output
specifies the file where the output is generated. If this option is not specified, then the utility writes to standard output.
session
consolidates the trace information for the session specified. The session identifier is a combination of session index and session serial number, such as 21.2371
. You can locate these values in the V$SESSION
view.
clientid
consolidates the trace information given client ID.
service
consolidates the trace information for the given service name.
action
consolidates the trace information for the given action name.
module
consolidates the trace information for the given module name.
trace_files
is a list of all the trace file names, separated by spaces, in which trcsess
should look for trace information. You can use the wildcard character (*
) to specify the trace file names. If you do not specify trace files, then trcsess
takes all the files in the current directory as input.
You must specify one of the session
, clientid
, service
, action
, or module
options. If more then one of the session
, clientid
, service
, action
, or module
options is specified, then the trace files which satisfies all the criteria specified are consolidated into the output file.
This sample output of trcsess
shows the consolidation of traces for a particular session. In this example the session index and serial number equals 21.2371
.
You can invoke trcsess
with various options. In the following case, all files in current directory are taken as input:
trcsess session=21.2371
In this case, several trace files are specified:
trcsess session=21.2371 main_12359.trc main_12995.trc
The sample output is similar to the following:
[PROCESS ID = 12359] *** 2002-04-02 09:48:28.376 PARSING IN CURSOR #1 len=17 dep=0 uid=27 oct=3 lid=27 tim=868373970961 hv=887450622 ad='22683fb4' select * from cat END OF STMT PARSE #1:c=0,e=339,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=868373970944 EXEC #1:c=0,e=221,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=868373971411 FETCH #1:c=0,e=791,p=0,cr=7,cu=0,mis=0,r=1,dep=0,og=4,tim=868373972435 FETCH #1:c=0,e=1486,p=0,cr=20,cu=0,mis=0,r=6,dep=0,og=4,tim=868373986238 *** 2002-04-02 10:03:58.058 XCTEND rlbk=0, rd_only=1 STAT #1 id=1 cnt=7 pid=0 pos=1 obj=0 op='FILTER ' STAT #1 id=2 cnt=7 pid=1 pos=1 obj=18 op='TABLE ACCESS BY INDEX ROWID OBJ$ ' STAT #1 id=3 cnt=7 pid=2 pos=1 obj=37 op='INDEX RANGE SCAN I_OBJ2 ' STAT #1 id=4 cnt=0 pid=1 pos=2 obj=4 op='TABLE ACCESS CLUSTER TAB$J2 ' STAT #1 id=5 cnt=6 pid=4 pos=1 obj=3 op='INDEX UNIQUE SCAN I_OBJ# ' [PROCESS ID=12995] *** 2002-04-02 10:04:32.738 Archiving is disabled Archiving is disabled
The SQL Trace facility and TKPROF
let you accurately assess the efficiency of the SQL statements an application runs. For best results, use these tools with EXPLAIN
PLAN
rather than using EXPLAIN
PLAN
alone.
The SQL Trace facility provides performance information on individual SQL statements. It generates the following statistics for each statement:
Parse, execute, and fetch counts
CPU and elapsed times
Physical reads and logical reads
Number of rows processed
Misses on the library cache
Username under which each parse occurred
Each commit and rollback
Wait event data for each SQL statement, and a summary for each trace file
If the cursor for the SQL statement is closed, then SQL Trace also provides row source information that includes:
Row operations showing the actual execution plan of each SQL statement
Number of rows, number of consistent reads, number of physical reads, number of physical writes, and time elapsed for each operation on a row
Although it is possible to enable the SQL Trace facility for a session or for an instance, it is recommended that you use the DBMS_SESSION
or DBMS_MONITOR
packages instead. When the SQL Trace facility is enabled for a session or for an instance, performance statistics for all SQL statements executed in a user session or in the instance are placed into trace files. Using the SQL Trace facility can have a severe performance impact and may result in increased system overhead, excessive CPU usage, and inadequate disk space.
See Also: "Enabling and Disabling for End-to-End Tracing" to learn how to use theDBMS_SESSION or DBMS_MONITOR packages to enable SQL tracing for a session or an instance |
Oracle Database provides the trcsess
command-line utility that consolidates tracing information from several trace files based on specific criteria, such as session or client ID. See "Using the trcsess Utility".
You can run the TKPROF
program to format the contents of the trace file and place the output into a readable output file. TKPROF
can also:
Create a SQL script that stores the statistics in the database
Determine the execution plans of SQL statements
Note: If the cursor for a SQL statement is not closed, thenTKPROF output does not automatically include the actual execution plan of the SQL statement. In this situation, you can use the EXPLAIN option with TKPROF to generate an execution plan. |
TKPROF
reports each statement executed with the resources it has consumed, the number of times it was called, and the number of rows which it processed. This information lets you easily locate those statements that are using the greatest resource. With experience or with baselines available, you can assess whether the resources used are reasonable given the work done.
Follow these steps to use the SQL Trace facility and TKPROF
:
Set initialization parameters for trace file management.
See "Step 1: Setting Initialization Parameters for Trace File Management".
Enable the SQL Trace facility for the desired session, and run the application. This step produces a trace file containing statistics for the SQL statements issued by the application.
Run TKPROF
to translate the trace file created in Step 2 into a readable output file. This step can optionally create a SQL script that you can use to store the statistics in a database.
Interpret the output file created in Step 3.
Optionally, run the SQL script produced in Step 3 to store the statistics in the database.
The following sections discuss each step in depth.
When the SQL Trace facility is enabled for a session, Oracle Database generates a trace file containing statistics for traced SQL statements for that session. When the SQL Trace facility is enabled for an instance, Oracle Database creates a separate trace file for each process. Before enabling the SQL Trace facility:
Check the settings of the TIMED_STATISTICS
, MAX_DUMP_FILE_SIZE
, and USER_DUMP_DEST
initialization parameters. See Table 21-1.
Table 21-1 Initialization Parameters to Check Before Enabling SQL Trace
See Also:
|
Devise a way of recognizing the resulting trace file.
Be sure you know how to distinguish the trace files by name. Oracle Database writes them to the user dump destination specified by USER_DUMP_DEST
. However, this directory can soon contain many hundreds of files, usually with generated names. It might be difficult to match trace files back to the session or process that created them. You can tag trace files by including in your programs a statement like SELECT
'
program_name
'
FROM
DUAL
. You can then trace each file back to the process that created it.
You can also set the TRACEFILE_IDENTIFIER
initialization parameter to specify a custom identifier that becomes part of the trace file name. For example, you can add my_trace_id
to subsequent trace file names for easy identification with the following:
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'my_trace_id';
See Also: Oracle Database Reference for information about theTRACEFILE_IDENTIFIER initialization parameter |
If the operating system retains multiple versions of files, then ensure that the version limit is high enough to accommodate the number of trace files you expect the SQL Trace facility to generate.
The generated trace files can be owned by an operating system user other than yourself. This user must make the trace files available to you before you can use TKPROF
to format them.
See Also:
|
Enable the SQL Trace facility for the session by using one of the following:
DBMS_MONITOR.DATABASE_TRACE_ENABLE
procedure (recommended)
DBMS_SESSION.SET_SQL_TRACE
procedure
ALTER SESSION SET SQL_TRACE = TRUE;
Caution: Because running the SQL Trace facility increases system overhead, enable it only when tuning SQL statements, and disable it when you are finished. Oracle recommends that you use theDBMS_SESSION or DBMS_MONITOR packages to enable SQL tracing for a session or an instance. To learn about these packages, see "Enabling and Disabling for End-to-End Tracing".
You may need to modify an application to contain the |
To disable the SQL Trace facility for the session, enter:
ALTER SESSION SET SQL_TRACE = FALSE;
The SQL Trace facility is automatically disabled for the session when the application disconnects from Oracle.
You can enable the SQL Trace facility for an instance by setting the value of the SQL_TRACE
initialization parameter to TRUE
in the initialization file.
SQL_TRACE = TRUE
After the instance has been restarted with the updated initialization parameter file, SQL Trace is enabled for the instance and statistics are collected for all sessions. If the SQL Trace facility has been enabled for the instance, then you can disable it for the instance by setting the value of the SQL_TRACE
parameter to FALSE
.
Note: SettingSQL_TRACE to TRUE can have a severe performance impact. For more information, see Oracle Database Reference. |
TKPROF
accepts as input a trace file produced by the SQL Trace facility, and it produces a formatted output file. TKPROF
can also be used to generate execution plans.
After the SQL Trace facility has generated trace files, you can:
Run TKPROF
on each individual trace file, producing several formatted output files, one for each session.
Concatenate the trace files, and then run TKPROF
on the result to produce a formatted output file for the entire instance.
Run the trcsess
command-line utility to consolidate tracing information from several trace files, then run TKPROF
on the result. See "Using the trcsess Utility".
TKPROF
does not report COMMITs
and ROLLBACKs
that are recorded in the trace file.
Sample output from TKPROF
is as follows:
SELECT * FROM emp, dept WHERE emp.deptno = dept.deptno; call count cpu elapsed disk query current rows ---- ------- ------- --------- -------- -------- ------- ------ Parse 1 0.16 0.29 3 13 0 0 Execute 1 0.00 0.00 0 0 0 0 Fetch 1 0.03 0.26 2 2 4 14 Misses in library cache during parse: 1 Parsing user id: (8) SCOTT Rows Execution Plan ------- ---------------------------------------------------
14 MERGE JOIN 4 SORT JOIN 4 TABLE ACCESS (FULL) OF 'DEPT' 14 SORT JOIN 14 TABLE ACCESS (FULL) OF 'EMP'
For this statement, TKPROF
output includes the following information:
The text of the SQL statement
The SQL Trace statistics in tabular form
The number of library cache misses for the parsing and execution of the statement.
The user initially parsing the statement.
TKPROF
also provides a summary of user level statements and recursive SQL calls for the trace file.
TKPROF
is run from the operating system prompt. The syntax is:
tkprof filename1 filename2 [waits=yes|no] [sort=option] [print=n] [aggregate=yes|no] [insert=filename3] [sys=yes|no] [table=schema.table] [explain=user/password] [record=filename4] [width=n]
The input and output files are the only required arguments. If you invoke TKPROF
without arguments, then the tool displays online help. Use the arguments in Table 21-2 with TKPROF
.
Table 21-2 TKPROF Arguments
Argument | Description |
---|---|
|
Specifies the input file, a trace file containing statistics produced by the SQL Trace facility. This file can be either a trace file produced for a single session, or a file produced by concatenating individual trace files from multiple sessions. |
|
Specifies the file to which |
|
Specifies whether to record summary for any wait events found in the trace file. Values are |
|
Sorts traced SQL statements in descending order of specified sort option before listing them into the output file. If multiple options are specified, then the output is sorted in descending order by the sum of the values specified in the sort options. If you omit this parameter, then |
|
Number of times parsed. |
|
CPU time spent parsing. |
|
Elapsed time spent parsing. |
|
Number of physical reads from disk during parse. |
|
Number of consistent mode block reads during parse. |
|
Number of current mode block reads during parse. |
|
Number of library cache misses during parse. |
|
Number of executes. |
|
CPU time spent executing. |
|
Elapsed time spent executing. |
|
Number of physical reads from disk during execute. |
|
Number of consistent mode block reads during execute. |
|
Number of current mode block reads during execute. |
|
Number of rows processed during execute. |
|
Number of library cache misses during execute. |
|
Number of fetches. |
|
CPU time spent fetching. |
|
Elapsed time spent fetching. |
|
Number of physical reads from disk during fetch. |
|
Number of consistent mode block reads during fetch. |
|
Number of current mode block reads during fetch. |
|
Number of rows fetched. |
|
Userid of user that parsed the cursor. |
|
Lists only the first integer sorted SQL statements from the output file. If you omit this parameter, then |
|
If you specify |
|
Creates a SQL script that stores the trace file statistics in the database. |
|
Enables and disables the listing of SQL statements issued by the user |
|
Specifies the schema and name of the table into which The specified user must be able to issue This option allows multiple individuals to run If you use the If no plan table exists, |
|
Determines the execution plan for each SQL statement in the trace file and writes these execution plans to the output file. |
|
Creates a SQL script with the specified |
|
An integer that controls the output line width of some TKPROF output, such as the explain plan. This parameter is useful for post-processing of TKPROF output. |
This section provides two brief examples of TKPROF
usage. For an complete example of TKPROF
output, see "Sample TKPROF Output".
If you are processing a large trace file using a combination of SORT
parameters and the PRINT
parameter, then you can produce a TKPROF
output file containing only the highest resource-intensive statements. For example, the following statement prints the 10 statements in the trace file that have generated the most physical I/O:
TKPROF ora53269.trc ora53269.prf SORT = (PRSDSK, EXEDSK, FCHDSK) PRINT = 10
This example runs TKPROF
, accepts a trace file named examp12_jane_fg_sqlplus_007
.trc
, and writes a formatted output file named outputa
.prf
:
TKPROF examp12_jane_fg_sqlplus_007.trc OUTPUTA.PRF EXPLAIN=scott/tiger TABLE=scott.temp_plan_table_a INSERT=STOREA.SQL SYS=NO SORT=(EXECPU,FCHCPU)
This example is likely to be longer than a single line on the screen, and you might need to use continuation characters, depending on the operating system.
Note the other parameters in this example:
The EXPLAIN
value causes TKPROF
to connect as the user scott
and use the EXPLAIN
PLAN
statement to generate the execution plan for each traced SQL statement. You can use this to get access paths and row source counts.
Note: If the cursor for a SQL statement is not closed, thenTKPROF output does not automatically include the actual execution plan of the SQL statement. In this situation, you can use the EXPLAIN option with TKPROF to generate an execution plan. |
The TABLE
value causes TKPROF
to use the table temp_plan_table_a
in the schema scott
as a temporary plan table.
The INSERT
value causes TKPROF
to generate a SQL script named STOREA
.SQL
that stores statistics for all traced SQL statements in the database.
The SYS
parameter with the value of NO
c auses TKPROF
to omit recursive SQL statements from the output file. In this way, you can ignore internal Oracle Database statements such as temporary table operations.
The SORT
value causes TKPROF
to sort the SQL statements in order of the sum of the CPU time spent executing and the CPU time spent fetching rows before writing them to the output file. For greatest efficiency, always use SORT
parameters.
This section provides pointers for interpreting TKPROF
output.
While TKPROF
provides a very useful analysis, the most accurate measure of efficiency is the actual performance of the application in question. At the end of the TKPROF
output is a summary of the work done in the database engine by the process during the period that the trace was running.
TKPROF
lists the statistics for a SQL statement returned by the SQL Trace facility in rows and columns. Each row corresponds to one of three steps of SQL statement processing. Statistics are identified by the value of the CALL
column. See Table 21-3.
Table 21-3 CALL Column Values
CALL Value | Meaning |
---|---|
|
Translates the SQL statement into an execution plan, including checks for proper security authorization and checks for the existence of tables, columns, and other referenced objects. |
|
Actual execution of the statement by Oracle. For |
|
Retrieves rows returned by a query. Fetches are only performed for |
The other columns of the SQL Trace facility output are combined statistics for all parses, all executes, and all fetches of a statement. The sum of query
and current
is the total number of buffers accessed, also called Logical I/Os (LIOs). See Table 21-4.
Table 21-4 SQL Trace Statistics for Parses, Executes, and Fetches.
Statistics about the processed rows appear in the ROWS
column. See Table 21-5.
Table 21-5 SQL Trace Statistics for the ROWS Column
SQL Trace Statistic | Meaning |
---|---|
|
Total number of rows processed by the SQL statement. This total does not include rows processed by subqueries of the SQL statement. |
For SELECT
statements, the number of rows returned appears for the fetch step. For UPDATE
, DELETE
, and INSERT
statements, the number of rows processed appears for the execute step.
Note: The row source counts are displayed when a cursor is closed. In SQL*Plus, there is only one user cursor, so each statement executed causes the previous cursor to be closed; therefore, the row source counts are displayed. PL/SQL has its own cursor handling and does not close child cursors when the parent cursor is closed. Exiting (or reconnecting) causes the counts to be displayed. |
Row source operations provide the number of rows processed for each operation executed on the rows and additional row source information, such as physical reads and writes. The following is a sample:
Rows Row Source Operation ------- --------------------------------------------------- 0 DELETE (cr=43141 r=266947 w=25854 time=60235565 us) 28144 HASH JOIN ANTI (cr=43057 r=262332 w=25854 time=48830056 us) 51427 TABLE ACCESS FULL STATS$SQLTEXT (cr=3465 r=3463 w=0 time=865083 us) 647529 INDEX FAST FULL SCAN STATS$SQL_SUMMARY_PK (cr=39592 r=39325 w=0 time=10522877 us) (object id 7409)
In this sample TKPROF
output, note the following under the Row Source Operation column:
cr
specifies consistent reads performed by the row source
r
specifies physical reads performed by the row source
w
specifies physical writes performed by the row source
time
specifies time in microseconds
If wait event information exists, then the TKPROF
output includes a section similar to the following:
Elapsed times include waiting on following events: Event waited on Times Max. Wait Total Waited ---------------------------------------- Waited ---------- ------------ db file sequential read 8084 0.12 5.34 direct path write 834 0.00 0.00 direct path write temp 834 0.00 0.05 db file parallel read 8 1.53 5.51 db file scattered read 4180 0.07 1.45 direct path read 7082 0.00 0.05 direct path read temp 7082 0.00 0.44 rdbms ipc reply 20 0.00 0.01 SQL*Net message to client 1 0.00 0.00 SQL*Net message from client 1 0.00 0.00
In addition, wait events are summed for the entire trace file at the end of the file.
To ensure that wait events information is written to the trace file for the session, run the following SQL statement:
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
Timing statistics have a resolution of one hundredth of a second; therefore, any operation on a cursor that takes a hundredth of a second or less might not be timed accurately. Keep this in mind when interpreting statistics. In particular, be careful when interpreting the results from simple queries that execute very quickly.
Sometimes, to execute a SQL statement issued by a user, Oracle Database must issue additional statements. Such statements are called recursive calls or recursive SQL statements. For example, if you insert a row into a table that does not have enough space to hold that row, then Oracle Database makes recursive calls to allocate the space dynamically. Recursive calls are also generated when data dictionary information is not available in the data dictionary cache and must be retrieved from disk.
If recursive calls occur while the SQL Trace facility is enabled, then TKPROF
produces statistics for the recursive SQL statements and marks them clearly as recursive SQL statements in the output file. You can suppress the listing of Oracle Database internal recursive calls (for example, space management) in the output file by setting the SYS
command-line parameter to NO
. The statistics for a recursive SQL statement are included in the listing for that statement, not in the listing for the SQL statement that caused the recursive call. So, when you are calculating the total resources required to process a SQL statement, consider the statistics for that statement and those for recursive calls caused by that statement.
Note: Recursive SQL statistics are not included for SQL-level operations. |
TKPROF
also lists the number of library cache misses resulting from parse and execute steps for each SQL statement. These statistics appear on separate lines following the tabular statistics. If the statement resulted in no library cache misses, then TKPROF
does not list the statistic. In "Sample TKPROF Output", the statement resulted in one library cache miss for the parse step and no misses for the execute step.
The following SQL statements are truncated to 25 characters in the SQL Trace file:
SET ROLE GRANT ALTER USER ALTER ROLE CREATE USER CREATE ROLE
TKPROF
also lists the user ID of the user issuing each SQL statement. If the SQL Trace input file contained statistics from multiple users, and if the statement was issued by multiple users, then TKPROF
lists the ID of the last user to parse the statement. The user ID of all database users appears in the data dictionary in the column ALL_USERS
.USER_ID
.
If you specify the EXPLAIN
parameter on the TKPROF
statement line, then TKPROF
uses the EXPLAIN
PLAN
statement to generate the execution plan of each SQL statement traced. TKPROF
also displays the number of rows processed by each step of the execution plan.
Note: Trace files generated immediately after instance startup contain data that reflects the activity of the startup process. In particular, they reflect a disproportionate amount of I/O activity as caches in the system global area (SGA) are filled. For the purposes of tuning, ignore such trace files. |
You need to find which SQL statements use the most CPU or disk resource. If the TIMED_STATISTICS
parameter is on, then you can find high CPU activity in the CPU
column. If TIMED_STATISTICS
is not on, then check the QUERY
and CURRENT
columns.
With the exception of locking problems and inefficient PL/SQL loops, neither the CPU time nor the elapsed time is necessary to find problem statements. The key is the number of block visits, both query (that is, subject to read consistency) and current (that is, not subject to read consistency). Segment headers and blocks that are going to be updated are acquired in current mode, but all query and subquery processing requests the data in query mode. These are precisely the same measures as the instance statistics CONSISTENT
GETS
and DB
BLOCK
GETS
. You can find high disk activity in the disk column.
The following listing shows TKPROF
output for one SQL statement as it appears in the output file:
SELECT * FROM emp, dept WHERE emp.deptno = dept.deptno; call count cpu elapsed disk query current rows ---- ------- ------- --------- -------- -------- ------- ------ Parse 11 0.08 0.18 0 0 0 0 Execute 11 0.23 0.66 0 3 6 0 Fetch 35 6.70 6.83 100 12326 2 824 ------------------------------------------------------------------ total 57 7.01 7.67 100 12329 8 826 Misses in library cache during parse: 0
If it is acceptable to have 7.01 CPU seconds and to retrieve 824 rows, then you need not look any further at this trace output. In fact, a major use of TKPROF
reports in a tuning exercise is to eliminate processes from the detailed tuning phase.
You can also see that 10 unnecessary parse call were made (because there were 11 parse calls for this one statement) and that array fetch operations were performed. You know this because more rows were fetched than there were fetches performed. A large gap between CPU
and elapsed
timings indicates Physical I/Os (PIOs).
You might want to keep a history of the statistics generated by the SQL Trace facility for an application, and compare them over time. TKPROF
can generate a SQL script that creates a table and inserts rows of statistics into it. This script contains:
A CREATE
TABLE
statement that creates an output table named TKPROF_TABLE
.
INSERT
statements that add rows of statistics, one for each traced SQL statement, to the TKPROF_TABLE
.
After running TKPROF
, you can run this script to store the statistics in the database.
When you run TKPROF
, use the INSERT
parameter to specify the name of the generated SQL script. If you omit this parameter, then TKPROF
does not generate a script.
After TKPROF
has created the SQL script, you might want to edit the script before running it. If you have created an output table for previously collected statistics, and if you want to add new statistics to this table, then remove the CREATE
TABLE
statement from the script. The script then inserts the new rows into the existing table.
If you have created multiple output tables, perhaps to store statistics from different databases in different tables, then edit the CREATE
TABLE
and INSERT
statements to change the name of the output table.
The following CREATE
TABLE
statement creates the TKPROF_TABLE
:
CREATE TABLE TKPROF_TABLE (
DATE_OF_INSERT DATE, CURSOR_NUM NUMBER, DEPTH NUMBER, USER_ID NUMBER, PARSE_CNT NUMBER, PARSE_CPU NUMBER, PARSE_ELAP NUMBER, PARSE_DISK NUMBER, PARSE_QUERY NUMBER, PARSE_CURRENT NUMBER, PARSE_MISS NUMBER, EXE_COUNT NUMBER, EXE_CPU NUMBER, EXE_ELAP NUMBER, EXE_DISK NUMBER, EXE_QUERY NUMBER, EXE_CURRENT NUMBER, EXE_MISS NUMBER, EXE_ROWS NUMBER, FETCH_COUNT NUMBER, FETCH_CPU NUMBER, FETCH_ELAP NUMBER, FETCH_DISK NUMBER, FETCH_QUERY NUMBER, FETCH_CURRENT NUMBER, FETCH_ROWS NUMBER, CLOCK_TICKS NUMBER, SQL_STATEMENT LONG);
Most output table columns correspond directly to the statistics that appear in the formatted output file. For example, the PARSE_CNT
column value corresponds to the count statistic for the parse step in the output file.
The columns in Table 21-6 help you identify a row of statistics.
Table 21-6 TKPROF_TABLE Columns for Identifying a Row of Statistics
The output table does not store the statement's execution plan. The following query returns the statistics from the output table. These statistics correspond to the formatted output shown in the section "Sample TKPROF Output".
SELECT * FROM TKPROF_TABLE;
Oracle Database responds with something similar to:
DATE_OF_INSERT CURSOR_NUM DEPTH USER_ID PARSE_CNT PARSE_CPU PARSE_ELAP -------------- ---------- ----- ------- --------- --------- ---------- 21-DEC-1998 1 0 8 1 16 22 PARSE_DISK PARSE_QUERY PARSE_CURRENT PARSE_MISS EXE_COUNT EXE_CPU ---------- ----------- ------------- ---------- --------- ------- 3 11 0 1 1 0 EXE_ELAP EXE_DISK EXE_QUERY EXE_CURRENT EXE_MISS EXE_ROWS FETCH_COUNT -------- -------- --------- ----------- -------- -------- ----------- 0 0 0 0 0 0 1 FETCH_CPU FETCH_ELAP FETCH_DISK FETCH_QUERY FETCH_CURRENT FETCH_ROWS --------- ---------- ---------- ----------- ------------- ---------- 2 20 2 2 4 10 SQL_STATEMENT --------------------------------------------------------------------- SELECT * FROM EMP, DEPT WHERE EMP.DEPTNO = DEPT.DEPTNO
This section describes some fine points of TKPROF
interpretation:
If you are not aware of the values being bound at run time, then it is possible to fall into the argument trap. EXPLAIN
PLAN
cannot determine the type of a bind variable from the text of SQL statements, and it always assumes that the type is varchar
. If the bind variable is actually a number or a date, then TKPROF
can cause implicit data conversions, which can cause inefficient plans to be executed. To avoid this situation, experiment with different data types in the query.
To avoid this problem, perform the conversion yourself.
The next example illustrates the read consistency trap. Without knowing that an uncommitted transaction had made a series of updates to the NAME
column, it is very difficult to see why so many block visits would be incurred.
Cases like this are not normally repeatable: if the process were run again, it is unlikely that another transaction would interact with it in the same way.
SELECT name_id FROM cq_names WHERE name = 'FLOOR'; call count cpu elapsed disk query current rows ---- ----- --- ------- ---- ----- ------- ---- Parse 1 0.10 0.18 0 0 0 0 Execute 1 0.00 0.00 0 0 0 0 Fetch 1 0.11 0.21 2 101 0 1 Misses in library cache during parse: 1 Parsing user id: 01 (USER1) Rows Execution Plan ---- --------- ---- 0 SELECT STATEMENT 1 TABLE ACCESS (BY ROWID) OF 'CQ_NAMES' 2 INDEX (RANGE SCAN) OF 'CQ_NAMEJ:S_NAME' (NON_UNIQUE)
This example shows an extreme (and thus easily detected) example of the schema trap. At first, it is difficult to see why such an apparently straightforward indexed query needs to look at so many database blocks, or why it should access any blocks at all in current mode.
SELECT name_id FROM cq_names WHERE name = 'FLOOR'; call count cpu elapsed disk query current rows -------- ------- -------- --------- ------- ------ ------- ---- Parse 1 0.06 0.10 0 0 0 0 Execute 1 0.02 0.02 0 0 0 0 Fetch 1 0.23 0.30 31 31 3 1 Misses in library cache during parse: 0 Parsing user id: 02 (USER2) Rows Execution Plan ------- --------------------------------------------------- 0 SELECT STATEMENT 2340 TABLE ACCESS (BY ROWID) OF 'CQ_NAMES' 0 INDEX (RANGE SCAN) OF 'CQ_NAMES_NAME' (NON-UNIQUE)
Two statistics suggest that the query might have been executed with a full table scan. These statistics are the current mode block visits, plus the number of rows originating from the Table Access row source in the execution plan. The explanation is that the required index was built after the trace file had been produced, but before TKPROF
had been run.
Generating a new trace file gives the following data:
SELECT name_id FROM cq_names WHERE name = 'FLOOR'; call count cpu elapsed disk query current rows ----- ------ ------ -------- ----- ------ ------- ----- Parse 1 0.01 0.02 0 0 0 0 Execute 1 0.00 0.00 0 0 0 0 Fetch 1 0.00 0.00 0 2 0 1 Misses in library cache during parse: 0 Parsing user id: 02 (USER2) Rows Execution Plan ------- --------------------------------------------------- 0 SELECT STATEMENT 1 TABLE ACCESS (BY ROWID) OF 'CQ_NAMES' 2 INDEX (RANGE SCAN) OF 'CQ_NAMES_NAME' (NON-UNIQUE)
One of the marked features of this correct version is that the parse call took 10 milliseconds of CPU time and 20 milliseconds of elapsed time, but the query apparently took no time at all to execute and perform the fetch. These anomalies arise because the clock tick of 10 milliseconds is too long relative to the time taken to execute and fetch the data. In such cases, it is important to get lots of executions of the statements, so that you have statistically valid numbers.
Sometimes, as in the following example, you might wonder why a particular query has taken so long.
UPDATE cq_names SET ATTRIBUTES = lower(ATTRIBUTES) WHERE ATTRIBUTES = :att call count cpu elapsed disk query current rows -------- ------- -------- --------- -------- -------- ------- ---------- Parse 1 0.06 0.24 0 0 0 0 Execute 1 0.62 19.62 22 526 12 7 Fetch 0 0.00 0.00 0 0 0 0 Misses in library cache during parse: 1 Parsing user id: 02 (USER2) Rows Execution Plan ------- --------------------------------------------------- 0 UPDATE STATEMENT 2519 TABLE ACCESS (FULL) OF 'CQ_NAMES'
Again, the answer is interference from another transaction. In this case, another transaction held a shared lock on the table cq_names
for several seconds before and after the update was issued. It takes a fair amount of experience to diagnose that interference effects are occurring. On the one hand, comparative data is essential when the interference is contributing only a short delay (or a small increase in block visits in the previous example). However, if the interference contributes only modest overhead, and if the statement is essentially efficient, then its statistics may not require analysis.
This section provides an example of TKPROF
output. Portions have been edited out for the sake of brevity.
TKPROF: Release 10.1.0.0.0 - Mon Feb 10 14:43:00 2003 (c) Copyright 2001 Oracle Corporation. All rights reserved. Trace file: main_ora_27621.trc Sort options: default ******************************************************************************** count = number of times OCI procedure was executed cpu = cpu time in seconds executing elapsed = elapsed time in seconds executing disk = number of physical reads of buffers from disk query = number of buffers gotten for consistent read current = number of buffers gotten in current mode (usually for update) rows = number of rows processed by the fetch or execute call ********************************************************************************
call count cpu elapsed disk query current rows ------- ------ -------- ---------- ---------- ---------- ---------- ---------- Parse 1 0.01 0.00 0 0 0 0 Execute 1 0.00 0.00 0 0 0 0 Fetch 0 0.00 0.00 0 0 0 0 ------- ------ -------- ---------- ---------- ---------- ---------- ---------- total 2 0.01 0.00 0 0 0 0 Misses in library cache during parse: 1 Optimizer mode: FIRST_ROWS Parsing user id: 44 Elapsed times include waiting on following events: Event waited on Times Max. Wait Total Waited ---------------------------------------- Waited ---------- ------------ SQL*Net message to client 1 0.00 0.00 SQL*Net message from client 1 28.59 28.59 ******************************************************************************** select condition from cdef$ where rowid=:1 call count cpu elapsed disk query current rows ------- ------ -------- ---------- ---------- ---------- ---------- ---------- Parse 1 0.00 0.00 0 0 0 0 Execute 1 0.00 0.00 0 0 0 0 Fetch 1 0.00 0.00 0 2 0 1 ------- ------ -------- ---------- ---------- ---------- ---------- ---------- total 3 0.00 0.00 0 2 0 1 Misses in library cache during parse: 1 Optimizer mode: CHOOSE Parsing user id: SYS (recursive depth: 1) Rows Row Source Operation ------- --------------------------------------------------- 1 TABLE ACCESS BY USER ROWID OBJ#(31) (cr=1 r=0 w=0 time=151 us) ******************************************************************************** SELECT last_name, job_id, salary FROM employees WHERE salary = (SELECT max(salary) FROM employees) call count cpu elapsed disk query current rows ------- ------ -------- ---------- ---------- ---------- ---------- ---------- Parse 1 0.02 0.01 0 0 0 0 Execute 1 0.00 0.00 0 0 0 0 Fetch 2 0.00 0.00 0 15 0 1 ------- ------ -------- ---------- ---------- ---------- ---------- ---------- total 4 0.02 0.01 0 15 0 1 Misses in library cache during parse: 1 Optimizer mode: FIRST_ROWS Parsing user id: 44 Rows Row Source Operation ------- --------------------------------------------------- 1 TABLE ACCESS FULL EMPLOYEES (cr=15 r=0 w=0 time=1743 us) 1 SORT AGGREGATE (cr=7 r=0 w=0 time=777 us) 107 TABLE ACCESS FULL EMPLOYEES (cr=7 r=0 w=0 time=655 us) Elapsed times include waiting on following events: Event waited on Times Max. 
Wait Total Waited ---------------------------------------- Waited ---------- ------------ SQL*Net message to client 2 0.00 0.00 SQL*Net message from client 2 9.62 9.62 ******************************************************************************** ******************************************************************************** delete from stats$sqltext st where (hash_value, text_subset) not in (select --+ hash_aj hash_value, text_subset from stats$sql_summary ss where ( ( snap_id < :lo_snap or snap_id > :hi_snap ) and dbid = :dbid and instance_number = :inst_num ) or ( dbid != :dbid or instance_number != :inst_num) ) call count cpu elapsed disk query current rows ------- ------ -------- ---------- ---------- ---------- ---------- ---------- Parse 1 0.00 0.00 0 0 0 0 Execute 1 29.60 60.68 266984 43776 131172 28144 Fetch 0 0.00 0.00 0 0 0 0 ------- ------ -------- ---------- ---------- ---------- ---------- ---------- total 2 29.60 60.68 266984 43776 131172 28144 Misses in library cache during parse: 1 Misses in library cache during execute: 1 Optimizer mode: CHOOSE Parsing user id: 22 Rows Row Source Operation ------- --------------------------------------------------- 0 DELETE (cr=43141 r=266947 w=25854 time=60235565 us) 28144 HASH JOIN ANTI (cr=43057 r=262332 w=25854 time=48830056 us) 51427 TABLE ACCESS FULL STATS$SQLTEXT (cr=3465 r=3463 w=0 time=865083 us) 647529 INDEX FAST FULL SCAN STATS$SQL_SUMMARY_PK (cr=39592 r=39325 w=0 time=10522877 us) (object id 7409) Elapsed times include waiting on following events: Event waited on Times Max. Wait Total Waited ---------------------------------------- Waited ---------- ------------ db file sequential read 8084 0.12 5.34 direct path write 834 0.00 0.00 direct path write temp 834 0.00 0.05 db file parallel read 8 1.53 5.51 db file scattered read 4180 0.07 1.45 direct path read 7082 0.00 0.05 direct path read temp 7082 0.00 0.44 rdbms ipc reply 20 0.00 0.01 SQL*Net message to client 1 0.00 0.00 SQL*Net message from client 1 0.00 0.00 ********************************************************************************
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS call count cpu elapsed disk query current rows ------- ------ -------- ---------- ---------- ---------- ---------- ---------- Parse 4 0.04 0.01 0 0 0 0 Execute 5 0.00 0.04 0 0 0 0 Fetch 2 0.00 0.00 0 15 0 1 ------- ------ -------- ---------- ---------- ---------- ---------- ---------- total 11 0.04 0.06 0 15 0 1 Misses in library cache during parse: 4 Misses in library cache during execute: 1 Elapsed times include waiting on following events: Event waited on Times Max. Wait Total Waited ---------------------------------------- Waited ---------- ------------ SQL*Net message to client 6 0.00 0.00 SQL*Net message from client 5 77.77 128.88 OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS call count cpu elapsed disk query current rows ------- ------ -------- ---------- ---------- ---------- ---------- ---------- Parse 1 0.00 0.00 0 0 0 0 Execute 1 0.00 0.00 0 0 0 0 Fetch 1 0.00 0.00 0 2 0 1 ------- ------ -------- ---------- ---------- ---------- ---------- ---------- total 3 0.00 0.00 0 2 0 1 Misses in library cache during parse: 1 5 user SQL statements in session. 1 internal SQL statements in session. 6 SQL statements in session. ******************************************************************************** Trace file: main_ora_27621.trc Trace file compatibility: 9.00.01 Sort options: default 1 session in tracefile. 5 user SQL statements in trace file. 1 internal SQL statements in trace file. 6 SQL statements in trace file. 6 unique SQL statements in trace file. 76 lines in trace file. 128 elapsed seconds in trace file.
This section describes new performance tuning features of Oracle Database 11g Release 2 (11.2) and provides pointers to additional information. The features and enhancements described in this section comprise the overall effort to optimize database performance.
For a summary of all new features for Oracle Database 11g Release 2 (11.2), see Oracle Database New Features Guide.
The new and updated performance tuning features include:
Resource Manager enhancements for parallel statement queuing
You can use Resource Manager to control the order of statements in a parallel statement queue. For example, you can ensure that high-priority statements spend less time in the queue. Also, you can use a directive to prevent one consumer group from monopolizing all of the parallel servers, and to specify the maximum time in seconds that a parallel statement can wait to be launched.
For more information, see "Managing CPU Resources Using Oracle Database Resource Manager" and Oracle Database VLDB and Partitioning Guide.
Resource Manager enhancements for CPU utilization limit
You can use Resource Manager to limit the CPU consumption of a consumer group. This feature restricts the CPU consumption of low-priority sessions and can help provide more consistent performance for the workload in a consumer group.
For more information, see "Managing CPU Resources Using Oracle Database Resource Manager".
New package for Automatic SQL Tuning
The DBMS_AUTO_SQLTUNE
package is the new interface for managing the Automatic SQL Tuning task. Unlike the SQL Tuning Advisor package DBMS_SQLTUNE
, which requires ADVISOR
privileges, DBMS_AUTO_SQLTUNE
requires the DBA
role.
For more information, see "Configuring Automatic SQL Tuning".
Oracle Orion I/O Calibration Tool Documentation
Oracle Orion is a tool for predicting the performance of an Oracle database without having to install Oracle or create a database. Unlike other I/O calibration tools, Oracle Orion is expressly designed for simulating Oracle database I/O workloads using the same I/O software stack as Oracle. Orion can also simulate the effect of striping performed by Oracle Automatic Storage Management.
For more information, see "I/O Calibration with the Oracle Orion Calibration Tool".
The new and updated performance tuning features include:
New Automatic Workload Repository (AWR) views
AWR supports several new historical views, including DBA_HIST_DB_CACHE_ADVICE
and DBA_HIST_IOSTAT_DETAIL
.
For more information, see "Using Automatic Workload Repository Views".
New Automatic Workload Repository reports
New AWR reports and AWR Compare Periods reports have been added for Oracle Real Application Clusters (Oracle RAC).
For more information, see "Generating Automatic Workload Repository Reports" and "Generating Automatic Workload Repository Compare Periods Reports".
Table annotation support for the client result cache
The client result cache supports table annotations.
For more information, see "Using Result Cache Table Annotations".
Enhancement to the RESULT_CACHE
annotation for PL/SQL functions
In Oracle Database 11g Release 1 (11.1), PL/SQL functions that performed queries referencing annotated tables required the RELIES_ON
clause. This clause has been deprecated and is no longer required.
Hints specifying parallelism at the statement level
The scope of the parallel hints has been extended to include the statement level.
For more information, see "Hints for Parallel Execution".
In-Memory Parallel Execution
When using parallel query, you can configure the database to use the database buffer cache instead of performing direct reads into the PGA for a SQL statement. This configuration may be appropriate when database servers have a large amount of memory. Also, an Oracle Real Applications Cluster (Oracle RAC) database can aggregate the size of the buffer cache of all nodes, thereby caching larger objects and caching more queries.
For more information, see "Using the Buffer Cache Effectively".
Hints for online application upgrades
The online application upgrade hints suggest how to handle conflicting INSERT
and UPDATE
operations when performing an online application upgrade using edition-based redefinition. For more information, see "Hints for Online Application Upgrade".
SQL Tuning Advisor enhancements
This release includes the following enhancements to SQL Tuning Advisor:
While tuning a SQL statement, SQL Tuning Advisor searches real-time and historical performance data for alternative execution plans for the statement. If plans other than the original plan exist, then SQL Tuning Advisor reports an alternative plan finding. See "Alternative Plan Analysis".
You can transport a SQL tuning set to any database created in Oracle Database 10g (Release 2) or later. This technique is useful when using SQL Performance Analyzer to tune regressions on a test database. See "Transporting a SQL Tuning Set".
Sometimes SQL Tuning Advisor may recommend accepting a profile that uses the Automatic Degree of Parallelism (Auto DOP) feature. A parallel query profile is only recommended when the original plan is serial and when parallel execution can significantly reduce the elapsed time for a long-running query. See "SQL Profile Recommendations".
Migrating stored outlines to SQL plan baselines
Oracle Database enables you to safely migrate from stored outlines to SQL plan baselines. After the migration, you can maintain the same plan stability you had using stored outlines while being able to utilize the more advanced features provided by the SQL Plan Management framework. For more information, see "Migrating Stored Outlines to SQL Plan Baselines".
Part III describes how to tune various elements of your database system to optimize performance of an Oracle database instance.
The chapters in this part are:
This chapter discusses SQL processing, optimization methods, and how the query optimizer (usually called the optimizer) chooses a specific plan to execute SQL.
The chapter contains the following sections:
The optimizer is built-in software that determines the most efficient way to execute a SQL statement.
This section contains the following topics:
The database can execute a SQL statement in multiple ways, such as full table scans, index scans, nested loops, and hash joins. The optimizer considers many factors related to the objects and the conditions in the query when determining an execution plan. This determination is an important step in SQL processing and can greatly affect execution time.
Note: The optimizer might not make the same decisions from one version of Oracle Database to the next. In recent versions, the optimizer might make different decisions because better information is available. |
When the user submits a SQL statement for execution, the optimizer performs the following steps:
The optimizer generates a set of potential plans for the SQL statement based on available access paths and hints.
The optimizer estimates the cost of each plan based on statistics in the data dictionary. Statistics include information on the data distribution and storage characteristics of the tables, indexes, and partitions accessed by the statement.
The cost is an estimated value proportional to the expected resource use needed to execute the statement with a particular plan. The optimizer calculates the cost of access paths and join orders based on the estimated computer resources, which includes I/O, CPU, and memory.
Serial plans with higher costs take longer to execute than those with smaller costs. When using a parallel plan, resource use is not directly related to elapsed time.
The optimizer compares the plans and chooses the plan with the lowest cost.
The output from the optimizer is an execution plan that describes the optimum method of execution. The plans shows the combination of the steps Oracle Database uses to execute a SQL statement. Each step either retrieves rows physically from the database or prepares them for the user issuing the statement.
For any SQL statement processed by Oracle Database, the optimizer performs the operations listed in Table 11-1.
Table 11-1 Optimizer Operations
Operation | Description |
---|---|
Evaluation of expressions and conditions |
The optimizer first evaluates expressions and conditions containing constants as fully as possible. |
Statement transformation |
For complex statements involving, for example, correlated subqueries or views, the optimizer might transform the original statement into an equivalent join statement. |
Choice of optimizer goals |
The optimizer determines the goal of optimization. See "Choosing an Optimizer Goal". |
Choice of access paths |
For each table accessed by the statement, the optimizer chooses one or more of the available access paths to obtain table data. See "Overview of Optimizer Access Paths". |
Choice of join orders |
For a join statement that joins more than two tables, the optimizer chooses which pair of tables is joined first, and then which table is joined to the result, and so on. See "How the Query Optimizer Chooses Execution Plans for Joins". |
Sometimes, you may have more information about a particular application's data than is available to the optimizer. In such cases you can use hints in SQL statements to instruct the optimizer about how a statement should be executed.
The query optimizer operations include:
Figure 11-1 illustrates optimizer components.
Each query portion of a statement is called a query block. The input to the query transformer is a parsed query, which is represented by a set of query blocks.
In the following example, the SQL statement consists of two query blocks. The subquery in parentheses is the inner query block. The outer query block, which is the rest of the SQL statement, retrieves names of employees in the departments whose IDs were supplied by the subquery.
SELECT first_name, last_name FROM employees WHERE department_id IN (SELECT department_id FROM departments WHERE location_id = 1800);
The query form determines how query blocks are interrelated. The transformer determines whether it is advantageous to rewrite the original SQL statement into a semantically equivalent SQL statement that can be processed more efficiently.
The query transformer employs several query transformation techniques, including the following:
Any combination of these transformations can apply to a given query.
Each view referenced in a query is expanded by the parser into a separate query block. The block essentially represents the view definition, and thus the result of a view. One option for the optimizer is to analyze the view query block separately and generate a view subplan. The optimizer then processes the rest of the query by using the view subplan to generate an overall query plan. This technique usually leads to a suboptimal query plan because the view is optimized separately.
In view merging, the transformer merges the query block representing the view into the containing query block. For example, suppose you create a view as follows:
CREATE VIEW employees_50_vw AS SELECT employee_id, last_name, job_id, salary, commission_pct, department_id FROM employees WHERE department_id = 50;
You then query the view as follows:
SELECT employee_id FROM employees_50_vw WHERE employee_id > 150;
The optimizer can use view merging to transform the query of employees_50_vw
into the following equivalent query:
SELECT employee_id FROM employees WHERE department_id = 50 AND employee_id > 150;
The view merging optimization applies to views that contain only selections, projections, and joins. That is, mergeable views do not contain set operators, aggregate functions, DISTINCT
, GROUP BY
, CONNECT BY
, and so on.
To enable the optimizer to use view merging for any query issued by the user, you must grant the MERGE
ANY
VIEW
privilege to the user. Grant the MERGE
VIEW
privilege to a user on specific views to enable the optimizer to use view merging for queries on these views. These privileges are required only under specific conditions, such as when a view is not merged because the security checks fail.
See Also:
|
In predicate pushing, the optimizer "pushes" the relevant predicates from the containing query block into the view query block. For views that are not merged, this technique improves the subplan of the unmerged view because the database can use the pushed-in predicates to access indexes or to use as filters.
For example, suppose you create a view that references two employee tables. The view is defined with a compound query that uses the UNION
set operator, as follows:
CREATE VIEW all_employees_vw AS ( SELECT employee_id, last_name, job_id, commission_pct, department_id FROM employees ) UNION ( SELECT employee_id, last_name, job_id, commission_pct, department_id FROM contract_workers );
You then query the view as follows:
SELECT last_name
FROM all_employees_vw
WHERE department_id = 50;
Because the view is a compound query, the optimizer cannot merge the view's query into the accessing query block. Instead, the optimizer can transform the accessing statement by pushing its predicate, the WHERE
clause condition department_id=50
, into the view's compound query. The equivalent transformed query is as follows:
SELECT last_name FROM ( SELECT employee_id, last_name, job_id, commission_pct, department_id FROM employees WHERE department_id=50 UNION SELECT employee_id, last_name, job_id, commission_pct, department_id FROM contract_workers WHERE department_id=50 );
In subquery unnesting, the optimizer transforms a nested query into an equivalent join statement, and then optimizes the join. This transformation enables the optimizer to take advantage of the join optimizer technique. The optimizer can perform this transformation only if the resulting join statement is guaranteed to return exactly the same rows as the original statement, and if subqueries do not contain aggregate functions such as AVG
.
For example, suppose you connect as user sh
and execute the following query:
SELECT * FROM sales WHERE cust_id IN ( SELECT cust_id FROM customers );
Because the customers.cust_id column
is a primary key, the optimizer can transform the complex query into the following join statement that is guaranteed to return the same data:
SELECT sales.* FROM sales, customers WHERE sales.cust_id = customers.cust_id;
If the optimizer cannot transform a complex statement into a join statement, it selects execution plans for the parent statement and the subquery as though they were separate statements. The optimizer then executes the subquery and uses the rows returned to execute the parent query. To improve execution speed of the overall query plan, the optimizer orders the subplans efficiently.
A materialized view is like a query with a result that the database materializes and stores in a table. When the database finds a user query compatible with the query associated with a materialized view, then the database can rewrite the query in terms of the materialized view. This technique improves query execution because most of the query result has been precomputed.
The query transformer looks for any materialized views that are compatible with the user query and selects one or more materialized views to rewrite the user query. The use of materialized views to rewrite a query is cost-based. That is, the optimizer does not rewrite the query if the plan generated without the materialized views has a lower cost than the plan generated with the materialized views.
Consider the following materialized view, cal_month_sales_mv
, which aggregates the dollar amount sold each month:
CREATE MATERIALIZED VIEW cal_month_sales_mv ENABLE QUERY REWRITE AS SELECT t.calendar_month_desc, SUM(s.amount_sold) AS dollars FROM sales s, times t WHERE s.time_id = t.time_id GROUP BY t.calendar_month_desc;
Assume that sales number is around one million in a typical month. The view has the precomputed aggregates for the dollar amount sold for each month. Consider the following query, which asks for the sum of the amount sold for each month:
SELECT t.calendar_month_desc, SUM(s.amount_sold) FROM sales s, times t WHERE s.time_id = t.time_id GROUP BY t.calendar_month_desc;
Without query rewrite, the database must access sales
directly and compute the sum of the amount sold. This method involves reading many million rows from sales
, which invariably increases query response time. The join also further slows query response because the database must compute the join on several million rows. With query rewrite, the optimizer transparently rewrites the query as follows:
SELECT calendar_month, dollars FROM cal_month_sales_mv;
The estimator determines the overall cost of a given execution plan. The estimator generates three different types of measures to achieve this goal:
This measure represents a fraction of rows from a row set. The selectivity is tied to a query predicate, such as last_name='Smith'
, or a combination of predicates.
This measure represents units of work or resource used. The query optimizer uses disk I/O, CPU usage, and memory usage as units of work.
If statistics are available, then the estimator uses them to compute the measures. The statistics improve the degree of accuracy of the measures.
The selectivity represents a fraction of rows from a row set. The row set can be a base table, a view, or the result of a join or a GROUP
BY
operator. The selectivity is tied to a query predicate, such as last_name
= 'Smith'
, or a combination of predicates, such as last_name
= 'Smith'
AND
job_type
= 'Clerk'
.
A predicate filters a specific number of rows from a row set. Thus, the selectivity of a predicate indicates how many rows pass the predicate test. Selectivity ranges from 0.0 to 1.0. A selectivity of 0.0 means that no rows are selected from a row set, whereas a selectivity of 1.0 means that all rows are selected. A predicate becomes more selective as the value approaches 0.0 and less selective (or more unselective) as the value approaches 1.0.
The optimizer estimates selectivity depending on whether statistics are available:
Statistics not available
Depending on the value of the OPTIMIZER_DYNAMIC_SAMPLING
initialization parameter, the optimizer either uses dynamic sampling or an internal default value. The database uses different internal defaults depending on the predicate type. For example, the internal default for an equality predicate (last_name
= 'Smith'
) is lower than for a range predicate (last_name >
'Smith'
) because an equality predicate is expected to return a smaller fraction of rows. See "Estimating Statistics with Dynamic Sampling".
Statistics available
When statistics are available, the estimator uses them to estimate selectivity. Assume there are 150 distinct employee last names. For an equality predicate last_name =
'Smith'
, selectivity is the reciprocal of the number n
of distinct values of last_name
, which in this example is .006 because the query selects rows that contain 1 out of 150 distinct values.
If a histogram is available on the last_name
column, then the estimator uses the histogram instead of the number of distinct values. The histogram captures the distribution of different values in a column, so it yields better selectivity estimates, especially for columns that contain skewed data. See "Viewing Histograms".
Cardinality represents the number of rows in a row set. In this context, the row set can be a base table, a view, or the result of a join or GROUP
BY
operator.
The cost represents units of work or resource used in an operation. The optimizer uses disk I/O, CPU usage, and memory usage as units of work. The operation can be scanning a table, accessing rows from a table by using an index, joining two tables together, or sorting a row set. The cost is the number of work units expected to be incurred when the database executes the query and produces its result.
The access path determines the number of units of work required to get data from a base table. The access path can be a table scan, a fast full index scan, or an index scan.
Table scan or fast full index scan
During a table scan or fast full index scan, the database reads multiple blocks from disk in a single I/O. Therefore, the cost of the scan depends on the number of blocks to be scanned and the multiblock read count value.
Index scan
The cost of an index scan depends on the levels in the B-tree, the number of index leaf blocks to be scanned, and the number of rows to be fetched using the rowid in the index keys. The cost of fetching rows using rowids depends on the index clustering factor. See "Assessing I/O for Blocks, not Rows".
The join cost represents the combination of the individual access costs of the two row sets being joined, plus the cost of the join operation.
The plan generator explores various plans for a query block by trying out different access paths, join methods, and join orders. Many plans are possible because of the various combinations of different access paths, join methods, and join orders that the database can use to produce the same result. The purpose of the generator is to pick the plan with the lowest cost.
A join order is the order in which different join items, such as tables, are accessed and joined together. Assume that the database joins table1
, table2
, and table3
. The join order might be as follows:
The database accesses table1
.
The database accesses table2
and joins its rows to table1
.
The database accesses table3
and joins its data to the result of the join between table1
and table2
.
The optimizer represents each nested subquery or unmerged view by a separate query block and generates a subplan. The database optimizes query blocks separately from the bottom up. Thus, the database optimizes the innermost query block first and generates a subplan for it, and then lastly generates the outer query block representing the entire query.
The number of possible plans for a query block is proportional to the number of join items in the FROM
clause. This number rises exponentially with the number of join items. For example, the possible plans for a join of five tables will be significantly higher than the possible plans for a join of two tables.
The plan generator uses an internal cutoff to reduce the number of plans it tries when finding the lowest-cost plan. The cutoff is based on the cost of the current best plan. If the current best cost is large, then the plan generator explores alternative plans to find a lower cost plan. If the current best cost is small, then the generator ends the search swiftly because further cost improvement will not be significant.
The cutoff works well if the plan generator starts with an initial join order that produces a plan with cost close to optimal. Finding a good initial join order is a difficult problem.
In bind variable peeking (also known as bind peeking), the optimizer looks at the value in a bind variable when the database performs a hard parse of a statement.
When a query uses literals, the optimizer can use the literal values to find the best plan. However, when a query uses bind variables, the optimizer must select the best plan without the presence of literals in the SQL text. This task can be extremely difficult. By peeking at bind values the optimizer can determine the selectivity of a WHERE
clause condition as if literals had been used, thereby improving the plan.
Example 11-1 Bind Peeking
Assume that the following 100,000 row emp
table exists in the database. The table has the following definition:
SQL> DESCRIBE emp Name Null? Type ---------------------- -------- ---------------------------------- ENAME VARCHAR2(20) EMPNO NUMBER PHONE VARCHAR2(20) DEPTNO NUMBER
The data is significantly skewed in the deptno
column. The valu e 10 is found in 99.9% of the rows. Each of the other deptno
values (0
through 9
) is found in 1% of the rows. You have gathered statistics for the table, resulting in a histogram on the deptno
column. You define a bind variable and query emp
using the bind value 9
as follows:
VARIABLE deptno NUMBER EXEC :deptno := 9 SELECT /*ACS_1*/ count(*), max(empno) FROM emp WHERE deptno = :deptno;
The query returns 10 rows:
COUNT(*) MAX(EMPNO) ---------- ---------- 10 99
To generate the execution plan for the query, the database peeked at the value 9
during the hard parse. The optimizer generated selectivity estimates as if the user had executed the following query:
select /*ACS_1*/ count(*), max(empno) from emp where deptno = 9;
When choosing a plan, the optimizer only peeks at the bind value during the hard parse. This plan may not be optimal for all possible values.
The adaptive cursor sharing feature enables a single statement that contains bind variables to use multiple execution plans. Cursor sharing is "adaptive" because the cursor adapts its behavior so that the database does not always use the same plan for each execution or bind variable value.
For appropriate queries, the database monitors data accessed over time for different bind values, ensuring the optimal choice of cursor for a specific bind value. For example, the optimizer might choose one plan for bind value 9
and a different plan for bind value 10
. Cursor sharing is "adaptive" because the cursor adapts its behavior so that the same plan is not always used for each execution or bind variable value.
Adaptive cursor sharing is enabled for the database by default and cannot be disabled. Note that adaptive cursor sharing does not apply to SQL statements containing more than 14 bind variables.
Note: Adaptive cursor sharing is independent of theCURSOR_SHARING initialization parameter (see "Sharing Cursors for Existing Applications"). Adaptive cursor sharing is equally applicable to statements that contain user-defined and system-generated bind variables. |
A bind-sensitive cursor is a cursor whose optimal plan may depend on the value of a bind variable. The database monitors the behavior of a bind-sensitive cursor that uses different bind values to determine whether a different plan is beneficial.
The criteria used by the optimizer to decide whether a cursor is bind-sensitive include the following:
The optimizer has peeked at the bind values to generate selectivity estimates.
A histogram exists on the column containing the bind value.
Example 11-2 Bind-Sensitive Cursors
In Example 11-1 you queried the emp
table using the bind value 9
for deptno
. Now you run the DBMS_XPLAN.DISPLAY_CURSOR
function to show the query plan:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);
The output is as follows:
---------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time| ---------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | | | 2 (100)| | | 1 | SORT AGGREGATE | | 1 | 16 | | | | 2 | TABLE ACCESS BY INDEX ROWID| EMP | 1 | 16 | 2 (0)| 00:00:01| |* 3 | INDEX RANGE SCAN | EMP_I1 | 1 | | 1 (0)| 00:00:01| ----------------------------------------------------------------------------------
The plan indicates that the optimizer chose an index range scan, which is expected because of the selectivity (only 1%) of the value 9
. You can query V$SQL
to view statistics about the cursor:
COL BIND_SENSI FORMAT a10 COL BIND_AWARE FORMAT a10 COL BIND_SHARE FORMAT a10 SELECT CHILD_NUMBER, EXECUTIONS, BUFFER_GETS, IS_BIND_SENSITIVE AS "BIND_SENSI", IS_BIND_AWARE AS "BIND_AWARE", IS_SHAREABLE AS "BIND_SHARE" FROM V$SQL WHERE SQL_TEXT LIKE 'select /*ACS_1%';
As shown in the following output, one child cursor exists for this statement and has been executed once. A small number of buffer gets are associated with the child cursor. Because the deptno
data is skewed, the database created a histogram. This histogram led the database to mark the cursor as bind-sensitive (IS_BIND_SENSITIVE
is Y
).
CHILD_NUMBER EXECUTIONS BUFFER_GETS BIND_SENSI BIND_AWARE BIND_SHARE ------------ ---------- ----------- ---------- ---------- ---------- 0 1 56 Y N Y
For each execution of the query with a new bind value, the database records the execution statistics for the new value and compares them to the execution statistics for the previous value. If execution statistics vary greatly, then the database marks the cursor bind-aware.
A bind-aware cursor is a bind-sensitive cursor eligible to use different plans for different bind values. After a cursor has been made bind-aware, the optimizer chooses plans for future executions based on the bind value and its selectivity estimate.
When a statement with a bind-sensitive cursor executes, the database decides whether to mark the cursor bind-aware. The decision depends on whether the cursor produces significantly different data access patterns for different bind values. If the database marks the cursor bind-aware, then the next time that the cursor executes the database does the following:
Generates a new plan based on the new bind value.
Marks the original cursor generated for the statement as not shareable (V$SQL.IS_SHAREABLE
is N
). This cursor is no longer usable and will be among the first to be aged out of the shared SQL area.
Example 11-3 Bind-Aware Cursors
In Example 11-1 you queried emp using the bind value 9
. Now you query emp
using the bind value 10
. The query returns 99,900 rows that contain the value 10
:
COUNT(*) MAX(EMPNO) ---------- ---------- 99900 100000
Because the cursor for this statement is bind-sensitive, the optimizer assumes that the cursor can be shared. Consequently, the optimizer uses the same index range scan for the value 10
as for the value 9
.
The V$SQL
output shows that the same bind-sensitive cursor was executed a second time (the query using 10
) and required many more buffer gets than the first execution:
SELECT CHILD_NUMBER, EXECUTIONS, BUFFER_GETS, IS_BIND_SENSITIVE AS "BIND_SENSI", IS_BIND_AWARE AS "BIND_AWARE", IS_SHAREABLE AS "BIND_SHARE" FROM V$SQL WHERE SQL_TEXT LIKE 'select /*ACS_1%'; CHILD_NUMBER EXECUTIONS BUFFER_GETS BIND_SENSI BIND_AWARE BIND_SHARE ------------ ---------- ----------- ---------- ---------- ---------- 0 2 1010 Y N Y
Now you execute the query using the value 10
a second time. The database compares statistics for previous executions and marks the cursor as bind-aware. In this case, the optimizer decides that a new plan is warranted, so it performs a hard parse of the statement and generates a new plan. The new plan uses a full table scan instead of an index range scan:
--------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | --------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | | | 208 (100)| | | 1 | SORT AGGREGATE | | 1 | 16 | | | |* 2 | TABLE ACCESS FULL| EMP | 95000 | 1484K| 208 (1)| 00:00:03 | ---------------------------------------------------------------------------
A query of V$SQL
shows that the database created an additional child cursor (child number 1
) that represents the plan containing the full table scan. This new cursor shows a lower number of buffer gets and is marked bind-aware:
SELECT CHILD_NUMBER, EXECUTIONS, BUFFER_GETS, IS_BIND_SENSITIVE AS "BIND_SENSI", IS_BIND_AWARE AS "BIND_AWARE", IS_SHAREABLE AS "BIND_SHARE" FROM V$SQL WHERE SQL_TEXT LIKE 'select /*ACS_1%'; CHILD_NUMBER EXECUTIONS BUFFER_GETS BIND_SENSI BIND_AWARE BIND_SHARE ------------ ---------- ----------- ---------- ---------- ---------- 0 2 1010 Y N Y 1 2 1522 Y Y Y
After you execute the query twice with value 10
, you execute it again using the more selective value 9
. Because of adaptive cursor sharing, the optimizer "adapts" the cursor and chooses an index range scan rather than a full table scan for this value.
A query of V$SQL
indicates that the database created a new child cursor (child number 2
) for the execution of the query:
CHILD_NUMBER EXECUTIONS BUFFER_GETS BIND_SENSI BIND_AWARE BIND_SHARE ------------ ---------- ----------- ---------- ---------- ---------- 0 2 1010 Y N N 1 1 1522 Y Y Y 2 1 7 Y Y Y
Because the database is now using adaptive cursor sharing, the database no longer uses the original cursor (child 0
), which is not bind-aware. The shared SQL area will age out the defunct cursor.
If the optimizer creates a plan for a bind-aware cursor, and if this plan is the same as an existing cursor, then the optimizer can perform cursor merging. In this case, the database merges cursors to save space in the shared SQL area. The database increases the selectivity range for the cursor to include the selectivity of the new bind.
Suppose you execute a query with a bind value that does not fall within the selectivity ranges of the existing cursors. The database performs a hard parse and generates a new plan and new cursor. If this new plan is the same plan used by an existing cursor, then the database merges these two cursors and deletes one of the old cursors.
You can use the V$
views for adaptive cursor sharing to see selectivity ranges, cursor information (such as whether a cursor is bind-aware or bind-sensitive), and execution statistics:
V$SQL
shows whether a cursor is bind-sensitive or bind-aware
V$SQL_CS_HISTOGRAM
shows the distribution of the execution count across a three-bucket execution history histogram
V$SQL_CS_SELECTIVITY
shows the selectivity ranges stored for every predicate containing a bind variable if the selectivity was used to check cursor sharing
V$SQL_CS_STATISTICS
summarizes the information that the optimizer uses to determine whether to mark a cursor bind-aware.
Access paths are ways in which data is retrieved from the database. In general, index access paths are useful for statements that retrieve a small subset of table rows, whereas full scans are more efficient when accessing a large portion of the table. Online transaction processing (OLTP) applications, which consist of short-running SQL statements with high selectivity, often are characterized by the use of index access paths. Decision support systems, however, tend to use partitioned tables and perform full scans of the relevant partitions.
This section describes the data access paths that the database can use to locate and retrieve any row in any table.
This type of scan reads all rows from a table and filters out those that do not meet the selection criteria. During a full table scan, all blocks in the table that are under the high water mark are scanned. The high water mark indicates the amount of used space, or space that had been formatted to receive data. Each row is examined to determine whether it satisfies the statement's WHERE
clause.
When Oracle Database performs a full table scan, the blocks are read sequentially. Because the blocks are adjacent, the database can make I/O calls larger than a single block to speed up the process. The size of the read calls range from one block to the number of blocks indicated by the initialization parameter DB_FILE_MULTIBLOCK_READ_COUNT
. Using multiblock reads, the database can perform a full table scan very efficiently. The database reads each block only once.
Example 11-14, "EXPLAIN PLAN Output" contains an example of a full table scan on the employees
table.
Full table scans are cheaper than index range scans when accessing a large fraction of the blocks in a table. Full table scans can use larger I/O calls, and making fewer large I/O calls is cheaper than making many smaller calls.
The optimizer uses a full table scan in any of the following cases:
If the query cannot use existing indexes, then it uses a full table scan. For example, if there is a function used on the indexed column in the query, then the optimizer cannot use the index and instead uses a full table scan.
If you need to use the index for case-independent searches, then either do not permit mixed-case data in the search columns or create a function-based index, such as UPPER
(last_name
), on the search column. See "Using Function-based Indexes for Performance".
If the optimizer thinks that the query requires most of the blocks in the table, then it uses a full table scan, even though indexes are available.
If a table contains less than DB_FILE_MULTIBLOCK_READ_COUNT
blocks under the high water mark, which the database can read in a single I/O call, then a full table scan might be cheaper than an index range scan, regardless of the fraction of tables being accessed or indexes present.
A high degree of parallelism for a table skews the optimizer toward full table scans over range scans. Examine the DEGREE
column in ALL_TABLES
for the table to determine the degree of parallelism.
Use the hint FULL(
table
alias
)
to instruct the optimizer to use a full table scan. For more information on the FULL
hint, see "Hints for Access Paths".
You can use the CACHE
and NOCACHE
hints to indicate where the retrieved blocks are placed in the buffer cache. The CACHE
hint instructs the optimizer to place the retrieved blocks at the most recently used end of the LRU list in the buffer cache when the database performs a full table scan.
Small tables are automatically cached according to the criteria in Table 11-2.
Table 11-2 Table Caching Criteria
Table Size | Size Criteria | Caching |
---|---|---|
Small |
Number of blocks < 20 or 2% of total cached blocks, whichever is larger |
If |
Medium |
Larger than a small table, but < 10% of total cached blocks |
Oracle Database decides whether to cache a table based on its table scan and workload history. It caches the table only if a future table scan is likely to find the cached blocks. |
Large |
> 10% of total cached blocks |
Not cached |
Automatic caching of small tables is disabled for tables that are created or altered with the CACHE
attribute.
When a full table scan is required, the database can improve response time by using multiple parallel execution servers. In some cases, as when the database has a large amount of memory, the database can cache parallel query data in the SGA instead of using direct reads into the PGA. Typically, parallel queries occur in low-concurrency data warehouses because of the potential resource usage.
The rowid of a row specifies the data file and data block containing the row and the location of the row in that block. Locating a row by specifying its rowid is the fastest way to retrieve a single row, because the exact location of the row in the database is specified.
To access a table by rowid, Oracle Database first obtains the rowids of the selected rows, either from the statement's WHERE
clause or through an index scan of one or more of the table's indexes. Oracle Database then locates each selected row in the table based on its rowid.
In Example 11-14, "EXPLAIN PLAN Output", the plan includes an index scan on the jobs
and departments
tables. The database uses the rowids retrieved to return the rows.
This is generally the second step after retrieving the rowid from an index. The table access might be required for any columns in the statement not present in the index.
Access by rowid does not need to follow every index scan. If the index contains all the columns needed for the statement, then table access by rowid might not occur.
Note: Rowids are an internal representation of where the database stores data. Rowids can change between versions. Accessing data based on position is not recommended because rows can move around due to row migration and chaining, export and import, and some other operations. Foreign keys should be based on primary keys. For more information on rowids, see Oracle Database Advanced Application Developer's Guide. |
In this method, a row is retrieved by traversing the index, using the indexed column values specified by the statement. An index scan retrieves data from an index based on the value of one or more columns in the index. To perform an index scan, Oracle Database searches the index for the indexed column values accessed by the statement. If the statement accesses only columns of the index, then Oracle Database reads the indexed column values directly from the index, rather than from the table.
The index contains not only the indexed value, but also the rowids of rows in the table having that value. Therefore, if the statement accesses other columns in addition to the indexed columns, then Oracle Database can find the rows in the table by using either a table access by rowid or a cluster scan.
An index scan can be one of the following types:
Oracle Database performs I/O by blocks. Therefore, the optimizer's decision to use full table scans is influenced by the percentage of blocks accessed, not rows. This is called the index clustering factor. If blocks contain single rows, then rows accessed and blocks accessed are the same.
However, most tables have multiple rows in each block. Consequently, the desired number of rows may be clustered in a few blocks or spread out over a larger number of blocks.
Although the clustering factor is a property of the index, the clustering factor actually relates to the spread of similar indexed column values within data blocks in the table. A lower clustering factor indicates that the individual rows are concentrated within fewer blocks in the table. Conversely, a high clustering factor indicates that the individual rows are scattered more randomly across blocks in the table. Therefore, a high clustering factor means that it costs more to use a range scan to fetch rows by rowid, because more blocks in the table need to be visited to return the data. Example 11-4 shows how the clustering factor can affect cost.
There is a table with 9 rows.
There is a non-unique index on col1
for table.
The c1
column currently stores the values A
, B
, and C
.
The table only has three data blocks.
Case 1: The index clustering factor is low for the rows as they are arranged in the following diagram.
Block 1 Block 2 Block 3 ------- ------- ------- A A A B B B C C C
This is because the rows that have the same indexed column values for c1
are located within the same physical blocks in the table. The cost of using a range scan to return all rows that have the value A
is low because only one block in the table must be read.
Case 2: If the same rows in the table are rearranged so that the index values are scattered across the table blocks (rather than collocated), then the index clustering factor is higher.
Block 1 Block 2 Block 3 ------- ------- ------- A B C A B C A B C
This is because all three blocks in the table must be read in order to retrieve all rows with the value A
in col1
.
This scan returns, at most, a single rowid. Oracle Database performs a unique scan if a statement contains a UNIQUE
or a PRIMARY
KEY
constraint that guarantees that only a single row is accessed.
In Example 11-14, "EXPLAIN PLAN Output", the database performs an index scan on the jobs
and departments
tables, using the job_id_pk
and dept_id_pk
indexes respectively.
The database uses this access path when the user specifies all columns of a unique (B-tree) index or an index created as a result of a primary key constraint with equality conditions.
See Also: Oracle Database Concepts for more details on index structures and for detailed information on how a B-tree is searched |
In general, you should not need to use a hint to do a unique scan. There might be cases where the table is across a database link and being accessed from a local table, or where the table is small enough for the optimizer to prefer a full table scan.
The hint INDEX(
alias index_name
)
specifies the index to use, but not an access path (range scan or unique scan). For more information on the INDEX
hint, see "Hints for Access Paths".
An index range scan is a common operation for accessing selective data. It can be bounded (bounded on both sides) or unbounded (on one or both sides). Data is returned in the ascending order of index columns. Multiple rows with identical values are sorted in ascending order by rowid.
If you require the data to be sorted by order, then use the ORDER
BY
clause, and do not rely on an index. If an index can satisfy an ORDER
BY
clause, then the optimizer uses this option and avoids a sort.
In Example 11-5, the order has been imported from a legacy system, and you are querying the order by the reference used in the legacy system. Assume this reference is the order_date
.
Example 11-5 Index Range Scan
SELECT order_status, order_id FROM orders WHERE order_date = :b1; --------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| --------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | 20 | 3 (34)| | 1 | TABLE ACCESS BY INDEX ROWID| ORDERS | 1 | 20 | 3 (34)| |* 2 | INDEX RANGE SCAN | ORD_ORDER_DATE_IX | 1 | | 2 (50)| --------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 2 - access("ORDERS"."ORDER_DATE"=:Z)
This should be a highly selective query, and you should see the query using the index on the column to retrieve the desired rows. The data returned is sorted in ascending order by the rowids for the order_date
. Because the index column order_date
is identical for the selected rows here, the data is sorted by rowid.
The optimizer uses a range scan when it finds one or more leading columns of an index specified in conditions, such as the following:
col1 = :b1
col1 < :b1
col1 > :b1
AND
combination of the preceding conditions for leading columns in the index
col1 like 'ASD%'
wild-card searches should not be in a leading position otherwise the condition col1 like '%ASD'
does not result in a range scan
Range scans can use unique or non-unique indexes. Range scans avoid sorting when index columns constitute the ORDER
BY
/GROUP
BY
clause.
A hint might be required if the optimizer chooses some other index or uses a full table scan. The hint INDEX(
table_alias
index_name
)
instructs the optimizer to use a specific index. For more information on the INDEX
hint, see "Hints for Access Paths".
An index range scan descending is identical to an index range scan, except that the data is returned in descending order. Indexes, by default, are stored in ascending order. Usually, the database uses this scan when ordering data in a descending order to return the most recent data first, or when seeking a value less than a specified value.
The optimizer uses index range scan descending when an index can satisfy an order by descending clause.
Use the hint INDEX_DESC(
table_alias
index_name
)
for this access path. For more information on the INDEX_DESC
hint, see "Hints for Access Paths".
Index skip scans improve index scans by nonprefix columns. Often, scanning index blocks is faster than scanning table data blocks.
Skip scanning lets a composite index be split logically into smaller subindexes. In skip scanning, the initial column of the composite index is not specified in the query. In other words, it is skipped.
The database determines the number of logical subindexes by the number of distinct values in the initial column. Skip scanning is advantageous when there are few distinct values in the leading column of the composite index and many distinct values in the nonleading key of the index.
The database may choose an index skip scan when the leading column of the composite index is not specified in a query predicate. For example, assume that you run the following query for a customer in the sh.customers
table:
SELECT * FROM sh.customers WHERE cust_email = 'Abbey@company.com';
The customers
table has a column cust_gender
whose values are either M
or F
. Assume that a composite index exists on the columns (cust_gender
, cust_email
) that was created as follows:
CREATE INDEX customers_gender_email ON sh.customers (cust_gender, cust_email);
Example 11-6 shows a portion of the index entries.
Example 11-6 Composite Index Entries
F,Wolf@company.com,rowid F,Wolsey@company.com,rowid F,Wood@company.com,rowid F,Woodman@company.com,rowid F,Yang@company.com,rowid F,Zimmerman@company.com,rowid M,Abbassi@company.com,rowid M,Abbey@company.com,rowid
The database can use a skip scan of this index even though cust_gender
is not specified in the WHERE
clause.
In a skip scan, the number of logical subindexes is determined by the number of distinct values in the leading column. In Example 11-6, the leading column has two possible values. The database logically splits the index into one subindex with the key F
and a second subindex with the key M
.
When searching for the record for the customer whose email is Abbey@company.com
, the database searches the subindex with the value F
first and then searches the subindex with the value M
. Conceptually, the database processes the query as follows:
SELECT * FROM sh.customers WHERE cust_gender = 'F' AND cust_email = 'Abbey@company.com' UNION ALL SELECT * FROM sh.customers WHERE cust_gender = 'M' AND cust_email = 'Abbey@company.com';
A full index scan eliminates a sort operation, because the data is ordered by the index key. It reads the blocks singly. Oracle Database may use a full scan in any of the following situations:
An ORDER
BY
clause that meets the following requirements is present in the query:
All of the columns in the ORDER
BY
clause must be in the index.
The order of the columns in the ORDER
BY
clause must match the order of the leading index columns.
The ORDER
BY
clause can contain all of the columns in the index or a subset of the columns in the index.
The query requires a sort merge join. The database can perform a full index scan instead of doing a full table scan followed by a sort when the query meets the following requirements:
All of the columns referenced in the query must be in the index.
The order of the columns referenced in the query must match the order of the leading index columns.
The query can contain all of the columns in the index or a subset of the columns in the index.
A GROUP
BY
clause is present in the query, and the columns in the GROUP
BY
clause are present in the index. The columns do not need to be in the same order in the index and the GROUP
BY
clause. The GROUP
BY
clause can contain all of the columns in the index or a subset of the columns in the index.
Fast full index scans are an alternative to a full table scan when the index contains all the columns that are needed for the query, and at least one column in the index key has the NOT
NULL
constraint. A fast full scan accesses the data in the index itself, without accessing the table. The database cannot use this scan to eliminate a sort operation because the data is not ordered by the index key. The database reads the entire index using multiblock reads, unlike a full index scan, and can scan in parallel.
You can specify fast full index scans with the initialization parameter OPTIMIZER_FEATURES_ENABLE
or the INDEX_FFS
hint. A fast full scan is faster than a normal full index scan because it can use multiblock I/O and can run in parallel just like a table scan.
Note: SettingPARALLEL for indexes does not impact the cost calculation. |
The fast full scan has a special index hint, INDEX_FFS
, which has the same format and arguments as the regular INDEX
hint. For more information on the INDEX_FFS
hint, see "Hints for Access Paths".
An index join is a hash join of several indexes that together contain all the table columns referenced in the query. If the database uses an index join, then table access is not needed because the database can retrieve all the relevant column values from the indexes. The database cannot use an index join cannot to eliminate a sort operation.
You can specify an index join with the INDEX_JOIN
hint. For more information on the INDEX_JOIN
hint, see "Hints for Access Paths".
A bitmap join uses a bitmap for key values and a mapping function that converts each bit position to a rowid. Bitmaps can efficiently merge indexes that correspond to several conditions in a WHERE
clause, using Boolean operations to resolve AND
and OR
conditions.
Note: Bitmap indexes and bitmap join indexes are available only in the Oracle Enterprise Edition. |
The database uses a cluster scan to retrieve all rows that have the same cluster key value from a table stored in an indexed cluster. In an indexed cluster, the database stores all rows with the same cluster key value in the same data block. To perform a cluster scan, Oracle Database first obtains the rowid of one of the selected rows by scanning the cluster index. Oracle Database then locates the rows based on this rowid.
The database uses a hash scan to locate rows in a hash cluster based on a hash value. In a hash cluster, all rows with the same hash value are stored in the same data block. To perform a hash scan, Oracle Database first obtains the hash value by applying a hash function to a cluster key value specified by the statement. Oracle Database then scans the data blocks containing rows with that hash value.
A sample table scan retrieves a random sample of data from a simple table or a complex SELECT
statement, such as a statement involving joins and views. The database uses this access path when a statement's FROM
clause includes the SAMPLE
clause or the SAMPLE
BLOCK
clause. To perform a sample table scan when sampling by rows with the SAMPLE
clause, the database reads a specified percentage of rows in the table. To perform a sample table scan when sampling by blocks with the SAMPLE
BLOCK
clause, the database reads a specified percentage of table blocks.
Example 11-7 uses a sample table scan to access 1% of the employees
table, sampling by blocks.
Example 11-7 Sample Table Scan
SELECT * FROM employees SAMPLE BLOCK (1);
The EXPLAIN
PLAN
output for this statement might look like this:
------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| ------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | 68 | 3 (34)| | 1 | TABLE ACCESS SAMPLE | EMPLOYEES | 1 | 68 | 3 (34)| -------------------------------------------------------------------------
The query optimizer chooses an access path based on the following factors:
The available access paths for the statement
The estimated cost of executing the statement, using each access path or combination of paths
To choose an access path, the optimizer first determines which access paths are available by examining the conditions in the statement's WHERE
clause and its FROM
clause. The optimizer then generates a set of possible execution plans using available access paths and estimates the cost of each plan, using the statistics for the index, columns, and tables accessible to the statement. Finally, the optimizer chooses the execution plan with the lowest estimated cost.
When choosing an access path, the query optimizer is influenced by the following:
You can instruct the optimizer to use a specific access path using a hint, except when the statement's FROM
clause contains SAMPLE
or SAMPLE
BLOCK
.
Old Statistics
For example, if a table has not been analyzed since it was created, and if it has less than DB_FILE_MULTIBLOCK_READ_COUNT
blocks under the high water mark, then the optimizer thinks that the table is small and uses a full table scan. Review the LAST_ANALYZED
and BLOCKS
columns in the ALL_TABLES
table to examine the statistics.
Joins are statements that retrieve data from multiple tables. A join is characterized by multiple tables in the FROM
clause. The existence of a join condition in the WHERE
clause defines the relationship between the tables. In a join, one row set is called inner, and the other is called outer.
This section discusses:
To choose an execution plan for a join statement, the optimizer must make these interrelated decisions:
Access Paths
As for simple statements, the optimizer must choose an access path to retrieve data from each table in the join statement.
Join Method
To join each pair of row sources, Oracle Database must perform a join operation. Join methods include nested loop, sort merge, cartesian, and hash joins.
Join Order
To execute a statement that joins more than two tables, Oracle Database joins two of the tables and then joins the resulting row source to the next table. This process continues until all tables are joined into the result.
The query optimizer considers the following when choosing an execution plan:
The optimizer first determines whether joining two or more tables definitely results in a row source containing at most one row. The optimizer recognizes such situations based on UNIQUE
and PRIMARY
KEY
constraints on the tables. If such a situation exists, then the optimizer places these tables first in the join order. The optimizer then optimizes the join of the remaining set of tables.
For join statements with outer join conditions, the table with the outer join operator must come after the other table in the condition in the join order. The optimizer does not consider join orders that violate this rule. Similarly, when a subquery has been converted into an antijoin or semijoin, the tables from the subquery must come after those tables in the outer query block to which they were connected or correlated. However, hash antijoins and semijoins are able to override this ordering condition in certain circumstances.
With the query optimizer, the optimizer generates a set of execution plans, according to possible join orders, join methods, and available access paths. The optimizer then estimates the cost of each plan and chooses the one with the lowest cost. The optimizer estimates costs in the following ways:
The cost of a nested loops operation is based on the cost of reading each selected row of the outer table and each of its matching rows of the inner table into memory. The optimizer estimates these costs using the statistics in the data dictionary.
The cost of a sort merge join is based largely on the cost of reading all the sources into memory and sorting them.
The cost of a hash join is based largely on the cost of building a hash table on one of the input sides to the join and using the rows from the other of the join to probe it.
The optimizer also considers other factors when determining the cost of each operation. For example:
A smaller sort area size is likely to increase the cost for a sort merge join because sorting takes more CPU time and I/O in a smaller sort area. See "PGA Memory Management" to learn how to size SQL work areas.
A larger multiblock read count is likely to decrease the cost for a sort merge join in relation to a nested loop join. If the database can read a large number of sequential blocks from disk in a single I/O, then an index on the inner table for the nested loop join is less likely to improve performance over a full table scan. The multiblock read count is specified by the initialization parameter DB_FILE_MULTIBLOCK_READ_COUNT
.
You can use the ORDERED
hint to override the optimizer's choice of join orders. If the ORDERED
hint specifies a join order that violates the rule for an outer join, then the optimizer ignores the hint and chooses the order. Also, you can override the optimizer's choice of join method with hints.
Nested loop joins are useful when the following conditions are true:
The database joins small subsets of data.
The join condition is an efficient method of accessing the second table.
It is important to ensure that the inner table is driven from (dependent on) the outer table. If the inner table's access path is independent of the outer table, then the same rows are retrieved for every iteration of the outer loop, degrading performance considerably. In such cases, hash joins joining the two independent row sources perform better.
A nested loop join involves the following steps:
The optimizer determines the driving table and designates it as the outer table.
The other table is designated as the inner table.
For every row in the outer table, Oracle Database accesses all the rows in the inner table. The outer loop is for every row in the outer table and the inner loop is for every row in the inner table. The outer loop appears before the inner loop in the execution plan, as follows:
NESTED LOOPS outer_loop inner_loop
Oracle Database 11g introduces a new implementation for nested loop joins. As a result, execution plans that include nested loops might appear different than they did in previous releases of Oracle Database. Both the new implementation and the original implementation for nested loop joins are possible in Oracle Database 11g. So, when analyzing execution plans, it is important to understand that the number of NESTED
LOOPS
join row sources might be different.
Consider the following query:
SELECT e.first_name, e.last_name, e.salary, d.department_name FROM hr.employees e, hr.departments d WHERE d.department_name IN ('Marketing', 'Sales') AND e.department_id = d.department_id;
Before Oracle Database 11g, the execution plan for this query might appear similar to the following execution plan:
------------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 19 | 722 | 3 (0)| 00:00:01 | | 1 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 10 | 220 | 1 (0)| 00:00:01 | | 2 | NESTED LOOPS | | 19 | 722 | 3 (0)| 00:00:01 | |* 3 | TABLE ACCESS FULL | DEPARTMENTS | 2 | 32 | 2 (0)| 00:00:01 | |* 4 | INDEX RANGE SCAN | EMP_DEPARTMENT_IX | 10 | | 0 (0)| 00:00:01 | ------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 3 - filter("D"."DEPARTMENT_NAME"='Marketing' OR "D"."DEPARTMENT_NAME"='Sales') 4 - access("E"."DEPARTMENT_ID"="D"."DEPARTMENT_ID")
In this example, the outer side of the join consists of a scan of the hr.departments
table that returns the rows that match the condition department_name
IN
('Marketing', 'Sales')
. The inner loop retrieves the employees in the hr.employees
table that are associated with those departments.
Oracle Database 11g introduces a new implementation for nested loop joins to reduce overall latency for physical I/O. When an index or a table block is not in the buffer cache and is needed to process the join, a physical I/O is required. Oracle Database 11g can batch multiple physical I/O requests and process them using a vector I/O instead of processing them one at a time.
As part of the new implementation for nested loop joins, two NESTED
LOOPS
join row sources might appear in the execution plan where only one would have appeared in prior releases. In such cases, Oracle Database allocates one NESTED
LOOPS
join row source to join the values from the table on the outer side of the join with the index on the inner side. A second row source is allocated to join the result of the first join, which includes the rowids stored in the index, with the table on the inner side of the join.
Consider the query in "Original Implementation for Nested Loop Joins". In Oracle Database 11g, with the new implementation for nested loop joins, the execution plan for this query might appear similar to the following execution plan:
------------------------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost(%CPU)| Time | ------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 19 | 722 | 3 (0)| 00:00:01 | | 1 | NESTED LOOPS | | | | | | | 2 | NESTED LOOPS | | 19 | 722 | 3 (0)| 00:00:01 | |* 3 | TABLE ACCESS FULL | DEPARTMENTS | 2 | 32 | 2 (0)| 00:00:01 | |* 4 | INDEX RANGE SCAN | EMP_DEPARTMENT_IX | 10 | | 0 (0)| 00:00:01 | | 5 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 10 | 220 | 1 (0)| 00:00:01 | ------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 3 - filter("D"."DEPARTMENT_NAME"='Marketing' OR "D"."DEPARTMENT_NAME"='Sales') 4 - access("E"."DEPARTMENT_ID"="D"."DEPARTMENT_ID")
In this case, the rows from the hr.departments
table constitute the outer side of the first join. The inner side of the first join is the index emp_department_ix
. The results of the first join constitute the outer side of the second join, which has the hr.employees
table as its inner side.
There are cases where a second join row source is not allocated, and the execution plan looks the same as it did in prior releases. The following list describes such cases:
All of the columns needed from the inner side of the join are present in the index, and there is no table access required. In this case, Oracle Database allocates only one join row source.
The order of the rows returned might be different than it was in previous releases. Hence, when Oracle Database tries to preserve a specific ordering of the rows, for example to eliminate the need for an ORDER
BY
sort, Oracle Database might use the original implementation for nested loop joins.
The OPTIMIZER_FEATURES_ENABLE
initialization parameter is set to a release before Oracle Database 11g. In this case, Oracle Database uses the original implementation for nested loop joins.
The optimizer uses nested loop joins when joining small number of rows, with a good driving condition between the two tables. You drive from the outer loop to the inner loop, so the order of tables in the execution plan is important.
The outer loop is the driving row source. It produces a set of rows for driving the join condition. The row source can be a table accessed using an index scan or a full table scan. Also, the rows can be produced from any other operation. For example, the output from a nested loop join can serve as a row source for another nested loop join.
The inner loop is iterated for every row returned from the outer loop, ideally by an index scan. If the access path for the inner loop is not dependent on the outer loop, then you can end up with a Cartesian product; for every iteration of the outer loop, the inner loop produces the same set of rows. Therefore, you should use other join methods when two independent row sources are joined together.
If the optimizer chooses to use some other join method, then you can use the USE_NL
(table1 table2
) hint, where table1
and table2
are the aliases of the tables being joined.
For some SQL examples, the data is small enough for the optimizer to prefer full table scans and use hash joins. This is the case for the SQL example shown in Example 11-8, "Hash Joins". However, you can add a USE_NL
to instruct the optimizer to change the join method to nested loop. For more information on the USE_NL
hint, see "Hints for Join Operations".
The outer loop of a nested loop can be a nested loop itself. You can nest two or more outer loops to join as many tables as needed. Each loop is a data access method, as follows:
SELECT STATEMENT NESTED LOOP 3 NESTED LOOP 2 (OUTER LOOP 3.1) NESTED LOOP 1 (OUTER LOOP 2.1) OUTER LOOP 1.1 - #1 INNER LOOP 1.2 - #2 INNER LOOP 2.2 - #3 INNER LOOP 3.2 - #4
The database uses hash joins to join large data sets. The optimizer uses the smaller of two tables or data sources to build a hash table on the join key in memory. It then scans the larger table, probing the hash table to find the joined rows.
This method is best when the smaller table fits in available memory. The cost is then limited to a single read pass over the data for the two tables.
The optimizer uses a hash join to join two tables if they are joined using an equijoin and if either of the following conditions are true:
A large amount of data must be joined.
A large fraction of a small table must be joined.
In Example 11-8, the database uses the table orders
to build the hash table. The database scans the larger order_items
later.
Example 11-8 Hash Joins
SELECT o.customer_id, l.unit_price * l.quantity FROM orders o ,order_items l WHERE l.order_id = o.order_id; -------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| -------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 665 | 13300 | 8 (25)| |* 1 | HASH JOIN | | 665 | 13300 | 8 (25)| | 2 | TABLE ACCESS FULL | ORDERS | 105 | 840 | 4 (25)| | 3 | TABLE ACCESS FULL | ORDER_ITEMS | 665 | 7980 | 4 (25)| -------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - access("L"."ORDER_ID"="O"."ORDER_ID")
Apply the USE_HASH
hint to instruct the optimizer to use a hash join when joining two tables together. See "PGA Memory Management" to learn how to size SQL work areas. See "Hints for Join Operations" to learn about the USE_HASH
hint.
Sort merge joins can join rows from two independent sources. Hash joins generally perform better than sort merge joins. However, sort merge joins can perform better than hash joins if both of the following conditions exist:
The row sources are sorted already.
A sort operation does not have to be done.
However, if a sort merge join involves choosing a slower access method (an index scan as opposed to a full table scan), then the benefit of using a sort merge might be lost.
Sort merge joins are useful when the join condition between two tables is an inequality condition such as <
, <=
, >
, or >=
. Sort merge joins perform better than nested loop joins for large data sets. You cannot use hash joins unless there is an equality condition.
In a merge join, there is no concept of a driving table. The join consists of two steps:
Sort join operation: Both the inputs are sorted on the join key.
Merge join operation: The sorted lists are merged together.
If the input is sorted by the join column, then a sort join operation is not performed for that row source. However, a sort merge join always creates a positionable sort buffer for the right side of the join so that it can seek back to the last match in the case where duplicate join key values come out of the left side of the join.
The optimizer can choose a sort merge join over a hash join for joining large amounts of data if any of the following conditions are true:
The join condition between two tables is not an equijoin.
Because of sorts required by other operations, the optimizer finds it is cheaper to use a sort merge than a hash join.
To instruct the optimizer to use a sort merge join, apply the USE_MERGE
hint. You might also need to give hints to force an access path.
There are situations where it makes sense to override the optimizer with the USE_MERGE
hint. For example, the optimizer can choose a full scan on a table and avoid a sort operation in a query. However, there is an increased cost because a large table is accessed through an index and single block reads, as opposed to faster access through a full table scan.
For more information on the USE_MERGE
hint, see "Hints for Join Operations".
The database uses a Cartesian join when one or more of the tables does not have any join conditions to any other tables in the statement. The optimizer joins every row from one data source with every row from the other data source, creating the Cartesian product of the two sets.
The optimizer uses Cartesian joins when it is asked to join two tables with no join conditions. In some cases, a common filter condition between the two tables could be picked up by the optimizer as a possible join condition. In other cases, the optimizer may decide to generate a Cartesian product of two very small tables that are both joined to the same large table.
An outer join extends the result of a simple join. An outer join returns all rows that satisfy the join condition and also returns some or all of those rows from one table for which no rows from the other satisfy the join condition.
The database uses this operation to loop through an outer join between two tables. The outer join returns the outer (preserved) table rows, even when no corresponding rows are in the inner (optional) table.
In a regular outer join, the optimizer chooses the order of tables (driving and driven) based on the cost. However, in a nested loop outer join, the join condition determines the order of tables. The database uses the outer table, with rows that are being preserved, to drive to the inner table.
The optimizer uses nested loop joins to process an outer join in the following circumstances:
It is possible to drive from the outer table to inner table.
Data volume is low enough to make the nested loop method efficient.
For an example of a nested loop outer join, you can add the USE_NL
hint to Example 11-9 to instruct the optimizer to use a nested loop. For example:
SELECT /*+ USE_NL(c o) */ cust_last_name, sum(nvl2(o.customer_id,0,1)) "Count"
The optimizer uses hash joins for processing an outer join if the data volume is high enough to make the hash join method efficient or if it is not possible to drive from the outer table to inner table.
The order of tables is determined by the cost. The outer table, including preserved rows, may be used to build the hash table, or it may be used to probe one.
Example 11-9 shows a typical hash join outer join query. In this example, all the customers with credit limits greater than 1000 are queried. An outer join is needed so that you do not miss the customers who do not have any orders.
Example 11-9 Hash Join Outer Joins
SELECT cust_last_name, sum(nvl2(o.customer_id,0,1)) "Count" FROM customers c, orders o WHERE c.credit_limit > 1000 AND c.customer_id = o.customer_id(+) GROUP BY cust_last_name; ----------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| ----------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 168 | 3192 | 6 (17)| | 1 | HASH GROUP BY | | 168 | 3192 | 6 (17)| |* 2 | NESTED LOOPS OUTER | | 260 | 4940 | 5 (0) | |* 3 | TABLE ACCESS FULL | CUSTOMERS | 260 | 3900 | 5 (0) | |* 4 | INDEX RANGE SCAN | ORD_CUSTOMER_IX | 105 | 420 | 0 (0) | ----------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 3 - filter("C"."CREDIT_LIMIT">1000) 4 - access("C"."CUSTOMER_ID"="0"."CUSTOMER_ID"(+)) filter("O"."CUSTOMER_ID"(+)>0)
The query looks for customers which satisfy various conditions. An outer join returns NULL
for the inner table columns along with the outer (preserved) table rows when it does not find any corresponding rows in the inner table. This operation finds all the customers
rows that do not have any orders
rows.
In this case, the outer join condition is the following:
customers.customer_id = orders.customer_id(+)
The components of this condition represent the following:
The outer table is customers
.
The inner table is orders
.
The join preserves the customers
rows, including those rows without a corresponding row in orders
.
You could use a NOT
EXISTS
subquery to return the rows. However, because you are querying all the rows in the table, the hash join performs better (unless the NOT
EXISTS
subquery is not nested).
In Example 11-10, the outer join is to a multitable view. The optimizer cannot drive into the view like in a normal join or push the predicates, so it builds the entire row set of the view.
Example 11-10 Outer Join to a Multitable View
SELECT c.cust_last_name, sum(revenue) FROM customers c, v_orders o WHERE c.credit_limit > 2000 AND o.customer_id(+) = c.customer_id GROUP BY c.cust_last_name; ---------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| ---------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 144 | 4608 | 16 (32)| | 1 | HASH GROUP BY | | 144 | 4608 | 16 (32)| |* 2 | HASH JOIN OUTER | | 663 | 21216 | 15 (27)| |* 3 | TABLE ACCESS FULL | CUSTOMERS | 195 | 2925 | 6 (17)| | 4 | VIEW | V_ORDERS | 665 | 11305 | | | 5 | HASH GROUP BY | | 665 | 15960 | 9 (34)| |* 6 | HASH JOIN | | 665 | 15960 | 8 (25)| |* 7 | TABLE ACCESS FULL| ORDERS | 105 | 840 | 4 (25)| | 8 | TABLE ACCESS FULL| ORDER_ITEMS | 665 | 10640 | 4 (25)| ---------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 2 - access("O"."CUSTOMER_ID"(+)="C"."CUSTOMER_ID") 3 - filter("C"."CREDIT_LIMIT">2000) 6 - access("O"."ORDER_ID"="L"."ORDER_ID") 7 - filter("O"."CUSTOMER_ID">0)
The view definition is as follows:
CREATE OR REPLACE view v_orders AS SELECT l.product_id, SUM(l.quantity*unit_price) revenue, o.order_id, o.customer_id FROM orders o, order_items l WHERE o.order_id = l.order_id GROUP BY l.product_id, o.order_id, o.customer_id;
When an outer join cannot drive from the outer (preserved) table to the inner (optional) table, it cannot use a hash join or nested loop joins. Then it uses the sort merge outer join for performing the join operation.
The optimizer uses sort merge for an outer join:
If a nested loop join is inefficient. A nested loop join can be inefficient because of data volumes.
The optimizer finds it is cheaper to use a sort merge over a hash join because of sorts required by other operations.
A full outer join acts like a combination of the left and right outer joins. In addition to the inner join, rows from both tables that have not been returned in the result of the inner join are preserved and extended with nulls. In other words, full outer joins let you join tables together, yet still show rows that do not have corresponding rows in the joined tables.
The query in Example 11-11 retrieves all departments and all employees in each department, but also includes:
Any employees without departments
Any departments without employees
Example 11-11 Full Outer Join
SELECT d.department_id, e.employee_id FROM employees e FULL OUTER JOIN departments d ON e.department_id = d.department_id ORDER BY d.department_id;
The statement produces the following output:
DEPARTMENT_ID EMPLOYEE_ID ------------- ----------- 10 200 20 201 20 202 30 114 30 115 30 116 ... 270 280 178 207 125 rows selected.
Starting with Oracle Database 11g, Oracle Database automatically uses a native execution method based on a hash join for executing full outer joins whenever possible. When the database uses the new method to execute a full outer join, the execution plan for the query contains HASH
JOIN
FULL
OUTER
. Example 11-12 shows the execution plan for the query in Example 11-11.
Example 11-12 Execution Plan for a Full Outer Join
---------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ---------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 122 | 4758 | 6 (34)| 00:0 0:01 | | 1 | SORT ORDER BY | | 122 | 4758 | 6 (34)| 00:0 0:01 | | 2 | VIEW | VW_FOJ_0 | 122 | 4758 | 5 (20)| 00:0 0:01 | |* 3 | HASH JOIN FULL OUTER | | 122 | 1342 | 5 (20)| 00:0 0:01 | | 4 | INDEX FAST FULL SCAN| DEPT_ID_PK | 27 | 108 | 2 (0)| 00:0 0:01 | | 5 | TABLE ACCESS FULL | EMPLOYEES | 107 | 749 | 2 (0)| 00:0 0:01 | ---------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 3 - access("E"."DEPARTMENT_ID"="D"."DEPARTMENT_ID")
Notice that HASH
JOIN
FULL
OUTER
is included in the plan. Therefore, the query uses the hash full outer join execution method. Typically, when the full outer join condition between two tables is an equi-join, the hash full outer join execution method is possible, and Oracle Database uses it automatically.
To instruct the optimizer to consider using the hash full outer join execution method, apply the NATIVE_FULL_OUTER_JOIN
hint. To instruct the optimizer not to consider using the hash full outer join execution method, apply the NO_NATIVE_FULL_OUTER_JOIN
hint. The NO_NATIVE_FULL_OUTER_JOIN
hint instructs the optimizer to exclude the native execution method when joining each specified table. Instead, the full outer join is executed as a union of left outer join and an anti-join.
To execute a SQL statement, Oracle Database may need to perform many steps. Each step either retrieves rows of data physically from the database or prepares them in some way for the user issuing the statement. The combination of the steps that Oracle Database uses to execute a statement is an execution plan. An execution plan includes an access path for each table that the statement accesses and an ordering of the tables (the join order) with the appropriate join method.
You can examine the execution plan chosen by the optimizer for a SQL statement by using the EXPLAIN PLAN
statement. When the statement is issued, the optimizer chooses an execution plan and then inserts data describing the plan into a database table. Simply issue the EXPLAIN PLAN
statement and then query the output table.
These are the basics of using the EXPLAIN PLAN
statement:
Use the SQL script CATPLAN
.SQL
to create a sample output table called PLAN_TABLE
in your schema. See "The PLAN_TABLE Output Table".
Include the EXPLAIN PLAN
FOR
clause before the SQL statement. See "Running EXPLAIN PLAN".
After issuing the
EXPLAIN PLAN
statement, use one of the scripts or package provided by Oracle Database to display the most recent plan table output. See "Displaying PLAN_TABLE Output".
The execution order in
EXPLAIN PLAN
output begins with the line that is the furthest indented to the right. The next step is the parent of that line. If two lines are indented equally, then the top line is normally executed first.
Example 11-13 uses EXPLAIN PLAN
to examine a SQL statement that selects the employee_id
, job_title
, salary
, and department_name
for the employees whose IDs are less than 103.
Example 11-13 Using EXPLAIN PLAN
EXPLAIN PLAN FOR SELECT e.employee_id, j.job_title, e.salary, d.department_name FROM employees e, jobs j, departments d WHERE e.employee_id < 103 AND e.job_id = j.job_id AND e.department_id = d.department_id;
The resulting output table in Example 11-14 shows the execution plan chosen by the optimizer to execute the SQL statement in the example:
Example 11-14 EXPLAIN PLAN Output
----------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| ----------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 3 | 189 | 10 (10)| | 1 | NESTED LOOPS | | 3 | 189 | 10 (10)| | 2 | NESTED LOOPS | | 3 | 141 | 7 (15)| |* 3 | TABLE ACCESS FULL | EMPLOYEES | 3 | 60 | 4 (25)| | 4 | TABLE ACCESS BY INDEX ROWID| JOBS | 19 | 513 | 2 (50)| |* 5 | INDEX UNIQUE SCAN | JOB_ID_PK | 1 | | | | 6 | TABLE ACCESS BY INDEX ROWID | DEPARTMENTS | 27 | 432 | 2 (50)| |* 7 | INDEX UNIQUE SCAN | DEPT_ID_PK | 1 | | | ----------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 3 - filter("E"."EMPLOYEE_ID"<103) 5 - access("E"."JOB_ID"="J"."JOB_ID") 7 - access("E"."DEPARTMENT_ID"="D"."DEPARTMENT_ID" ------------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 3 | 189 | 8 (13)| 00:00:01 | | 1 | NESTED LOOPS | | | | | | | 2 | NESTED LOOPS cP | | 3 | 189 | 8 (13)| 00:00:01 | | 3 | MERGE JOIN | | 3 | 141 | 5 (20)| 00:00:01 | | 4 | TABLE ACCESS BY INDEX ROWID | JOBS | 19 | 513 | 2 (0)| 00:00:01 | | 5 | INDEX FULL SCAN | JOB_ID_PK | 19 | | 1 (0)| 00:00:01 | |* 6 | SORT JOIN | | 3 | 60 | 3 (34)| 00:00:01 | | 7 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 3 | 60 | 2 (0)| 00:00:01 | |* 8 | INDEX RANGE SCAN | EMP_EMP_ID_PK | 3 | | 1 (0)| 00:00:01 | |* 9 | INDEX UNIQUE SCAN | DEPT_ID_PK | 1 | | 0 (0)| 00:00:01 | | 10 | TABLE ACCESS BY INDEX ROWID | DEPARTMENTS | 1 | 16 | 1 (0)| 00:00:01 | ------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 6 - access("E"."JOB_ID"="J"."JOB_ID") filter("E"."JOB_ID"="J"."JOB_ID") 8 - access("E"."EMPLOYEE_ID"<103) 9 - access("E"."DEPARTMENT_ID"="D"."DEPARTMENT_ID")
Each row in the output table corresponds to a single step in the execution plan. Note that the step IDs with asterisks are listed in the Predicate Information section.
Each step of the execution plan returns a set of rows. The next step either uses these rows or, in the last step, returns the rows to the user or application issuing the SQL statement. A row set is a set of rows returned by a step.
The numbering of the step IDs reflects the order in which they are displayed in response to the EXPLAIN
PLAN
statement. Each step of the execution plan either retrieves rows from the database or accepts rows from one or more row sources as input.
The following steps in Example 11-14 physically retrieve data from an object in the database:
Step 3 reads all rows of the employees
table.
Step 5 looks up each job_id
in JOB_ID_PK
index and finds the rowids of the associated rows in the jobs
table.
Step 4 retrieves the rows with rowids that were returned by Step 5 from the jobs
table.
Step 7 looks up each department_id
in DEPT_ID_PK
index and finds the rowids of the associated rows in the departments
table.
Step 6 retrieves the rows with rowids that were returned by Step 7 from the departments
table.
The following steps in Example 11-14 operate on rows returned by the previous row source:
Step 2 performs the nested loop operation on job_id
in the jobs
and employees
tables, accepting row sources from Steps 3 and 4, joining each row from Step 3 source to its corresponding row in Step 4, and returning the resulting rows to Step 2.
Step 1 performs the nested loop operation, accepting row sources from Step 2 and Step 6, joining each row from Step 2 source to its corresponding row in Step 6, and returning the resulting rows to Step 1.
See Also:
|
Table 11-3 lists initialization parameters that you can use to control the behavior of the query optimizer. You can use these parameters to enable various optimizer features to improve the performance of SQL execution.
Table 11-3 Initialization Parameters That Control Optimizer Behavior
Initialization Parameter | Description |
---|---|
|
Converts literal values in SQL statements to bind variables. Converting the values improves cursor sharing and can affect the execution plans of SQL statements. The optimizer generates the execution plan based on the presence of the bind variables and not the actual literal values. |
|
Specifies the number of blocks that are read in a single I/O during a full table scan or index fast full scan. The optimizer uses the value of |
|
Controls the costing of an index probe in conjunction with a nested loop. The range of values |
|
Adjusts the cost of index probes. The range of values is 1 to 10000. The default value is 100, which means that indexes are evaluated as an access path based on the normal costing model. A value of 10 means that the cost of an index access path is one-tenth the normal cost of an index access path. |
|
Sets the mode of the optimizer at instance startup. The possible values are |
|
Controls the amount of memory allocated for sorts and hash joins. Larger amounts of memory allocated for sorts or hash joins reduce the optimizer cost of these operations. |
|
Enables the optimizer to cost a star transformation for star queries (if |
The OPTIMIZER_FEATURES_ENABLE
initialization parameter enables a series of optimizer-related features, depending on the release. It accepts one of a list of valid string values corresponding to the release numbers, such as 10.2.0.1
or 11.2.0.1
.
You can use this parameter to preserve the old behavior of the optimizer after a database upgrade. For example, if you upgrade the Oracle Database 11g from Release 1 (11.1.0.7) to Release 2 (11.2.0.2), then the default value of the OPTIMIZER_FEATURES_ENABLE
parameter changes from 11.1.0.7
to 11.2.0.2
. This upgrade results in the optimizer enabling optimization features based on 11.2.0.2.
For backward compatibility, you might not want the query plans to change because of new optimizer features in a new release. In such a case, you can set the OPTIMIZER_FEATURES_ENABLE
parameter to an earlier version.
Note: Oracle does not recommend explicitly setting theOPTIMIZER_FEATURES_ENABLE parameter to an earlier release. To avoid possible SQL performance regression that may result from execution plan changes, consider using SQL plan management instead. See Chapter 15, "Using SQL Plan Management." |
To set OPTIMIZER_FEATURES_ENABLE:
Query the current optimizer features settings.
For example, run the following SQL*Plus command:
SQL> SHOW PARAMETER optimizer_features_enable NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ optimizer_features_enable string 11.2.0.2
Set the optimizer features setting at the instance or session level.
For example, run the following SQL statement to set the optimizer version to 10.2.0.5:
SQL> ALTER SYSTEM SET optimizer_features_enable='10.2.0.5';
The preceding statement disables all new optimizer features that were added in releases following release 10.2.0.5. If you upgrade to a new release and you want to enable the features available with that release, then you do not need to explicitly set the OPTIMIZER_FEATURES_ENABLE
initialization parameter.
See Also: Oracle Database Reference for information about optimizer features that are enabled when you set theOPTIMIZER_FEATURES_ENABLE parameter to each of the release values |
You can influence the optimizer's choices by setting the optimizer goal and by gathering representative statistics for the query optimizer. You can set the following optimizer goals:
Best throughput (default)
The database uses the least amount of resources necessary to process all rows accessed by the statement.
For applications performed in batch, such as Oracle Reports applications, optimize for best throughput. Usually, throughput is more important in batch applications, because the user initiating the application is only concerned with the time necessary for the application to complete. Response time is less important because the user does not examine the results of individual statements while the application is running.
Best response time
The database uses the least amount of resources necessary to process the first row accessed by a SQL statement.
For interactive applications such as Oracle Forms applications or SQL*Plus queries, optimize for best response time. Usually, response time is important in interactive applications because the interactive user is waiting to see the first row or first few rows accessed by the statement.
The optimizer behavior when choosing an optimization approach and goal for a SQL statement is affected by the following factors:
The OPTIMIZER_MODE
initialization parameter establishes the default behavior for choosing an optimization approach for the instance. Table 11-4 lists the possible values and description.
Table 11-4 OPTIMIZER_MODE Initialization Parameter Values
You can change the goal of the query optimizer for all SQL statements in a session by changing the parameter value in initialization file or by the ALTER
SESSION
SET
OPTIMIZER_MODE
statement. For example:
The following statement in an initialization parameter file establishes the goal of the query optimizer for all sessions of the instance to best response time:
OPTIMIZER_MODE = FIRST_ROWS_1
The following SQL statement changes the goal of the query optimizer for the current session to best response time:
ALTER SESSION SET OPTIMIZER_MODE = FIRST_ROWS_1;
If the optimizer uses the cost-based approach for a SQL statement, and if some tables accessed by the statement have no statistics, then the optimizer uses internal information, such as the number of data blocks allocated to these tables, to estimate other statistics for these tables.
To specify the goal of the optimizer for an individual SQL statement, use a hint from Table 11-5. Any of these hints in an individual SQL statement can override the OPTIMIZER_MODE
initialization parameter for that SQL statement.
Table 11-5 Hints for Changing the Query Optimizer Goal
Hint | Description |
---|---|
This hint instructs Oracle Database to optimize an individual SQL statement with a goal of best response time to return the first n number of rows, where n equals any positive integer. The hint uses a cost-based approach for the SQL statement, regardless of the presence of statistic. | |
This hint explicitly chooses the cost-based approach to optimize a SQL statement with a goal of best throughput. |
The statistics used by the query optimizer are stored in the data dictionary. You can use the DBMS_STATS
package to collect exact or estimated statistics about physical storage characteristics and data distribution in these schema objects.
To maintain the effectiveness of the query optimizer, you must have statistics that are representative of the data. For table columns that contain values with large variations in number of duplicates, called skewed data, you should collect histograms.
The resulting statistics provide the query optimizer with information about data uniqueness and distribution. Using this information, the query optimizer is able to compute plan costs with a high degree of accuracy and choose the best execution plan based on the least cost.
If no statistics are available when using query optimization, then the optimizer performs dynamic sampling depending on the setting of the OPTIMIZER_DYNAMIC_SAMPLING
initialization parameter. This sampling may cause slower parse times so for best performance, the optimizer should have representative optimizer statistics.
This chapter describes Oracle Database automatic features for performance diagnosing and tuning.
This chapter contains the following topics:
See Also: Oracle Database 2 Day + Performance Tuning Guide for information about using Oracle Enterprise Manager to diagnose and tune the database with the Automatic Database Diagnostic Monitor |
When problems occur with a system, it is important to perform accurate and timely diagnosis of the problem before making any changes to a system. Oftentimes, a database administrator (DBA) simply looks at the symptoms and immediately starts changing the system to fix those symptoms. However, an accurate diagnosis of the actual problem in the initial stage significantly increases the probability of success in resolving the problem.
With Oracle Database, the statistical data needed for accurate diagnosis of a problem is stored in the Automatic Workload Repository (AWR). The Automatic Database Diagnostic Monitor (ADDM):
Analyzes the AWR data on a regular basis
Diagnoses the root causes of performance problems
Provides recommendations for correcting any problems
Identifies non-problem areas of the system
Because AWR is a repository of historical performance data, ADDM can analyze performance issues after the event, often saving time and resources in reproducing a problem. For information about the AWR, see "Overview of the Automatic Workload Repository".
In most cases, ADDM output should be the first place that a DBA looks when notified of a performance problem. ADDM provides the following benefits:
Automatic performance diagnostic report every hour by default
Problem diagnosis based on decades of tuning expertise
Time-based quantification of problem impacts and recommendation benefits
Identification of root cause, not symptoms
Recommendations for treating the root causes of problems
Identification of non-problem areas of the system
Minimal overhead to the system during the diagnostic process
It is important to realize that tuning is an iterative process, and fixing one problem can cause the bottleneck to shift to another part of the system. Even with the benefit of ADDM analysis, it can take multiple tuning cycles to reach acceptable system performance. ADDM benefits apply beyond production systems; on development and test systems, ADDM can provide an early warning of performance issues.
This section contains the following topics:
An ADDM analysis can be performed on a pair of AWR snapshots and a set of instances from the same database. The pair of AWR snapshots define the time period for analysis, and the set of instances define the target for analysis.
If you are using Oracle Real Application Clusters (Oracle RAC), ADDM has three analysis modes:
Database
In Database mode, ADDM analyzes all instances of the database.
Instance
In Instance mode, ADDM analyzes a particular instance of the database.
Partial
In Partial mode, ADDM analyzes a subset of all database instances.
If you are not using Oracle RAC, ADDM can only function in Instance mode because there is only one instance of the database.
An ADDM analysis is performed each time an AWR snapshot is taken and the results are saved in the database. The time period analyzed by ADDM is defined by the last two snapshots (the last hour by default). ADDM will always analyze the specified instance in Instance mode. For non-Oracle RAC or single instance environments, the analysis performed in the Instance mode is the same as a database-wide analysis. If you are using Oracle RAC, ADDM will also analyze the entire database in Database mode, as described in "Using ADDM with Oracle Real Application Clusters". After an ADDM completes its analysis, you can view the results using Oracle Enterprise Manager, or by viewing a report in a SQL*Plus session.
ADDM analysis is performed top down, first identifying symptoms, and then refining them to reach the root causes of performance problems. The goal of the analysis is to reduce a single throughput metric called DB
time
. DB
time
is the cumulative time spent by the database in processing user requests. It includes wait time and CPU time of all non-idle user sessions. DB
time
is displayed in the V$SESS_TIME_MODEL
and V$SYS_TIME_MODEL
views.
See Also:
|
By reducing DB
time
, the database is able to support more user requests using the same resources, which increases throughput. The problems reported by ADDM are sorted by the amount of DB
time
they are responsible for. System areas that are not responsible for a significant portion of DB
time
are reported as non-problem areas.
The types of problems that ADDM considers include the following:
CPU bottlenecks - Is the system CPU bound by Oracle Database or some other application?
Undersized Memory Structures - Are the Oracle Database memory structures, such as the SGA, PGA, and buffer cache, adequately sized?
I/O capacity issues - Is the I/O subsystem performing as expected?
High load SQL statements - Are there any SQL statements which are consuming excessive system resources?
High load PL/SQL execution and compilation, and high-load Java usage
Oracle RAC specific issues - What are the global cache hot blocks and objects; are there any interconnect latency issues?
Sub-optimal use of Oracle Database by the application - Are there problems with poor connection management, excessive parsing, or application level lock contention?
Database configuration issues - Is there evidence of incorrect sizing of log files, archiving issues, excessive checkpoints, or sub-optimal parameter settings?
Concurrency issues - Are there buffer busy problems?
Hot objects and top SQL for various problem areas
Note: This is not a comprehensive list of all problem types that ADDM considers in its analysis. |
ADDM also documents the non-problem areas of the system. For example, wait event classes that are not significantly impacting the performance of the system are identified and removed from the tuning consideration at an early stage, saving time and effort that would be spent on items that do not impact overall system performance.
If you are using Oracle RAC, you can run ADDM in Database analysis mode to analyze the throughput performance of all instances of the database. In Database mode, ADDM considers DB time as the sum of the database time for all database instances. Using the Database analysis mode enables you to view all findings that are significant to the entire database in a single report, instead of reviewing a separate report for each instance.
The Database mode report includes findings about database resources (such as I/O and interconnect). The report also aggregates findings from the various instances if they are significant to the entire database. For example, if the CPU load on a single instance is high enough to affect the entire database, the finding will appear in the Database mode analysis, which will point to the particular instance responsible for the problem.
See Also: Oracle Database 2 Day + Real Application Clusters Guide for information about using ADDM with Oracle RAC |
In addition to problem diagnostics, ADDM recommends possible solutions. ADDM analysis results are represented as a set of findings. See Example 6-1 for an example of ADDM analysis result. Each ADDM finding can belong to one of the following types:
Problem findings describe the root cause of a database performance problem.
Symptom findings contain information that often lead to one or more problem findings.
Information findings are used for reporting information that are relevant to understanding the performance of the database, but do not constitute a performance problem (such as non-problem areas of the database and the activity of automatic database maintenance).
Warning findings contain information about problems that may affect the completeness or accuracy of the ADDM analysis (such as missing data in the AWR).
Each problem finding is quantified by an impact that is an estimate of the portion of DB
time
caused by the finding's performance issue. A problem finding can be associated with a list of recommendations for reducing the impact of the performance problem. The types of recommendations include:
Hardware changes: adding CPUs or changing the I/O subsystem configuration
Database configuration: changing initialization parameter settings
Schema changes: hash partitioning a table or index, or using automatic segment-space management (ASSM)
Application changes: using the cache option for sequences or using bind variables
Using other advisors: running SQL Tuning Advisor on high-load SQL or running the Segment Advisor on hot objects
A list of recommendations can contain various alternatives for solving the same problem; you do not have to apply all the recommendations to solve a specific problem. Each recommendation has a benefit which is an estimate of the portion of DB
time
that can be saved if the recommendation is implemented. Recommendations are composed of actions and rationales. You must apply all the actions of a recommendation to gain the estimated benefit. The rationales are used for explaining why the set of actions were recommended and to provide additional information to implement the suggested recommendation.
Consider the following section of an ADDM report in Example 6-1.
Example 6-1 Example ADDM Report
FINDING 1: 31% impact (7798 seconds) ------------------------------------ SQL statements were not shared due to the usage of literals. This resulted in additional hard parses which were consuming significant database time. RECOMMENDATION 1: Application Analysis, 31% benefit (7798 seconds) ACTION: Investigate application logic for possible use of bind variables instead of literals. Alternatively, you may set the parameter "cursor_sharing" to "force". RATIONALE: SQL statements with PLAN_HASH_VALUE 3106087033 were found to be using literals. Look in V$SQL for examples of such SQL statements.
In Example 6-1, the finding points to a particular root cause, the usage of literals in SQL statements, which is estimated to have an impact of about 31% of total DB
time
in the analysis period.
The finding has a recommendation associated with it, composed of one action and one rationale. The action specifies a solution to the problem found and is estimated to have a maximum benefit of up to 31% DB
time
in the analysis period. Note that the benefit is given as a portion of the total DB
time
and not as a portion of the finding's impact. The rationale provides additional information on tracking potential SQL statements that were using literals and causing this performance issue. Using the specified plan hash value of SQL statements that could be a problem, a DBA could quickly examine a few sample statements.
When a specific problem has multiple causes, the ADDM may report multiple problem and symptom findings. In this case, the impacts of these multiple findings can contain the same portion of DB
time
. Because the performance issues of findings can overlap, the sum of the impacts of the findings can exceed 100% of DB
time
. For example, if a system performs many reads, then ADDM might report a SQL statement responsible for 50% of DB
time
due to I/O activity as one finding, and an undersized buffer cache responsible for 75% of DB
time
as another finding.
When multiple recommendations are associated with a problem finding, the recommendations may contain alternatives for solving the problem. In this case, the sum of the recommendations' benefits may be higher than the finding's impact.
When appropriate, an ADDM action may have multiple solutions for you to choose from. In the example, the most effective solution is to use bind variables. However, it is often difficult to modify the application. Changing the value of the CURSOR_SHARING
initialization parameter is much easier to implement and can provide significant improvement.
Automatic database diagnostic monitoring is enabled by default and is controlled by the CONTROL_MANAGEMENT_PACK_ACCESS
and the STATISTICS_LEVEL
initialization parameters.
The CONTROL_MANAGEMENT_PACK_ACCESS
parameter should be set to DIAGNOSTIC
or DIAGNOSTIC+TUNING
to enable automatic database diagnostic monitoring. The default setting is DIAGNOSTIC+TUNING
. Setting CONTROL_MANAGEMENT_PACK_ACCESS
to NONE
disables ADDM.
The STATISTICS_LEVEL
parameter should be set to the TYPICAL
or ALL
to enable automatic database diagnostic monitoring. The default setting is TYPICAL
. Setting STATISTICS_LEVEL
to BASIC
disables many Oracle Database features, including ADDM, and is strongly discouraged.
See Also: Oracle Database Reference for information about theCONTROL_MANAGEMENT_PACK_ACCESS and STATISTICS_LEVEL initialization parameters |
ADDM analysis of I/O performance partially depends on a single argument, DBIO_EXPECTED
, that describes the expected performance of the I/O subsystem. The value of DBIO_EXPECTED
is the average time it takes to read a single database block in microseconds. Oracle Database uses the default value of 10 milliseconds, which is an appropriate value for most modern hard drives. If your hardware is significantly different, such as very old hardware or very fast RAM disks, consider using a different value.
To determine the correct setting for DBIO_EXPECTED
parameter:
Measure the average read time of a single database block read for your hardware. Note that this measurement is for random I/O, which includes seek time if you use standard hard drives. Typical values for hard drives are between 5000 and 20000 microseconds.
Set the value one time for all subsequent ADDM executions. For example, if the measured value if 8000 microseconds, you should execute the following command as SYS user:
EXECUTE DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER( 'ADDM', 'DBIO_EXPECTED', 8000);
To diagnose database performance problems, first review the ADDM analysis results that are automatically created each time an AWR snapshot is taken. If a different analysis is required (such as a longer analysis period, using a different DBIO_EXPECTED
setting, or changing the analysis mode), you can run ADDM manually as described in this section.
ADDM can analyze any two AWR snapshots (on the same database), as long as both snapshots are still stored in the AWR (have not been purged). ADDM can only analyze instances that are started before the beginning snapshot and remain running until the ending snapshot. Additionally, ADDM will not analyze instances that experience significant errors when generating the AWR snapshots. In such cases, ADDM will analyze the largest subset of instances that did not experience these problems.
The primary interface for diagnostic monitoring is Oracle Enterprise Manager. Whenever possible, you should run ADDM using Oracle Enterprise Manager, as described in Oracle Database 2 Day + Performance Tuning Guide. If Oracle Enterprise Manager is unavailable, you can run ADDM using the DBMS_ADDM
package. In order to run the DBMS_ADDM
APIs, the user must be granted the ADVISOR
privilege.
This section contains the following topics:
See Also: Oracle Database PL/SQL Packages and Types Reference for information about theDBMS_ADDM package |
For Oracle RAC configurations, you can run ADDM in Database mode to analyze all instances of the databases. For single-instance configurations, you can still run ADDM in Database mode; ADDM will simply behave as if running in Instance mode.
To run ADDM in Database mode, use the DBMS_ADDM
.ANALYZE_DB
procedure:
BEGIN DBMS_ADDM.ANALYZE_DB ( task_name IN OUT VARCHAR2, begin_snapshot IN NUMBER, end_snapshot IN NUMBER, db_id IN NUMBER := NULL); END; /
The task_name
parameter specifies the name of the analysis task that will be created. The begin_snapshot
parameter specifies the snapshot number of the beginning snapshot in the analysis period. The end_snapshot
parameter specifies the snapshot number of the ending snapshot in the analysis period. The db_id
parameter specifies the database identifier of the database that will be analyzed. If unspecified, this parameter defaults to the database identifier of the database to which you are currently connected.
The following example creates an ADDM task in database analysis mode, and executes it to diagnose the performance of the entire database during the time period defined by snapshots 137 and 145:
VAR tname VARCHAR2(30); BEGIN :tname := 'ADDM for 7PM to 9PM'; DBMS_ADDM.ANALYZE_DB(:tname, 137, 145); END; /
To analyze a particular instance of the database, you can run ADDM in Instance mode. To run ADDM in Instance mode, use the DBMS_ADDM
.ANALYZE_INST
procedure:
BEGIN DBMS_ADDM.ANALYZE_INST ( task_name IN OUT VARCHAR2, begin_snapshot IN NUMBER, end_snapshot IN NUMBER, instance_number IN NUMBER := NULL, db_id IN NUMBER := NULL); END; /
The task_name
parameter specifies the name of the analysis task that will be created. The begin_snapshot
parameter specifies the snapshot number of the beginning snapshot in the analysis period. The end_snapshot
parameter specifies the snapshot number of the ending snapshot in the analysis period. The instance_number
parameter specifies the instance number of the instance that will be analyzed. If unspecified, this parameter defaults to the instance number of the instance to which you are currently connected. The db_id
parameter specifies the database identifier of the database that will be analyzed. If unspecified, this parameter defaults to the database identifier of the database to which you are currently connected.
The following example creates an ADDM task in instance analysis mode, and executes it to diagnose the performance of instance number 1 during the time period defined by snapshots 137 and 145:
VAR tname VARCHAR2(30); BEGIN :tname := 'my ADDM for 7PM to 9PM'; DBMS_ADDM.ANALYZE_INST(:tname, 137, 145, 1); END; /
To analyze a subset of all database instances, you can run ADDM in Partial mode. To run ADDM in Partial mode, use the DBMS_ADDM
.ANALYZE_PARTIAL
procedure:
BEGIN DBMS_ADDM.ANALYZE_PARTIAL ( task_name IN OUT VARCHAR2, instance_numbers IN VARCHAR2, begin_snapshot IN NUMBER, end_snapshot IN NUMBER, db_id IN NUMBER := NULL); END; /
The task_name
parameter specifies the name of the analysis task that will be created. The instance_numbers
parameter specifies a comma-delimited list of instance numbers of instances that will be analyzed. The begin_snapshot
parameter specifies the snapshot number of the beginning snapshot in the analysis period. The end_snapshot
parameter specifies the snapshot number of the ending snapshot in the analysis period. The db_id
parameter specifies the database identifier of the database that will be analyzed. If unspecified, this parameter defaults to the database identifier of the database to which you are currently connected.
The following example creates an ADDM task in partial analysis mode, and executes it to diagnose the performance of instance numbers 1, 2, and 4, during the time period defined by snapshots 137 and 145:
VAR tname VARCHAR2(30); BEGIN :tname := 'my ADDM for 7PM to 9PM'; DBMS_ADDM.ANALYZE_PARTIAL(:tname, '1,2,4', 137, 145); END; /
To display a text report of an executed ADDM task, use the DBMS_ADDM
.GET_REPORT
function:
DBMS_ADDM.GET_REPORT ( task_name IN VARCHAR2 RETURN CLOB);
The following example displays a text report of the ADDM task specified by its task name using the tname
variable:
SET LONG 1000000 PAGESIZE 0; SELECT DBMS_ADDM.GET_REPORT(:tname) FROM DUAL;
Note that the return type of a report is a CLOB
, formatted to fit line size of 80. For information about reviewing the ADDM analysis results in an ADDM report, see "ADDM Analysis Results".
Typically, you should view output and information from ADDM using Oracle Enterprise Manager or ADDM reports.
However, you can display ADDM information through the DBA_ADVISOR
views. This group of views includes:
DBA_ADVISOR_FINDINGS
This view displays all the findings discovered by all advisors. Each finding is displayed with an associated finding ID, name, and type. For tasks with multiple executions, the name of each task execution associated with each finding is also listed.
DBA_ADDM_FINDINGS
This view contains a subset of the findings displayed in the related DBA_ADVISOR_FINDINGS
view. This view only displays the ADDM findings discovered by all advisors. Each ADDM finding is displayed with an associated finding ID, name, and type.
DBA_ADVISOR_FINDING_NAMES
List of all finding names registered with the advisor framework.
DBA_ADVISOR_RECOMMENDATIONS
This view displays the results of completed diagnostic tasks with recommendations for the problems identified in each execution. The recommendations should be reviewed in the order of the RANK
column, as this relays the magnitude of the problem for the recommendation. The BENEFIT
column displays the benefit to the system you can expect after the recommendation is performed. For tasks with multiple executions, the name of each task execution associated with each advisor task is also listed.
DBA_ADVISOR_TASKS
This view provides basic information about existing tasks, such as the task ID, task name, and when the task was created. For tasks with multiple executions, the name and type of the last or current execution associated with each advisor task is also listed.
See Also: Oracle Database Reference for information about static data dictionary views |
This chapter discusses the automatic SQL tuning features of Oracle Database. Automatic SQL tuning automates the manual process, which is complex, repetitive, and time-consuming.
This chapter contains the following sections:
See Also: Oracle Database 2 Day + Performance Tuning Guide for information about using the automatic SQL tuning features with Oracle Enterprise Manager (Enterprise Manager) |
Oracle Database uses the optimizer to generate the execution plans for submitted SQL statements. The optimizer operates in the following modes:
Normal mode
The optimizer compiles the SQL and generates an execution plan. The normal mode generates a reasonable plan for the vast majority of SQL statements. Under normal mode, the optimizer operates with very strict time constraints, usually a fraction of a second.
Tuning mode
The optimizer performs additional analysis to check whether it can further improve the plan produced in normal mode. The optimizer output is not an execution plan, but a series of actions, along with their rationale and expected benefit for producing a significantly better plan. When running in tuning mode, the optimizer is known as the Automatic Tuning Optimizer.
Under tuning mode, the optimizer can take several minutes to tune a single statement. It is both time and resource intensive to invoke Automatic Tuning Optimizer every time a query must be hard-parsed. Automatic Tuning Optimizer is meant for complex and high-load SQL statements that have nontrivial impact on the database.
Automatic Database Diagnostic Monitor (ADDM) proactively identifies high-load SQL statements that are good candidates for SQL tuning (see Chapter 6, "Automatic Performance Diagnostics"). The automatic SQL tuning feature also automatically identifies problematic SQL statements and implements tuning recommendations during system maintenance windows as an automated maintenance task.
The Automatic Tuning Optimizer performs the following types of tuning analysis:
The optimizer relies on object statistics to generate execution plans. If these statistics are stale or missing, then the optimizer does not have the necessary information it needs and can generate poor execution plans. The Automatic Tuning Optimizer checks each query object for missing or stale statistics, and produces two types of output:
Recommendations to gather relevant statistics for objects with stale or no statistics
Because optimizer statistics are automatically collected and refreshed, this problem occurs only when automatic optimizer statistics collection is disabled. See "Managing Automatic Optimizer Statistics Collection".
Auxiliary statistics for objects with no statistics, and statistic adjustment factor for objects with stale statistics
The database stores this auxiliary information in an object called a SQL profile.
A SQL profile is a set of auxiliary information specific to a SQL statement. Conceptually, a SQL profile is to a SQL statement what statistics are to a table or index. The database can use the auxiliary information to improve execution plans.
An access path is the means by which data is retrieved from a database. For example, a query using an index and a query using a full table scan use different access paths.
Indexes can tremendously enhance performance of a SQL statement by reducing the need for full scans of large tables. Effective indexing is a common tuning technique. Automatic Tuning Optimizer explores whether a new index can significantly enhance query performance. If so, then the advisor recommends index creation.
Because the Automatic Tuning Optimizer does not analyze how its index recommendation can affect the entire SQL workload, it also recommends running SQL Access Advisor on the SQL statement along with a representative SQL workload. SQL Access Advisor looks at the impact of creating an index on the entire SQL workload before making recommendations. See "Automatic SQL Tuning Features".
Automatic Tuning Optimizer identifies common problems with the structure of SQL statements that can lead to poor performance. These could be syntactic, semantic, or design problems. In each case, Automatic Tuning Optimizer makes relevant suggestions to restructure the statements. The suggested alternative is similar, but not equivalent, to the original statement.
For example, the optimizer may suggest replacing the UNION
operator with UNION
ALL
or NOT
IN
with NOT
EXISTS
. You can then determine if the advice is applicable to your situation. For example, if the schema design is such that duplicates are not possible, then the UNION
ALL
operator is much more efficient than the UNION
operator. These changes require a good understanding of the data properties and should be implemented only after careful consideration.
While tuning a SQL statement, SQL Tuning Advisor searches real-time and historical performance data for alternative execution plans for the statement. If plans other than the original plan exist, then SQL Tuning Advisor reports an alternative plan finding.
SQL Tuning Advisor validates the alternative execution plans and notes any plans that are not reproducible. When reproducible alternative plans are found, you can create a SQL plan baseline to instruct the optimizer to choose these plans in the future.
Example 17-1 shows an alternative plan finding for a SELECT
statement.
Example 17-1 Alternative Plan Finding
2- Alternative Plan Finding --------------------------- Some alternative execution plans for this statement were found by searching the system's real-time and historical performance data. The following table lists these plans ranked by their average elapsed time. See section "ALTERNATIVE PLANS SECTION" for detailed information on each plan. id plan hash last seen elapsed (s) origin note -- ---------- -------------------- ------------ --------------- ---------------- 1 1378942017 2009-02-05/23:12:08 0.000 Cursor Cache original plan 2 2842999589 2009-02-05/23:12:08 0.002 STS Information ----------- - The Original Plan appears to have the best performance, based on the elapsed time per execution. However, if you know that one alternative plan is better than the Original Plan, you can create a SQL plan baseline for it. This will instruct the Oracle optimizer to pick it over any other choices in the future. execute dbms_sqltune.create_sql_plan_baseline(task_name => 'TASK_XXXXX', object_id => 2, task_owner => 'SYS', plan_hash => xxxxxxxx);
Example 17-1 shows that SQL Tuning Advisor found two plans, one in the shared SQL area and one in a SQL tuning set. The plan in the shared SQL area is the same as the original plan.
SQL Tuning Advisor only recommends an alternative plan if the elapsed time of the original plan is worse than alternative plans. In this case, SQL Tuning Advisor recommends that users create a SQL plan baseline on the plan with the best performance. In Example 17-1, the alternative plan did not perform as well as the original plan, so SQL Tuning Advisor did not recommend using the alternative plan.
In Example 17-2, the alternative plans section of the SQL Tuning Advisor output includes both the original and alternative plans and summarizes their performance. The most important statistic is elapsed time. The original plan used an index, whereas the alternative plan used a full table scan, increasing elapsed time by .002 seconds.
Example 17-2 Alternative Plans Section
Plan 1 ------ Plan Origin :Cursor Cache Plan Hash Value :1378942017 Executions :50 Elapsed Time :0.000 sec CPU Time :0.000 sec Buffer Gets :0 Disk Reads :0 Disk Writes :0 Notes: 1. Statistics shown are averaged over multiple executions. 2. The plan matches the original plan. -------------------------------------------- | Id | Operation | Name | -------------------------------------------- | 0 | SELECT STATEMENT | | | 1 | SORT AGGREGATE | | | 2 | MERGE JOIN | | | 3 | INDEX FULL SCAN | TEST1_INDEX | | 4 | SORT JOIN | | | 5 | TABLE ACCESS FULL| TEST | -------------------------------------------- Plan 2 ------ Plan Origin :STS Plan Hash Value :2842999589 Executions :10 Elapsed Time :0.002 sec CPU Time :0.002 sec Buffer Gets :3 Disk Reads :0 Disk Writes :0 Notes: 1. Statistics shown are averaged over multiple executions. ------------------------------------- | Id | Operation | Name | ------------------------------------- | 0 | SELECT STATEMENT | | | 1 | SORT AGGREGATE | | | 2 | HASH JOIN | | | 3 | TABLE ACCESS FULL| TEST | | 4 | TABLE ACCESS FULL| TEST1 | -------------------------------------
To adopt an alternative plan regardless of whether SQL Tuning Advisor recommends it, call DBMS_SQLTUNE.CREATE_SQL_PLAN_BASELINE
. You can use this procedure to create a SQL plan baseline on any existing reproducible plan.
SQL Tuning Advisor takes one or more SQL statements as an input and invokes the Automatic Tuning Optimizer to perform SQL tuning on the statements. The output takes the form of advice or recommendations, along with a rationale for each recommendation and its expected benefit. The recommendation relates to a collection of statistics on objects, creation of new indexes, restructuring of the SQL statement, or creation of a SQL profile. You can choose to accept the recommendation to complete the tuning of the SQL statements.
The database can automatically tune SQL statements by identifying problematic statements and implementing recommendations using SQL Tuning Advisor during system maintenance windows. When run automatically, SQL Tuning Advisor is known as the Automatic SQL Tuning Advisor.
This section explains how to manage the Automatic SQL Tuning Advisor:
Oracle Database automatically runs SQL Tuning Advisor on selected high-load SQL statements from the Automatic Workload Repository (AWR) that qualify as tuning candidates. This task, called Automatic SQL Tuning, runs in the default maintenance windows on a nightly basis. By default, automatic SQL tuning runs for at most one hour. You can customize attributes of the maintenance windows, including start and end time, frequency, and days of the week.
After automatic SQL tuning begins, the database performs the following steps:
Identifies SQL candidates in the AWR for tuning
Oracle Database analyzes statistics in AWR and generates a list of potential SQL statements that are eligible for tuning. These statements include repeating high-load statements that have a significant impact on the database.
The database tunes only SQL statements that have an execution plan with a high potential for improvement. The database ignores recursive SQL and statements that have been tuned recently (in the last month), parallel queries, DML, DDL, and SQL statements with performance problems caused by concurrency issues.
The database orders the SQL statements that are selected as candidates based on their performance impact. The database calculates the impact by summing the CPU time and the I/O times in AWR for the selected statement in the past week.
Tunes each SQL statement individually by calling SQL Tuning Advisor
During the tuning process, the database considers and reports all recommendation types, but it can implement only SQL profiles automatically.
Tests SQL profiles by executing the SQL statement
If a SQL profile is recommended, the database tests the new profile by executing the SQL statement both with and without the profile. If the performance improvement improves at least threefold, then the database accepts the SQL profile, but only if the ACCEPT_SQL_PROFILES
task parameter is set to TRUE
. Otherwise, the automatic SQL tuning reports merely report the recommendation to create a SQL profile.
Optionally, implements the SQL profiles provided they meet the criteria of threefold performance improvement
The database considers other factors when deciding whether to implement the SQL profile. For example, the database does not implement a profile when the objects referenced in the statement have stale optimizer statistics. SQL profiles that have been implemented automatically show type is AUTO
in the DBA_SQL_PROFILES
view.
If the database uses SQL plan management, and if a SQL plan baseline exists for the SQL statement, then the database adds a new plan baseline when creating the SQL profile. As a result, the optimizer uses the new plan immediately after profile creation. See Chapter 15, "Using SQL Plan Management".
At any time during or after the automatic SQL tuning process, you can view the results using the automatic SQL tuning report. This report describes in detail all the SQL statements that were analyzed, the recommendations generated, and the SQL profiles that were automatically implemented.
Figure 17-1 shows the steps performed by the database during automatic SQL tuning.
Automatic SQL tuning runs as part of the automated maintenance tasks infrastructure.
To enable automatic SQL tuning, use the ENABLE
procedure in the DBMS_AUTO_TASK_ADMIN
package:
BEGIN DBMS_AUTO_TASK_ADMIN.ENABLE( client_name => 'sql tuning advisor', operation => NULL, window_name => NULL); END; /
To disable automatic SQL tuning, use the DISABLE
procedure in the DBMS_AUTO_TASK_ADMIN
package:
BEGIN DBMS_AUTO_TASK_ADMIN.DISABLE( client_name => 'sql tuning advisor', operation => NULL, window_name => NULL); END; /
You can pass a specific window name using the window_name
parameter to enable or disable the task in certain maintenance windows only.
Setting the STATISTICS_LEVEL
parameter to BASIC
disables automatic statistics gathering by the AWR and, as a result, also disables automatic SQL tuning.
See Also:
|
Starting with Oracle Database 11g Release 2 (11.2.0.2), you can use the DBMS_AUTO_SQLTUNE
package to configure the behavior of the automatic SQL tuning task. For previous releases, use DBMS_SQLTUNE
instead.
Table 17-2 lists the configurable parameters specific to automatic SQL tuning.
Table 17-1 SET_AUTO_TUNING_TASK_PARAMETER Automatic SQL Tuning Parameters
Parameter | Description |
---|---|
|
Specifies whether to accept SQL profiles automatically. |
|
Specifies the number of days for which to save the task history in the advisor framework schema. By default, the task history is saved for 30 days before it expires. |
|
Specifies the limit of SQL profiles that are accepted for each automatic SQL tuning task. Consider setting the limit of SQL profiles that are accepted for each automatic SQL tuning task based on the acceptable level of changes that can be made to the system on a daily basis. |
|
Specifies the limit of SQL profiles that are accepted in total. |
To use the DBMS_AUTO_SQLTUNE
package, you must have the DBA
role, or have EXECUTE
privileges granted by an administrator. The only exception is the EXECUTE_AUTO_TUNING_TASK
procedure, which can only be run by SYS
.
To configure automatic SQL tuning:
Start SQL*Plus, and connect to the database with DBA
privileges (or connect as SYS
if you plan to run EXECUTE_AUTO_TUNING_TASK
).
Run the DBMS_AUTO_SQLTUNE.SET_AUTO_TUNING_TASK_PARAMETER
procedure.
The following example configures the automatic SQL tuning task to automatically accept SQL profiles recommended by SQL Tuning Advisor:
BEGIN DBMS_AUTO_SQLTUNE.SET_AUTO_TUNING_TASK_PARAMETER( parameter => 'ACCEPT_SQL_PROFILES', value => 'TRUE'); END; /
See Also:
|
Starting with Oracle Database 11g Release 2 (11.2.0.2), you can use the DBMS_AUTO_SQLTUNE
.REPORT_AUTO_TUNING_TASK
function to generate the automatic SQL tuning report. For previous releases, use the DBMS_SQLTUNE
package instead.
The report contains information about multiple executions of the Automatic SQL Tuning task. Depending on the sections that were included in the report, you can view information about the automatic SQL tuning task in the following sections:
General information
The general information section has a high-level description of the automatic SQL tuning task, including information about the inputs given for the report, the number of SQL statements tuned during the maintenance, and the number of SQL profiles created.
Summary
The summary section lists the SQL statements (by their SQL identifiers) that were tuned during the maintenance window and the estimated benefit of each SQL profile, or their actual execution statistics after test executing the SQL statement with the SQL profile.
Tuning findings
This section contains the following information about each SQL statement analyzed by SQL Tuning Advisor:
All findings associated with each SQL statement
Whether the profile was accepted on the database, and why
Whether the SQL profile is currently enabled on the database
Detailed execution statistics captured when testing the SQL profile
Explain plans
This section shows the old and new explain plans used by each SQL statement analyzed by SQL Tuning Advisor.
Errors
This section lists all errors encountered by the automatic SQL tuning task.
To view the automatic SQL tuning report using DBMS_AUTO_SQLTUNE:
Start SQL*Plus, and connect to the database with the appropriate privileges.
Run the DBMS_AUTO_SQLTUNE.REPORT_AUTO_TUNING_TASK
function.
In the following example, the advisor generates a text report to show all SQL statements that were analyzed in the most recent execution, including recommendations that were not implemented.
VARIABLE my_rept CLOB; BEGIN :my_rept :=DBMS_AUTO_SQLTUNE.REPORT_AUTO_TUNING_TASK( begin_exec => NULL, end_exec => NULL, type => 'TEXT', level => 'TYPICAL', section => 'ALL', object_id => NULL, result_limit => NULL); END; / PRINT :my_rept
See Also:
|
You can invoke SQL Tuning Advisor manually for on-demand tuning of one or more SQL statements. To tune multiple statements, you must create a SQL tuning set (STS). A SQL tuning set is a database object that stores SQL statements along with their execution context. You can create a SQL tuning set using command line APIs or Enterprise Manager. See "Managing SQL Tuning Sets".
Input for SQL Tuning Advisor can come from several sources, including the following:
ADDM (Automatic Database Diagnostic Monitor)
The primary input source is ADDM. By default, ADDM runs proactively once every hour and analyzes key statistics gathered by the Automatic Workload Repository (AWR) over the last hour to identify any performance problems including high-load SQL statements. If a high-load SQL is identified, ADDM recommends running SQL Tuning Advisor on the SQL. See "Overview of the Automatic Database Diagnostic Monitor".
AWR
The second most important input source is the Automatic Workload Repository (AWR). AWR takes regular snapshots of system activity, including high-load SQL statements ranked by relevant statistics, such as CPU consumption and wait time.
You can view the AWR and manually identify high-load SQL statements. You can run SQL Tuning A dvisor on these statements, although Oracle Database automatically performs this work as part of automatic SQL tuning. By default, AWR retains data for the last eight days. You can locate and tune any high-load SQL that ran within the retention period of AWR using this method. See "Overview of the Automatic Workload Repository".
Shared SQL area
The third likely source of input is the shared SQL area. The database uses this source to tune recent SQL statements that have yet to be captured in the AWR. The shared SQL area and AWR provide the capability to identify and tune high-load SQL statements from the current time going as far back as the AWR retention allows, which by default is at least 8 days.
SQL tuning set
Another possible input source for SQL Tuning Advisor is the SQL tuning set. A SQL tuning set (STS) is a database object that stores SQL statements along with their execution context. An STS can include SQL statements that are yet to be deployed, with the goal of measuring their individual performance, or identifying the ones whose performance falls short of expectation. When a set of SQL statements serve as input, the database must first construct and use an STS. See "Managing SQL Tuning Sets".
SQL Tuning Advisor provides options to manage the scope and duration of a tuning task. You can set the scope of a tuning task either of the following:
Limited
In this case, SQL Tuning Advisor produces recommendations based on statistical checks, access path analysis, and SQL structure analysis. SQL profile recommendations are not generated.
Comprehensive
In this case, SQL Tuning Advisor carries out all the analysis it performs under limited scope plus SQL Profiling. With the comprehensive option you can also specify a time limit for the tuning task, which by default is 30 minutes.
After analyzing the SQL statements, SQL Tuning Advisor provides advice on optimizing the execution plan, the rationale for the proposed optimization, the estimated performance benefit, and the command to implement the advice. You choose whether to accept the recommendations to optimize the SQL statements.
The recommended interface for running SQL Tuning Advisor is Enterprise Manager. Whenever possible, run SQL Tuning Advisor using Enterprise Manager, as described in the Oracle Database 2 Day + Performance Tuning Guide. If Enterprise Manager is unavailable, then you can run SQL Tuning Advisor using procedures in the DBMS_SQLTUNE
package. To use the APIs, the user must be granted specific privileges.
Running SQL Tuning Advisor using DBMS_SQLTUNE
package is a multi-step process:
Create a SQL tuning set (if tuning multiple SQL statements)
Create a SQL tuning task
Execute a SQL tuning task
Display the results of a SQL tuning task
Implement recommendations as appropriate
You can create a SQL tuning task for a single SQL statement. For tuning multiple statements, a SQL tuning set (STS) has to be first created. An STS is a database object that stores SQL statements along with their execution context. You can create an STS manually using command line APIs or automatically using Enterprise Manager. See "Managing SQL Tuning Sets".
Figure 17-2 shows the steps involved when running SQL Tuning Advisor using the DBMS_SQLTUNE
package.
This section covers the following topics:
See Also:
|
You can create tuning tasks from the text of a single SQL statement, a SQL tuning set containing multiple statements, a SQL statement selected by SQL identifier from the shared SQL area, or a SQL statement selected by SQL identifier from AWR.
For example, to use SQL Tuning Advisor to optimize a specified SQL statement text, create a tuning task with the SQL statement passed as a CLOB argument. For the following PL/SQL code, the user hr
has been granted the ADVISOR
privilege, and the function is run as user hr
on the hr.employees
table.
DECLARE my_task_name VARCHAR2(30); my_sqltext CLOB; BEGIN my_sqltext := 'SELECT /*+ ORDERED */ * ' || 'FROM employees e, locations l, departments d ' || 'WHERE e.department_id = d.department_id AND ' || 'l.location_id = d.location_id AND ' || 'e.employee_id < :bnd'; my_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK( sql_text => my_sqltext, bind_list => sql_binds(anydata.ConvertNumber(100)), user_name => 'HR', scope => 'COMPREHENSIVE', time_limit => 60, task_name => 'my_sql_tuning_task', description => 'Task to tune a query on a specified employee'); END; /
In the preceding example, 100
is the value for bind variable :bnd
passed as function argument of type SQL_BINDS
, HR
is the user under which the CREATE_TUNING_TASK
function analyzes the SQL statement, the scope is set to COMPREHENSIVE
which means that the advisor also performs SQL Profiling analysis, and 60 is the maximum time in seconds that the function can run. In addition, values for task name and description are provided.
The CREATE_TUNING_TASK
function returns the task name that you provided or generates a unique name. You can use the task name to specify this task when using other APIs. To view task names associated with an owner, run the following query:
SELECT TASK_NAME FROM DBA_ADVISOR_LOG WHERE OWNER = 'HR';
You can fine tune a SQL tuning task after it has been created by configuring its parameters using the SET_TUNING_TASK_PARAMETER
procedure in the DBMS_SQLTUNE
package:
BEGIN DBMS_SQLTUNE.SET_TUNING_TASK_PARAMETER( task_name => 'my_sql_tuning_task', parameter => 'TIME_LIMIT', value => 300); END; /
In the preceding example, the maximum time that the SQL tuning task can run is changed to 300 seconds.
Table 17-2 lists parameters that you can configure using the SET_TUNING_TASK_PARAMETER
procedure.
Table 17-2 SET_TUNING_TASK_PARAMETER Procedure Parameters
Parameter | Description |
---|---|
|
Specifies the scope of the tuning task:
|
|
Username under which the SQL statement is parsed |
|
Number of days before the task is deleted |
|
Default execution type if not specified by the |
|
Time limit (in number of seconds) before the task times out |
|
Time limit (in number of seconds) for each SQL statement |
|
Determines if the SQL Tuning Advisor test executes the SQL statements to verify the recommendation benefit:
|
|
Basic filter used for SQL tuning set |
|
Object filter used for SQL tuning set |
|
Plan filter used for SQL tuning set |
|
First ranking measure used for SQL tuning set |
|
Second ranking measure used for SQL tuning set |
|
Third ranking measure used for SQL tuning set |
|
Extra filter used for SQL tuning set (besides |
|
Maximum number of SQL statements to tune |
|
Percentage filter of statements from SQL tuning set |
After you have created a tuning task, execute the task and start the tuning process. For example, run the following PL/SQL code:
BEGIN DBMS_SQLTUNE.EXECUTE_TUNING_TASK( task_name => 'my_sql_tuning_task' ); END; /
Like any other SQL Tuning Advisor task, you can also execute the automatic tuning task SYS_AUTO_SQL_TUNING_TASK
using the EXECUTE_TUNING_TASK
API. SQL Tuning Advisor performs the same analysis and actions as it would when run automatically. You can also pass an execution name to the API to name the new execution.
You can check the status of the task by reviewing the information in the USER_ADVISOR_TASKS
view or check execution progress of the task in the V$SESSION_LONGOPS
view. For example, run the following query:
SELECT status FROM USER_ADVISOR_TASKS WHERE task_name = 'my_sql_tuning_task';
You can check the execution progress of SQL Tuning Advisor in the V$ADVISOR_PROGRESS
view. For example, run the following query:
SELECT SOFAR, TOTALWORK FROM V$ADVISOR_PROGRESS WHERE USER_NAME = 'HR' AND TASK_NAME = 'my_sql_tuning_task';
After a task has been executed, you display a report of the results with the REPORT_TUNING_TASK
function. For example:
SET LONG 1000 SET LONGCHUNKSIZE 1000 SET LINESIZE 100 SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK( 'my_sql_tuning_task') FROM DUAL;
The report contains all the findings and recommendations of SQL Tuning Advisor. For each proposed recommendation, the rationale and benefit is provided along with the SQL statements needed to implement the recommendation.
You can find additional information about tuning tasks and results in DBA views. See "SQL Tuning Views".
You can use the following APIs for managing SQL tuning tasks:
INTERRUPT_TUNING_TASK
to interrupt a task while executing, causing a normal exit with intermediate results
RESUME_TUNING_TASK
to resume a previously interrupted task
CANCEL_TUNING_TASK
to cancel a task while executing, removing all results from the task
RESET_TUNING_TASK
to reset a task while executing, removing all results from the task and returning the task to its initial state
DROP_TUNING_TASK
to drop a task, removing all results associated with the task
A SQL tuning set (STS) is a database object that includes one or more SQL statements along with their execution statistics and execution context, and could include a user priority ranking. You can load SQL statements into a SQL tuning set from different SQL sources, such as AWR, the shared SQL area, or customized SQL provided by the user. An STS includes:
A set of SQL statements
Associated execution context, such as user schema, application module name and action, list of bind values, and the cursor compilation environment
Associated basic execution statistics, such as elapsed time, CPU time, buffer gets, disk reads, rows processed, cursor fetches, the number of executions, the number of complete executions, optimizer cost, and the command type
Associated execution plans and row source statistics for each SQL statement (optional)
You can filter SQL statements using the application module name and action, or any of the execution statistics. In addition, you can rank statements based on any combination of execution statistics.
You can use an STS as input to SQL Tuning Advisor, which performs automatic tuning of the SQL statements based on other user-specified input parameters. You can export SQL tuning sets from one database to another, enabling transfer of SQL workloads between databases for remote performance diagnostics and tuning. When poorly performing SQL statements occur on a production database, developers may not want investigate and tune directly on the production database. The DBA can transport the problematic SQL statements to a test database where the developers can safely analyze and tune them. To transport SQL tuning sets, use the DBMS_SQLTUNE
package.
Whenever possible, you should manage SQL tuning sets using Enterprise Manager, as described in the Oracle Database 2 Day + Performance Tuning Guide. If Enterprise Manager is unavailable, then you can manage SQL tuning sets using the DBMS_SQLTUNE
package procedures.
Typically, you use STS operations in the following sequence:
Create a new STS
"Creating a SQL Tuning Set" describes this task.
Load the STS
"Loading a SQL Tuning Set" describes this task.
Select the STS to review the contents
"Displaying the Contents of a SQL Tuning Set" describes this task.
Update the STS if necessary
"Modifying a SQL Tuning Set" describes this task.
Create a tuning task with the STS as input
Transport the STS to another system, if necessary
"Transporting a SQL Tuning Set" describes this task.
Drop the STS when finished
"Dropping a SQL Tuning Set" describes this task.
To use the APIs, you need the ADMINISTER SQL TUNING SET
system privilege to manage SQL tuning sets that you own, or the ADMINISTER
ANY
SQL
TUNING
SET
system privilege to manage any SQL tuning sets.
Figure 17-3 shows the steps involved when using SQL tuning sets APIs.
This section covers the following topics:
See Also:
|
The CREATE_SQLSET
procedure creates an empty STS object in the database. For example, the following procedure creates an STS object that you could use to tune I/O-intensive SQL statements during a specific period:
BEGIN DBMS_SQLTUNE.CREATE_SQLSET( sqlset_name => 'my_sql_tuning_set', description => 'I/O intensive workload'); END; /
In the preceding example, my_sql_tuning_set
is the name of the STS in the database. 'I/O intensive workload'
is the description assigned to the STS.
The LOAD_SQLSET
procedure populates the STS with selected SQL statements. The standard sources for populating an STS are the workload repository, another STS, or the shared SQL area. For both the workload repository and STS, predefined table functions can select columns from the source to populate a new STS.
In the following example, procedure calls load my_sql_tuning_set
from an AWR baseline called peak
baseline
. The data has been filtered to select only the top 30 SQL statements ordered by elapsed time. First a ref cursor is opened to select from the specified baseline. Next the statements and their statistics are loaded from the baseline into the STS.
DECLARE baseline_cursor DBMS_SQLTUNE.SQLSET_CURSOR; BEGIN OPEN baseline_cursor FOR SELECT VALUE(p) FROM TABLE (DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY( 'peak baseline', NULL, NULL, 'elapsed_time', NULL, NULL, NULL, 30)) p; DBMS_SQLTUNE.LOAD_SQLSET( sqlset_name => 'my_sql_tuning_set', populate_cursor => baseline_cursor); END; /
The SELECT_SQLSET
table function reads the contents of the STS. After an STS has been created and populated, you can browse the SQL in the STS using different filtering criteria. The SELECT_SQLSET
procedure is provided for this purpose.
In the following example, the SQL statements in the STS are displayed for statements with a disk-reads to buffer-gets ratio greater than or equal to 75%.
SELECT * FROM TABLE(DBMS_SQLTUNE.SELECT_SQLSET( 'my_sql_tuning_set', '(disk_reads/buffer_gets) >= 0.75'));
Additional details of the SQL tuning sets that have been created and loaded can also be displayed with DBA views, such as DBA_SQLSET
, DBA_SQLSET_STATEMENTS
, and DBA_SQLSET_BINDS
.
You can update and delete SQL statements from an STS based on a search condition. In the following example, the DELETE_SQLSET
procedure deletes SQL statements from my_sql_tuning_set
that have been executed less than fifty times.
BEGIN DBMS_SQLTUNE.DELETE_SQLSET( sqlset_name => 'my_sql_tuning_set', basic_filter => 'executions < 50'); END; /
You can transport SQL tuning sets. This operation involves exporting the STS from one database to a staging table, and then importing the STS from the staging table into another database.
You can transport a SQL tuning set to any database created in Oracle Database 10g (Release 2) or later. This technique is useful when using SQL Performance Analyzer to tune regressions on a test database. For example, you can transport an STS in the following scenario:
An STS with regressed SQL resides in a production database created in Oracle Database 11g Release 2 (11.2).
You are running SQL Performance Analyzer trials on a remote test database created in Oracle Database 11g Release 1 (11.1) or Oracle Database 10g.
You want to copy the STS from the production database to the test database and tune the regressions from the SQL Performance Analyzer trials.
To transport a SQL tuning set:
Use the CREATE_STGTAB_SQLSET
procedure to create a staging table where the SQL tuning sets will be exported.
The following example creates my_10g_staging_table
in the dba1
schema and specifies the format of the staging table as 10.2:
BEGIN DBMS_SQLTUNE.create_stgtab_sqlset( table_name => 'my_10g_staging_table', schema_name => 'dba1', db_version => DBMS_SQLTUNE.STS_STGTAB_10_2_VERSION ); END; /
Use the PACK_STGTAB_SQLSET
procedure to export SQL tuning sets into the staging table.
The following example populates dba1.my_10g_staging_table
with the STS my_sts
owned by hr
:
BEGIN DBMS_SQLTUNE.pack_stgtab_sqlset( sqlset_name => 'my_sts', sqlset_owner => 'hr', staging_table_name => 'my_10g_staging_table', staging_schema_owner => 'dba1', db_version => DBMS_SQLTUNE.STS_STGTAB_10_2_VERSION ); END; /
Move the staging table to the database where the SQL tuning sets will be imported using the mechanism of choice (such as Oracle Data Pump or database link).
On the database where the SQL tuning sets will be imported, use the UNPACK_STGTAB_SQLSET
procedure to import SQL tuning sets from the staging table.
The following example shows how to import SQL tuning sets contained in the staging table:
BEGIN DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET( sqlset_name => '%', replace => TRUE, staging_table_name => 'my_10g_staging_table'); END; /
The DROP_SQLSET
procedure drops an STS that is no longer needed. For example:
BEGIN DBMS_SQLTUNE.DROP_SQLSET( sqlset_name => 'my_sql_tuning_set' ); END; /
You can use the following APIs to manage an STS:
Updating the attributes of SQL statements in an STS
The UPDATE_SQLSET
procedure updates the attributes of SQL statements (such as PRIORITY
or OTHER
) in an existing STS identified by STS name and SQL ID.
Capturing the full system workload
The CAPTURE_CURSOR_CACHE_SQLSET
function enables the capture of the full system workload by repeatedly polling the shared SQL area over a specified interval. This function more efficient than repeatedly using the SELECT_CURSOR_CACHE
and LOAD_SQLSET
procedures to capture the shared SQL area over an extended period. This function effectively captures the entire workload, as opposed to the AWR—which only captures the workload of high-load SQL statements—or the LOAD_SQLSET
procedure, which accesses the data source only once.
Adding and removing a reference to an STS
The ADD_SQLSET_REFERENCE
function adds a new reference to an existing STS to indicate its use by a client. The function returns the identifier of the added reference. The REMOVE_SQLSET_REFERENCE
procedure deactivates an STS to indicate it is no longer used by the client.
A SQL profile is a set of auxiliary information specific to a SQL statement.
This section contains the following topics:
See Also: Oracle Database 2 Day + Performance Tuning Guide to learn how to manage SQL profiles using Enterprise Manager |
A SQL profile contains corrections for poor optimizer estimates discovered during Automatic SQL Tuning. This information can improve optimizer cardinality and selectivity estimates, which in turn leads the optimizer to select better plans.
The SQL profile does not contain information about individual execution plans. Rather, the optimizer has the following sources of information when choosing plans:
The environment, which contains the database configuration, bind variable values, optimizer statistics, data set, and so on
The supplemental statistics in the SQL profile
If the environment or SQL profile change, then the optimizer can create a new plan.
You can use SQL profiles with or without SQL plan management. If you use SQL plan management, then the plan chosen by the optimizer must be an enabled plan baseline. If the statement has multiple plans in the baseline, then the profile remains useful because it enables the optimizer to chose the lowest-cost plan in the baseline.
Figure 17-4 illustrates the relationship between a SQL statement and the SQL profile for this statement. The optimizer uses the profile and the environment to generate a query plan. In this example, the plan is in the SQL plan baseline for the statement.
SQL profiles provide the following benefits:
Unlike hints and stored outlines, profiles do not tie the optimizer to a specific plan or subplan. Profiles fix incorrect estimates while giving the optimizer the flexibility to pick the best plan in different situations.
Unlike hints, no changes to application source code are necessary when using profiles.
The use of SQL profiles by the database is transparent to the user.
During SQL tuning, you select a statement for automatic tuning and run SQL Tuning Advisor. The database can profile the following types of statement:
DML statements (SELECT
, INSERT
with a SELECT
clause, UPDATE
, and DELETE
)
CREATE
TABLE
statements (only with the AS
SELECT
clause)
MERGE
statements (the update or insert operations)
SQL Tuning Advisor invokes Automatic Tuning Optimizer to generate recommendations. Recommendations to accept SQL profiles occur in a finding.
Example 17-3 shows that the database found a better plan for a SELECT
statement that uses several expensive joins. The recommendation is to run DBMS_SQLTUNE.ACCEPT_SQL_PROFILE
to accept the profile, which should enable the statement to run 98.53% faster.
Example 17-3 Sample SQL Profile Finding
------------------------------------------------------------------------------- FINDINGS SECTION (2 findings) ------------------------------------------------------------------------------- 1- SQL Profile Finding (see explain plans section below) -------------------------------------------------------- A potentially better execution plan was found for this statement. Choose one of the following SQL profiles to implement. Recommendation (estimated benefit: 99.45%) ------------------------------------------ - Consider accepting the recommended SQL profile. execute dbms_sqltune.accept_sql_profile(task_name => 'my_task', object_id => 3, task_owner => 'SH', replace => TRUE); Validation results ------------------ The SQL profile was tested by executing both its plan and the original plan and measuring their respective execution statistics. A plan may have been only partially executed if the other could be run to completion in less time. Original Plan With SQL Profile % Improved ------------- ---------------- ---------- Completion Status: PARTIAL COMPLETE Elapsed Time(us): 15467783 226902 98.53 % CPU Time(us): 15336668 226965 98.52 % User I/O Time(us): 0 0 Buffer Gets: 3375243 18227 99.45 % Disk Reads: 0 0 Direct Writes: 0 0 Rows Processed: 0 109 Fetches: 0 109 Executions: 0 1 Notes ----- 1. The SQL profile plan was first executed to warm the buffer cache. 2. Statistics for the SQL profile plan were averaged over next 3 executions.
Sometimes SQL Tuning Advisor may recommend accepting a profile that uses the Automatic Degree of Parallelism (Auto DOP) feature. A parallel query profile is only recommended when the original plan is serial and when parallel execution can significantly reduce the response time for a long-running query. When it recommends a profile that uses Auto DOP, SQL Tuning Advisor gives details about the performance overhead of using parallel execution for the SQL statement in the report.
For parallel execution recommendations, SQL Tuning Advisor may provide two SQL profile recommendations, one using serial execution and one using parallel. In this case, the parallel profile is identical to the standard profile except for the directive to run in parallel.
Example 17-4 shows a parallel query recommendation. In this example, a degree of parallelism of 7 improves response time significantly at the cost of increasing resource consumption by almost 25%. You must decide whether the reduction in database throughput is worth the increase in response time.
Example 17-4 Parallel Query Recommendation
Recommendation (estimated benefit: 99.99%) ------------------------------------------ - Consider accepting the recommended SQL profile to use parallel execution for this statement. execute dbms_sqltune.accept_sql_profile(task_name => 'gfk_task', object_id => 3, task_owner => 'SH', replace => TRUE, profile_type => DBMS_SQLTUNE.PX_PROFILE); Executing this query parallel with DOP 7 will improve its response time 82.22% over the SQL profile plan. However, there is some cost in enabling parallel execution. It will increase the statement's resource consumption by an estimated 24.43% which may result in a reduction of system throughput. Also, because these resources are consumed over a much smaller duration, the response time of concurrent statements might be negatively impacted if sufficient hardware capacity is not available. The following data shows some sampled statistics for this SQL from the past week and projected weekly values when parallel execution is enabled. Past week sampled statistics for this SQL ----------------------------------------- Number of executions 0 Percent of total activity .29 Percent of samples with #Active Sessions > 2*CPU 0 Weekly DB time (in sec) 76.51 Projected statistics with Parallel Execution -------------------------------------------- Weekly DB time (in sec) 95.21
When you accept a profile, the database creates the profile and stores it persistently in the data dictionary. If a user issues a statement for which a profile has been built, then the query optimizer (in normal mode) uses both the environment and the SQL profile to build a well-tuned plan.
If the database uses SQL plan management, and if a SQL plan baseline exists for the SQL statement, then the database adds a new plan to the baseline when a SQL profile is created. Otherwise, the database does not add a new plan baseline.
No strict relationship exists between the SQL profile and the plan baseline. When hard parsing, the optimizer uses the SQL profile to select the best plan baseline from the available plans. In some conditions, the SQL profile may cause the optimizer to select different plan baselines.
While SQL profiles are usually handled by Enterprise Manager as part of Automatic SQL tuning, you can manage SQL profiles with the DBMS_SQLTUNE
package. To use the APIs, you must have the ADMINISTER SQL MANAGEMENT OBJECT
privilege.
Table 17-3 shows the main procedures and functions for managing SQL profiles.
Table 17-3 DBMS_SQLTUNE APIs for SQL Profiles
Procedure or Function | Description | Section |
---|---|---|
|
Creates a SQL Profile for the specified tuning task | |
|
Alters specific attributes of an existing SQL Profile object |
|
|
Drops the named SQL Profile from the database |
|
|
Creates the staging table used for copying SQL profiles from one system to another |
|
|
Moves profile data out of the |
|
|
Uses the profile data stored in the staging table to create profiles on this system |
|
Figure 17-5 shows the possible actions when using SQL profile APIs.
As tables grow or indexes are created or dropped, the plan for a profile can change. The profile continues to be relevant even if the data distribution or access path of the corresponding statement changes. In general, you do not need to refresh SQL profiles.
Over a long period, profile content can become outdated. In this case, the performance of the corresponding SQL statement may degrade. The poorly performing statement may appear as high-load or top SQL. In this situation, the Automatic SQL Tuning task again captures the statement as high-load SQL. You can create a new profile for the statement.
See Also: Oracle Database PL/SQL Packages and Types Reference for information about theDBMS_SQLTUNE package |
You can use the DBMS_SQLTUNE.ACCEPT_SQL_PROFILE
procedure or function to accept a SQL profile recommended by SQL Tuning Advisor. This procedure creates and stores a SQL profile in the database.
As a rule of thumb, accept a SQL profile recommended by SQL Tuning Advisor. If both an index and a SQL profile are recommended, then either use both or use the SQL profile only. If you create an index, then the optimizer may need the profile to pick the new index.
In some situations, SQL Tuning Advisor may find an improved serial plan in addition to an even better parallel plan. In this case, the advisor recommends both a standard and a parallel SQL profile, enabling you to choose between the best serial and best parallel plan for the statement. Accept a parallel plan only if the increase in response time is worth the decrease in throughput (see Example 17-4).
To accept a SQL profile:
Call the DBMS_SQLTUNE.ALTER_SQL_PROFILE
procedure.
In following example, my_sql_tuning_task
is the name of the SQL tuning task and my_sql_profile
is the name of the SQL profile. The PL/SQL block accepts a profile that uses parallel execution (profile_type
):
DECLARE my_sqlprofile_name VARCHAR2(30); BEGIN my_sqlprofile_name := DBMS_SQLTUNE.ACCEPT_SQL_PROFILE ( task_name => 'my_sql_tuning_task', name => 'my_sql_profile', profile_type => DBMS_SQLTUNE.PX_PROFILE, force_match => TRUE ); END; /
The force_match
setting controls statement matching. Typically, an accepted SQL profile is associated with the SQL statement through a SQL signature that is generated using a hash function. This hash function changes the SQL statement to upper case and removes all extra whites spaces before generating the signature. Thus, the same SQL profile works for all SQL statements in which the only difference is case and white spaces.
By setting force_match
to TRUE
, the SQL profile additionally targets all SQL statements that have the same text after normalizing literal values to bind variables. This setting may be useful for applications that use only literal values because it allows SQL with text differing only in its literal values to share a SQL profile. If both literal values and bind variables are in the SQL text, or if force_match
is set to FALSE
(default), then literal values are not normalized.
You can view information about a SQL profile in the DBA_SQL_PROFILES
view.
You can alter attributes of an existing SQL profile with the ALTER_SQL_PROFILE
procedure. Modifiable attributes are STATUS
, NAME
, DESCRIPTION
, and CATEGORY
.
The CATEGORY
attribute determines which sessions can apply a profile. You can view the CATEGORY
attribute by querying DBA_SQL_PROFILES.CATEGORY
. By default, all profiles are in the DEFAULT
category, which means that all sessions in which the SQLTUNE_CATEGORY
initialization parameter is set to DEFAULT
can use the profile.
By altering the category of a SQL profile, you can determine which sessions are affected by profile creation. For example, by setting the category to DEV
, only sessions in which the SQLTUNE_CATEGORY
initialization parameter is set to DEV
can use the profile. Other sessions do not have access to the SQL profile and execution plans for SQL statements are not impacted by the SQL profile. This technique enables you to test a profile in a restricted environment before making it available to other sessions.
To alter a SQL profile:
Call the DBMS_SQLTUNE.ALTER_SQL_PROFILE
procedure.
In the following example, the STATUS
attribute of my_sql_profile
is changed to DISABLED
, which means the SQL profile is not used during SQL compilation:
BEGIN DBMS_SQLTUNE.ALTER_SQL_PROFILE( name => 'my_sql_profile', attribute_name => 'STATUS', value => 'DISABLED'); END; /
You can drop a SQL profile with the DROP_SQL_PROFILE
procedure. You can specify whether to ignore errors raised if the name does not exist. For this example, the default value of FALSE
is accepted
To drop a SQL profile:
Call the DBMS_SQLTUNE.DROP_SQL_PROFILE
procedure.
The following example drops the profile named my_sql_profile
:
BEGIN DBMS_SQLTUNE.DROP_SQL_PROFILE( name => 'my_sql_profile' ); END; /
You can transport SQL profiles. This operation involves exporting the SQL profile from the SYS
schema in one database to a staging table, and then importing the SQL profile from the staging table into another database. You can transport a SQL profile to any Oracle database created in the same release or later.
To transport a SQL profile:
Use the CREATE_STGTAB_SQLPROF
procedure to create a staging table where the SQL profiles will be exported.
The following example creates my_staging_table
in the DBA1
schema:
BEGIN DBMS_SQLTUNE.create_stgtab_sqlprof( table_name => 'my_staging_table', schema_name => 'DBA1' ); END; /
Use the PACK_STGTAB_SQLPROF
procedure to export SQL profiles into the staging table.
The following example populates dba1.my_staging_table
with the SQL profile my_profile
:
BEGIN DBMS_SQLTUNE.pack_stgtab_sqlprof( profile_name => 'my_profile', staging_table_name => 'my_staging_table', staging_schema_owner => 'dba1' ); END; /
Move the staging table to the database where the SQL profiles will be imported using the mechanism of choice (such as Oracle Data Pump or database link).
On the database where the SQL profiles will be imported, use the UNPACK_STGTAB_SQLPROF
procedure to import SQL profiles from the staging table.
The following example shows how to import SQL profiles contained in the staging table:
BEGIN DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF( replace => TRUE, staging_table_name => 'my_staging_table'); END; /
This section summarizes views that shows information gathered for tuning the SQL statements. You need DBA privileges to access these views.
Advisor information views, such as DBA_ADVISOR_TASKS
, DBA_ADVISOR_EXECUTIONS
, DBA_ADVISOR_FINDINGS
, DBA_ADVISOR_RECOMMENDATIONS
, and DBA_ADVISOR_RATIONALE
views.
SQL tuning information views, such as DBA_SQLTUNE_STATISTICS
, DBA_SQLTUNE_BINDS
, and DBA_SQLTUNE_PLANS
views.
SQL tuning set views, such as DBA_SQLSET
, DBA_SQLSET_BINDS
, DBA_SQLSET_STATEMENTS
, and DBA_SQLSET_REFERENCES
views.
Information on captured execution plans for statements in SQL tuning sets are displayed in the DBA_SQLSET_PLANS
and USER_SQLSET_PLANS
views.
SQL profile information is displayed in the DBA_SQL_PROFILES
view.
If TYPE
= MANUAL
, then the SQL profile was created manually by SQL Tuning Advisor. If TYPE
= AUTOMATIC
, then the profile was created by automatic SQL tuning.
Advisor execution progress information is displayed in the V$ADVISOR_PROGRESS
view.
Dynamic views containing information relevant to the SQL tuning, such as V$SQL
, V$SQLAREA
, V$SQLSTATS
, and V$SQL_BINDS
views.
See Also: Oracle Database Reference for descriptions of the static data dictionary and dynamic views |
Performance Tuning Guide
11g Release 2 (11.2)
E16638-07
August 2012
Oracle Database Performance Tuning Guide, 11g Release 2 (11.2)
E16638-07
Copyright © 2000, 2012, Oracle and/or its affiliates. All rights reserved.
Primary Authors: Immanuel Chan, Lance Ashdown
Contributors: Aditya Agrawal, Hermann Baer, Vladimir Barriere, Mehul Bastawala, Eric Belden, Pete Belknap, Supiti Buranawatanachoke, Sunil Chakkappen, Maria Colgan, Benoit Dageville, Dinesh Das, Karl Dias, Kurt Engeleiter, Marcus Fallen, Mike Feng, Leonidas Galanis, Ray Glasstone, Prabhaker Gongloor, Kiran Goyal, Cecilia Grant, Connie Dialeris Green, Shivani Gupta, Karl Haas, Bill Hodak, Andrew Holdsworth, Hakan Jacobsson, Shantanu Joshi, Ameet Kini, Sergey Koltakov, Vivekanada Kolla, Paul Lane, Sue K. Lee, Herve Lejeune, Ilya Listvinsky, Bryn Llewellyn, George Lumpkin, Mughees Minhas, Gary Ngai, Mark Ramacher, Yair Sarig, Uri Shaft, Vishwanath Sreeraman, Vinay Srihari, Randy Urbano, Amir Valiani, Venkateshwaran Venkataramani, Yujun Wang, Graham Wood, Khaled Yagoub, Mohamed Zait, Mohamed Ziauddin
This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the programs. No other rights are granted to the U.S. Government.
This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.
This software or hardware and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.
You can use optimizer hints with SQL statements to alter execution plans. This chapter explains how to use hints to instruct the optimizer to use specific approaches.
The chapter contains the following sections:
A hint is an instruction to the optimizer. When writing SQL code, you may know information about the data unknown to the optimizer. Hints enable you to make decisions normally made by the optimizer, sometimes causing the optimizer to select a plan that it sees as higher cost.
In a test or development environments, hints are useful for testing the performance of a specific access path. For example, you may know that a certain index is more selective for certain queries. In this case, you may use hints to instruct the optimizer to use a better execution plan.
The disadvantage of hints is the extra code that must be managed, checked, and controlled. Changes in the database and host environment can make hints obsolete or even have negative consequences. For this reason, test by means of hints, but use other techniques to manage the SQL execution plans, such as SQL Tuning advisor and SQL Plan Baselines.
Oracle Database supports more than 60 hints, each of which may have zero or more parameters. A statement block can have only one comment containing hints, and that comment must follow the SELECT
, UPDATE
, INSERT
, MERGE
, or DELETE
keyword. For example, the following hint directs the optimizer to pick the query plan that produces the first 10 rows from the employees
table at the lowest cost:
SELECT /*+ FIRST_ROWS(10) */ * FROM employees;
See Also: Oracle Database SQL Language Reference to a complete list of hints supported by Oracle Database |
Hints can be of the following general types:
Single-table
Single-table hints are specified on one table or view. INDEX
and USE_NL
are examples of single-table hints.
Multi-table
Multi-table hints are like single-table hints, except that the hint can specify one or more tables or views. LEADING
is an example of a multi-table hint. Note that USE_NL(table1 table2)
is not considered a multi-table hint because it is a shortcut for USE_NL(table1)
and USE_NL(table2)
.
Query block
Query block hints operate on single query blocks. STAR_TRANSFORMATION
and UNNEST
are examples of query block hints.
Statement
Statement hints apply to the entire SQL statement. ALL_ROWS
is an example of a statement hint.
Optimizer hints are grouped into the following categories:
These categories, and the hints contained within each category, are listed in the sections that follow.
See Also: Oracle Database SQL Language Reference for syntax and a more detailed description of each hint |
The ALL_ROWS
and FIRST_ROWS(
n
)
hints let you choose between optimization approaches and goals. If a SQL statement has a hint specifying an optimization approach and goal, then the optimizer uses the specified approach regardless of the presence or absence of statistics, the value of the OPTIMIZER_MODE
initialization parameter, and the OPTIMIZER_MODE
parameter of the ALTER
SESSION
statement.
Note: The optimizer goal applies only to queries submitted directly. Use hints to specify the access path for any SQL statements submitted from within PL/SQL. TheALTER SESSION ... SET OPTIMIZER_MODE statement does not affect SQL run within PL/SQL. |
If you specify either the ALL_ROWS
or the FIRST_ROWS
(n
) hint in a SQL statement, and if the data dictionary does not have statistics about tables accessed by the statement, then the optimizer uses default statistical values, such as allocated storage for such tables, to estimate the missing statistics and choose an execution plan. These estimates might not be as accurate as those gathered by the DBMS_STATS
package, so use DBMS_STATS
to gather statistics.
If you specify hints for access paths or join operations along with either the ALL_ROWS
or FIRST_ROWS
(n
) hint, then the optimizer gives precedence to the access paths and join operations specified by the hints.
The OPTIMIZER_FEATURES_ENABLE
hint acts as an umbrella parameter for enabling a series of optimizer features based on an Oracle Database release number. This hint is a useful way to check for plan regressions after database upgrades.
Specify the release number as an argument to the hint. The following example runs a query with the optimizer features from Oracle Database 11g Release 1 (11.1.0.6):
SELECT /*+ optimizer_features_enable('11.1.0.6') */ employee_id, last_name FROM employees ORDER BY employee_id;
See Also: Oracle Database Reference to learn about theOPTIMIZER_FEATURES_ENABLE initialization parameter |
The following hints instructs the optimizer to use a specific access path for a table:
INDEX_ASC
and INDEX_DESC
INDEX_SS
and NO_INDEX_SS
Specifying one of the preceding hints causes the optimizer to choose the specified access path only if the access path is available based on the existence of an index or cluster and on the syntactic constructs of the SQL statement. If a hint specifies an unavailable access path, then the optimizer ignores it.
You must specify the table to be accessed exactly as it appears in the statement. If the statement uses an alias for the table, then use the alias rather than the table name in the hint. The table name within the hint should not include the schema name if the schema name is present in the statement.
Note: For access path hints, Oracle Database ignores the hint if you specify theSAMPLE option in the FROM clause of a SELECT statement. |
See Also:
|
The following hints instructs the optimizer to use a specific join operation for a table:
USE_HASH
and NO_USE_HASH
Use of the USE_NL
and USE_MERGE
hints is recommended with any join order hint. See "Hints for Join Orders". Oracle Database uses these hints when the referenced table is forced to be the inner table of a join; the hints are ignored if the referenced table is the outer table.
See "Access Path and Join Hints on Views" and "Access Path and Join Hints Inside Views" for hint behavior with mergeable views.
The online application upgrade hints suggest how to handle conflicting INSERT
and UPDATE
operations when performing an online application upgrade using edition-based redefinition:
You can use the CHANGE_DUPKEY_ERROR_INDEX
and IGNORE_ROW_ON_DUPKEY_INDEX
hints to handle conflicting INSERT
operations during an online application upgrade. You can use the CHANGE_DUPKEY_ERROR_INDEX
hint to identify unique key violations for a specified set of columns or index. When a unique key violation is encountered during an INSERT
or UPDATE
operation, an ORA-38911
error is reported instead of an ORA-001
. You can use the IGNORE_ROW_ON_DUPKEY_INDEX
hint to ignore unique key violations for a specified set of columns or index. When a unique key violation is encountered during a single-table INSERT
operation, a row-level rollback occurs and execution resumes with the next input row. Therefore, a unique key violation does not cause the INSERT
to terminate or an error to be reported.
You can use the RETRY_ON_ROW_CHANGE
hint to handle conflicting UPDATE
operations during an online application upgrade. You can use this hint to retry an UPDATE
or DELETE
operation if one or more rows changed from the time when the set of rows to be modified was determined to the time when the set of rows was actually modified.
See Also: Oracle Database Advanced Application Developer's Guide for more information about performing an online application upgrade using edition-based redefinition |
The parallel execution hints instruct the optimizer about whether and how to parallelize operations. You can use the following parallel hints:
The following sections group the hints into functional categories.
See Also:
|
Hints beginning with the keyword PARALLEL
indicate the degree of parallelism for the query. Hints beginning with NO_PARALLEL
disable parallelism.
Note: You can perform conventional inserts in parallel mode using the /*+ NOAPPEND PARALLEL */ hint. |
You can specify parallelism at the statement or object level. If you do not explicitly specify an object in the hint, then parallelism occurs at the statement level. In contrast to most hints, parallel statement-level hints take precedence over object-level hints.
To illustrate the difference between object-level and statement-level parallelism settings, suppose that you perform the following steps:
You set the parallelism setting on the employees
table to 2
and disable parallelism on the departments
table as follows:
ALTER TABLE employees PARALLEL 2; ALTER TABLE departments NOPARALLEL;
You execute the following SELECT
statement:
SELECT /*+ PARALLEL(employees 3) */ e.last_name, d.department_name FROM employees e, departments d WHERE e.department_id=d.department_id;
The PARALLEL
hint for employees
overrides the degree of parallelism of 2
for this table specified in Step 1.
In the explain plan in Example 19-1, the IN-OUT
column shows PCWP
for parallel access of employees
and S
for serial access of departments
. Access to departments
is serialized because a NOPARALLEL
setting was applied to this table in Step 1.
Example 19-1 Explain Plan for Query with /*+ PARALLEL(employees 3) */ Hint
---------------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib ---------------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 14 | 588 | 5 (20)| 00:00:01 | | | | | 1 | PX COORDINATOR | | | | | | | | | | 2 | PX SEND QC (RANDOM) | :TQ10001 | 14 | 588 | 5 (20)| 00:00:01 | Q1,01 | P->S | QC (RAND)| |* 3 | HASH JOIN | | 14 | 588 | 5 (20)| 00:00:01 | Q1,01 | PCWP | | | 4 | BUFFER SORT | | | | | | Q1,01 | PCWC | | | 5 | PX RECEIVE | | 4 | 88 | 2 (0)| 00:00:01 | Q1,01 | PCWP | | | 6 | PX SEND BROADCAST | :TQ10000 | 4 | 88 | 2 (0)| 00:00:01 | | S->P | BROADCAST| | 7 | TABLE ACCESS FULL| DEPARTMENTS | 4 | 88 | 2 (0)| 00:00:01 | | | | | 8 | PX BLOCK ITERATOR | | 14 | 280 | 2 (0)| 00:00:01 | Q1,01 | PCWC | | | 9 | TABLE ACCESS FULL | EMPLOYEES | 14 | 280 | 2 (0)| 00:00:01 | Q1,01 | PCWP | | ----------------------------------------------------------------------------------------------------------------
You execute the following SELECT
statement:
SELECT /*+ PARALLEL(4) */ hr_emp.last_name, d.department_name FROM employees hr_emp, departments d WHERE hr_emp.department_id=d.department_id;
Because no schema object is specified in the PARALLEL
hint, the scope of the hint is the statement, not an object. This statement forces the query of the employees
and departments
tables to execute with a degree of parallelism of 4
, overriding the parallelism setting defined on the tables.
The PQ_DISTRIBUTE
hint controls the distribution method for a specified join operation. The basic syntax is as follows, where distribution
is the distribution method to use between the producer and the consumer slaves for the left and the right side of the join:
/*+ PQ_DISTRIBUTE(tablespec, distribution) */
For example, in a HASH,HASH
distribution the rows of each table are mapped to consumer query servers, using a hash function on the join keys. When mapping is complete, each query server performs the join between a pair of resulting partitions. This distribution is recommended when the tables are comparable in size and the join operation is implemented by hash join or sort merge join. The following query contains a hint to use hash distribution:
SELECT /*+ORDERED PQ_DISTRIBUTE(departments HASH, HASH) USE_HASH (departments)*/ e.employee_id, d.department_name FROM employees e, departments d WHERE e.department_id=d.department_id;
See Also: Oracle Database SQL Language Reference for valid syntax and semantics for thePQ_DISTRIBUTE hint |
The PQ_DISTRIBUTE
hint applies to parallel INSERT ... SELECT
and parallel CREATE TABLE AS SELECT
statements to specify how rows should be distributed between the producer (query) and the consumer (load) slaves.
For example, a PARTITION
distribution use the partitioning information of the table being loaded to distribute rows from the query slaves to the load slaves. Use this method when the following conditions are met:
It is not possible or desirable to combine the query and load operations into each slave.
The number of partitions being loaded is greater than or equal to the number of load slaves.
The input data is evenly distributed across the partitions being loaded.
The following sample statement creates a table and specifies the PARTITION
distribution method:
CREATE /*+ PQ_DISTRIBUTE(lineitem, PARTITION) */ TABLE lineitem NOLOGGING PARALLEL 16 PARTITION BY HASH (l_orderkey) PARTITIONS 512 AS SELECT * FROM lineitemxt;
In contrast, a NONE
distribution combines the query and load operation into each slave. Thus, all slaves load all partitions. Use this distribution to avoid the overhead of distribution of rows when there is no skew. The following sample SQL statement specifies a distribution of NONE
for an insert into the lineitem
table:
INSERT /*+ APPEND PARALLEL(LINEITEM, 16) PQ_DISTRIBUTE(LINEITEM, NONE) */ INTO lineitem (SELECT * FROM lineitemxt);
Each of the following hints instructs the optimizer to use a specific SQL query transformation:
The following are several additional hints:
Hints apply only to the optimization of the block of a statement in which they appear. A statement block is any one of the following statements or parts of statements:
A simple SELECT
, UPDATE
, or DELETE
statement
A parent statement or subquery of a complex statement
A part of a compound query
For example, a compound query consisting of two component queries combined by the UNION
operator has two blocks, one for each component query. For this reason, hints in the first component query apply only to its optimization, not to the optimization of the second component query.
The following sections discuss the use of hints in more detail.
When using hints, in some cases, you might need to specify a full set of hints to ensure the optimal execution plan. For example, if you have a very complex query, which consists of many table joins, and if you specify only the INDEX
hint for a given table, then the optimizer must determine the remaining access paths to be used, and the corresponding join methods. Therefore, even though you gave the INDEX
hint, the optimizer might not necessarily use that hint, because the optimizer might have determined that the requested index cannot be used due to the join methods and access paths selected by the optimizer.
In Example 19-2, the LEADING
hint specifies the exact join order. The join methods are also specified.
Example 19-2 Specifying a Full Set of Hints
SELECT /*+ LEADING(e2 e1) USE_NL(e1) INDEX(e1 emp_emp_id_pk) USE_MERGE(j) FULL(j) */ e1.first_name, e1.last_name, j.job_id, sum(e2.salary) total_sal FROM employees e1, employees e2, job_history j WHERE e1.employee_id = e2.manager_id AND e1.employee_id = j.employee_id AND e1.hire_date = j.start_date GROUP BY e1.first_name, e1.last_name, j.job_id ORDER BY total_sal;
To identify a query block in a query, you can use an optional query block name in a hint to specify the block to which the hint applies. The syntax of the query block argument is of the form @
queryblock
, where queryblock
is an identifier that specifies a block in the query. The queryblock
identifier can either be system-generated or user-specified.
Note the following guidelines:
You can obtain the system-generated identifier by using EXPLAIN
PLAN
for the query. You can determine pre-transformation query block names by running EXPLAIN
PLAN
for the query using the NO_QUERY_TRANSFORMATION
hint.
You can set the user-specified name with the QB_NAME
hint.
Assumptions
This tutorial assumes the following:
You intend to create a join view of employees
and job_history
that contains a nested query block.
You want to query all rows in the view, but apply the NO_UNNEST
hint to the query block only.
To apply the NO_UNNEST hint to the query block:
Start SQL*Plus and log in as user hr
.
Create the view.
For example, run the following statement:
CREATE OR REPLACE VIEW v_emp_job_history AS SELECT e1.first_name, e1.last_name, j.job_id, sum(e2.salary) total_sal FROM employees e1, (SELECT * FROM employees e3) e2, job_history j WHERE e1.employee_id = e2.manager_id AND e1.employee_id = j.employee_id AND e1.hire_date = j.start_date AND e1.salary = ( SELECT max(e2.salary) FROM employees e2 WHERE e2.department_id = e1.department_id ) GROUP BY e1.first_name, e1.last_name, j.job_id ORDER BY total_sal;
Explain the plan for a query of v_emp_job_history
.
For example, run the following SQL statement:
EXPLAIN PLAN FOR SELECT * FROM v_emp_job_history;
Query the plan table.
For example, run the following SQL statement:
SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY(NULL, NULL, 'ALL'));
The database displays the plan.
In the query plan output, obtain the operation ID associated with the query block, and then use the ID to find the query block name.
For example, the following plan shows that the full scan of the employees
table occurs in operation 11
, which corresponds to query block @SEL$4
:
------------------------------------------------------------------------------ |Id| Operation |Name |Rows|Bytes|Cost |Time ------------------------------------------------------------------------------ |0 | SELECT STATEMENT | |1 |46 |9(34)|00:00:01| . . . |11| TABLE ACCESS FULL | EMPLOYEES |107 |749 |3(0) |00:00:01| . . . ------------------------------------------------------------------------------- Query Block Name / Object Alias (identified by operation id): ------------------------------------------------------------- 1 - SEL$2980E977 / V_EMP_JOB_HISTORY@SEL$1 2 - SEL$2980E977 8 - SEL$8F9407EC / VW_SQ_1@SEL$32F848CB 9 - SEL$8F9407EC 11 - SEL$8F9407EC / E2@SEL$4
Query the view using the NO_UNNEST
hint.
For example, run the following SQL statement to apply the NO_UNNEST
hint to query block @SEL$4
(sample output included):
SQL> SELECT /*+ NO_UNNEST( @SEL$4 ) */ * FROM v_emp_job_history;
FIRST_NAME LAST_NAME JOB_ID TOTAL_SAL
-------------------- ------------------------- ---------- ----------
Michael Hartstein MK_REP 6000
Hints that specify a table generally refer to tables in the DELETE
, SELECT
, or UPDATE
query block in which the hint occurs, not to tables inside any views referenced by the statement. When you want to specify hints for tables that appear inside views, Oracle recommends using global hints instead of embedding the hint in the view. You can transform the table hints described in this chapter into a global hint by using an extended tablespec
syntax that includes view names with the table name.
In addition, an optional query block name can precede the tablespec
syntax. See "Specifying a Query Block in a Hint".
Hints that specify a table use the following syntax, where view
specifies a view name and table
specifies the name or alias of the table:
tablespec::=
Description of the illustration tablespec.gif
If the view path is specified, then the database resolves the hint from left to right, where the first view must be present in the FROM
clause, and each subsequent view must be specified in the FROM
clause of the preceding view.
Example 19-3 creates a view v
to return the first and last name of the employee, his or her first job, and the total salary of all direct reports of that employee for each employee with the highest salary in his or her department. When querying the data, you want to force the use of the index emp_job_ix
for the table e3
in view e2
.
Example 19-3 Using Global Hints Example
CREATE OR REPLACE VIEW v AS SELECT e1.first_name, e1.last_name, j.job_id, sum(e2.salary) total_sal FROM employees e1, ( SELECT * FROM employees e3) e2, job_history j WHERE e1.employee_id = e2.manager_id AND e1.employee_id = j.employee_id AND e1.hire_date = j.start_date AND e1.salary = ( SELECT max(e2.salary) FROM employees e2 WHERE e2.department_id = e1.department_id ) GROUP BY e1.first_name, e1.last_name, j.job_id ORDER BY total_sal;
By using the global hint structure, you can avoid the modification of view v
with the specification of the index hint in the body of view e2
. To force the use of the index emp_job_ix
for the table e3
, you can use one of the following statements:
SELECT /*+ INDEX(v.e2.e3 emp_job_ix) */ * FROM v; SELECT /*+ INDEX(@SEL$2 e2.e3 emp_job_ix) */ * FROM v; SELECT /*+ INDEX(@SEL$3 e3 emp_job_ix) */ * FROM v;
Note: Oracle Database ignores global hints that refer to multiple query blocks. For example, theLEADING hint is ignored in the following query because it uses the dot notation to the main query block containing table a and view query block v :
SELECT /*+ LEADING(v.b a v.c) */ * FROM a, v WHERE a.id = v.id; To avoid this issue, Oracle recommends that you specify a query block in the hint using the SELECT /*+ LEADING(A@SEL$1 B@SEL$2 C@SEL$2) */ FROM a a, v v WHERE a.id = v.id; |
Example 19-4 Using Global Hints with NO_MERGE
The global hint syntax also applies to unmergeable views as in Example 19-4.
CREATE OR REPLACE VIEW v1 AS SELECT * FROM employees WHERE employee_id < 150; CREATE OR REPLACE VIEW v2 AS SELECT v1.employee_id employee_id, departments.department_id department_id FROM v1, departments WHERE v1.department_id = departments.department_id; SELECT /*+ NO_MERGE(v2) INDEX(v2.v1.employees emp_emp_id_pk) FULL(v2.departments) */ * FROM v2 WHERE department_id = 30;
The hints cause v2
not to be merged and specify access path hints for the employee and department tables. These hints are pushed down into the (nonmerged) view v2
.
Hints that specify an index can use either a simple index name or a parenthesized list of columns as follows:
indexspec::=
The semantics are as follows:
table
specifies the name
column
specifies the name of a column in the specified table
The columns can optionally be prefixed with table qualifiers allowing the hint to specify bitmap join indexes where the index columns are on a different table than the indexed table. If tables qualifiers are present, then they must be base tables, not aliases in the query.
Each column in an index specification must be a base column in the specified table, not an expression. Function-based indexes cannot be hinted using a column specification unless the columns specified in the index specification form the prefix of a function-based index.
index
specifies an index name
When tablespec
is followed by indexspec
in the specification of a hint, a comma separating the table name and index name is permitted but not required. Commas are also permitted, but not required, to separate multiple occurrences of indexspec
.
The hint is resolved as follows:
If an index name is specified, then the database only considered the specified index.
If a column list is specified, and if an index exists whose columns match the specified columns in number and order, then the database only consider this index. If no such index exists, then any index on the table with the specified columns as the prefix in the order specified is considered. In either case, the behavior is exactly as if the user had specified the same hint individually on all the matching indexes.
For example, in Example 19-3 the job_history
table has a single-column index on the employee_id
column and a concatenated index on employee_id
and start_date
columns. To specifically instruct the optimizer on index use, you can hint the query as follows:
SELECT /*+ INDEX(v.j jhist_employee_ix (employee_id start_date)) */ * FROM v;
Oracle does not encourage hints inside or on views (or subqueries) because you can define views in one context and use them in another. Also, such hints can result in unexpected execution plans. In particular, hints inside views or on views are handled differently, depending on whether the view is mergeable into the top-level query.
To specify a hint for a table in a view or subquery, the global hint syntax is preferable. See "Specifying Global Table Hints".
If you decide to use hints with views, then the following sections describe the behavior.
By default, hints do not propagate inside a complex view. For example, if you specify a hint in a query that selects against a complex view, then this hint is not honored, because it is not pushed inside the view.
Note: If the view is on a single table, then the hint is propagated. |
Unless the hints are inside the base view, they might not be honored from a query against the view.
A mergeable view is a view that Oracle Database can replace with the query that defines the view. For example, suppose you create a view as follows:
CREATE OR REPLACE VIEW emp_view AS SELECT last_name, department_name FROM employees e, departments d WHERE e.department_id=d.department_id;
This view is mergeable because the database can optimize the following query to use the SELECT
statement that defines the view, avoiding use of the view itself.
SELECT * FROM emp_view;
Optimization approach and goal hints can occur in a top-level query or inside views.
If such a hint exists in the top-level query, then the database uses this hint regardless of any such hints inside the views.
If there is no top-level optimizer mode hint, then the database uses mode hints in referenced views as long as all mode hints in the views are consistent.
If two or more mode hints in the referenced views conflict, then the database discards all mode hints in the views and uses the session mode, whether default or user-specified.
Access path and join hints on referenced views are ignored unless the view contains a single table or references an Additional Hints view with a single table. For such single-table views, an access path hint or a join hint on the view applies to the table inside the view.
Access path and join hints can appear in a view definition.
If the view is an inline view (that is, if it appears in the FROM
clause of a SELECT
statement), then all access path and join hints inside the view are preserved when the view is merged with the top-level query.
For views that are non-inline views, access path and join hints in the view are preserved only if the referencing query references no other tables or views (that is, if the FROM
clause of the SELECT
statement contains only the view).
With nonmergeable views, optimization approach and goal hints inside the view are ignored; the top-level query decides the optimization mode.
Because nonmergeable views are optimized separately from the top-level query, access path and join hints inside the view are preserved. For the same reason, access path hints on the view in the top-level query are ignored.
However, join hints on the view in the top-level query are preserved because, in this case, a nonmergeable view is similar to a table.
Part I provides an introduction and overview of performance tuning.
The chapter in this part is:
access path
The means by which data is retrieved from a database. For example, a query using an index and a query using a full table scan use different access paths.
asynchronous I/O
Independent I/O, in which there is no timing requirement for transmission, and other processes can start before the transmission has finished.
Automatic Workload Repository
Collects, processes, and maintains performance statistics for problem detection and self-tuning purposes.
Autotrace
Generates a report on the execution path used by the SQL optimizer and the statement execution statistics. The report is useful to monitor and tune the performance of DML statements.
bind variable
A variable in a SQL statement that must be replaced with a valid value, or the address of a value, in order for the statement to successfully execute.
block
A unit of data transfer between main memory and disk. Many blocks from one section of memory address space form a segment.
bottleneck
The delay in transmission of data, typically when a system's bandwidth cannot support the amount of information being relayed at the speed it is being processed. However, many factors can create a bottleneck in a system.
buffer
A main memory address where the buffer manager caches currently and recently used data read from disk. Over time, a buffer can hold different blocks. When a new block is needed, the buffer manager can discard an old block and replace it with a new one.
cache recovery
The part of instance recovery where Oracle Database applies all committed and uncommitted changes in the redo log files to the affected data blocks. Also known as the rolling forward phase of instance recovery.
Cartesian product
A join with no join condition results in a Cartesian product, or a cross product. A Cartesian product is the set of all possible combinations of rows drawn one from each table. Thus, for a join of two tables, each row in one table is matched in turn with every row in the other. A Cartesian product for more than two tables is the result of pairing each row of one table with every row of the Cartesian product of the remaining tables. All other types of joins are subsets of Cartesian products effectively created by deriving the product and then excluding rows that fail the join condition.
compound query
A query that uses set operators (UNION
, UNION
ALL
, INTERSECT
, or MINUS
) to combine two or more simple or complex statements. Each simple or complex statement in a compound query is called a component query.
dictionary cache
A collection of database tables and views containing reference information about the database, its structures, and its users. Oracle Database accesses the data dictionary frequently during the parsing of SQL statements. Two special locations in memory are designated to hold dictionary data. One area is called the data dictionary cache, also known as the row cache because it holds data as rows instead of buffers (which hold entire blocks of data). The other area is the library cache. All Oracle processes share these two caches for access to data dictionary information.
distributed statement
A statement that accesses data on two or more distinct nodes/instances of a distributed database. A remote statement accesses data on one remote node of a distributed database.
dynamic performance views
The views database administrators create on dynamic performance tables (virtual tables that record current database activity). Dynamic performance views are called fixed views because they cannot be altered or removed by the database administrator.
dynamic sampling
An optimization technique in which the database executes a recursive SQL statement to scan a small random sample of a table's blocks to estimate predicate selectivities.
estimator
Uses statistics to estimate the selectivity, cardinality, and cost of execution plans. The main goal of the estimator is to estimate the overall cost of an execution plan.
execution plan
The combination of steps used by the database to execute a SQL statement. Each step either retrieves rows of data physically from the database or prepares them for the user issuing the statement. You can override execution plans by using hints.
EXPLAIN PLAN
A SQL statement that enables examination of the execution plan chosen by the optimizer for DML statements. EXPLAIN
PLAN
causes the optimizer to choose an execution plan and then to put data describing the plan into a database table.
fast full index scan
A full index scan in which the database reads the index blocks in no particular order. The database accesses the data in the index itself, without accessing the table.
full table scan
A scan of table data in which the database sequentially reads all rows from a table and filters out those that do not meet the selection criteria. During full table scans the database scans all data blocks under the high water mark.
hash join
A join in which the database uses the smaller of two tables or data sources to build a hash table in memory. In hash joins, the database scans the larger table, probing the hash table for the addresses of the matching rows in the smaller table.
hint
An instruction passed to the optimizer through comments in a SQL statement. The optimizer uses hints to choose an execution plan for the statement.
index clustering factor
A measure of the row order in relation to an indexed value such as last name. The more order that exists in row storage for this value, the lower the clustering factor.
instance recovery
The automatic application of redo log records to data blocks after a database failure.
join
A query that selects data from multiple tables. A join is characterized by multiple tables in the FROM
clause. Oracle Database pairs the rows from these tables using the condition specified in the WHERE
clause and returns the resulting rows. This condition is called the join condition and usually compares columns of all the joined tables.
latch
A simple, low-level serialization mechanism to protect shared data structures in the System Global Area.
library cache
A memory structure containing shared SQL and PL/SQL areas. The library cache is one of three parts of the shared pool.
literal
A constant value, written at compile-time and read-only at run-time. The database can access literals quickly and uses them when modification is not necessary.
mirroring
Maintaining identical copies of data on one or more disks. Typically, mirroring occurs on duplicate hard disks at the operating system level, so that if one disk becomes unavailable, the other disk can service requests without interruptions.
optimizer
Built-in database software that determines the most efficient way to execute a SQL statement. The query optimizer is made up of the query transformer, the estimator, and the plan generator.
The optimizer generates a set of potential execution plans for SQL statements, estimates the cost of each plan, calls the plan generator to generate the plan, compares the costs, and chooses the plan with the lowest cost. The database uses this approach when the data dictionary has statistics for at least one of the tables accessed by the SQL statements.
optimizer mode
The optimizer operates in either normal mode or tuning mode. In normal mode, the optimizer compiles the SQL and generates an execution plan. In tuning mode, the optimizer performs additional analysis and generates a series of actions, along with their rationale and expected benefit for producing a significantly better plan. When running in tuning mode, the optimizer is known as the Automatic Tuning Optimizer.
outer join
A join condition using the outer join operator (+
) with one or more columns of one of the tables. Oracle Database returns all rows that meet the join condition. Oracle Database also returns all rows from the table without the outer join operator for which there are no matching rows in the table with the outer join operator.
paging
A technique for increasing the memory space available by moving infrequently-used parts of a program's working memory from main memory to a secondary storage medium, usually a disk. The unit of transfer is called a page.
parse
A hard parse occurs when a SQL statement is executed, and the SQL statement is either not in the shared pool, or it is in the shared pool but it cannot be shared. A SQL statement is not shared if the metadata for the two SQL statements is different. This can happen if a SQL statement is textually identical as a preexisting SQL statement, but the tables referred to in the two statements resolve to physically different tables, or if the optimizer environment is different.
A soft parse occurs when a session attempts to execute a SQL statement, and the statement is in the shared pool, and it can be used (that is, shared). For a statement to be shared, all data, (including metadata, such as the optimizer execution plan) pertaining to the existing SQL statement must be equally applicable to the current statement being issued.
parse call
A call to Oracle Database to prepare a SQL statement for execution. This includes syntactically checking the SQL statement, optimizing it, and building (or locating) an executable form of that statement.
parser
Performs syntax analysis and semantic analysis of SQL statements, and expands views (referenced in a query) into separate query blocks.
PGA
Program Global Area. A nonshared memory region that contains data and control information for a server process, created when the server process is started.
PIO
Physical I/O. A block read which could not be satisfied from the buffer cache, either because the block was not present or because the I/O is a direct I/O which bypasses the buffer cache.
plan generator
Tries out different possible plans for a given query so that the query optimizer can choose the plan with the lowest cost. It explores different plans for a query block by trying out different access paths, join methods, and join orders.
query transformer
Decides whether to rewrite a user query to generate a better query plan, merges views, and performs subquery unnesting.
RAID
Redundant arrays of inexpensive disks. RAID configurations provide improved data reliability with the option of striping (manually distributing data). Different RAID configurations (levels) are chosen based on performance and cost, and are suited to different types of applications, depending on their I/O characteristics.
row source generator
Receives the optimal plan from the optimizer and outputs the execution plan for the SQL statement. A row source is an iterative control structure that processes a set of rows in an iterated manner and produces a row set.
segment
A set of extents allocated for a specific type of database object, such as a table, index, or cluster.
selectivity
In a query, the measure of how many rows from a row set pass a predicate test, for example, WHERE last_name = 'Smith'
.
simple query
A SELECT
statement that references only one table and does not make reference to GROUP
BY
functions.
SGA
System Global Area. A memory region within main memory used to store data for fast access. Oracle database uses the shared pool to allocate SGA memory for shared SQL and PL/SQL procedures.
SQL Compiler
Compiles SQL statements into a shared cursor. The SQL Compiler is made up of the parser, the optimizer, and the row source generator.
SQL profile
A collection of information that enables the query optimizer to create an optimal execution plan for a SQL statement.
SQL statements (similar)
Similar SQL statements differ only due to changing literal values. If literal values were replaced with bind variables, then the SQL statements would be textually identical.
SQL Trace
A basic performance diagnostic tool to help monitor and tune applications running against the Oracle database. SQL Trace lets you assess the efficiency of the SQL statements an application runs and generates statistics for each statement. The trace files produced by this tool serve as input for TKPROF
.
SQL tuning set (STS)
A database object that includes one or more SQL statements along with their execution statistics and execution context.
Statspack
A set of SQL, PL/SQL, and SQL*Plus scripts that allow the collection, automation, storage, and viewing of performance data. This feature has been replaced by the Automatic Workload Repository.
striping
The interleaving of a related block of data across disks. Proper striping reduces I/O and improves performance.
Stripe depth is the size of the stripe, sometimes called stripe unit.
Stripe width is the product of the stripe depth and the number of drives in the striped set.
TKPROF
A diagnostic tool to help monitor and tune applications running against the Oracle database. TKPROF
primarily processes SQL trace output files and translates them into readable output files, providing a summary of user-level statements and recursive SQL calls for the trace files. It can also assess the efficiency of SQL statements, generate execution plans, and create SQL scripts to store statistics in the database.
transaction recovery
The part of instance recovery where Oracle Database applies the rollback segments to undo the uncommitted changes. Also known as the rolling back phase of instance recovery.
wait events
Statistics that are incremented by a server process/thread to indicate that it had to wait for an event to complete before being able to continue processing. Wait events are one of the first places for investigation when performing reactive performance tuning.
wait events (idle)
These events indicate that the server process is idle and waiting for work. Ignore these events when tuning because they do not indicate the nature of the performance bottleneck.
work area
A private allocation of memory used for sorts, hash joins, and other operations that are memory-intensive. A sort operator uses a work area (the sort area) to perform the in-memory sort of a set of rows. Similarly, a hash-join operator uses a work area (the hash area) to build a hash table from its left input.
This chapter describes how to use plan stability to preserve performance characteristics. Plan stability also facilitates migration from the rule-based optimizer to the query optimizer when you upgrade to a new Oracle Database release.
This chapter contains the following topics:
Note: Stored outlines will be desupported in a future release in favor of SQL plan management. In Oracle Database 11g Release 1 (11.1), stored outlines continue to function as in past releases. However, Oracle strongly recommends that you use SQL plan management for new applications. SQL plan management creates SQL plan baselines, which offer superior SQL performance and stability compared with stored outlines.If you have existing stored outlines, then consider migrating them to SQL plan baselines by following the steps in "Migrating Stored Outlines to SQL Plan Baselines". When the migration is complete, you should disable or remove the stored outlines. |
See Also:
|
Plan stability prevents certain database environment changes from affecting the performance characteristics of applications. Such changes include changes in optimizer statistics, changes to the optimizer mode settings, and changes to parameters affecting the sizes of memory structures, such as SORT_AREA_SIZE
and BITMAP_MERGE_AREA_SIZE
. Plan stability is most useful when you cannot risk any performance changes in an application.
Plan stability preserves execution plans in stored outlines. An outline is implemented as a set of optimizer hints that are associated with the SQL statement. If the use of the outline is enabled for the statement, then Oracle Database automatically considers the stored hints and tries to generate an execution plan in accordance with those hints.
Oracle Database can create a public or private stored outline for one or all SQL statements. The optimizer then generates equivalent execution plans from the outlines when you enable the use of stored outlines. You can group outlines into categories and control which category of outlines Oracle Database uses to simplify outline administration and deployment.
The plans that Oracle Database maintains in stored outlines remain consistent despite changes to a system's configuration or statistics. Using stored outlines also stabilizes the generated execution plan if the optimizer changes in subsequent Oracle Database releases.
Note: If you develop applications for mass distribution, then you can use stored outlines to ensure that all customers access the same execution plans. |
The degree to which plan stability controls execution plans is dictated by how much the Oracle Database hint mechanism controls execution plans, because Oracle Database uses hints to record stored plans.
There is a one-to-one correspondence between SQL text and its stored outline. If you specify a different literal in a predicate, then a different outline applies. To avoid this situation, replace literals in applications with bind variables.
See Also: Oracle Database can allow similar statements to share SQL by replacing literals with system-generated bind variables. This works with plan stability if the outline was generated using theCREATE_STORED_OUTLINES parameter, not the CREATE OUTLINE statement. Also, the outline must have been created with the CURSOR_SHARING parameter set to FORCE , and the parameter must also set to FORCE when attempting to use the outline. See Chapter 7, "Configuring and Using Memory" for more information. |
Plan stability relies on preserving execution plans at a point in time when performance is satisfactory. In many environments, however, attributes for data types such as dates
or order numbers
can change rapidly. In these cases, permanent use of an execution plan can result in performance degradation over time as the data characteristics change.
This implies that techniques that rely on preserving plans in dynamic environments are somewhat contrary to the purpose of using query optimization. Query optimization attempts to produce execution plans based on statistics that accurately reflect the state of the data. Thus, you must balance the need to control plan stability with the benefit obtained from the optimizer's ability to adjust to changes in data characteristics.
An outline consists primarily of a set of hints that is equivalent to the optimizer's results for the execution plan generation of a particular SQL statement. When Oracle Database creates an outline, plan stability examines the optimization results using the same data used to generate the execution plan. That is, Oracle Database uses the input to the execution plan to generate an outline, and not the execution plan itself.
Note: Oracle Database creates theUSER_OUTLINES and USER_OUTLINE_HINTS views in the SYS tablespace based on data in the OL$ and OL$HINTS tables, respectively. Direct manipulation of the OL$ , OL$HINTS , and OL$NODES tables is prohibited.
You can embed hints in SQL statements, but this has no effect on how Oracle Database uses outlines. Oracle Database considers a SQL statement that you revised with hints to be different from the original SQL statement stored in the outline. |
Oracle Database stores outline data in the OL$
, OL$HINTS
, and OL$NODES
tables. Unless you remove them, Oracle Database retains outlines indefinitely.
The only effect outlines have on caching execution plans is that the database uses the outline category name in addition to the SQL text to determine whether the plan is in cache. This ensures that Oracle Database does not use an execution plan compiled under one category to execute a SQL statement that the database should compile under a different category.
Settings for several parameters, especially those ending with the suffix _ENABLED
, must be consistent across execution environments for outlines to function properly. These parameters are:
QUERY_REWRITE_ENABLED
STAR_TRANSFORMATION_ENABLED
OPTIMIZER_FEATURES_ENABLE
The DBMS_OUTLN
and DBMS_OUTLN_EDIT
package provides procedures used for managing stored outlines and their outline categories.
Users need the EXECUTE_CATALOG_ROLE
role to execute DBMS_OUTLN
, but public has execute privileges on DBMS_OUTLN_EDIT
. The DBMS_OUTLN_EDIT
package is an invoker's rights package.
Some of the useful DBMS_OUTLN
and DBMS_OUTLN_EDIT
procedures are:
CLEAR_USED
- Clears specified outline
DROP_BY_CAT
- Drops outlines that belong to a specified category
UPDATE_BY_CAT
- Changes the category of outlines in one specified category to a new specified category
EXACT_TEXT_SIGNATURES
- Computes an outline signature according to an exact text matching scheme
GENERATE_SIGNATURE
- Generates a signature for the specified SQL text
See Also: Oracle Database PL/SQL Packages and Types Reference for detailed information on usingDBMS_OUTLN package procedures |
Oracle Database can automatically create outlines for all SQL statements, or you can create them for specific SQL statements. In either case, the outlines derive their input from the optimizer.
Oracle Database creates stored outlines automatically when you set the initialization parameter CREATE_STORED_OUTLINES
to true
. When activated, Oracle Database creates outlines for all compiled SQL statements. You can create stored outlines for specific statements using the CREATE
OUTLINE
statement.
When creating or editing a private outline, the outline data is written to global temporary tables in the SYSTEM
schema. These tables are accessible with the OL$
, OL$HINTS
, and OL$NODES
synonyms.
Note: You must ensure that schemas in which outlines are to be created have theCREATE ANY OUTLINE privilege. Otherwise, despite having turned on the CREATE_STORED_OUTLINE initialization parameter, no outlines appear in the database after you run the application.
Also, the default system tablespace can become exhausted if the |
See Also:
|
You can categorize outlines to simplify the management task. The CREATE
OUTLINE
statement allows for specification of a category. The DEFAULT
category is chosen if unspecified. Likewise, the CREATE_STORED_OUTLINES
initialization parameter lets you specify a category name, where specifying true
produces outlines in the DEFAULT
category.
If you specify a category name using the CREATE_STORED_OUTLINES
initialization parameter, then Oracle Database assigns all subsequently created outlines to that category until you reset the category name. Set the parameter to false
to suspend outline generation.
If you set CREATE_STORED_OUTLINES
to true
, or if you use the CREATE
OUTLINE
statement without a category name, then Oracle Database assigns outlines to the category name of DEFAULT
.
When you activate the use of stored outlines, Oracle Database always uses the query optimizer. Outlines rely on hints. To be effective, most hints require the optimizer.
To use stored outlines when Oracle Database compiles a SQL statement, set the system parameter USE_STORED_OUTLINES
to true
or to a category name. If you set USE_STORED_OUTLINES
to true
, then Oracle Database uses outlines in the default
category. If you specify a category with the USE_STORED_OUTLINES
parameter, then Oracle Database uses outlines in that category until you reset the parameter to another category name or until you suspend outline use by setting USE_STORED_OUTLINES
to false
. If you specify a category name, and if Oracle Database does not find an outline in that category that matches the SQL statement, then the database searches for an outline in the default
category.
To use a specific outline rather than all the outlines in a category, execute the ALTER
OUTLINE
statement to enable the specific outline. To use the outlines in a category except for a specific outline, use the ALTER
OUTLINE
statement to disable the specific outline in the category that is being used. The ALTER
OUTLINE
statement can also rename a stored outline, reassign it to a different category, or regenerate it.
The designated outlines only control the compilation of SQL statements that have outlines. If you set USE_STORED_OUTLINES
to false
, then Oracle Database does not use outlines. When you set USE_STORED_OUTLINES
to false
and you set CREATE_STORED_OUTLINES
to true
, Oracle Database creates outlines but does not use them.
The USE_PRIVATE_OUTLINES
parameter lets you control the use of private outlines. A private outline is an outline seen only in the current session and whose data resides in the current parsing schema. Any changes made to such an outline are not seen by any other session on the system, and applying a private outline to the compilation of a statement can only be done in the current session with the USE_PRIVATE_OUTLINES
parameter. Only when you explicitly choose to save your edits back to the public area are they seen by the rest of the users.
While the optimizer usually chooses optimal plans for queries, there are times when users know things about the execution environment that are inconsistent with the heuristics that the optimizer follows. By editing outlines directly, you can tune the SQL query without having to alter the application.
When the USE_PRIVATE_OUTLINES
parameter is enabled and an outlined SQL statement is issued, the optimizer retrieves the outline from the session private area rather than the public area used when USE_STORED_OUTLINES
is enabled. If no outline exists in the session private area, then the optimizer does not use an outline to compile the statement.
Any CREATE
OUTLINE
statement requires the CREATE
ANY
OUTLINE
privilege. Specification of the FROM
clause also requires the SELECT
privilege. This privilege should be granted only to those users who would have the authority to view SQL text and hint text associated with the outlined statements. This role is required for the CREATE
OUTLINE
FROM
command unless the issuer of the command is also the owner of the outline.
Note: TheUSE_STORED_OUTLINES and USE_PRIVATE_OUTLINES parameters are system or session specific. They are not initialization parameters. For more information on these parameters, see the Oracle Database SQL Language Reference. |
You can test whether the database is using an outline with the V$SQL
view. Query the OUTLINE_CATEGORY
column in conjunction with the SQL statement. If the database applied an outline, then this column contains the category to which the outline belongs. Otherwise, it is NULL
. The OUTLINE_SID
column tells you whether this particular cursor is using a public outline (value is 0) or a private outline (session's SID of the corresponding session using it).
For example:
SELECT OUTLINE_CATEGORY, OUTLINE_SID FROM V$SQL WHERE SQL_TEXT LIKE 'SELECT COUNT(*) FROM emp%';
You can access information about outlines and related hint data that Oracle Database stores in the data dictionary from the following views:
USER_OUTLINES
USER_OUTLINE_HINTS
ALL_OUTLINES
ALL_OUTLINE_HINTS
DBA_OUTLINES
DBA_OUTLINE_HINTS
Use the following syntax to obtain outline information from the USER_OUTLINES
view, where the outline category is mycat
:
SELECT NAME, SQL_TEXT FROM USER_OUTLINES WHERE CATEGORY='mycat';
Oracle Database responds by displaying the names and text of all outlines in category mycat
.
To see all generated hints for the outline name1
, use the following syntax:
SELECT HINT FROM USER_OUTLINE_HINTS WHERE NAME='name1';
You can check the flags in _OUTLINES
views for information about compatibility, format, and whether an outline is enabled. For example, check the ENABLED
field in the USER_OUTLINES
view to determine whether an outline is enabled or not.
SELECT NAME, CATEGORY, ENABLED FROM USER_OUTLINES;
Oracle Database creates the USER_OUTLINES
and USER_OUTLINE_HINTS
views based on data in the OL$
and OL$HINTS
tables, respectively. These tables and the OL$NODES
table reside in the outln
schema.
The outln
schema stores data in the SYSTEM
tablespace. If outlines use too much space in the SYSTEM
tablespace, then you can move them. To achieve this goal, create a separate tablespace and move the outline tables into this tablespace.
Note: The default system tablespace could become exhausted if theCREATE_STORED_OUTLINES parameter is on and if the running application has many literal SQL statements. In this case, use the DBMS_OUTLN .DROP_UNUSED procedure to remove the literal SQL outlines. |
To move outline tables into a new tablespace:
Use the Oracle Data Pump Export utility to export the OL$
, OL$HINTS
, and OL$NODES
tables.
The following example exports these tables to the exp.dmp
file located in the directory that maps to the outln_dir
object:
% expdp outln DIRECTORY=outln_dir DUMPFILE=exp.dmp TABLES=OL$,OL$HINTS,OL$NODES
Password: password
Start SQL*Plus and connect to the database as the outln
user, as shown in the following example:
SQL> CONNECT outln
Enter password: password
Remove the previous OL$
, OL$HINTS
, and OL$NODES
tables, as shown in the following example:
SQL> DROP TABLE OL$; SQL> DROP TABLE OL$HINTS; SQL> DROP TABLE OL$NODES;
Create a new tablespace for the tables.
The following example connects as SYSTEM
and creates a tablespace named outln_ts
:
SQL> CONNECT SYSTEM
Enter password: password
SQL> CREATE TABLESPACE outln_ts DATAFILE 'tspace.dat' SIZE 2M
2 DEFAULT STORAGE ( INITIAL 10K NEXT 20K MINEXTENTS 1 MAXEXTENTS 999
3 PCTINCREASE 10 ) ONLINE;
Change the default tablespace for the outln
schema.
The following statement changes the default tablespace to outln_ts
:
SQL> ALTER USER OUTLN DEFAULT TABLESPACE outln_ts;
To force the import into the outln_ts
tablespace, perform the following tasks:
Set the quota for the SYSTEM
tablespace to 0K
for the outln
user.
Revoke the UNLIMITED TABLESPACE
privilege and all roles, such as the RESOURCE
role, that have unlimited tablespace privileges or quotas.
Set a quota for the outln
tablespace.
Use the Data Pump Import utility to import the OL$
, OL$HINTS
, and OL$NODES
tables, as in the following example:
% impdp outln DIRECTORY=outln_dir DUMPFILE=exp.dmp TABLES=OL$,OL$HINTS,OL$NODES
Password: password
When the import completes, the OL$
, OL$HINTS
, and OL$NODES
tables are re-created in the schema named outln
and reside in the outln_ts
tablespace.
Optionally, adjust the tablespace quotas for the outln
user appropriately by adding any privileges and roles that were removed in a previous step.
See Also:
|
This section describes procedures you can use to significantly improve performance by taking advantage of query optimizer functionality. Plan stability provides a way to preserve a system's targeted execution plans with satisfactory performance while also taking advantage of new query optimizer features for the rest of the SQL statements.
While there are classes of SQL statements and features where an exact reproduction of the original execution plan is not guaranteed, plan stability can still be a highly useful part of the migration. Before the migration, outline capturing of execution plan should be turned on until all or most of the applications SQL-statement have been covered.
If performance problems for some specific SQL-statement occur after migration, then you can turn on the stored outline for the specified statement as a way of restoring the old behavior. Stored outlines are not always the best way of resolving a migration related performance problem because they prevent plans from adapting to changing data properties. However, stored outlines add to the arsenal of techniques that you can use to address such problems.
Topics covered in this section are:
If an application was developed using the rule-based optimizer, then a considerable amount of effort might have gone into manually tuning the SQL statements to optimize performance. You can use plan stability to leverage that tuning effort by preserving the behavior of the application when upgrading from rule-based optimization to query optimization.
By creating outlines for an application before switching to query optimization, the plans generated by the rule-based optimizer can continue to be used, while statements in applications developed after the switch use query optimizer plans. To create and use outlines for an application, use the following process.
Note: Carefully read this procedure and consider its implications before executing it! |
Ensure that schemas in which outlines are to be created have the CREATE ANY OUTLINE privilege. For example, from SYS:
GRANT CREATE ANY OUTLINE TO user_name;
Execute syntax similar to the following to designate an outline category, for example, the RBOCAT outline category:
ALTER SESSION SET CREATE_STORED_OUTLINES = rbocat;
Run the application long enough to capture stored outlines for all important SQL statements.
Suspend outline generation:
ALTER SESSION SET CREATE_STORED_OUTLINES = FALSE;
Gather statistics with the DBMS_STATS package.
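For example, the following call gathers statistics for a hypothetical application schema named APPUSER (the schema name is an assumption; substitute your own):
EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'APPUSER', cascade => TRUE);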
Alter the parameter OPTIMIZER_MODE to CHOOSE.
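For example, at the session level:
ALTER SESSION SET OPTIMIZER_MODE = CHOOSE;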
Enter the following syntax to make Oracle Database use the outlines in category RBOCAT:
ALTER SESSION SET USE_STORED_OUTLINES = rbocat;
Run the application.
Subject to the limitations of plan stability, access paths for this application's SQL statements should be unchanged.
Note: If a query was not executed in step 2, then you can capture the old behavior of the query even after switching to query optimization. To achieve this goal, change the optimizer mode to RULE, create an outline for the query, and then change the optimizer mode back to CHOOSE. |
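For example, a minimal sketch of this technique, in which the outline name and the query text are placeholders:
ALTER SESSION SET OPTIMIZER_MODE = RULE;
CREATE OR REPLACE OUTLINE old_plan_query FOR CATEGORY rbocat
  ON SELECT * FROM employees WHERE department_id = 10;
ALTER SESSION SET OPTIMIZER_MODE = CHOOSE;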
When upgrading to a new Oracle Database release under query optimization, some SQL statements may have their execution plans changed because of changes in the optimizer. While such changes benefit performance, you might have applications that perform so well that you would consider any changes in their behavior to be an unnecessary risk. For such applications, you can create outlines before the upgrade using the following procedure.
Note: Carefully read this procedure and consider its implications before running it! |
Enter the following syntax to enable outline creation:
ALTER SESSION SET CREATE_STORED_OUTLINES = ALL_QUERIES;
Run the application long enough to capture stored outlines for all critical SQL statements.
Enter this syntax to suspend outline generation:
ALTER SESSION SET CREATE_STORED_OUTLINES = FALSE;
Upgrade the production system to the new version of the RDBMS.
Run the application.
After the upgrade, you can enable the use of stored outlines, or you can hold the stored outlines in reserve and use them only if some statements exhibit performance degradation after the upgrade.
With the latter approach, you can selectively use the stored outlines for the problematic statements as follows:
For each problematic SQL statement, change the CATEGORY of the associated stored outline to a category name similar to this:
ALTER OUTLINE outline_name CHANGE CATEGORY TO problemcat;
Enter this syntax to make Oracle Database use outlines from the category problemcat:
ALTER SESSION SET USE_STORED_OUTLINES = problemcat;
A test database, separate from the production database, is useful for conducting experiments with optimizer behavior after an upgrade. You can migrate statistics from the production system to the test system using import/export. This technique alleviates the need to fill the tables in the test database with data.
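For example, a sketch using the DBMS_STATS staging-table procedures; the schema name APPUSER and the staging table name PROD_STATS are assumptions. On the production system:
EXECUTE DBMS_STATS.CREATE_STAT_TABLE(ownname => 'APPUSER', stattab => 'PROD_STATS');
EXECUTE DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'APPUSER', stattab => 'PROD_STATS');
After moving the PROD_STATS table to the test system with Data Pump, import the statistics there:
EXECUTE DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'APPUSER', stattab => 'PROD_STATS');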
You can move outlines between the systems by category. For example, after you create outlines in the problemcat category, export them by category using the query-based export option. This is a convenient and efficient way to export only selected outlines from one database to another without exporting all outlines in the source database. Use the Data Pump Export utility with the QUERY parameter as in the following example (note the use of the line continuation character):
% expdp outln DIRECTORY=outln_dir DUMPFILE=exp_file.dmp \
? TABLES=OL$,OL$HINTS,OL$NODES QUERY='WHERE CATEGORY="problemcat"'
Password: password
See Also: Oracle Database Utilities for detailed information about using the Data Pump Export and Import utilities |
This preface contains these topics:
Oracle Database Performance Tuning Guide is intended for database administrators (DBAs) who are responsible for the operation, maintenance, and performance of Oracle Database. This guide describes how to use Oracle Database performance tools in the command-line interface to optimize database performance and tune SQL statements. This guide also describes performance best practices for creating an initial database and includes performance-related reference information.
See Also: Oracle Database 2 Day + Performance Tuning Guide to learn how to use Oracle Enterprise Manager to tune database performance |
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.
Access to Oracle Support
Oracle customers have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.
Before reading this guide, you should be familiar with the following manuals:
To learn how to use Oracle Enterprise Manager to tune the performance of Oracle Database, see Oracle Database 2 Day + Performance Tuning Guide.
To learn how to tune data warehouse environments, see Oracle Database Data Warehousing Guide.
Many of the examples in this book use the sample schemas, which are installed by default when you select the Basic Installation option during an Oracle Database installation. To learn how to install and use these schemas, see Oracle Database Sample Schemas.
To learn about Oracle Database error messages, see Oracle Database Error Messages. Oracle Database error message documentation is only available in HTML. If you are accessing the error message documentation on the Oracle Documentation CD, you can browse the error messages by range. After you find the specific range, use your browser's find feature to locate the specific message. When connected to the Internet, you can search for a specific error message using the error message search feature of the Oracle online documentation.
The following text conventions are used in this document:
Convention | Meaning |
---|---|
boldface | Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary. |
italic | Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values. |
monospace | Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter. |