
19 Using LogMiner to Analyze Redo Log Files

Oracle LogMiner, which is part of Oracle Database, enables you to query online and archived redo log files through a SQL interface. Redo log files contain information about the history of activity on a database.

This chapter contains the following sections:

This chapter describes LogMiner as it is used from the command line. You can also access LogMiner through the Oracle LogMiner Viewer graphical user interface. Oracle LogMiner Viewer is a part of Oracle Enterprise Manager. See the Oracle Enterprise Manager online Help for more information about Oracle LogMiner Viewer.

LogMiner Benefits

All changes made to user data or to the database dictionary are recorded in the Oracle redo log files so that database recovery operations can be performed.

Because LogMiner provides a well-defined, easy-to-use, and comprehensive relational interface to redo log files, it can be used as a powerful data auditing tool, and also as a sophisticated data analysis tool. The following list describes some key capabilities of LogMiner:

Introduction to LogMiner

The following sections provide a brief introduction to LogMiner, including the following topics:

The remaining sections in this chapter describe these concepts and related topics in more detail.

LogMiner Configuration

There are four basic objects in a LogMiner configuration that you should be familiar with: the source database, the mining database, the LogMiner dictionary, and the redo log files containing the data of interest.

  • The source database is the database that produces all the redo log files that you want LogMiner to analyze.

  • The mining database is the database that LogMiner uses when it performs the analysis.

  • The LogMiner dictionary allows LogMiner to provide table and column names, instead of internal object IDs, when it presents the redo log data that you request.

    LogMiner uses the dictionary to translate internal object identifiers and datatypes to object names and external data formats. Without a dictionary, LogMiner returns internal object IDs and presents data as binary data.

    For example, consider the following SQL statement:

     INSERT INTO HR.JOBS(JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY)  VALUES('IT_WT','Technical Writer', 4000, 11000);
    

    Without the dictionary, LogMiner will display:

    insert into "UNKNOWN"."OBJ# 45522"("COL 1","COL 2","COL 3","COL 4") values
    (HEXTORAW('45465f4748'),HEXTORAW('546563686e6963616c20577269746572'),
    HEXTORAW('c229'),HEXTORAW('c3020b'));
    
  • The redo log files contain the changes made to the database or database dictionary.

Sample Configuration

Figure 19-1 shows a sample LogMiner configuration. In this figure, the source database in Boston generates redo log files that are archived and shipped to a database in San Francisco. A LogMiner dictionary has been extracted to these redo log files. The mining database, where LogMiner will actually analyze the redo log files, is in San Francisco. The Boston database is running Oracle9i, and the San Francisco database is running Oracle Database 10g.

Figure 19-1 Sample LogMiner Database Configuration


Figure 19-1 shows just one valid LogMiner configuration. Other valid configurations are those that use the same database for both the source and mining database, or use another method for providing the data dictionary. These other data dictionary options are described in "LogMiner Dictionary Options".

Requirements

The following are requirements for the source and mining database, the data dictionary, and the redo log files that LogMiner will mine:

  • Source and mining database

    • Both the source database and the mining database must be running on the same hardware platform.

    • The mining database can be the same as, or completely separate from, the source database.

    • The mining database must run the same release or a later release of the Oracle Database software as the source database.

    • The mining database must use the same character set (or a superset of the character set) used by the source database.

  • LogMiner dictionary

    • The dictionary must be produced by the same source database that generates the redo log files that LogMiner will analyze.

  • All redo log files:

    • Must be produced by the same source database.

    • Must be associated with the same database RESETLOGS SCN.

    • Must be from a release 8.0 or later Oracle Database. However, several of the LogMiner features introduced as of release 9.0.1 work only with redo log files produced on an Oracle9i or later database. See "Supported Databases and Redo Log File Versions".

LogMiner does not allow you to mix redo log files from different databases or to use a dictionary from a different database than the one that generated the redo log files to be analyzed.


Note:

You must enable supplemental logging before generating log files that will be analyzed by LogMiner.

When you enable supplemental logging, additional information is recorded in the redo stream that is needed to make the information in the redo log files useful to you. Therefore, at the very least, you must enable minimal supplemental logging, as the following SQL statement shows:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

To determine whether supplemental logging is enabled, query the V$DATABASE view, as the following SQL statement shows:

SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

If the query returns a value of YES or IMPLICIT, then minimal supplemental logging is enabled. See "Supplemental Logging" for complete information about supplemental logging.


Directing LogMiner Operations and Retrieving Data of Interest

You direct LogMiner operations using the DBMS_LOGMNR and DBMS_LOGMNR_D PL/SQL packages, and retrieve data of interest using the V$LOGMNR_CONTENTS view, as follows:

  1. Specify a LogMiner dictionary.

    Use the DBMS_LOGMNR_D.BUILD procedure or specify the dictionary when you start LogMiner (in Step 3), or both, depending on the type of dictionary you plan to use.

  2. Specify a list of redo log files for analysis.

    Use the DBMS_LOGMNR.ADD_LOGFILE procedure, or direct LogMiner to create a list of log files for analysis automatically when you start LogMiner (in Step 3).

  3. Start LogMiner.

    Use the DBMS_LOGMNR.START_LOGMNR procedure.

  4. Request the redo data of interest.

    Query the V$LOGMNR_CONTENTS view. (You must have the SELECT ANY TRANSACTION privilege to query this view.)

  5. End the LogMiner session.

    Use the DBMS_LOGMNR.END_LOGMNR procedure.

You must have been granted the EXECUTE_CATALOG_ROLE role to use the LogMiner PL/SQL packages and to query the V$LOGMNR_CONTENTS view.
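As a point of orientation, the following is a minimal sketch of these five steps issued from SQL*Plus. The redo log file name is hypothetical, and the online catalog is assumed as the dictionary source; substitute the dictionary option and log files appropriate to your configuration.

-- Step 2: Specify a redo log file (file name is hypothetical).
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
   LOGFILENAME => '/oracle/logs/log1.f', -
   OPTIONS => DBMS_LOGMNR.NEW);

-- Steps 1 and 3: Start LogMiner, specifying the dictionary (here, the online catalog).
EXECUTE DBMS_LOGMNR.START_LOGMNR( -
   OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

-- Step 4: Query the redo data of interest.
SELECT OPERATION, SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS;

-- Step 5: End the LogMiner session.
EXECUTE DBMS_LOGMNR.END_LOGMNR();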


Note:

When mining a specified time or SCN range of interest within archived logs generated by an Oracle RAC database, you must ensure that you have specified all archived logs from all redo threads that were active during that time or SCN range. If you fail to do this, then any queries of V$LOGMNR_CONTENTS return only partial results (based on the archived logs specified to LogMiner through the DBMS_LOGMNR.ADD_LOGFILE procedure). This restriction is also in effect when you are mining the archived logs at the source database using the CONTINUOUS_MINE option. You should only use CONTINUOUS_MINE on an Oracle RAC database if no thread is being enabled or disabled.


See Also:

"Steps in a Typical LogMiner Session" for an example of using LogMiner

LogMiner Dictionary Files and Redo Log Files

Before you begin using LogMiner, it is important to understand how LogMiner works with the LogMiner dictionary file (or files) and redo log files. This will help you to get accurate results and to plan the use of your system resources.

The following concepts are discussed in this section:

LogMiner Dictionary Options

LogMiner requires a dictionary to translate object IDs into object names when it returns redo data to you. LogMiner gives you three options for supplying the dictionary:

  • Using the Online Catalog

    Oracle recommends that you use this option when you will have access to the source database from which the redo log files were created and when no changes to the column definitions in the tables of interest are anticipated. This is the most efficient and easy-to-use option.

  • Extracting a LogMiner Dictionary to the Redo Log Files

    Oracle recommends that you use this option when you do not expect to have access to the source database from which the redo log files were created, or if you anticipate that changes will be made to the column definitions in the tables of interest.

  • Extracting the LogMiner Dictionary to a Flat File

    This option is maintained for backward compatibility with previous releases. This option does not guarantee transactional consistency. Oracle recommends that you use either the online catalog or extract the dictionary from redo log files instead.

Figure 19-2 shows a decision tree to help you select a LogMiner dictionary, depending on your situation.

Figure 19-2 Decision Tree for Choosing a LogMiner Dictionary


The following sections provide instructions on how to specify each of the available dictionary options.

Using the Online Catalog

To direct LogMiner to use the dictionary currently in use for the database, specify the online catalog as your dictionary source when you start LogMiner, as follows:

EXECUTE DBMS_LOGMNR.START_LOGMNR(-
   OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

In addition to using the online catalog to analyze online redo log files, you can use it to analyze archived redo log files, if you are on the same system that generated the archived redo log files.

The online catalog contains the latest information about the database and may be the fastest way to start your analysis. Because DDL operations that change important tables are somewhat rare, the online catalog generally contains the information you need for your analysis.

Remember, however, that the online catalog can only reconstruct SQL statements that are executed on the latest version of a table. As soon as a table is altered, the online catalog no longer reflects the previous version of the table. This means that LogMiner will not be able to reconstruct any SQL statements that were executed on the previous version of the table. Instead, LogMiner generates nonexecutable SQL (including hexadecimal-to-raw formatting of binary values) in the SQL_REDO column of the V$LOGMNR_CONTENTS view similar to the following example:

insert into HR.EMPLOYEES(col#1, col#2) values (hextoraw('4a6f686e20446f65'),
hextoraw('c306'));

The online catalog option requires that the database be open.

The online catalog option is not valid with the DDL_DICT_TRACKING option of DBMS_LOGMNR.START_LOGMNR.

Extracting a LogMiner Dictionary to the Redo Log Files

To extract a LogMiner dictionary to the redo log files, the database must be open and in ARCHIVELOG mode and archiving must be enabled. While the dictionary is being extracted to the redo log stream, no DDL statements can be executed. Therefore, the dictionary extracted to the redo log files is guaranteed to be consistent (whereas the dictionary extracted to a flat file is not).

To extract dictionary information to the redo log files, execute the PL/SQL DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_REDO_LOGS option. Do not specify a file name or location.

EXECUTE DBMS_LOGMNR_D.BUILD( -
   OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

The process of extracting the dictionary to the redo log files does consume database resources, but if you limit the extraction to off-peak hours, then this should not be a problem, and it is faster than extracting to a flat file. Depending on the size of the dictionary, it may be contained in multiple redo log files. If the relevant redo log files have been archived, then you can find out which redo log files contain the start and end of an extracted dictionary. To do so, query the V$ARCHIVED_LOG view, as follows:

SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN='YES';
SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_END='YES';

Specify the names of the start and end redo log files, and possibly other logs in between them, with the ADD_LOGFILE procedure when you are preparing to begin a LogMiner session.
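For example, assuming the queries above returned the hypothetical file names /oracle/arch/dict_start.arc and /oracle/arch/dict_end.arc, you might add them (plus any logs generated between them) as follows:

EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
   LOGFILENAME => '/oracle/arch/dict_start.arc', -
   OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
   LOGFILENAME => '/oracle/arch/dict_end.arc', -
   OPTIONS => DBMS_LOGMNR.ADDFILE);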

Oracle recommends that you periodically back up the redo log files so that the information is saved and available at a later date. Ideally, this will not involve any extra steps because if your database is being properly managed, then there should already be a process in place for backing up and restoring archived redo log files. Again, because of the time required, it is good practice to do this during off-peak hours.

Extracting the LogMiner Dictionary to a Flat File

When the LogMiner dictionary is in a flat file, fewer system resources are used than when it is contained in the redo log files. Oracle recommends that you regularly back up the dictionary extract to ensure correct analysis of older redo log files.

To extract database dictionary information to a flat file, use the DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_FLAT_FILE option.

Be sure that no DDL operations occur while the dictionary is being built.

The following steps describe how to extract a dictionary to a flat file. Steps 1 and 2 are preparation steps. You only need to do them once, and then you can extract a dictionary to a flat file as many times as you want to.

  1. The DBMS_LOGMNR_D.BUILD procedure requires access to a directory where it can place the dictionary file. Because PL/SQL procedures do not normally access user directories, you must specify a directory for use by the DBMS_LOGMNR_D.BUILD procedure or the procedure will fail. To specify a directory, set the initialization parameter, UTL_FILE_DIR, in the initialization parameter file.

    For example, to set UTL_FILE_DIR to use /oracle/database as the directory where the dictionary file is placed, place the following in the initialization parameter file:

    UTL_FILE_DIR = /oracle/database
    

    Remember that for the changes to the initialization parameter file to take effect, you must stop and restart the database.

  2. If the database is closed, then use SQL*Plus to mount and open the database whose redo log files you want to analyze. For example, entering the SQL STARTUP command mounts and opens the database:

    STARTUP
    
  3. Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD. Specify a file name for the dictionary and a directory path name for the file. This procedure creates the dictionary file. For example, enter the following to create the file dictionary.ora in /oracle/database:

    EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', - 
       '/oracle/database/', -
        DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
    

    You could also specify a file name and location without specifying the STORE_IN_FLAT_FILE option. The result would be the same.
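    For example, the following call, which omits the STORE_IN_FLAT_FILE option, is equivalent to the call shown above:

    EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', -
       '/oracle/database/');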

Redo Log File Options

To mine data in the redo log files, LogMiner needs information about which redo log files to mine. Changes made to the database that are found in these redo log files are delivered to you through the V$LOGMNR_CONTENTS view.

You can direct LogMiner to automatically and dynamically create a list of redo log files to analyze, or you can explicitly specify a list of redo log files for LogMiner to analyze, as follows:

  • Automatically

    If LogMiner is being used on the source database, then you can direct LogMiner to find and create a list of redo log files for analysis automatically. Use the CONTINUOUS_MINE option when you start LogMiner with the DBMS_LOGMNR.START_LOGMNR procedure, and specify a time or SCN range. Although this example specifies the dictionary from the online catalog, any LogMiner dictionary can be used.


    Note:

    The CONTINUOUS_MINE option requires that the database be mounted and that archiving be enabled.

    LogMiner will use the database control file to find and add redo log files that satisfy your specified time or SCN range to the LogMiner redo log file list. For example:

    ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';
    EXECUTE DBMS_LOGMNR.START_LOGMNR( -
       STARTTIME => '01-Jan-2003 08:30:00', -
       ENDTIME => '01-Jan-2003 08:45:00', -
       OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
       DBMS_LOGMNR.CONTINUOUS_MINE);
    

    (To avoid the need to specify the date format in the PL/SQL call to the DBMS_LOGMNR.START_LOGMNR procedure, this example uses the SQL ALTER SESSION SET NLS_DATE_FORMAT statement first.)

    You can also direct LogMiner to automatically build a list of redo log files to analyze by specifying just one redo log file using DBMS_LOGMNR.ADD_LOGFILE, and then specifying the CONTINUOUS_MINE option when you start LogMiner. The previously described method is more typical, however.

  • Manually

    Use the DBMS_LOGMNR.ADD_LOGFILE procedure to manually create a list of redo log files before you start LogMiner. After the first redo log file has been added to the list, each subsequently added redo log file must be from the same database and associated with the same database RESETLOGS SCN. When using this method, LogMiner need not be connected to the source database.

    For example, to start a new list of redo log files, specify the NEW option of the DBMS_LOGMNR.ADD_LOGFILE PL/SQL procedure to signal that this is the beginning of a new list. For example, enter the following to specify /oracle/logs/log1.f:

    EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
       LOGFILENAME => '/oracle/logs/log1.f', -
       OPTIONS => DBMS_LOGMNR.NEW);
    

    If desired, add more redo log files by specifying the ADDFILE option of the PL/SQL DBMS_LOGMNR.ADD_LOGFILE procedure. For example, enter the following to add /oracle/logs/log2.f:

    EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
       LOGFILENAME => '/oracle/logs/log2.f', -
       OPTIONS => DBMS_LOGMNR.ADDFILE);
    

    To determine which redo log files are being analyzed in the current LogMiner session, you can query the V$LOGMNR_LOGS view, which contains one row for each redo log file.
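    For example, a query such as the following lists the file names currently in the LogMiner redo log file list:

    SELECT FILENAME AS name FROM V$LOGMNR_LOGS;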

Starting LogMiner

You call the DBMS_LOGMNR.START_LOGMNR procedure to start LogMiner. Because the options available with the DBMS_LOGMNR.START_LOGMNR procedure allow you to control output to the V$LOGMNR_CONTENTS view, you must call DBMS_LOGMNR.START_LOGMNR before querying the V$LOGMNR_CONTENTS view.

When you start LogMiner, you can:

The following list is a summary of LogMiner settings that you can specify with the OPTIONS parameter to DBMS_LOGMNR.START_LOGMNR and where to find more information about them.

When you execute the DBMS_LOGMNR.START_LOGMNR procedure, LogMiner checks to ensure that the combination of options and parameters that you have specified is valid and that the dictionary and redo log files that you have specified are available. However, the V$LOGMNR_CONTENTS view is not populated until you query the view, as described in "How the V$LOGMNR_CONTENTS View Is Populated".

Note that parameters and options are not persistent across calls to DBMS_LOGMNR.START_LOGMNR. You must specify all desired parameters and options (including SCN and time ranges) each time you call DBMS_LOGMNR.START_LOGMNR.

Querying V$LOGMNR_CONTENTS for Redo Data of Interest

You access the redo data of interest by querying the V$LOGMNR_CONTENTS view. (Note that you must have the SELECT ANY TRANSACTION privilege to query V$LOGMNR_CONTENTS.) This view provides historical information about changes made to the database, including (but not limited to) the following:


Note:

LogMiner supports Oracle Advanced Security transparent data encryption (TDE) in that V$LOGMNR_CONTENTS shows DML operations performed on tables with encrypted columns (including the encrypted columns being updated), provided the LogMiner data dictionary contains the metadata for the object in question and provided the appropriate master key is in the Oracle wallet. The wallet must be open or V$LOGMNR_CONTENTS cannot interpret the associated redo records. TDE support is not available if the database is not open (either read-only or read-write). See Oracle Database Advanced Security Administrator's Guide for more information about transparent data encryption.

Example of Querying V$LOGMNR_CONTENTS

Suppose you wanted to find out about any delete operations that a user named Ron had performed on the oe.orders table. You could issue a SQL query similar to the following:

SELECT OPERATION, SQL_REDO, SQL_UNDO
   FROM V$LOGMNR_CONTENTS
   WHERE SEG_OWNER = 'OE' AND SEG_NAME = 'ORDERS' AND
   OPERATION = 'DELETE' AND USERNAME = 'RON';

The following output would be produced. The formatting may be different on your display than that shown here.

OPERATION   SQL_REDO                        SQL_UNDO

DELETE      delete from "OE"."ORDERS"       insert into "OE"."ORDERS"        
            where "ORDER_ID" = '2413'       ("ORDER_ID","ORDER_MODE",
            and "ORDER_MODE" = 'direct'      "CUSTOMER_ID","ORDER_STATUS",
            and "CUSTOMER_ID" = '101'        "ORDER_TOTAL","SALES_REP_ID",
            and "ORDER_STATUS" = '5'         "PROMOTION_ID")
            and "ORDER_TOTAL" = '48552'      values ('2413','direct','101',
            and "SALES_REP_ID" = '161'       '5','48552','161',NULL);     
            and "PROMOTION_ID" IS NULL  
            and ROWID = 'AAAHTCAABAAAZAPAAN';

DELETE      delete from "OE"."ORDERS"        insert into "OE"."ORDERS"
            where "ORDER_ID" = '2430'        ("ORDER_ID","ORDER_MODE",
            and "ORDER_MODE" = 'direct'       "CUSTOMER_ID","ORDER_STATUS",
            and "CUSTOMER_ID" = '101'         "ORDER_TOTAL","SALES_REP_ID",
            and "ORDER_STATUS" = '8'          "PROMOTION_ID")
            and "ORDER_TOTAL" = '29669.9'     values('2430','direct','101',
            and "SALES_REP_ID" = '159'        '8','29669.9','159',NULL);
            and "PROMOTION_ID" IS NULL 
            and ROWID = 'AAAHTCAABAAAZAPAAe';

This output shows that user Ron deleted two rows from the oe.orders table. The reconstructed SQL statements are equivalent, but not necessarily identical, to the actual statement that Ron issued. The reason for this is that the original WHERE clause is not logged in the redo log files, so LogMiner can only show deleted (or updated or inserted) rows individually.

Therefore, even though a single DELETE statement may have been responsible for the deletion of both rows, the output in V$LOGMNR_CONTENTS does not reflect that. Thus, the actual DELETE statement may have been DELETE FROM OE.ORDERS WHERE CUSTOMER_ID = '101' or it might have been DELETE FROM OE.ORDERS WHERE PROMOTION_ID IS NULL.

How the V$LOGMNR_CONTENTS View Is Populated

The V$LOGMNR_CONTENTS fixed view is unlike other views in that it is not a selective presentation of data stored in a table. Instead, it is a relational presentation of the data that you request from the redo log files. LogMiner populates the view only in response to a query against it. You must successfully start LogMiner before you can query V$LOGMNR_CONTENTS.

When a SQL select operation is executed against the V$LOGMNR_CONTENTS view, the redo log files are read sequentially. Translated information from the redo log files is returned as rows in the V$LOGMNR_CONTENTS view. This continues until either the filter criteria specified at startup are met or the end of the redo log file is reached.

In some cases, certain columns in V$LOGMNR_CONTENTS may not be populated. For example:

  • The TABLE_SPACE column is not populated for rows where the value of the OPERATION column is DDL. This is because a DDL statement may operate on more than one tablespace. For example, a table can be created with multiple partitions spanning multiple tablespaces; hence it would not be accurate to populate the column.

  • LogMiner does not generate SQL redo or SQL undo for temporary tables. The SQL_REDO column will contain the string "/* No SQL_REDO for temporary tables */" and the SQL_UNDO column will contain the string "/* No SQL_UNDO for temporary tables */".

LogMiner returns all the rows in SCN order unless you have used the COMMITTED_DATA_ONLY option to specify that only committed transactions should be retrieved. SCN order is the order normally applied in media recovery.


See Also:

"Showing Only Committed Transactions" for more information about the COMMITTED_DATA_ONLY option to DBMS_LOGMNR.START_LOGMNR


Note:

Because LogMiner populates the V$LOGMNR_CONTENTS view only in response to a query and does not store the requested data in the database, the following is true:
  • Every time you query V$LOGMNR_CONTENTS, LogMiner analyzes the redo log files for the data you request.

  • The amount of memory consumed by the query is not dependent on the number of rows that must be returned to satisfy a query.

  • The time it takes to return the requested data is dependent on the amount and type of redo log data that must be mined to find that data.


For the reasons stated in the previous note, Oracle recommends that you create a table to temporarily hold the results from a query of V$LOGMNR_CONTENTS if you need to maintain the data for further analysis, particularly if the amount of data returned by a query is small in comparison to the amount of redo data that LogMiner must analyze to provide that data.
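The following is a minimal sketch of that approach. The table name logmnr_results and the filter conditions are hypothetical; you could equally create the table first and populate it with an INSERT ... SELECT from V$LOGMNR_CONTENTS.

CREATE TABLE logmnr_results AS
   SELECT SCN, TIMESTAMP, USERNAME, OPERATION, SQL_REDO, SQL_UNDO
   FROM V$LOGMNR_CONTENTS
   WHERE SEG_OWNER = 'OE' AND SEG_NAME = 'ORDERS';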

Querying V$LOGMNR_CONTENTS Based on Column Values

LogMiner lets you make queries based on column values. For instance, you can perform a query to show all updates to the hr.employees table that increase salary more than a certain amount. Data such as this can be used to analyze system behavior and to perform auditing tasks.

LogMiner data extraction from redo log files is performed using two mine functions: DBMS_LOGMNR.MINE_VALUE and DBMS_LOGMNR.COLUMN_PRESENT. Support for these mine functions is provided by the REDO_VALUE and UNDO_VALUE columns in the V$LOGMNR_CONTENTS view.

The following is an example of how you could use the MINE_VALUE function to select all updates to hr.employees that increased the salary column to more than twice its original value:

SELECT SQL_REDO FROM V$LOGMNR_CONTENTS
   WHERE
   SEG_NAME = 'EMPLOYEES' AND
   SEG_OWNER = 'HR' AND
   OPERATION = 'UPDATE' AND
   DBMS_LOGMNR.MINE_VALUE(REDO_VALUE, 'HR.EMPLOYEES.SALARY') >
   2*DBMS_LOGMNR.MINE_VALUE(UNDO_VALUE, 'HR.EMPLOYEES.SALARY');

As shown in this example, the MINE_VALUE function takes two arguments:

  • The first one specifies whether to mine the redo (REDO_VALUE) or undo (UNDO_VALUE) portion of the data. The redo portion of the data is the data that is in the column after an insert, update, or delete operation; the undo portion of the data is the data that was in the column before an insert, update, or delete operation. It may help to think of the REDO_VALUE as the new value and the UNDO_VALUE as the old value.

  • The second argument is a string that specifies the fully qualified name of the column to be mined (in this case, hr.employees.salary). The MINE_VALUE function always returns a string that can be converted back to the original datatype.

The Meaning of NULL Values Returned by the MINE_VALUE Function

If the MINE_VALUE function returns a NULL value, then it can mean either:

  • The specified column is not present in the redo or undo portion of the data.

  • The specified column is present and has a null value.

To distinguish between these two cases, use the DBMS_LOGMNR.COLUMN_PRESENT function, which returns 1 if the column is present in the redo or undo portion of the data, and 0 otherwise. For example, suppose you wanted to find out the increment by which the values in the salary column were modified and the corresponding transaction identifier. You could issue the following SQL query:

SELECT 
  (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,
  (DBMS_LOGMNR.MINE_VALUE(REDO_VALUE, 'HR.EMPLOYEES.SALARY') -
   DBMS_LOGMNR.MINE_VALUE(UNDO_VALUE, 'HR.EMPLOYEES.SALARY')) AS INCR_SAL
   FROM V$LOGMNR_CONTENTS
   WHERE
   OPERATION = 'UPDATE' AND
   DBMS_LOGMNR.COLUMN_PRESENT(REDO_VALUE, 'HR.EMPLOYEES.SALARY') = 1 AND
   DBMS_LOGMNR.COLUMN_PRESENT(UNDO_VALUE, 'HR.EMPLOYEES.SALARY') = 1;

Usage Rules for the MINE_VALUE and COLUMN_PRESENT Functions

The following usage rules apply to the MINE_VALUE and COLUMN_PRESENT functions:

  • They can only be used within a LogMiner session.

  • They must be invoked in the context of a select operation from the V$LOGMNR_CONTENTS view.

  • They do not support LONG, LONG RAW, CLOB, BLOB, NCLOB, ADT, or COLLECTION datatypes.

Querying V$LOGMNR_CONTENTS Based on XMLType Columns and Tables

LogMiner supports redo generated for XMLType columns. XMLType data stored as CLOB is supported when redo is generated at a compatibility setting of 11.0.0.0 or higher. XMLType data stored as object-relational and binary XML is supported for redo generated at a compatibility setting of 11.2.0.3 and higher.

LogMiner presents the SQL_REDO in V$LOGMNR_CONTENTS in different ways depending on the XMLType storage. In all cases, the contents of the SQL_REDO column, in combination with the STATUS column, require careful scrutiny, and usually require reassembly before a SQL or PL/SQL statement can be generated to redo the change. There may be cases when it is not possible to use the SQL_REDO data to construct such a change. The examples in the following subsections are based on XMLType stored as CLOB which is generally the simplest to use for reconstruction of the complete row change.

Querying V$LOGMNR_CONTENTS For Changes to Tables With XMLType Columns

The example in this section is for a table named XML_CLOB_COL_TAB that has the following columns:

  • f1 NUMBER

  • f2 VARCHAR2(100)

  • f3 XMLTYPE

  • f4 XMLTYPE

  • f5 VARCHAR2(10)

Assume that a LogMiner session has been started with the relevant redo logs and with the COMMITTED_DATA_ONLY option. The following query is executed against V$LOGMNR_CONTENTS for changes to the XML_CLOB_COL_TAB table.

SELECT OPERATION, STATUS, SQL_REDO FROM V$LOGMNR_CONTENTS 
  WHERE SEG_OWNER = 'SCOTT' AND TABLE_NAME = 'XML_CLOB_COL_TAB';

The query output looks similar to the following:

OPERATION         STATUS  SQL_REDO

INSERT            0       insert into "SCOTT"."XML_CLOB_COL_TAB"("F1","F2","F5") values
                             ('5010','Aho40431','PETER')
         
XML DOC BEGIN     5       update "SCOTT"."XML_CLOB_COL_TAB" a set a."F3" = XMLType(:1)
                             where a."F1" = '5010' and a."F2" = 'Aho40431' and a."F5" = 'PETER'

XML DOC WRITE     5       XML Data

XML DOC WRITE     5       XML Data

XML DOC WRITE     5       XML Data

XML DOC END       5
                                                                  

In actual output, the SQL_REDO column for the XML DOC WRITE operations contains the data for the XML document rather than the literal string 'XML Data'.

This output shows that the general model for an insert into a table with an XMLType column is the following:

  1. An initial insert with all of the scalar columns.

  2. An XML DOC BEGIN operation with an update statement that sets the value for one XMLType column using a bind variable.

  3. One or more XML DOC WRITE operations with the data for the XML document.

  4. An XML DOC END operation to indicate that all of the data for that XML document has been seen.

  5. If there is more than one XMLType column in the table, then steps 2 through 4 will be repeated for each XMLType column that is modified by the original DML.

If the XML document is not stored as an out-of-line column, then there will be no XML DOC BEGIN, XML DOC WRITE, or XML DOC END operations for that column. The document will be included in an update statement similar to the following:

OPERATION   STATUS         SQL_REDO

UPDATE      0              update "SCOTT"."XML_CLOB_COL_TAB" a
                           set a."F3" = XMLType('<?xml version="1.0"?>
                           <PO pono="1">
                           <PNAME>Po_99</PNAME> 
                           <CUSTNAME>Dave Davids</CUSTNAME> 
                           </PO>') 
                           where a."F1" = '5006' and a."F2" = 'Janosik' and a."F5" = 'MMM' 

Querying V$LOGMNR_CONTENTS For Changes to XMLType Tables

DMLs to XMLType tables are slightly different from DMLs to XMLType columns. The XML document represents the value for the row in the XMLType table. Unlike the XMLType column case, the change cannot be expressed as an initial insert that is then followed by an update containing the XML document. Rather, the whole document must be assembled before anything can be inserted into the table.

Another difference for XMLType tables is the presence of the OBJECT_ID column. An object identifier is used to uniquely identify every object in an object table. For XMLType tables stored as CLOBs, this value is generated by Oracle Database when the row is inserted into the table. The OBJECT_ID value cannot be directly inserted into the table using SQL. Therefore, LogMiner cannot generate executable SQL_REDO that includes this value.

The V$LOGMNR_CONTENTS view has a new OBJECT_ID column which is populated for changes to XMLType tables. This value is the object identifier from the original table. However, even if this same XML document is inserted into the same XMLType table, a new object identifier will be generated. The SQL_REDO for subsequent DMLs, such as updates and deletes, on the XMLType table will include the object identifier in the WHERE clause to uniquely identify the row from the original table.

The following shows an example of mining changes to an XMLType table stored as CLOB:

select operation, status, object_id, sql_redo from v$logmnr_contents 
where  seg_owner = 'SCOTT' and table_name = 'XML_TYPE_CLOB_TAB';
OPERATION     STATUS   OBJECT_ID                         SQL_REDO

INSERT          2      300A9394B0F7B2D0E040578CF5025CC3  insert into "SCOTT"."XML_TYPE_CLOB_TAB"
                                                           values(EMPTY_CLOB()) 

XML DOC BEGIN   5      300A9394B0F7B2D0E040578CF5025CC3  insert into "SCOTT"."XML_TYPE_CLOB_TAB"
                                                           values (XMLType(:1)

XML DOC WRITE   5      300A9394B0F7B2D0E040578CF5025CC3  XML Data

XML DOC WRITE   5      300A9394B0F7B2D0E040578CF5025CC3  XML Data

XML DOC WRITE   5      300A9394B0F7B2D0E040578CF5025CC3  XML Data

XML DOC END     5

The general pattern is very similar to XMLType columns. However, there are a few key differences. The first is that now the OBJECT_ID column is populated. The second difference is that there is an initial insert, but its status is 2 for INVALID_SQL. This indicates that this record occurs in the redo as a placeholder for the change to come, but that the SQL generated for this change should not be applied. The SQL_REDO from the XML DOC BEGIN operation reflects the changes that were made to the row when used with the assembled XML document.

If the XML document is not stored as an out-of-line column, then there will be no XML DOC BEGIN, XML DOC WRITE, or XML DOC END operations for that document. The document will be included in an INSERT statement similar to the following:

OPERATION   STATUS  OBJECT_ID                           SQL_REDO

INSERT      2       300AD8CECBA75ACAE040578CF502640C    insert into "SCOTT"."XML_TYPE_CLOB_TAB"
                                                           values (EMPTY_CLOB())

INSERT      0       300AD8CECBA75ACAE040578CF502640C    insert into "SCOTT"."XML_TYPE_CLOB_TAB"
                                                           values (XMLType(
                                                           '<?xml version="1.0"?>
                                                           <PO pono="1">
                                                           <PNAME>Po_99</PNAME>
                                                           <CUSTNAME>
                                                           Dave Davids
                                                           </CUSTNAME>
                                                           </PO>'))

Restrictions When Using LogMiner With XMLType Data

Mining XMLType data should only be done when using the DBMS_LOGMNR.COMMITTED_DATA_ONLY option. Otherwise, incomplete changes could be displayed or changes which should be displayed as XML might be displayed as CLOB changes due to missing parts of the row change. This can lead to incomplete and invalid SQL_REDO for these SQL DML statements.

The SQL_UNDO column is not populated for changes to XMLType data.

Example of a PL/SQL Procedure for Assembling XMLType Data

The example presented in this section shows a procedure that can be used to mine and assemble XML redo for tables that contain out-of-line XML data. It shows how to assemble the XML data using a temporary LOB. Once the XML document is assembled, it can be used in a meaningful way. This example queries the assembled document for the EmployeeName element and then stores the returned name, the XML document, and the SQL_REDO for the original DML in the EMPLOYEE_XML_DOCS table.


Note:

This procedure is an example only and is simplified. It is only intended to illustrate that DMLs to tables with XMLType data can be mined and assembled using LogMiner.

Before calling this procedure, all of the relevant logs must be added to a LogMiner session and DBMS_LOGMNR.START_LOGMNR() must be called with the COMMITTED_DATA_ONLY option. The MINE_AND_ASSEMBLE() procedure can then be called with the schema and table name of the table that has XML data to be mined.
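A minimal sketch of that setup follows; the archived log file name is hypothetical and the online catalog is assumed as the dictionary source.

EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
   LOGFILENAME => '/oracle/logs/log1.f', -
   OPTIONS => DBMS_LOGMNR.NEW);

EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
   DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
   DBMS_LOGMNR.COMMITTED_DATA_ONLY);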

-- table to store assembled XML documents
create table employee_xml_docs (
  employee_name   varchar2(100),
  sql_stmt        varchar2(4000),
  xml_doc         SYS.XMLType);
        
-- procedure to assemble the XML documents
create or replace procedure mine_and_assemble(
  schemaname        in varchar2,
  tablename         in varchar2)
AS
  loc_c      CLOB; 
  row_op     VARCHAR2(100); 
  row_status NUMBER; 
  stmt       VARCHAR2(4000);
  row_redo   VARCHAR2(4000);
  xml_data   VARCHAR2(32767 CHAR); 
  data_len   NUMBER; 
  xml_lob    clob;
  xml_doc    XMLType;
BEGIN 
 
-- Look for the rows in V$LOGMNR_CONTENTS that are for the appropriate schema 
-- and table name but limit it to those that are valid sql or that need assembly
-- because they are XML documents.
 
 For item in ( SELECT operation, status, sql_redo  FROM v$logmnr_contents
 where seg_owner = schemaname and table_name = tablename
 and status IN (DBMS_LOGMNR.VALID_SQL, DBMS_LOGMNR.ASSEMBLY_REQUIRED_SQL))
 LOOP
    row_op := item.operation;
    row_status := item.status;
    row_redo := item.sql_redo;
 
     CASE row_op 
 
          WHEN 'XML DOC BEGIN' THEN 
             BEGIN 
               -- save statement and begin assembling XML data 
               stmt := row_redo; 
               xml_data := ''; 
               data_len := 0; 
               DBMS_LOB.CreateTemporary(xml_lob, TRUE);
             END; 
 
          WHEN 'XML DOC WRITE' THEN 
             BEGIN 
               -- Continue to assemble XML data
               xml_data := xml_data || row_redo; 
               data_len := data_len + length(row_redo); 
               DBMS_LOB.WriteAppend(xml_lob, length(row_redo), row_redo);
             END; 
 
          WHEN 'XML DOC END' THEN 
             BEGIN 
               -- Now that assembly is complete, we can use the XML document 
              xml_doc := XMLType.createXML(xml_lob);
              insert into employee_xml_docs values
                        (extractvalue(xml_doc, '/EMPLOYEE/NAME'), stmt, xml_doc);
              commit;
 
              -- reset
              xml_data := ''; 
              data_len := 0; 
              xml_lob := NULL;
             END; 
 
          WHEN 'INSERT' THEN 
             BEGIN 
                stmt := row_redo;
             END; 
 
          WHEN 'UPDATE' THEN 
             BEGIN 
                stmt := row_redo;
             END; 
 
          WHEN 'INTERNAL' THEN 
             DBMS_OUTPUT.PUT_LINE('Skip rows marked INTERNAL'); 
 
          ELSE 
             BEGIN 
                stmt := row_redo;
                DBMS_OUTPUT.PUT_LINE('Other - ' || stmt); 
                IF row_status != DBMS_LOGMNR.VALID_SQL then 
                   DBMS_OUTPUT.PUT_LINE('Skip rows marked non-executable'); 
                ELSE 
                   dbms_output.put_line('Status : ' || row_status);
                END IF; 
             END; 
 
     END CASE;
 
 End LOOP; 
 
End;
/
 
show errors;

This procedure can then be called to mine the changes to the SCOTT.XML_DATA_TAB table and assemble the XML documents.

EXECUTE MINE_AND_ASSEMBLE ('SCOTT', 'XML_DATA_TAB');

As a result of this procedure, the EMPLOYEE_XML_DOCS table will have a row for each out-of-line XML column that was changed. The EMPLOYEE_NAME column will have the value extracted from the XML document, and the SQL_STMT and XML_DOC columns reflect the original row change.

The following is an example query to the resulting table that displays only the employee name and SQL statement:

SELECT EMPLOYEE_NAME, SQL_STMT FROM EMPLOYEE_XML_DOCS;
                
EMPLOYEE_NAME          SQL_STMT                                                                                           
 
Scott Davis          update "SCOTT"."XML_DATA_TAB" a set a."F3" = XMLType(:1) 
                         where a."F1" = '5000' and a."F2" = 'Chen' and a."F5" = 'JJJ'
        
Richard Harry        update "SCOTT"."XML_DATA_TAB" a set a."F4" = XMLType(:1)  
                         where a."F1" = '5000' and a."F2" = 'Chen' and a."F5" = 'JJJ'
        
Margaret Sally       update "SCOTT"."XML_DATA_TAB" a set a."F4" = XMLType(:1)  
                         where a."F1" = '5006' and a."F2" = 'Janosik' and a."F5" = 'MMM'

Filtering and Formatting Data Returned to V$LOGMNR_CONTENTS

LogMiner can potentially deal with large amounts of information. You can limit the information that is returned to the V$LOGMNR_CONTENTS view, as well as the speed at which it is returned. The following sections demonstrate how to specify these limits and their impact on the data returned when you query V$LOGMNR_CONTENTS.

In addition, LogMiner offers features for formatting the data that is returned to V$LOGMNR_CONTENTS, as described in the following sections:

You request each of these filtering and formatting features using parameters or options to the DBMS_LOGMNR.START_LOGMNR procedure.

Showing Only Committed Transactions

When you use the COMMITTED_DATA_ONLY option to DBMS_LOGMNR.START_LOGMNR, only rows belonging to committed transactions are shown in the V$LOGMNR_CONTENTS view. This enables you to filter out rolled back transactions, transactions that are in progress, and internal operations.

To enable this option, specify it when you start LogMiner, as follows:

EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
  DBMS_LOGMNR.COMMITTED_DATA_ONLY);

When you specify the COMMITTED_DATA_ONLY option, LogMiner groups together all DML operations that belong to the same transaction. Transactions are returned in the order in which they were committed.


Note:

If the COMMITTED_DATA_ONLY option is specified and you issue a query, then LogMiner stages all redo records within a single transaction in memory until LogMiner finds the commit record for that transaction. Therefore, it is possible to exhaust memory, in which case an "Out of Memory" error will be returned. If this occurs, then you must restart LogMiner without the COMMITTED_DATA_ONLY option specified and reissue the query.

The default is for LogMiner to show rows corresponding to all transactions and to return them in the order in which they are encountered in the redo log files.

For example, suppose you start LogMiner without specifying the COMMITTED_DATA_ONLY option and you execute the following query:

SELECT (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID, 
   USERNAME, SQL_REDO FROM V$LOGMNR_CONTENTS WHERE USERNAME != 'SYS' 
   AND (SEG_OWNER IS NULL OR SEG_OWNER NOT IN ('SYS', 'SYSTEM'));

The output is as follows. Both committed and uncommitted transactions are returned and rows from different transactions are interwoven.

XID         USERNAME  SQL_REDO

1.15.3045   RON       set transaction read write;
1.15.3045   RON       insert into "HR"."JOBS"("JOB_ID","JOB_TITLE",
                      "MIN_SALARY","MAX_SALARY") values ('9782',
                      'HR_ENTRY',NULL,NULL);
1.18.3046   JANE      set transaction read write;
1.18.3046   JANE      insert into "OE"."CUSTOMERS"("CUSTOMER_ID",
                      "CUST_FIRST_NAME","CUST_LAST_NAME",
                      "CUST_ADDRESS","PHONE_NUMBERS","NLS_LANGUAGE",
                      "NLS_TERRITORY","CREDIT_LIMIT","CUST_EMAIL",
                      "ACCOUNT_MGR_ID") values ('9839','Edgar',
                      'Cummings',NULL,NULL,NULL,NULL,
                       NULL,NULL,NULL);
1.9.3041    RAJIV      set transaction read write;
1.9.3041    RAJIV      insert into "OE"."CUSTOMERS"("CUSTOMER_ID",
                       "CUST_FIRST_NAME","CUST_LAST_NAME","CUST_ADDRESS",
                       "PHONE_NUMBERS","NLS_LANGUAGE","NLS_TERRITORY",
                       "CREDIT_LIMIT","CUST_EMAIL","ACCOUNT_MGR_ID") 
                       values ('9499','Rodney','Emerson',NULL,NULL,NULL,NULL,
                       NULL,NULL,NULL);
1.15.3045    RON       commit;
1.8.3054     RON       set transaction read write;
1.8.3054     RON       insert into "HR"."JOBS"("JOB_ID","JOB_TITLE",
                       "MIN_SALARY","MAX_SALARY") values ('9566',
                       'FI_ENTRY',NULL,NULL);
1.18.3046    JANE      commit;
1.11.3047    JANE      set transaction read write;
1.11.3047    JANE      insert into "OE"."CUSTOMERS"("CUSTOMER_ID",
                       "CUST_FIRST_NAME","CUST_LAST_NAME",
                       "CUST_ADDRESS","PHONE_NUMBERS","NLS_LANGUAGE",
                       "NLS_TERRITORY","CREDIT_LIMIT","CUST_EMAIL",
                       "ACCOUNT_MGR_ID") values ('8933','Ronald',
                       'Frost',NULL,NULL,NULL,NULL,NULL,NULL,NULL);
1.11.3047    JANE      commit;
1.8.3054     RON       commit;

Now suppose you start LogMiner, but this time you specify the COMMITTED_DATA_ONLY option. If you execute the previous query again, then the output is as follows:

1.15.3045   RON       set transaction read write;
1.15.3045   RON       insert into "HR"."JOBS"("JOB_ID","JOB_TITLE",
                      "MIN_SALARY","MAX_SALARY") values ('9782',
                      'HR_ENTRY',NULL,NULL);
1.15.3045    RON       commit;
1.18.3046   JANE      set transaction read write;
1.18.3046   JANE      insert into "OE"."CUSTOMERS"("CUSTOMER_ID",
                      "CUST_FIRST_NAME","CUST_LAST_NAME",
                      "CUST_ADDRESS","PHONE_NUMBERS","NLS_LANGUAGE",
                      "NLS_TERRITORY","CREDIT_LIMIT","CUST_EMAIL",
                      "ACCOUNT_MGR_ID") values ('9839','Edgar',
                      'Cummings',NULL,NULL,NULL,NULL,
                       NULL,NULL,NULL);
1.18.3046    JANE      commit;
1.11.3047    JANE      set transaction read write;
1.11.3047    JANE      insert into "OE"."CUSTOMERS"("CUSTOMER_ID",
                       "CUST_FIRST_NAME","CUST_LAST_NAME",
                       "CUST_ADDRESS","PHONE_NUMBERS","NLS_LANGUAGE",
                       "NLS_TERRITORY","CREDIT_LIMIT","CUST_EMAIL",
                       "ACCOUNT_MGR_ID") values ('8933','Ronald',
                       'Frost',NULL,NULL,NULL,NULL,NULL,NULL,NULL);
1.11.3047    JANE      commit;
1.8.3054     RON       set transaction read write;
1.8.3054     RON       insert into "HR"."JOBS"("JOB_ID","JOB_TITLE",
                       "MIN_SALARY","MAX_SALARY") values ('9566',
                       'FI_ENTRY',NULL,NULL);
1.8.3054     RON       commit;

Because the COMMIT statement for the 1.15.3045 transaction was issued before the COMMIT statement for the 1.18.3046 transaction, the entire 1.15.3045 transaction is returned first. This is true even though the 1.18.3046 transaction started before the 1.15.3045 transaction. None of the 1.9.3041 transaction is returned because a COMMIT statement was never issued for it.


See Also:

See "Examples Using LogMiner" for a complete example that uses the COMMITTED_DATA_ONLY option

Skipping Redo Corruptions

When you use the SKIP_CORRUPTION option to DBMS_LOGMNR.START_LOGMNR, any corruptions in the redo log files are skipped during select operations from the V$LOGMNR_CONTENTS view. For every corrupt redo record encountered, a row is returned that contains the value CORRUPTED_BLOCKS in the OPERATION column, 1343 in the STATUS column, and the number of blocks skipped in the INFO column.

Be aware that the skipped records may include changes to ongoing transactions in the corrupted blocks; such changes will not be reflected in the data returned from the V$LOGMNR_CONTENTS view.

The default is for the select operation to terminate at the first corruption it encounters in the redo log file.

The following SQL example shows how this option works:

-- Add redo log files of interest.
--
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(-
   logfilename => '/usr/oracle/data/db1arch_1_16_482701534.log', -
   options => DBMS_LOGMNR.NEW);

-- Start LogMiner
--
EXECUTE DBMS_LOGMNR.START_LOGMNR();

-- Select from the V$LOGMNR_CONTENTS view. This example shows that corruptions
-- are in the redo log files.
-- 
SELECT rbasqn, rbablk, rbabyte, operation, status, info 
   FROM V$LOGMNR_CONTENTS;

ERROR at line 3:
ORA-00368: checksum error in redo log block 
ORA-00353: log corruption near block 6 change 73528 time 11/06/2002 11:30:23 
ORA-00334: archived log: /usr/oracle/data/dbarch1_16_482701534.log

-- Restart LogMiner. This time, specify the SKIP_CORRUPTION option.
-- 
EXECUTE DBMS_LOGMNR.START_LOGMNR(-
   options => DBMS_LOGMNR.SKIP_CORRUPTION);

-- Select from the V$LOGMNR_CONTENTS view again. The output indicates that 
-- corrupted blocks were skipped: CORRUPTED_BLOCKS is in the OPERATION 
-- column, 1343 is in the STATUS column, and the number of corrupt blocks 
-- skipped is in the INFO column.
--
SELECT rbasqn, rbablk, rbabyte, operation, status, info 
   FROM V$LOGMNR_CONTENTS;

RBASQN  RBABLK RBABYTE  OPERATION        STATUS  INFO
13      2        76     START              0
13      2        76     DELETE             0
13      3       100     INTERNAL           0
13      3       380     DELETE             0
13      0         0     CORRUPTED_BLOCKS   1343  corrupt blocks 4 to 19 skipped
13      20      116     UPDATE             0

Filtering Data by Time

To filter data by time, set the STARTTIME and ENDTIME parameters in the DBMS_LOGMNR.START_LOGMNR procedure.

To avoid the need to specify the date format in the call to the PL/SQL DBMS_LOGMNR.START_LOGMNR procedure, you can use the SQL ALTER SESSION SET NLS_DATE_FORMAT statement first, as shown in the following example.

ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';
EXECUTE DBMS_LOGMNR.START_LOGMNR( -
   DICTFILENAME => '/oracle/database/dictionary.ora', -
   STARTTIME => '01-Jan-2008 08:30:00', -
   ENDTIME => '01-Jan-2008 08:45:00', -
   OPTIONS => DBMS_LOGMNR.CONTINUOUS_MINE); 

The timestamps should not be used to infer ordering of redo records. You can infer the order of redo records by using the SCN.


Filtering Data by SCN

To filter data by SCN (system change number), use the STARTSCN and ENDSCN parameters to the PL/SQL DBMS_LOGMNR.START_LOGMNR procedure, as shown in this example:

 EXECUTE DBMS_LOGMNR.START_LOGMNR(-
    STARTSCN => 621047, -
    ENDSCN   => 625695, -
    OPTIONS  => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
                DBMS_LOGMNR.CONTINUOUS_MINE);

The STARTSCN and ENDSCN parameters override the STARTTIME and ENDTIME parameters in situations where all are specified.
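For example, in the following call (values taken from the examples above, with NLS_DATE_FORMAT assumed to be set as shown earlier), both ranges are supplied, so the SCN range determines what is mined:

EXECUTE DBMS_LOGMNR.START_LOGMNR( -
   STARTTIME => '01-Jan-2008 08:30:00', -
   ENDTIME   => '01-Jan-2008 08:45:00', -
   STARTSCN  => 621047, -
   ENDSCN    => 625695, -
   OPTIONS   => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
                DBMS_LOGMNR.CONTINUOUS_MINE);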


Formatting Reconstructed SQL Statements for Re-execution

By default, a ROWID clause is included in the reconstructed SQL_REDO and SQL_UNDO statements and the statements are ended with a semicolon.

However, you can override the default settings, as follows:

  • Specify the NO_ROWID_IN_STMT option when you start LogMiner.

    This excludes the ROWID clause from the reconstructed statements. Because row IDs are not consistent between databases, if you intend to re-execute the SQL_REDO or SQL_UNDO statements against a different database than the one against which they were originally executed, then specify the NO_ROWID_IN_STMT option when you start LogMiner.

  • Specify the NO_SQL_DELIMITER option when you start LogMiner.

    This suppresses the semicolon from the reconstructed statements. This is helpful for applications that open a cursor and then execute the reconstructed statements.

Note that if the STATUS field of the V$LOGMNR_CONTENTS view contains the value 2 (invalid sql), then the associated SQL statement cannot be executed.
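For example, to produce statements without ROWID clauses or trailing semicolons, you might start LogMiner as follows (the dictionary option shown is an assumption):

EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
   DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
   DBMS_LOGMNR.NO_ROWID_IN_STMT + -
   DBMS_LOGMNR.NO_SQL_DELIMITER);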

Formatting the Appearance of Returned Data for Readability

Sometimes a query can result in a large number of columns containing reconstructed SQL statements, which can be visually busy and hard to read. LogMiner provides the PRINT_PRETTY_SQL option to address this problem. The PRINT_PRETTY_SQL option to the DBMS_LOGMNR.START_LOGMNR procedure formats the reconstructed SQL statements as follows, which makes them easier to read:

insert into "HR"."JOBS"
 values
    "JOB_ID" = '9782',
    "JOB_TITLE" = 'HR_ENTRY',
    "MIN_SALARY" IS NULL,
    "MAX_SALARY" IS NULL;
  update "HR"."JOBS"
  set
    "JOB_TITLE" = 'FI_ENTRY'
  where
    "JOB_TITLE" = 'HR_ENTRY' and
    ROWID = 'AAAHSeAABAAAY+CAAX';

update "HR"."JOBS"
  set
    "JOB_TITLE" = 'FI_ENTRY'
  where
    "JOB_TITLE" = 'HR_ENTRY' and
    ROWID = 'AAAHSeAABAAAY+CAAX';

delete from "HR"."JOBS"
 where
    "JOB_ID" = '9782' and
    "JOB_TITLE" = 'FI_ENTRY' and
    "MIN_SALARY" IS NULL and
    "MAX_SALARY" IS NULL and
    ROWID = 'AAAHSeAABAAAY+CAAX';

SQL statements that are reconstructed when the PRINT_PRETTY_SQL option is enabled are not executable, because they do not use standard SQL syntax.
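Like the other formatting options, PRINT_PRETTY_SQL is specified when you start LogMiner; a minimal sketch (the dictionary option shown is an assumption):

EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
   DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
   DBMS_LOGMNR.PRINT_PRETTY_SQL);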


See Also:

"Examples Using LogMiner" for a complete example of using the PRINT_PRETTY_SQL option

Reapplying DDL Statements Returned to V$LOGMNR_CONTENTS

Be aware that some DDL statements issued by a user cause Oracle to internally execute one or more other DDL statements. If you want to reapply SQL DDL from the SQL_REDO or SQL_UNDO columns of the V$LOGMNR_CONTENTS view as it was originally applied to the database, then you should not execute statements that were executed internally by Oracle.


Note:

If you execute DML statements that were executed internally by Oracle, then you may corrupt your database. See Step 5 of "Example 4: Using the LogMiner Dictionary in the Redo Log Files" for an example.

To differentiate between DDL statements that were issued by a user and those that were issued internally by Oracle, query the INFO column of V$LOGMNR_CONTENTS. The value of the INFO column indicates whether the DDL was executed by a user or by Oracle.

If you want to reapply SQL DDL as it was originally applied, then you should only re-execute the DDL SQL contained in the SQL_REDO or SQL_UNDO column of V$LOGMNR_CONTENTS if the INFO column contains the value USER_DDL.
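For example, a query such as the following restricts the returned redo to DDL statements that were issued directly by users:

SELECT SQL_REDO FROM V$LOGMNR_CONTENTS
   WHERE OPERATION = 'DDL' AND INFO = 'USER_DDL';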

Calling DBMS_LOGMNR.START_LOGMNR Multiple Times

Even after you have successfully called DBMS_LOGMNR.START_LOGMNR and selected from the V$LOGMNR_CONTENTS view, you can call DBMS_LOGMNR.START_LOGMNR again without ending the current LogMiner session and specify different options and time or SCN ranges. The following list presents reasons why you might want to do this:

The following examples illustrate situations where it might be useful to call DBMS_LOGMNR.START_LOGMNR multiple times.

Example 1   Mining Only a Subset of the Data in the Redo Log Files

Suppose the list of redo log files that LogMiner has to mine includes those generated for an entire week. However, you want to analyze only what happened from 12:00 to 1:00 each day. You could do this most efficiently by:

  1. Calling DBMS_LOGMNR.START_LOGMNR with this time range for Monday.

  2. Selecting changes from the V$LOGMNR_CONTENTS view.

  3. Repeating Steps 1 and 2 for each day of the week.

If the total amount of redo data is large for the week, then this method would make the whole analysis much faster, because only a small subset of each redo log file in the list would be read by LogMiner.
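A minimal sketch of this approach follows. The online catalog is assumed as the dictionary, the date literals are hypothetical (and must match your NLS date format), and the redo log file list is assumed to have been built already with DBMS_LOGMNR.ADD_LOGFILE. Repeat the two statements for each day, changing only the date:

-- Mine only the 12:00 to 1:00 window for one day of the week.
EXECUTE DBMS_LOGMNR.START_LOGMNR( -
   STARTTIME => '06-jan-2003 12:00:00', -
   ENDTIME   => '06-jan-2003 13:00:00', -
   OPTIONS   => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

SELECT SQL_REDO FROM V$LOGMNR_CONTENTS;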

Example 2   Adjusting the Time Range or SCN Range

Suppose you specify a redo log file list and specify a time (or SCN) range when you start LogMiner. When you query the V$LOGMNR_CONTENTS view, you find that only part of the data of interest is included in the time range you specified. You can call DBMS_LOGMNR.START_LOGMNR again to expand the time range by an hour (or adjust the SCN range).

Example 3   Analyzing Redo Log Files As They Arrive at a Remote Database

Suppose you have written an application to analyze changes or to replicate changes from one database to another database. The source database sends its redo log files to the mining database and drops them into an operating system directory. Your application does the following (a minimal sketch of steps 2 and 3 appears after the list):

  1. Adds all redo log files currently in the directory to the redo log file list

  2. Calls DBMS_LOGMNR.START_LOGMNR with appropriate settings and selects from the V$LOGMNR_CONTENTS view

  3. Adds additional redo log files that have newly arrived in the directory

  4. Repeats Steps 2 and 3, indefinitely
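The following sketch shows what steps 2 and 3 might look like for a single iteration. The file name is hypothetical, and the LogMiner dictionary is assumed to have been extracted to the redo log files by the source database:

-- Step 3: add a newly arrived redo log file to the existing list.
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
   LOGFILENAME => '/mining/arrivals/arch_1_101.dbf', -
   OPTIONS => DBMS_LOGMNR.ADDFILE);

-- Step 2: restart LogMiner and select the changes of interest.
EXECUTE DBMS_LOGMNR.START_LOGMNR( -
   OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + -
              DBMS_LOGMNR.COMMITTED_DATA_ONLY);

SELECT SQL_REDO FROM V$LOGMNR_CONTENTS;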

Supplemental Logging

Redo log files are generally used for instance recovery and media recovery. The data needed for such operations is automatically recorded in the redo log files. However, a redo-based application may require that additional columns be logged in the redo log files. The process of logging these additional columns is called supplemental logging.

By default, Oracle Database does not provide any supplemental logging, which means that by default LogMiner is not usable. Therefore, you must enable at least minimal supplemental logging before generating log files which will be analyzed by LogMiner.

Additional columns may be needed, for example, when an application applies reconstructed SQL statements to a database other than the one on which they originally occurred (a ROWID in the reconstructed SQL is meaningless in a different database), or when an application requires the before-image of the whole row rather than just the modified columns.

A supplemental log group is the set of additional columns to be logged when supplemental logging is enabled. There are two types of supplemental log groups that determine when columns in the log group are logged:

  • Unconditional supplemental log groups: the before-images of the specified columns are logged any time a row is updated, regardless of whether the update affected any of the specified columns. These groups are created with the ALWAYS clause.

  • Conditional supplemental log groups: the before-images of the specified columns are logged only if at least one of the columns in the log group is updated.

Supplemental log groups can be system-generated or user-defined.

In addition to the two types of supplemental logging, there are two levels of supplemental logging, as described in the following sections:

Database-Level Supplemental Logging

There are two types of database-level supplemental logging: minimal supplemental logging and identification key logging, as described in the following sections. Minimal supplemental logging does not impose significant overhead on the database generating the redo log files. However, enabling database-wide identification key logging can impose overhead on the database generating the redo log files. Oracle recommends that you at least enable minimal supplemental logging for LogMiner.

Minimal Supplemental Logging

Minimal supplemental logging logs the minimal amount of information needed for LogMiner to identify, group, and merge the redo operations associated with DML changes. It ensures that LogMiner (and any product building on LogMiner technology) has sufficient information to support chained rows and various storage arrangements, such as cluster tables and index-organized tables. To enable minimal supplemental logging, execute the following SQL statement:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

Note:

In Oracle Database release 9.0.1, minimal supplemental logging was the default behavior in LogMiner. In release 9.2 and later, the default is no supplemental logging. Supplemental logging must be specifically enabled.

Database-Level Identification Key Logging

Identification key logging is necessary when redo log files will not be mined at the source database instance, for example, when the redo log files will be mined at a logical standby database.

Using database identification key logging, you can enable database-wide before-image logging for all updates by specifying one or more of the following options to the SQL ALTER DATABASE ADD SUPPLEMENTAL LOG statement:

  • ALL system-generated unconditional supplemental log group

    This option specifies that when a row is updated, all columns of that row (except for LOBs, LONGS, and ADTs) are placed in the redo log file.

    To enable all column logging at the database level, execute the following statement:

    SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
    
  • PRIMARY KEY system-generated unconditional supplemental log group

    This option causes the database to place all columns of a row's primary key in the redo log file whenever a row containing a primary key is updated (even if no value in the primary key has changed).

    If a table does not have a primary key, but has one or more non-null unique index key constraints or index keys, then one of the unique index keys is chosen for logging as a means of uniquely identifying the row being updated.

    If the table has neither a primary key nor a non-null unique index key, then all columns except LONG and LOB are supplementally logged; this is equivalent to specifying ALL supplemental logging for that row. Therefore, Oracle recommends that when you use database-level primary key supplemental logging, all or most tables be defined to have primary or unique index keys.

    To enable primary key logging at the database level, execute the following statement:

    SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
    
  • UNIQUE system-generated conditional supplemental log group

    This option causes the database to place all columns of a row's composite unique key or bitmap index in the redo log file if any column belonging to the composite unique key or bitmap index is modified. The unique key can be due to either a unique constraint or a unique index.

    To enable unique index key and bitmap index logging at the database level, execute the following statement:

    SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
    
  • FOREIGN KEY system-generated conditional supplemental log group

    This option causes the database to place all columns of a row's foreign key in the redo log file if any column belonging to the foreign key is modified.

    To enable foreign key logging at the database level, execute the following SQL statement:

    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;
    

    Note:

    Regardless of whether identification key logging is enabled, the SQL statements returned by LogMiner always contain the ROWID clause. You can filter out the ROWID clause by using the NO_ROWID_IN_STMT option to the DBMS_LOGMNR.START_LOGMNR procedure call. See "Formatting Reconstructed SQL Statements for Re-execution" for details.

Keep the following in mind when you use identification key logging:

  • If the database is open when you enable identification key logging, then all DML cursors in the cursor cache are invalidated. This can affect performance until the cursor cache is repopulated.

  • When you enable identification key logging at the database level, minimal supplemental logging is enabled implicitly.

  • Supplemental logging statements are cumulative. If you issue the following SQL statements, then both primary key and unique key supplemental logging is enabled:

    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
    

Disabling Database-Level Supplemental Logging

You disable database-level supplemental logging using the SQL ALTER DATABASE statement with the DROP SUPPLEMENTAL LOGGING clause. You can drop supplemental logging attributes incrementally. For example, suppose you issued the following SQL statements, in the following order:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;

The statements would have the following effects:

  • After the first statement, primary key supplemental logging is enabled.

  • After the second statement, primary key and unique key supplemental logging are enabled.

  • After the third statement, only unique key supplemental logging is enabled.

  • After the fourth statement, supplemental logging is still not completely disabled; the statement fails and the following error is returned: ORA-32589: unable to drop minimal supplemental logging.

To disable all database supplemental logging, you must first disable any identification key logging that has been enabled, then disable minimal supplemental logging. The following example shows the correct order:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;

Dropping minimal supplemental log data is allowed only if no other variant of database-level supplemental logging is enabled.

Table-Level Supplemental Logging

Table-level supplemental logging specifies, at the table level, which columns are to be supplementally logged. You can use identification key logging or user-defined conditional and unconditional supplemental log groups to log supplemental information, as described in the following sections.

Table-Level Identification Key Logging

Identification key logging at the table level offers the same options as those provided at the database level: all, primary key, foreign key, and unique key. However, when you specify identification key logging at the table level, only the specified table is affected. For example, if you enter the following SQL statement (specifying database-level supplemental logging), then whenever a column in any database table is changed, the entire row containing that column (except columns for LOBs, LONGs, and ADTs) will be placed in the redo log file:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

However, if you enter the following SQL statement (specifying table-level supplemental logging) instead, then only when a column in the employees table is changed will the entire row (except for LOBs, LONGs, and ADTs) of the table be placed in the redo log file. If a column changes in the departments table, then only the changed column will be placed in the redo log file.

ALTER TABLE HR.EMPLOYEES ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

Keep the following in mind when you use table-level identification key logging:

  • If the database is open when you enable identification key logging on a table, then all DML cursors for that table in the cursor cache are invalidated. This can affect performance until the cursor cache is repopulated.

  • Supplemental logging statements are cumulative. If you issue the following SQL statements, then both primary key and unique index key table-level supplemental logging is enabled:

    ALTER TABLE HR.EMPLOYEES 
      ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
    ALTER TABLE HR.EMPLOYEES 
      ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
    

See "Database-Level Identification Key Logging" for a description of each of the identification key logging options.

Table-Level User-Defined Supplemental Log Groups

In addition to table-level identification key logging, Oracle supports user-defined supplemental log groups. With user-defined supplemental log groups, you can specify which columns are supplementally logged. You can specify conditional or unconditional log groups, as follows:

  • User-defined unconditional log groups

    To enable supplemental logging that uses user-defined unconditional log groups, use the ALWAYS clause as shown in the following example:

    ALTER TABLE HR.EMPLOYEES
       ADD SUPPLEMENTAL LOG GROUP emp_parttime (EMPLOYEE_ID, LAST_NAME, 
       DEPARTMENT_ID) ALWAYS;
    

    This creates a log group named emp_parttime on the hr.employees table that consists of the columns employee_id, last_name, and department_id. These columns will be logged every time an UPDATE statement is executed on the hr.employees table, regardless of whether the update affected these columns. (If you want to have the entire row image logged any time an update was made, then use table-level ALL identification key logging, as described previously).


    Note:

    LOB, LONG, and ADT columns cannot be supplementally logged.

  • User-defined conditional supplemental log groups

    To enable supplemental logging that uses user-defined conditional log groups, omit the ALWAYS clause from the SQL ALTER TABLE statement, as shown in the following example:

    ALTER TABLE HR.EMPLOYEES
       ADD SUPPLEMENTAL LOG GROUP emp_fulltime (EMPLOYEE_ID, LAST_NAME, 
       DEPARTMENT_ID);
    

    This creates a log group named emp_fulltime on table hr.employees. Just like the previous example, it consists of the columns employee_id, last_name, and department_id. But because the ALWAYS clause was omitted, before-images of the columns will be logged only if at least one of the columns is updated.

For both unconditional and conditional user-defined supplemental log groups, you can explicitly specify that a column in the log group be excluded from supplemental logging by specifying the NO LOG option. When you specify a log group and use the NO LOG option, you must specify at least one column in the log group without the NO LOG option, as shown in the following example:

ALTER TABLE HR.EMPLOYEES
   ADD SUPPLEMENTAL LOG GROUP emp_parttime(
   DEPARTMENT_ID NO LOG, EMPLOYEE_ID);

This enables you to associate this column with other columns in the named supplemental log group such that any modification to the NO LOG column causes the other columns in the supplemental log group to be placed in the redo log file. This might be useful, for example, if you want to log certain columns in a group if a LONG column changes. You cannot supplementally log the LONG column itself; however, you can use changes to that column to trigger supplemental logging of other columns in the same row.
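For example, the following sketch assumes a hypothetical LONG column named JOB_HISTORY_NOTES on the hr.employees table. Because the column is specified with the NO LOG option, it is never supplementally logged itself, but a change to it causes EMPLOYEE_ID and LAST_NAME to be placed in the redo log file:

-- JOB_HISTORY_NOTES is a hypothetical LONG column used only to trigger logging.
ALTER TABLE HR.EMPLOYEES
   ADD SUPPLEMENTAL LOG GROUP emp_long_trigger(
   JOB_HISTORY_NOTES NO LOG, EMPLOYEE_ID, LAST_NAME);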

Usage Notes for User-Defined Supplemental Log Groups

Keep the following in mind when you specify user-defined supplemental log groups:

  • A column can belong to more than one supplemental log group. However, the before-image of the columns gets logged only once.

  • If you specify the same columns to be logged both conditionally and unconditionally, then the columns are logged unconditionally.

Tracking DDL Statements in the LogMiner Dictionary

LogMiner automatically builds its own internal dictionary from the LogMiner dictionary that you specify when you start LogMiner (either an online catalog, a dictionary in the redo log files, or a flat file). This dictionary provides a snapshot of the database objects and their definitions.

If your LogMiner dictionary is in the redo log files or is a flat file, then you can use the DDL_DICT_TRACKING option to the PL/SQL DBMS_LOGMNR.START_LOGMNR procedure to direct LogMiner to track data definition language (DDL) statements. DDL tracking enables LogMiner to successfully track structural changes made to a database object, such as adding or dropping columns from a table. For example:

EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
   DBMS_LOGMNR.DDL_DICT_TRACKING + DBMS_LOGMNR.DICT_FROM_REDO_LOGS);

See "Example 5: Tracking DDL Statements in the Internal Dictionary" for a complete example.

With this option set, LogMiner applies any DDL statements seen in the redo log files to its internal dictionary.


Note:

In general, it is a good idea to keep supplemental logging and the DDL tracking feature enabled, because if they are not enabled and a DDL event occurs, then LogMiner returns some of the redo data as binary data. Also, a metadata version mismatch could occur.

When you enable DDL_DICT_TRACKING, data manipulation language (DML) operations performed on tables created after the LogMiner dictionary was extracted can be shown correctly.

For example, if a table employees is updated through two successive DDL operations such that column gender is added in one operation, and column commission_pct is dropped in the next, then LogMiner will keep versioned information for employees for each of these changes. This means that LogMiner can successfully mine redo log files that are from before and after these DDL changes, and no binary data will be presented for the SQL_REDO or SQL_UNDO columns.

Because LogMiner automatically assigns versions to the database metadata, it will detect and notify you of any mismatch between its internal dictionary and the dictionary in the redo log files. If LogMiner detects a mismatch, then it generates binary data in the SQL_REDO column of the V$LOGMNR_CONTENTS view, the INFO column contains the string "Dictionary Version Mismatch", and the STATUS column will contain the value 2.


Note:

It is important to understand that the LogMiner internal dictionary is not the same as the LogMiner dictionary contained in a flat file, in redo log files, or in the online catalog. LogMiner does update its internal dictionary, but it does not update the dictionary that is contained in a flat file, in redo log files, or in the online catalog.

The following list describes the requirements for specifying the DDL_DICT_TRACKING option with the DBMS_LOGMNR.START_LOGMNR procedure.

  • The DDL_DICT_TRACKING option is not valid with the DICT_FROM_ONLINE_CATALOG option.

  • The DDL_DICT_TRACKING option requires that the database be open.

  • Supplemental logging must be enabled database-wide, or log groups must have been created for the tables of interest.

DDL_DICT_TRACKING and Supplemental Logging Settings

Note the following interactions that occur when various settings of dictionary tracking and supplemental logging are combined:

  • If DDL_DICT_TRACKING is enabled, but supplemental logging is not enabled and:

    • A DDL transaction is encountered in the redo log file, then a query of V$LOGMNR_CONTENTS will terminate with the ORA-01347 error.

    • A DML transaction is encountered in the redo log file, then LogMiner will not assume that the current version of the table (underlying the DML) in its dictionary is correct, and columns in V$LOGMNR_CONTENTS will be set as follows:

      • The SQL_REDO column will contain binary data.

      • The STATUS column will contain a value of 2 (which indicates that the SQL is not valid).

      • The INFO column will contain the string 'Dictionary Mismatch'.

  • If DDL_DICT_TRACKING is not enabled and supplemental logging is not enabled, and the columns referenced in a DML operation match the columns in the LogMiner dictionary, then LogMiner assumes that the latest version in its dictionary is correct, and columns in V$LOGMNR_CONTENTS will be set as follows:

    • LogMiner will use the definition of the object in its dictionary to generate values for the SQL_REDO and SQL_UNDO columns.

    • The STATUS column will contain a value of 3 (which indicates that the SQL is not guaranteed to be accurate).

    • The INFO column will contain the string 'no supplemental log data found'.

  • If DDL_DICT_TRACKING is not enabled and supplemental logging is not enabled and there are more modified columns in the redo log file for a table than the LogMiner dictionary definition for the table defines, then:

    • The SQL_REDO and SQL_UNDO columns will contain the string 'Dictionary Version Mismatch'.

    • The STATUS column will contain a value of 2 (which indicates that the SQL is not valid).

    • The INFO column will contain the string 'Dictionary Mismatch'.

    Also be aware that it is possible to get unpredictable behavior if the dictionary definition of a column indicates one type but the column is really another type.

DDL_DICT_TRACKING and Specified Time or SCN Ranges

Because LogMiner must not miss a DDL statement if it is to ensure the consistency of its dictionary, LogMiner may start reading redo log files before your requested starting time or SCN (as specified with DBMS_LOGMNR.START_LOGMNR) when the DDL_DICT_TRACKING option is enabled. The actual time or SCN at which LogMiner starts reading redo log files is referred to as the required starting time or the required starting SCN.

No missing redo log files (based on sequence numbers) are allowed from the required starting time or the required starting SCN.

LogMiner determines where it will start reading redo log data as follows:

  • After the dictionary is loaded, the first time that you call DBMS_LOGMNR.START_LOGMNR, LogMiner begins reading as determined by one of the following, whichever causes it to begin earlier:

    • Your requested starting time or SCN value

    • The commit SCN of the dictionary dump

  • On subsequent calls to DBMS_LOGMNR.START_LOGMNR, LogMiner begins reading as determined by one of the following, whichever causes it to begin earliest:

    • Your requested starting time or SCN value

    • The start of the earliest DDL transaction where the COMMIT statement has not yet been read by LogMiner

    • The highest SCN read by LogMiner

The following scenario helps illustrate this:

Suppose you create a redo log file list containing five redo log files. Assume that a dictionary is contained in the first redo log file, and the changes that you have indicated you want to see (using DBMS_LOGMNR.START_LOGMNR) are recorded in the third redo log file. You then do the following:

  1. Call DBMS_LOGMNR.START_LOGMNR. LogMiner will read:

    1. The first log file to load the dictionary

    2. The second redo log file to pick up any possible DDLs contained within it

    3. The third log file to retrieve the data of interest

  2. Call DBMS_LOGMNR.START_LOGMNR again with the same requested range.

    LogMiner will begin with redo log file 3; it no longer needs to read redo log file 2, because it has already processed any DDL statements contained within it.

  3. Call DBMS_LOGMNR.START_LOGMNR again, this time specifying parameters that require data to be read from redo log file 5.

    LogMiner will start reading from redo log file 4 to pick up any DDL statements that may be contained within it.

Query the REQUIRED_START_DATE or the REQUIRED_START_SCN columns of the V$LOGMNR_PARAMETERS view to see where LogMiner will actually start reading. Regardless of where LogMiner starts reading, only rows in your requested range will be returned from the V$LOGMNR_CONTENTS view.
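For example (a minimal sketch using the columns named above):

SELECT REQUIRED_START_DATE, REQUIRED_START_SCN
   FROM V$LOGMNR_PARAMETERS;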

Accessing LogMiner Operational Information in Views

LogMiner operational information (as opposed to redo data) is contained in the following views. You can use SQL to query them as you would any other view.

Querying V$LOGMNR_LOGS

You can query the V$LOGMNR_LOGS view to determine which redo log files have been manually or automatically added to the list of redo log files for LogMiner to analyze. This view contains one row for each redo log file. It provides valuable information about each of the redo log files including file name, sequence #, SCN and time ranges, and whether it contains all or part of the LogMiner dictionary.
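For example, the following sketch lists the redo log files in the current LogMiner list along with their time ranges and status (only columns referred to in this section are selected):

SELECT FILENAME, LOW_TIME, HIGH_TIME, STATUS, INFO
   FROM V$LOGMNR_LOGS;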

After a successful call to DBMS_LOGMNR.START_LOGMNR, the STATUS column of the V$LOGMNR_LOGS view contains one of the following values:

  • 0

    Indicates that the redo log file will be processed during a query of the V$LOGMNR_CONTENTS view.

  • 1

    Indicates that this will be the first redo log file to be processed by LogMiner during a select operation against the V$LOGMNR_CONTENTS view.

  • 2

    Indicates that the redo log file has been pruned and therefore will not be processed by LogMiner during a query of the V$LOGMNR_CONTENTS view. It has been pruned because it is not needed to satisfy your requested time or SCN range.

  • 4

    Indicates that a redo log file (based on sequence number) is missing from the LogMiner redo log file list.

The V$LOGMNR_LOGS view contains a row for each redo log file that is missing from the list, as follows:

  • The FILENAME column will contain the consecutive range of sequence numbers and total SCN range gap.

    For example: 'Missing log file(s) for thread number 1, sequence number(s) 100 to 102'.

  • The INFO column will contain the string 'MISSING_LOGFILE'.

Information about files missing from the redo log file list can be useful for the following reasons:

  • The DDL_DICT_TRACKING and CONTINUOUS_MINE options that can be specified when you call DBMS_LOGMNR.START_LOGMNR will not allow redo log files to be missing from the LogMiner redo log file list for the requested time or SCN range. If a call to DBMS_LOGMNR.START_LOGMNR fails, then you can query the STATUS column in the V$LOGMNR_LOGS view to determine which redo log files are missing from the list. You can then find and manually add these redo log files and attempt to call DBMS_LOGMNR.START_LOGMNR again.

  • Although all other options that can be specified when you call DBMS_LOGMNR.START_LOGMNR allow files to be missing from the LogMiner redo log file list, you may not want to have missing files. You can query the V$LOGMNR_LOGS view before querying the V$LOGMNR_CONTENTS view to ensure that all required files are in the list. If the list is left with missing files and you query the V$LOGMNR_CONTENTS view, then a row is returned in V$LOGMNR_CONTENTS with the following column values:

    • In the OPERATION column, a value of 'MISSING_SCN'

    • In the STATUS column, a value of 1291

    • In the INFO column, a string indicating the missing SCN range (for example, 'Missing SCN 100 - 200')

Querying Views for Supplemental Logging Settings

You can query several views to determine the current settings for supplemental logging, as described in the following list (a sample query appears after the list):

  • V$DATABASE view

    • SUPPLEMENTAL_LOG_DATA_FK column

      This column contains one of the following values:

      • NO - if database-level identification key logging with the FOREIGN KEY option is not enabled

      • YES - if database-level identification key logging with the FOREIGN KEY option is enabled

    • SUPPLEMENTAL_LOG_DATA_ALL column

      This column contains one of the following values:

      • NO - if database-level identification key logging with the ALL option is not enabled

      • YES - if database-level identification key logging with the ALL option is enabled

    • SUPPLEMENTAL_LOG_DATA_UI column

      • NO - if database-level identification key logging with the UNIQUE option is not enabled

      • YES - if database-level identification key logging with the UNIQUE option is enabled

    • SUPPLEMENTAL_LOG_DATA_MIN column

      This column contains one of the following values:

      • NO - if no database-level supplemental logging is enabled

      • IMPLICIT - if minimal supplemental logging is enabled because one or more database-level identification key logging options are enabled

      • YES - if minimal supplemental logging is enabled because the SQL ALTER DATABASE ADD SUPPLEMENTAL LOG DATA statement was issued

  • DBA_LOG_GROUPS, ALL_LOG_GROUPS, and USER_LOG_GROUPS views

    • ALWAYS column

      This column contains one of the following values:

      • ALWAYS - indicates that the columns in this log group will be supplementally logged if any column in the associated row is updated

      • CONDITIONAL - indicates that the columns in this group will be supplementally logged only if a column in the log group is updated

    • GENERATED column

      This column contains one of the following values:

      • GENERATED NAME - if the LOG_GROUP name was system-generated

      • USER NAME - if the LOG_GROUP name was user-defined

    • LOG_GROUP_TYPE column

      This column contains one of the following values to indicate the type of logging defined for this log group. USER LOG GROUP indicates that the log group was user-defined (as opposed to system-generated).

      • ALL COLUMN LOGGING

      • FOREIGN KEY LOGGING

      • PRIMARY KEY LOGGING

      • UNIQUE KEY LOGGING

      • USER LOG GROUP

  • DBA_LOG_GROUP_COLUMNS, ALL_LOG_GROUP_COLUMNS, and USER_LOG_GROUP_COLUMNS views

    • The LOGGING_PROPERTY column

      This column contains one of the following values:

      • LOG - indicates that this column in the log group will be supplementally logged

      • NO LOG - indicates that this column in the log group will not be supplementally logged
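The following sketch, for example, checks the database-level settings in V$DATABASE and then lists the log groups and log group columns defined on tables owned by HR (the HR owner and the EMPLOYEES table name are hypothetical; adjust them for your schema):

-- Database-level supplemental logging settings.
SELECT SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_UI,
       SUPPLEMENTAL_LOG_DATA_FK, SUPPLEMENTAL_LOG_DATA_ALL
   FROM V$DATABASE;

-- Supplemental log groups defined on tables owned by HR.
SELECT LOG_GROUP_NAME, TABLE_NAME, ALWAYS, GENERATED, LOG_GROUP_TYPE
   FROM DBA_LOG_GROUPS
   WHERE OWNER = 'HR';

-- Columns in the log groups defined on HR.EMPLOYEES.
SELECT LOG_GROUP_NAME, COLUMN_NAME, LOGGING_PROPERTY
   FROM DBA_LOG_GROUP_COLUMNS
   WHERE OWNER = 'HR' AND TABLE_NAME = 'EMPLOYEES';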

Steps in a Typical LogMiner Session

This section describes the steps in a typical LogMiner session. Each step is described in its own subsection.

  1. Enable Supplemental Logging

  2. Extract a LogMiner Dictionary (unless you plan to use the online catalog)

  3. Specify Redo Log Files for Analysis

  4. Start LogMiner

  5. Query V$LOGMNR_CONTENTS

  6. End the LogMiner Session

To run LogMiner, you use the DBMS_LOGMNR PL/SQL package. Additionally, you might also use the DBMS_LOGMNR_D package if you choose to extract a LogMiner dictionary rather than use the online catalog.

The DBMS_LOGMNR package contains the procedures used to initialize and run LogMiner, including interfaces to specify names of redo log files, filter criteria, and session characteristics. The DBMS_LOGMNR_D package queries the database dictionary tables of the current database to create a LogMiner dictionary file.

The LogMiner PL/SQL packages are owned by the SYS schema. Therefore, if you are not connected as user SYS, then you must include the SYS schema name in your calls (for example, EXECUTE SYS.DBMS_LOGMNR.END_LOGMNR) and you must have been granted the EXECUTE_CATALOG_ROLE role.

Enable Supplemental Logging

Enable the type of supplemental logging you want to use. At the very least, you must enable minimal supplemental logging, as follows:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

See "Supplemental Logging" for more information.

Extract a LogMiner Dictionary

To use LogMiner, you must supply it with a dictionary by doing one of the following: specify use of the online catalog with the DICT_FROM_ONLINE_CATALOG option when you start LogMiner, extract database dictionary information to the redo log files, or extract database dictionary information to a flat file.
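For the two extraction options, a minimal sketch using the DBMS_LOGMNR_D.BUILD procedure follows. The flat-file name and directory are hypothetical, and extracting to a flat file requires the UTL_FILE_DIR initialization parameter to be set to the target directory:

-- Extract the dictionary to the redo log files (the database must be open
-- and in ARCHIVELOG mode):
EXECUTE DBMS_LOGMNR_D.BUILD( -
   OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

-- Or extract the dictionary to a flat file:
EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', -
   '/oracle/database/', -
   DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);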

Specify Redo Log Files for Analysis

Before you can start LogMiner, you must specify the redo log files that you want to analyze. To do so, execute the DBMS_LOGMNR.ADD_LOGFILE procedure, as demonstrated in the following steps. You can add and remove redo log files in any order.


Note:

If you will be mining in the database instance that is generating the redo log files, then you only need to specify the CONTINUOUS_MINE option and one of the following when you start LogMiner:
  • The STARTSCN parameter

  • The STARTTIME parameter

For more information, see "Redo Log File Options".


  1. Use SQL*Plus to start an Oracle instance, with the database either mounted or unmounted. For example, enter the STARTUP statement at the SQL prompt:

    STARTUP
    
  2. Create a list of redo log files. Specify the NEW option of the DBMS_LOGMNR.ADD_LOGFILE PL/SQL procedure to signal that this is the beginning of a new list. For example, enter the following to specify the /oracle/logs/log1.f redo log file:

    EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
       LOGFILENAME => '/oracle/logs/log1.f', -
       OPTIONS => DBMS_LOGMNR.NEW);
    
  3. If desired, add more redo log files by specifying the ADDFILE option of the DBMS_LOGMNR.ADD_LOGFILE PL/SQL procedure. For example, enter the following to add the /oracle/logs/log2.f redo log file:

    EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
       LOGFILENAME => '/oracle/logs/log2.f', -
       OPTIONS => DBMS_LOGMNR.ADDFILE);
    

    The OPTIONS parameter is optional when you are adding additional redo log files. For example, you could simply enter the following:

    EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
       LOGFILENAME=>'/oracle/logs/log2.f');
    
  4. If desired, remove redo log files by using the DBMS_LOGMNR.REMOVE_LOGFILE PL/SQL procedure. For example, enter the following to remove the /oracle/logs/log2.f redo log file:

    EXECUTE DBMS_LOGMNR.REMOVE_LOGFILE( -
       LOGFILENAME => '/oracle/logs/log2.f');
    

Start LogMiner

After you have created a LogMiner dictionary file and specified which redo log files to analyze, you must start LogMiner. Take the following steps:

  1. Execute the DBMS_LOGMNR.START_LOGMNR procedure to start LogMiner.

    Oracle recommends that you specify a LogMiner dictionary option. If you do not, then LogMiner cannot translate internal object identifiers and datatypes to object names and external data formats. Therefore, it would return internal object IDs and present data as binary data. Additionally, the MINE_VALUE and COLUMN_PRESENT functions cannot be used without a dictionary.

    If you are specifying the name of a flat file LogMiner dictionary, then you must supply a fully qualified file name for the dictionary file. For example, to start LogMiner using /oracle/database/dictionary.ora, issue the following statement:

    EXECUTE DBMS_LOGMNR.START_LOGMNR( -
       DICTFILENAME =>'/oracle/database/dictionary.ora');
    

    If you are not specifying a flat file dictionary name, then use the OPTIONS parameter to specify either the DICT_FROM_REDO_LOGS or DICT_FROM_ONLINE_CATALOG option.

    If you specify DICT_FROM_REDO_LOGS, then LogMiner expects to find a dictionary in the redo log files that you specified with the DBMS_LOGMNR.ADD_LOGFILE procedure. To determine which redo log files contain a dictionary, look at the V$ARCHIVED_LOG view. See "Extracting a LogMiner Dictionary to the Redo Log Files" for an example.


    Note:

    If you add additional redo log files after LogMiner has been started, you must restart LogMiner. LogMiner will not retain options that were included in the previous call to DBMS_LOGMNR.START_LOGMNR; you must respecify the options you want to use. However, LogMiner will retain the dictionary specification from the previous call if you do not specify a dictionary in the current call to DBMS_LOGMNR.START_LOGMNR.

    For more information about the DICT_FROM_ONLINE_CATALOG option, see "Using the Online Catalog".

  2. Optionally, you can filter your query by time or by SCN. See "Filtering Data by Time" or "Filtering Data by SCN".

  3. You can also use the OPTIONS parameter to specify additional characteristics of your LogMiner session. For example, you might decide to use the online catalog as your LogMiner dictionary and to have only committed transactions shown in the V$LOGMNR_CONTENTS view, as follows:

    EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
       DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
       DBMS_LOGMNR.COMMITTED_DATA_ONLY);
    

    For more information about DBMS_LOGMNR.START_LOGMNR options, see Oracle Database PL/SQL Packages and Types Reference.

    You can execute the DBMS_LOGMNR.START_LOGMNR procedure multiple times, specifying different options each time. This can be useful, for example, if you did not get the desired results from a query of V$LOGMNR_CONTENTS, and want to restart LogMiner with different options. Unless you need to respecify the LogMiner dictionary, you do not need to add redo log files if they were already added with a previous call to DBMS_LOGMNR.START_LOGMNR.

Query V$LOGMNR_CONTENTS

At this point, LogMiner is started and you can perform queries against the V$LOGMNR_CONTENTS view. See "Filtering and Formatting Data Returned to V$LOGMNR_CONTENTS" for examples of this.

End the LogMiner Session

To properly end a LogMiner session, use the DBMS_LOGMNR.END_LOGMNR PL/SQL procedure, as follows:

EXECUTE DBMS_LOGMNR.END_LOGMNR;

This procedure closes all the redo log files and allows all the database and system resources allocated by LogMiner to be released.

If this procedure is not executed, then LogMiner retains all its allocated resources until the end of the Oracle session in which it was invoked. It is particularly important to use this procedure to end the LogMiner session if either the DDL_DICT_TRACKING option or the DICT_FROM_REDO_LOGS option was used.

Examples Using LogMiner

This section provides several examples of using LogMiner in each of the following general categories:

Examples of Mining by Explicitly Specifying the Redo Log Files of Interest

The following examples demonstrate how to use LogMiner when you know which redo log files contain the data of interest. This section contains the following list of examples; these examples are best read sequentially, because each example builds on the example or examples that precede it:

The SQL output formatting may be different on your display than that shown in these examples.

Example 1: Finding All Modifications in the Last Archived Redo Log File

The easiest way to examine the modification history of a database is to mine at the source database and use the online catalog to translate the redo log files. This example shows how to do the simplest analysis using LogMiner.

This example finds all modifications that are contained in the last archived redo log generated by the database (assuming that the database is not an Oracle Real Application Clusters (Oracle RAC) database).

Step 1   Determine which redo log file was most recently archived.

This example assumes that you know you want to mine the redo log file that was most recently archived.

SELECT NAME FROM V$ARCHIVED_LOG
   WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);

NAME                            
-------------------------------------------
/usr/oracle/data/db1arch_1_16_482701534.dbf
Step 2   Specify the list of redo log files to be analyzed.

Specify the redo log file that was returned by the query in Step 1. The list will consist of one redo log file.

EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
  LOGFILENAME => '/usr/oracle/data/db1arch_1_16_482701534.dbf', -
  OPTIONS => DBMS_LOGMNR.NEW);
Step 3   Start LogMiner.

Start LogMiner and specify the dictionary to use.

EXECUTE DBMS_LOGMNR.START_LOGMNR( -
   OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
Step 4   Query the V$LOGMNR_CONTENTS view.

Note that there are four transactions (two of them were committed within the redo log file being analyzed, and two were not). The output shows the DML statements in the order in which they were executed; thus transactions interleave among themselves.

SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' ||  XIDSQN) AS XID, 
   SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE username IN ('HR', 'OE');

USR    XID          SQL_REDO                        SQL_UNDO
----   ---------  ----------------------------------------------------
HR     1.11.1476  set transaction read write;

HR     1.11.1476  insert into "HR"."EMPLOYEES"(     delete from "HR"."EMPLOYEES" 
                  "EMPLOYEE_ID","FIRST_NAME",       where "EMPLOYEE_ID" = '306'
                  "LAST_NAME","EMAIL",              and "FIRST_NAME" = 'Nandini'
                  "PHONE_NUMBER","HIRE_DATE",       and "LAST_NAME" = 'Shastry'
                  "JOB_ID","SALARY",                and "EMAIL" = 'NSHASTRY'
                  "COMMISSION_PCT","MANAGER_ID",    and "PHONE_NUMBER" = '1234567890'
                  "DEPARTMENT_ID") values           and "HIRE_DATE" = TO_DATE('10-JAN-2003
                  ('306','Nandini','Shastry',       13:34:43', 'dd-mon-yyyy hh24:mi:ss') 
                  'NSHASTRY', '1234567890',         and "JOB_ID" = 'HR_REP' and 
                  TO_DATE('10-jan-2003 13:34:43',   "SALARY" = '120000' and 
                  'dd-mon-yyyy hh24:mi:ss'),         "COMMISSION_PCT" = '.05' and
                  'HR_REP','120000', '.05',         "DEPARTMENT_ID" = '10' and
                  '105','10');                      ROWID = 'AAAHSkAABAAAY6rAAO';
     
OE     1.1.1484   set transaction read write;

OE     1.1.1484   update "OE"."PRODUCT_INFORMATION"  update "OE"."PRODUCT_INFORMATION" 
                  set "WARRANTY_PERIOD" =            set "WARRANTY_PERIOD" = 
                  TO_YMINTERVAL('+05-00') where      TO_YMINTERVAL('+01-00') where
                  "PRODUCT_ID" = '1799' and          "PRODUCT_ID" = '1799' and
                  "WARRANTY_PERIOD" =                "WARRANTY_PERIOD" = 
                  TO_YMINTERVAL('+01-00') and        TO_YMINTERVAL('+05-00') and
                  ROWID = 'AAAHTKAABAAAY9mAAB';      ROWID = 'AAAHTKAABAAAY9mAAB'; 
                                                                                
OE     1.1.1484   update "OE"."PRODUCT_INFORMATION"  update "OE"."PRODUCT_INFORMATION"
                  set "WARRANTY_PERIOD" =            set "WARRANTY_PERIOD" =
                  TO_YMINTERVAL('+05-00') where      TO_YMINTERVAL('+01-00') where
                  "PRODUCT_ID" = '1801' and          "PRODUCT_ID" = '1801' and
                  "WARRANTY_PERIOD" =                "WARRANTY_PERIOD" = 
                  TO_YMINTERVAL('+01-00') and        TO_YMINTERVAL('+05-00') and
                  ROWID = 'AAAHTKAABAAAY9mAAC';      ROWID ='AAAHTKAABAAAY9mAAC';

HR     1.11.1476  insert into "HR"."EMPLOYEES"(     delete from "HR"."EMPLOYEES"
                  "EMPLOYEE_ID","FIRST_NAME",       "EMPLOYEE_ID" = '307' and 
                  "LAST_NAME","EMAIL",              "FIRST_NAME" = 'John' and
                  "PHONE_NUMBER","HIRE_DATE",       "LAST_NAME" = 'Silver' and
                  "JOB_ID","SALARY",                "EMAIL" = 'JSILVER' and 
                  "COMMISSION_PCT","MANAGER_ID",    "PHONE_NUMBER" = '5551112222'
                  "DEPARTMENT_ID") values           and "HIRE_DATE" = TO_DATE('10-jan-2003
                  ('307','John','Silver',           13:41:03', 'dd-mon-yyyy hh24:mi:ss') 
                   'JSILVER', '5551112222',         and "JOB_ID" ='105' and "DEPARTMENT_ID" 
                  TO_DATE('10-jan-2003 13:41:03',   = '50' and ROWID = 'AAAHSkAABAAAY6rAAP'; 
                  'dd-mon-yyyy hh24:mi:ss'),
                  'SH_CLERK','110000', '.05',
                  '105','50');                

OE     1.1.1484   commit;

HR     1.15.1481   set transaction read write;

HR     1.15.1481  delete from "HR"."EMPLOYEES"      insert into "HR"."EMPLOYEES"(
                  where "EMPLOYEE_ID" = '205' and   "EMPLOYEE_ID","FIRST_NAME",
                  "FIRST_NAME" = 'Shelley' and      "LAST_NAME","EMAIL","PHONE_NUMBER",
                  "LAST_NAME" = 'Higgins' and       "HIRE_DATE", "JOB_ID","SALARY",
                  "EMAIL" = 'SHIGGINS' and          "COMMISSION_PCT","MANAGER_ID",
                  "PHONE_NUMBER" = '515.123.8080'   "DEPARTMENT_ID") values
                  and "HIRE_DATE" = TO_DATE(        ('205','Shelley','Higgins',
                  '07-jun-1994 10:05:01',           and     'SHIGGINS','515.123.8080',
                  'dd-mon-yyyy hh24:mi:ss')         TO_DATE('07-jun-1994 10:05:01',
                  and "JOB_ID" = 'AC_MGR'           'dd-mon-yyyy hh24:mi:ss'),
                  and "SALARY"= '12000'            'AC_MGR','12000',NULL,'101','110'); 
                  and "COMMISSION_PCT" IS NULL 
                  and "MANAGER_ID" 
                  = '101' and "DEPARTMENT_ID" = 
                  '110' and ROWID = 
                  'AAAHSkAABAAAY6rAAM';


OE     1.8.1484   set transaction read write;

OE     1.8.1484   update "OE"."PRODUCT_INFORMATION"  update "OE"."PRODUCT_INFORMATION"
                  set "WARRANTY_PERIOD" =            set "WARRANTY_PERIOD" = 
                  TO_YMINTERVAL('+12-06') where      TO_YMINTERVAL('+20-00') where
                  "PRODUCT_ID" = '2350' and          "PRODUCT_ID" = '2350' and
                  "WARRANTY_PERIOD" =                "WARRANTY_PERIOD" =
                  TO_YMINTERVAL('+20-00') and        TO_YMINTERVAL('+20-00') and
                  ROWID = 'AAAHTKAABAAAY9tAAD';       ROWID ='AAAHTKAABAAAY9tAAD'; 

HR     1.11.1476  commit;
Step 5   End the LogMiner session.
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

Example 2: Grouping DML Statements into Committed Transactions

As shown in "Example 1: Finding All Modifications in the Last Archived Redo Log File", by default LogMiner displays all modifications it finds in the redo log files that it analyzes, regardless of whether the transaction was committed. In addition, LogMiner shows modifications in the same order in which they were executed. Because DML statements that belong to the same transaction are not grouped together, visual inspection of the output can be difficult. Although you can use SQL to group transactions, LogMiner provides an easier way: the COMMITTED_DATA_ONLY option. In this example, the latest archived redo log file is analyzed again, but only committed transactions are returned.

Step 1   Determine which redo log file was most recently archived by the database.

This example assumes that you know you want to mine the redo log file that was most recently archived.

SELECT NAME FROM V$ARCHIVED_LOG
   WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);

NAME                            
-------------------------------------------
/usr/oracle/data/db1arch_1_16_482701534.dbf
Step 2   Specify the list of redo log files to be analyzed.

Specify the redo log file that was returned by the query in Step 1. The list will consist of one redo log file.

EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
   LOGFILENAME => '/usr/oracle/data/db1arch_1_16_482701534.dbf', -
   OPTIONS => DBMS_LOGMNR.NEW);
Step 3   Start LogMiner.

Start LogMiner by specifying the dictionary to use and the COMMITTED_DATA_ONLY option.

EXECUTE DBMS_LOGMNR.START_LOGMNR( -
   OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
   DBMS_LOGMNR.COMMITTED_DATA_ONLY); 
Step 4   Query the V$LOGMNR_CONTENTS view.

Although transaction 1.11.1476 was started before transaction 1.1.1484 (as revealed in "Example 1: Finding All Modifications in the Last Archived Redo Log File"), it committed after transaction 1.1.1484 committed. In this example, therefore, transaction 1.1.1484 is shown in its entirety before transaction 1.11.1476. The two transactions that did not commit within the redo log file being analyzed are not returned.

SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' ||  XIDSQN) AS XID, SQL_REDO, 
   SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE username IN ('HR', 'OE');
USR    XID          SQL_REDO                        SQL_UNDO
----   ---------    ------------------------------- ---------------------------------
     
OE     1.1.1484   set transaction read write;

OE     1.1.1484   update "OE"."PRODUCT_INFORMATION"  update "OE"."PRODUCT_INFORMATION" 
                  set "WARRANTY_PERIOD" =            set "WARRANTY_PERIOD" = 
                  TO_YMINTERVAL('+05-00') where      TO_YMINTERVAL('+01-00') where
                  "PRODUCT_ID" = '1799' and          "PRODUCT_ID" = '1799' and
                  "WARRANTY_PERIOD" =                "WARRANTY_PERIOD" = 
                  TO_YMINTERVAL('+01-00') and        TO_YMINTERVAL('+05-00') and
                  ROWID = 'AAAHTKAABAAAY9mAAB';      ROWID = 'AAAHTKAABAAAY9mAAB'; 
                                                                                
OE     1.1.1484   update "OE"."PRODUCT_INFORMATION"  update "OE"."PRODUCT_INFORMATION"
                  set "WARRANTY_PERIOD" =            set "WARRANTY_PERIOD" =
                  TO_YMINTERVAL('+05-00') where      TO_YMINTERVAL('+01-00') where
                  "PRODUCT_ID" = '1801' and          "PRODUCT_ID" = '1801' and
                  "WARRANTY_PERIOD" =                "WARRANTY_PERIOD" = 
                  TO_YMINTERVAL('+01-00') and        TO_YMINTERVAL('+05-00') and
                  ROWID = 'AAAHTKAABAAAY9mAAC';      ROWID ='AAAHTKAABAAAY9mAAC';

OE     1.1.1484   commit;
                            
HR     1.11.1476  set transaction read write;

HR     1.11.1476  insert into "HR"."EMPLOYEES"(     delete from "HR"."EMPLOYEES" 
                  "EMPLOYEE_ID","FIRST_NAME",       where "EMPLOYEE_ID" = '306'
                  "LAST_NAME","EMAIL",              and "FIRST_NAME" = 'Nandini'
                  "PHONE_NUMBER","HIRE_DATE",       and "LAST_NAME" = 'Shastry'
                  "JOB_ID","SALARY",                and "EMAIL" = 'NSHASTRY'
                  "COMMISSION_PCT","MANAGER_ID",    and "PHONE_NUMBER" = '1234567890'
                  "DEPARTMENT_ID") values           and "HIRE_DATE" = TO_DATE('10-JAN-2003
                  ('306','Nandini','Shastry',       13:34:43', 'dd-mon-yyyy hh24:mi:ss') 
                  'NSHASTRY', '1234567890',         and "JOB_ID" = 'HR_REP' and 
                  TO_DATE('10-jan-2003 13:34:43',   "SALARY" = '120000' and 
                  'dd-mon-yyyy hh24:mi:ss'),        "COMMISSION_PCT" = '.05' and
                  'HR_REP','120000', '.05',         "DEPARTMENT_ID" = '10' and
                  '105','10');                      ROWID = 'AAAHSkAABAAAY6rAAO';

HR     1.11.1476  insert into "HR"."EMPLOYEES"(     delete from "HR"."EMPLOYEES"
                  "EMPLOYEE_ID","FIRST_NAME",       "EMPLOYEE_ID" = '307' and 
                  "LAST_NAME","EMAIL",              "FIRST_NAME" = 'John' and
                  "PHONE_NUMBER","HIRE_DATE",       "LAST_NAME" = 'Silver' and
                  "JOB_ID","SALARY",                "EMAIL" = 'JSILVER' and 
                  "COMMISSION_PCT","MANAGER_ID",    "PHONE_NUMBER" = '5551112222'
                  "DEPARTMENT_ID") values           and "HIRE_DATE" = TO_DATE('10-jan-2003
                  ('307','John','Silver',           13:41:03', 'dd-mon-yyyy hh24:mi:ss') 
                   'JSILVER', '5551112222',         and "JOB_ID" ='105' and "DEPARTMENT_ID" 
                  TO_DATE('10-jan-2003 13:41:03',   = '50' and ROWID = 'AAAHSkAABAAAY6rAAP'; 
                  'dd-mon-yyyy hh24:mi:ss'),
                  'SH_CLERK','110000', '.05',
                  '105','50');                

HR     1.11.1476  commit;
Step 5   End the LogMiner session.
EXECUTE DBMS_LOGMNR.END_LOGMNR();

Example 3: Formatting the Reconstructed SQL

As shown in "Example 2: Grouping DML Statements into Committed Transactions", using the COMMITTED_DATA_ONLY option with the dictionary in the online redo log file is an easy way to focus on committed transactions. However, one aspect remains that makes visual inspection difficult: the association between the column names and their respective values in an INSERT statement are not apparent. This can be addressed by specifying the PRINT_PRETTY_SQL option. Note that specifying this option will make some of the reconstructed SQL statements nonexecutable.

Step 1   Determine which redo log file was most recently archived.

This example assumes that you know you want to mine the redo log file that was most recently archived.

SELECT NAME FROM V$ARCHIVED_LOG
   WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);

NAME                            
-------------------------------------------
/usr/oracle/data/db1arch_1_16_482701534.dbf
Step 2   Specify the list of redo log files to be analyzed.

Specify the redo log file that was returned by the query in Step 1. The list will consist of one redo log file.

EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
   LOGFILENAME => '/usr/oracle/data/db1arch_1_16_482701534.dbf', -
   OPTIONS => DBMS_LOGMNR.NEW);
Step 3   Start LogMiner.

Start LogMiner by specifying the dictionary to use and the COMMITTED_DATA_ONLY and PRINT_PRETTY_SQL options.

EXECUTE DBMS_LOGMNR.START_LOGMNR(-
   OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
              DBMS_LOGMNR.COMMITTED_DATA_ONLY + -
              DBMS_LOGMNR.PRINT_PRETTY_SQL);

The DBMS_LOGMNR.PRINT_PRETTY_SQL option changes only the format of the reconstructed SQL, and therefore is useful for generating reports for visual inspection.

Step 4   Query the V$LOGMNR_CONTENTS view for SQL_REDO statements.
SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' ||  XIDSQN) AS XID, SQL_REDO 
   FROM V$LOGMNR_CONTENTS;

USR    XID          SQL_REDO                     
----   ---------  -----------------------------------------------------

OE     1.1.1484   set transaction read write;

OE     1.1.1484   update "OE"."PRODUCT_INFORMATION"  
                    set 
                      "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') 
                    where
                      "PRODUCT_ID" = '1799' and          
                      "WARRANTY_PERIOD" = TO_YMINTERVAL('+01-00') and        
                      ROWID = 'AAAHTKAABAAAY9mAAB';  
                                                                                
OE     1.1.1484   update "OE"."PRODUCT_INFORMATION"
                    set 
                      "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') 
                    where
                      "PRODUCT_ID" = '1801' and
                      "WARRANTY_PERIOD" = TO_YMINTERVAL('+01-00') and   
                      ROWID = 'AAAHTKAABAAAY9mAAC'; 

OE     1.1.1484   commit;
                            
HR     1.11.1476  set transaction read write;

HR     1.11.1476  insert into "HR"."EMPLOYEES"
                   values
                     "EMPLOYEE_ID" = 306,
                     "FIRST_NAME" = 'Nandini',
                     "LAST_NAME" = 'Shastry',
                     "EMAIL" = 'NSHASTRY',
                     "PHONE_NUMBER" = '1234567890',
                     "HIRE_DATE" = TO_DATE('10-jan-2003 13:34:43', 
                     'dd-mon-yyyy hh24:mi:ss'),
                     "JOB_ID" = 'HR_REP',
                     "SALARY" = 120000,
                     "COMMISSION_PCT" = .05,
                     "MANAGER_ID" = 105,
                     "DEPARTMENT_ID" = 10;

HR     1.11.1476   insert into "HR"."EMPLOYEES"
                    values
                       "EMPLOYEE_ID" = 307,
                       "FIRST_NAME" = 'John',
                       "LAST_NAME" = 'Silver',
                       "EMAIL" = 'JSILVER',
                       "PHONE_NUMBER" = '5551112222',
                       "HIRE_DATE" = TO_DATE('10-jan-2003 13:41:03',
                       'dd-mon-yyyy hh24:mi:ss'),
                       "JOB_ID" = 'SH_CLERK',
                       "SALARY" = 110000,
                       "COMMISSION_PCT" = .05,
                       "MANAGER_ID" = 105,
                       "DEPARTMENT_ID" = 50;
HR     1.11.1476    commit;
Step 5   Query the V$LOGMNR_CONTENTS view for reconstructed SQL_UNDO statements.
SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' ||  XIDSQN) AS XID, SQL_UNDO 
   FROM V$LOGMNR_CONTENTS;

USR   XID        SQL_UNDO                     
----   ---------  -----------------------------------------------------

     
OE     1.1.1484   set transaction read write;

OE     1.1.1484   update "OE"."PRODUCT_INFORMATION"  
                    set 
                      "WARRANTY_PERIOD" = TO_YMINTERVAL('+01-00') 
                    where
                      "PRODUCT_ID" = '1799' and          
                      "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and        
                      ROWID = 'AAAHTKAABAAAY9mAAB';  
                                                                                
OE     1.1.1484   update "OE"."PRODUCT_INFORMATION"
                    set 
                      "WARRANTY_PERIOD" = TO_YMINTERVAL('+01-00') 
                    where
                      "PRODUCT_ID" = '1801' and
                      "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and   
                      ROWID = 'AAAHTKAABAAAY9mAAC'; 

OE     1.1.1484   commit;
                            
HR     1.11.1476  set transaction read write;

HR     1.11.1476  delete from "HR"."EMPLOYEES"
                  where
                     "EMPLOYEE_ID" = 306 and
                     "FIRST_NAME" = 'Nandini' and
                     "LAST_NAME" = 'Shastry' and
                     "EMAIL" = 'NSHASTRY' and
                     "PHONE_NUMBER" = '1234567890' and
                     "HIRE_DATE" = TO_DATE('10-jan-2003 13:34:43',
                     'dd-mon-yyyy hh24:mi:ss') and
                     "JOB_ID" = 'HR_REP' and 
                     "SALARY" = 120000 and
                     "COMMISSION_PCT" = .05 and
                     "MANAGER_ID" = 105 and
                     "DEPARTMENT_ID" = 10 and
                     ROWID = 'AAAHSkAABAAAY6rAAO';

HR     1.11.1476   delete from "HR"."EMPLOYEES"
                   where
                       "EMPLOYEE_ID" = 307 and
                       "FIRST_NAME" = 'John' and
                       "LAST_NAME" = 'Silver' and
                       "EMAIL" = 'JSILVER' and
                       "PHONE_NUMBER" = '555122122' and
                       "HIRE_DATE" = TO_DATE('10-jan-2003 13:41:03',
                       'dd-mon-yyyy hh24:mi:ss') and
                       "JOB_ID" = 'SH_CLERK' and
                       "SALARY" = 110000 and
                       "COMMISSION_PCT" = .05 and
                       "MANAGER_ID" = 105 and
                       "DEPARTMENT_ID" = 50 and
                       ROWID = 'AAAHSkAABAAAY6rAAP'; 
HR     1.11.1476    commit;
Step 6   End the LogMiner session.
EXECUTE DBMS_LOGMNR.END_LOGMNR();

Example 4: Using the LogMiner Dictionary in the Redo Log Files

This example shows how to use the dictionary that has been extracted to the redo log files. When you use the dictionary in the online catalog, you must mine the redo log files in the same database that generated them. Using the dictionary contained in the redo log files enables you to mine redo log files in a different database.
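
For reference, the dictionary used in this example could have been extracted to the redo log files with a call such as the following. This is a minimal sketch; it assumes the source database is running in ARCHIVELOG mode and meets the requirements for dictionary extraction described earlier in this chapter.

EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);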

Step 1   Determine which redo log file was most recently archived by the database.

This example assumes that you know you want to mine the redo log file that was most recently archived.

SELECT NAME, SEQUENCE# FROM V$ARCHIVED_LOG
   WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);

NAME                                           SEQUENCE#
--------------------------------------------   --------------
/usr/oracle/data/db1arch_1_210_482701534.dbf   210
Step 2   Find the redo log files containing the dictionary.

The dictionary may be contained in more than one redo log file. Therefore, you need to determine which redo log files contain the start and end of the dictionary. Query the V$ARCHIVED_LOG view, as follows:

  1. Find a redo log file that contains the end of the dictionary extract. This redo log file must have been created before the redo log file that you want to analyze, but should be as recent as possible.

    SELECT NAME, SEQUENCE#, DICTIONARY_BEGIN d_beg, DICTIONARY_END d_end
       FROM V$ARCHIVED_LOG
       WHERE SEQUENCE# = (SELECT MAX (SEQUENCE#) FROM V$ARCHIVED_LOG
       WHERE DICTIONARY_END = 'YES' and SEQUENCE# <= 210);
    
    NAME                                           SEQUENCE#    D_BEG  D_END
    --------------------------------------------   ----------   -----  ------
    /usr/oracle/data/db1arch_1_208_482701534.dbf   208          NO     YES
    
  2. Find the redo log file that contains the start of the data dictionary extract that matches the end of the dictionary found in the previous step:

    SELECT NAME, SEQUENCE#, DICTIONARY_BEGIN d_beg, DICTIONARY_END d_end
       FROM V$ARCHIVED_LOG
       WHERE SEQUENCE# = (SELECT MAX (SEQUENCE#) FROM V$ARCHIVED_LOG
       WHERE DICTIONARY_BEGIN = 'YES' and SEQUENCE# <= 208);
    
    NAME                                           SEQUENCE#    D_BEG  D_END
    --------------------------------------------   ----------   -----  ------
    /usr/oracle/data/db1arch_1_207_482701534.dbf   207          YES     NO
    
  3. Specify the list of the redo log files of interest. Add the redo log files that contain the start and end of the dictionary and the redo log file that you want to analyze. You can add the redo log files in any order.

    EXECUTE DBMS_LOGMNR.ADD_LOGFILE(-
       LOGFILENAME => '/usr/oracle/data/db1arch_1_210_482701534.dbf', -
           OPTIONS => DBMS_LOGMNR.NEW);
    EXECUTE DBMS_LOGMNR.ADD_LOGFILE(-
       LOGFILENAME => '/usr/oracle/data/db1arch_1_208_482701534.dbf');
    EXECUTE DBMS_LOGMNR.ADD_LOGFILE(-
       LOGFILENAME => '/usr/oracle/data/db1arch_1_207_482701534.dbf');
    
  4. Query the V$LOGMNR_LOGS view to display the list of redo log files to be analyzed, including their timestamps.

    In the output, LogMiner flags a missing redo log file. LogMiner lets you proceed with mining, provided that you do not specify an option that requires the missing redo log file for proper functioning.

SQL> SELECT FILENAME AS name, LOW_TIME, HIGH_TIME FROM V$LOGMNR_LOGS;
 NAME                                  LOW_TIME              HIGH_TIME
-------------------------------------   --------------------  --------------------
/usr/data/db1arch_1_207_482701534.dbf   10-jan-2003 12:01:34  10-jan-2003 13:32:46

/usr/data/db1arch_1_208_482701534.dbf   10-jan-2003 13:32:46  10-jan-2003 15:57:03

Missing logfile(s) for thread number 1, 10-jan-2003 15:57:03  10-jan-2003 15:59:53 
sequence number(s) 209 to 209

/usr/data/db1arch_1_210_482701534.dbf   10-jan-2003 15:59:53  10-jan-2003 16:07:41
Step 3   Start LogMiner.

Start LogMiner by specifying the dictionary to use and the COMMITTED_DATA_ONLY and PRINT_PRETTY_SQL options.

EXECUTE DBMS_LOGMNR.START_LOGMNR(-
   OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + -
              DBMS_LOGMNR.COMMITTED_DATA_ONLY + -
              DBMS_LOGMNR.PRINT_PRETTY_SQL);
Step 4   Query the V$LOGMNR_CONTENTS view.

To reduce the number of rows returned by the query, exclude from the query all DML statements done in the SYS or SYSTEM schemas. (This query specifies a timestamp to exclude transactions that were involved in the dictionary extraction.)

The output shows three transactions: two DDL transactions and one DML transaction. The DDL transactions, 1.2.1594 and 1.18.1602, create the table oe.product_tracking and create a trigger on table oe.product_information, respectively. In both transactions, the DML statements done to the system tables (tables owned by SYS) are filtered out because of the query predicate.

The DML transaction, 1.9.1598, updates the oe.product_information table. The update operation in this transaction is fully translated. However, the query output also contains some untranslated reconstructed SQL statements. Most likely, these statements were done on the oe.product_tracking table that was created after the data dictionary was extracted to the redo log files.

(The next example shows how to run LogMiner with the DDL_DICT_TRACKING option so that all SQL statements are fully translated; no binary data is returned.)

SELECT USERNAME AS usr, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS xid, SQL_REDO 
   FROM V$LOGMNR_CONTENTS 
   WHERE (SEG_OWNER IS NULL OR SEG_OWNER NOT IN ('SYS', 'SYSTEM')) AND
   TIMESTAMP > '10-jan-2003 15:59:53';

USR             XID         SQL_REDO
---             --------    -----------------------------------
SYS             1.2.1594    set transaction read write;
SYS             1.2.1594    create table oe.product_tracking (product_id number not null,
                            modified_time date,
                            old_list_price number(8,2),
                            old_warranty_period interval year(2) to month);
SYS             1.2.1594    commit;

SYS             1.18.1602   set transaction read write;
SYS             1.18.1602   create or replace trigger oe.product_tracking_trigger
                            before update on oe.product_information
                            for each row
                            when (new.list_price <> old.list_price or
                                  new.warranty_period <> old.warranty_period)
                            declare
                            begin
                            insert into oe.product_tracking values 
                               (:old.product_id, sysdate,
                                :old.list_price, :old.warranty_period);
                            end;
SYS             1.18.1602   commit;

OE              1.9.1598    update "OE"."PRODUCT_INFORMATION"
                              set
                                "WARRANTY_PERIOD" = TO_YMINTERVAL('+08-00'),
                                "LIST_PRICE" = 100
                              where
                                "PRODUCT_ID" = 1729 and
                                "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and
                                "LIST_PRICE" = 80 and
                                ROWID = 'AAAHTKAABAAAY9yAAA';

OE              1.9.1598    insert into "UNKNOWN"."OBJ# 33415"
                              values
                                "COL 1" = HEXTORAW('c2121e'),
                                "COL 2" = HEXTORAW('7867010d110804'),
                                "COL 3" = HEXTORAW('c151'),
                                "COL 4" = HEXTORAW('800000053c');

OE              1.9.1598    update "OE"."PRODUCT_INFORMATION"
                              set
                                "WARRANTY_PERIOD" = TO_YMINTERVAL('+08-00'),
                                "LIST_PRICE" = 92
                              where
                                "PRODUCT_ID" = 2340 and
                                "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and
                                "LIST_PRICE" = 72 and
                                ROWID = 'AAAHTKAABAAAY9zAAA';

OE              1.9.1598    insert into "UNKNOWN"."OBJ# 33415"
                              values
                                "COL 1" = HEXTORAW('c21829'),
                                "COL 2" = HEXTORAW('7867010d110808'),
                                "COL 3" = HEXTORAW('c149'),
                                "COL 4" = HEXTORAW('800000053c');

OE              1.9.1598     commit;
Step 5   Issue additional queries, if desired.

Display all the DML statements that were executed as part of the CREATE TABLE DDL statement. This includes statements executed by users and internally by Oracle.


Note:

If you choose to reapply statements displayed by a query such as the one shown here, then reapply DDL statements only. Do not reapply DML statements that were executed internally by Oracle, or you risk corrupting your database. In the following output, the only statement that you should use in a reapply operation is the CREATE TABLE OE.PRODUCT_TRACKING statement.

SELECT SQL_REDO FROM V$LOGMNR_CONTENTS
   WHERE XIDUSN  = 1 and XIDSLT = 2 and XIDSQN = 1594;

SQL_REDO
--------------------------------------------------------------------------------
set transaction read write;

insert into "SYS"."OBJ$"
 values
    "OBJ#" = 33415,
    "DATAOBJ#" = 33415,
    "OWNER#" = 37,
    "NAME" = 'PRODUCT_TRACKING',
    "NAMESPACE" = 1,
    "SUBNAME" IS NULL,
    "TYPE#" = 2,
    "CTIME" = TO_DATE('13-jan-2003 14:01:03', 'dd-mon-yyyy hh24:mi:ss'),
    "MTIME" = TO_DATE('13-jan-2003 14:01:03', 'dd-mon-yyyy hh24:mi:ss'),
    "STIME" = TO_DATE('13-jan-2003 14:01:03', 'dd-mon-yyyy hh24:mi:ss'),
    "STATUS" = 1,
    "REMOTEOWNER" IS NULL,
    "LINKNAME" IS NULL,
    "FLAGS" = 0,
    "OID$" IS NULL,
    "SPARE1" = 6,
    "SPARE2" = 1,
    "SPARE3" IS NULL,
    "SPARE4" IS NULL,
    "SPARE5" IS NULL,
    "SPARE6" IS NULL;

insert into "SYS"."TAB$"
 values
    "OBJ#" = 33415,
    "DATAOBJ#" = 33415,
    "TS#" = 0,
    "FILE#" = 1,
    "BLOCK#" = 121034,
    "BOBJ#" IS NULL,
    "TAB#" IS NULL,
    "COLS" = 5,
    "CLUCOLS" IS NULL,
    "PCTFREE$" = 10,
    "PCTUSED$" = 40,
    "INITRANS" = 1,
    "MAXTRANS" = 255,
    "FLAGS" = 1,
    "AUDIT$" = '--------------------------------------',
    "ROWCNT" IS NULL,
    "BLKCNT" IS NULL,
    "EMPCNT" IS NULL,
    "AVGSPC" IS NULL,
    "CHNCNT" IS NULL,
    "AVGRLN" IS NULL,
    "AVGSPC_FLB" IS NULL,
    "FLBCNT" IS NULL,
    "ANALYZETIME" IS NULL,
    "SAMPLESIZE" IS NULL,
    "DEGREE" IS NULL,
    "INSTANCES" IS NULL,
    "INTCOLS" = 5,
    "KERNELCOLS" = 5,
    "PROPERTY" = 536870912,
    "TRIGFLAG" = 0,
    "SPARE1" = 178,
    "SPARE2" IS NULL,
    "SPARE3" IS NULL,
    "SPARE4" IS NULL,
    "SPARE5" IS NULL,
    "SPARE6" = TO_DATE('13-jan-2003 14:01:05', 'dd-mon-yyyy hh24:mi:ss'),

insert into "SYS"."COL$"
 values
    "OBJ#" = 33415,
    "COL#" = 1,
    "SEGCOL#" = 1,
    "SEGCOLLENGTH" = 22,
    "OFFSET" = 0,
    "NAME" = 'PRODUCT_ID',
    "TYPE#" = 2,
    "LENGTH" = 22,
    "FIXEDSTORAGE" = 0,
    "PRECISION#" IS NULL,
    "SCALE" IS NULL,
    "NULL$" = 1,
    "DEFLENGTH" IS NULL,
    "SPARE6" IS NULL,
    "INTCOL#" = 1,
    "PROPERTY" = 0,
    "CHARSETID" = 0,
    "CHARSETFORM" = 0,
    "SPARE1" = 0,
    "SPARE2" = 0,
    "SPARE3" = 0,
    "SPARE4" IS NULL,
    "SPARE5" IS NULL,
    "DEFAULT$" IS NULL;

insert into "SYS"."COL$"
 values
    "OBJ#" = 33415,
    "COL#" = 2,
    "SEGCOL#" = 2,
    "SEGCOLLENGTH" = 7,
    "OFFSET" = 0,
    "NAME" = 'MODIFIED_TIME',
    "TYPE#" = 12,
    "LENGTH" = 7,
    "FIXEDSTORAGE" = 0,
    "PRECISION#" IS NULL,
    "SCALE" IS NULL,
    "NULL$" = 0,
    "DEFLENGTH" IS NULL,
    "SPARE6" IS NULL,
    "INTCOL#" = 2,
    "PROPERTY" = 0,
    "CHARSETID" = 0,
    "CHARSETFORM" = 0,
    "SPARE1" = 0,
    "SPARE2" = 0,
    "SPARE3" = 0,
    "SPARE4" IS NULL,
    "SPARE5" IS NULL,
    "DEFAULT$" IS NULL;

insert into "SYS"."COL$"
 values
    "OBJ#" = 33415,
    "COL#" = 3,
    "SEGCOL#" = 3,
    "SEGCOLLENGTH" = 22,
    "OFFSET" = 0,
    "NAME" = 'OLD_LIST_PRICE',
    "TYPE#" = 2,
    "LENGTH" = 22,
    "FIXEDSTORAGE" = 0,
    "PRECISION#" = 8,
    "SCALE" = 2,
    "NULL$" = 0,
    "DEFLENGTH" IS NULL,
    "SPARE6" IS NULL,
    "INTCOL#" = 3,
    "PROPERTY" = 0,
    "CHARSETID" = 0,
    "CHARSETFORM" = 0,
    "SPARE1" = 0,
    "SPARE2" = 0,
    "SPARE3" = 0,
    "SPARE4" IS NULL,
    "SPARE5" IS NULL,
    "DEFAULT$" IS NULL;

insert into "SYS"."COL$"
 values
    "OBJ#" = 33415,
    "COL#" = 4,
    "SEGCOL#" = 4,
    "SEGCOLLENGTH" = 5,
    "OFFSET" = 0,
    "NAME" = 'OLD_WARRANTY_PERIOD',
    "TYPE#" = 182,
    "LENGTH" = 5,
    "FIXEDSTORAGE" = 0,
    "PRECISION#" = 2,
    "SCALE" = 0,
    "NULL$" = 0,
    "DEFLENGTH" IS NULL,
    "SPARE6" IS NULL,
    "INTCOL#" = 4,
    "PROPERTY" = 0,
    "CHARSETID" = 0,
    "CHARSETFORM" = 0,
    "SPARE1" = 0,
    "SPARE2" = 2,
    "SPARE3" = 0,
    "SPARE4" IS NULL,
    "SPARE5" IS NULL,
    "DEFAULT$" IS NULL;

insert into "SYS"."CCOL$"
 values
    "OBJ#" = 33415,
    "CON#" = 2090,
    "COL#" = 1,
    "POS#" IS NULL,
    "INTCOL#" = 1,
    "SPARE1" = 0,
    "SPARE2" IS NULL,
    "SPARE3" IS NULL,
    "SPARE4" IS NULL,
    "SPARE5" IS NULL,
    "SPARE6" IS NULL;

insert into "SYS"."CDEF$"
 values
    "OBJ#" = 33415,
    "CON#" = 2090,
    "COLS" = 1,
    "TYPE#" = 7,
    "ROBJ#" IS NULL,
    "RCON#" IS NULL,
    "RRULES" IS NULL,
    "MATCH#" IS NULL,
    "REFACT" IS NULL,
    "ENABLED" = 1,
    "CONDLENGTH" = 24,
    "SPARE6" IS NULL,
    "INTCOLS" = 1,
    "MTIME" = TO_DATE('13-jan-2003 14:01:08', 'dd-mon-yyyy hh24:mi:ss'),
    "DEFER" = 12,
    "SPARE1" = 6,
    "SPARE2" IS NULL,
    "SPARE3" IS NULL,
    "SPARE4" IS NULL,
    "SPARE5" IS NULL,
    "CONDITION" = '"PRODUCT_ID" IS NOT NULL';

create table oe.product_tracking (product_id number not null,
  modified_time date,
  old_product_description varchar2(2000),
  old_list_price number(8,2),
  old_warranty_period interval year(2) to month);

update "SYS"."SEG$"
  set
    "TYPE#" = 5,
    "BLOCKS" = 5,
    "EXTENTS" = 1,
    "INIEXTS" = 5,
    "MINEXTS" = 1,
    "MAXEXTS" = 121,
    "EXTSIZE" = 5,
    "EXTPCT" = 50,
    "USER#" = 37,
    "LISTS" = 0,
    "GROUPS" = 0,
    "CACHEHINT" = 0,
    "HWMINCR" = 33415,
    "SPARE1" = 1024
  where
    "TS#" = 0 and
    "FILE#" = 1 and
    "BLOCK#" = 121034 and
    "TYPE#" = 3 and
    "BLOCKS" = 5 and
    "EXTENTS" = 1 and
    "INIEXTS" = 5 and
    "MINEXTS" = 1 and
    "MAXEXTS" = 121 and
    "EXTSIZE" = 5 and
    "EXTPCT" = 50 and
    "USER#" = 37 and
    "LISTS" = 0 and
    "GROUPS" = 0 and
    "BITMAPRANGES" = 0 and
    "CACHEHINT" = 0 and
    "SCANHINT" = 0 and
    "HWMINCR" = 33415 and
    "SPARE1" = 1024 and
    "SPARE2" IS NULL and
    ROWID = 'AAAAAIAABAAAdMOAAB';

insert into "SYS"."CON$"
 values
    "OWNER#" = 37,
    "NAME" = 'SYS_C002090',
    "CON#" = 2090,
    "SPARE1" IS NULL,
    "SPARE2" IS NULL,
    "SPARE3" IS NULL,
    "SPARE4" IS NULL,
    "SPARE5" IS NULL,
    "SPARE6" IS NULL;

commit;
Step 6   End the LogMiner session.
EXECUTE DBMS_LOGMNR.END_LOGMNR();

Example 5: Tracking DDL Statements in the Internal Dictionary

By using the DBMS_LOGMNR.DDL_DICT_TRACKING option, this example ensures that the LogMiner internal dictionary is updated with the DDL statements encountered in the redo log files.

Step 1   Determine which redo log file was most recently archived by the database.

This example assumes that you know you want to mine the redo log file that was most recently archived.

SELECT NAME, SEQUENCE# FROM V$ARCHIVED_LOG 
   WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);

NAME                                           SEQUENCE#
--------------------------------------------   --------------
/usr/oracle/data/db1arch_1_210_482701534.dbf   210
Step 2   Find the dictionary in the redo log files.

Because the dictionary may be contained in more than one redo log file, you need to determine which redo log files contain the start and end of the data dictionary. Query the V$ARCHIVED_LOG view, as follows:

  1. Find a redo log that contains the end of the data dictionary extract. This redo log file must have been created before the redo log files that you want to analyze, but should be as recent as possible.

    SELECT NAME, SEQUENCE#, DICTIONARY_BEGIN d_beg, DICTIONARY_END d_end
       FROM V$ARCHIVED_LOG
       WHERE SEQUENCE# = (SELECT MAX (SEQUENCE#) FROM V$ARCHIVED_LOG
       WHERE DICTIONARY_END = 'YES' and SEQUENCE# < 210);
    
    
    NAME                                           SEQUENCE#    D_BEG  D_END
    --------------------------------------------   ----------   -----  ------
    /usr/oracle/data/db1arch_1_208_482701534.dbf   208          NO     YES
    
  2. Find the redo log file that contains the start of the data dictionary extract that matches the end of the dictionary found by the previous SQL statement:

    SELECT NAME, SEQUENCE#, DICTIONARY_BEGIN d_beg, DICTIONARY_END d_end
       FROM V$ARCHIVED_LOG
       WHERE SEQUENCE# = (SELECT MAX (SEQUENCE#) FROM V$ARCHIVED_LOG
       WHERE DICTIONARY_BEGIN = 'YES' and SEQUENCE# <= 208);
    
    NAME                                           SEQUENCE#    D_BEG  D_END
    --------------------------------------------   ----------   -----  ------
    /usr/oracle/data/db1arch_1_207_482701534.dbf   207          YES     NO
    
Step 3   Ensure that you have a complete list of redo log files.

To successfully apply DDL statements encountered in the redo log files, ensure that all files are included in the list of redo log files to mine. The missing log file corresponding to sequence# 209 must be included in the list. Determine the names of the redo log files that you need to add to the list by issuing the following query:

SELECT NAME FROM V$ARCHIVED_LOG
   WHERE SEQUENCE# >= 207 AND SEQUENCE# <= 210 
   ORDER BY SEQUENCE# ASC;

NAME                                           
--------------------------------------------   
/usr/oracle/data/db1arch_1_207_482701534.dbf  
/usr/oracle/data/db1arch_1_208_482701534.dbf  
/usr/oracle/data/db1arch_1_209_482701534.dbf  
/usr/oracle/data/db1arch_1_210_482701534.dbf  
Step 4   Specify the list of the redo log files of interest.

Include the redo log files that contain the beginning and end of the dictionary, the redo log file that you want to mine, and any redo log files required to create a list without gaps. You can add the redo log files in any order.

EXECUTE DBMS_LOGMNR.ADD_LOGFILE(-
   LOGFILENAME => '/usr/oracle/data/db1arch_1_210_482701534.dbf', -
       OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(-
   LOGFILENAME => '/usr/oracle/data/db1arch_1_209_482701534.dbf');
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(-
   LOGFILENAME => '/usr/oracle/data/db1arch_1_208_482701534.dbf');
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(-
   LOGFILENAME => '/usr/oracle/data/db1arch_1_207_482701534.dbf');
Step 5   Start LogMiner.

Start LogMiner by specifying the dictionary to use and the DDL_DICT_TRACKING, COMMITTED_DATA_ONLY, and PRINT_PRETTY_SQL options.

EXECUTE DBMS_LOGMNR.START_LOGMNR(-
   OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + -
              DBMS_LOGMNR.DDL_DICT_TRACKING + -
              DBMS_LOGMNR.COMMITTED_DATA_ONLY + -
              DBMS_LOGMNR.PRINT_PRETTY_SQL);
Step 6   Query the V$LOGMNR_CONTENTS view.

To reduce the number of rows returned, exclude from the query all DML statements done in the SYS or SYSTEM schemas. (This query specifies a timestamp to exclude transactions that were involved in the dictionary extraction.)

The query returns all the reconstructed SQL statements correctly translated and the insert operations on the oe.product_tracking table that occurred because of the trigger execution.

SELECT USERNAME AS usr,(XIDUSN || '.' || XIDSLT || '.' || XIDSQN) as XID, SQL_REDO FROM  
   V$LOGMNR_CONTENTS 
   WHERE (SEG_OWNER IS NULL OR SEG_OWNER NOT IN ('SYS', 'SYSTEM')) AND
   TIMESTAMP > '10-jan-2003 15:59:53';

USR             XID         SQL_REDO
-----------     --------    -----------------------------------
SYS             1.2.1594    set transaction read write;
SYS             1.2.1594    create table oe.product_tracking (product_id number not null,
                            modified_time date,
                            old_list_price number(8,2),
                            old_warranty_period interval year(2) to month);
SYS             1.2.1594    commit;

SYS             1.18.1602   set transaction read write;
SYS             1.18.1602   create or replace trigger oe.product_tracking_trigger
                            before update on oe.product_information
                            for each row
                            when (new.list_price <> old.list_price or
                                  new.warranty_period <> old.warranty_period)
                            declare
                            begin
                            insert into oe.product_tracking values 
                               (:old.product_id, sysdate,
                                :old.list_price, :old.warranty_period);
                            end;
SYS             1.18.1602   commit;

OE              1.9.1598    update "OE"."PRODUCT_INFORMATION"
                              set
                                "WARRANTY_PERIOD" = TO_YMINTERVAL('+08-00'),
                                "LIST_PRICE" = 100
                              where
                                "PRODUCT_ID" = 1729 and
                                "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and
                                "LIST_PRICE" = 80 and
                                ROWID = 'AAAHTKAABAAAY9yAAA';
OE              1.9.1598    insert into "OE"."PRODUCT_TRACKING"
                              values
                                "PRODUCT_ID" = 1729,
                                "MODIFIED_TIME" = TO_DATE('13-jan-2003 16:07:03', 
                                'dd-mon-yyyy hh24:mi:ss'),
                                "OLD_LIST_PRICE" = 80,
                                "OLD_WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00');

OE              1.9.1598    update "OE"."PRODUCT_INFORMATION"
                              set
                                "WARRANTY_PERIOD" = TO_YMINTERVAL('+08-00'),
                                "LIST_PRICE" = 92
                              where
                                "PRODUCT_ID" = 2340 and
                                "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and
                                "LIST_PRICE" = 72 and
                                ROWID = 'AAAHTKAABAAAY9zAAA';

OE              1.9.1598    insert into "OE"."PRODUCT_TRACKING"
                              values
                                "PRODUCT_ID" = 2340,
                                "MODIFIED_TIME" = TO_DATE('13-jan-2003 16:07:07', 
                                'dd-mon-yyyy hh24:mi:ss'),
                                "OLD_LIST_PRICE" = 72,
                                "OLD_WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00');

OE              1.9.1598     commit;
Step 7   End the LogMiner session.
EXECUTE DBMS_LOGMNR.END_LOGMNR();

Example 6: Filtering Output by Time Range

In the previous two examples, rows were filtered by specifying a timestamp-based predicate (timestamp > '10-jan-2003 15:59:53') in the query. However, a more efficient way to filter out redo records based on timestamp values is by specifying the time range in the DBMS_LOGMNR.START_LOGMNR procedure call, as shown in this example.

Step 1   Create a list of redo log files to mine.

Suppose you want to mine redo log files generated since a given time. The following procedure creates a list of redo log files based on a specified time. The subsequent SQL EXECUTE statement calls the procedure and specifies the starting time as 2 p.m. on Jan-13-2003.

--
-- my_add_logfiles
-- Add all archived logs generated after a specified start_time.
--
CREATE OR REPLACE PROCEDURE my_add_logfiles (in_start_time  IN DATE) AS
  CURSOR  c_log IS 
    SELECT NAME FROM V$ARCHIVED_LOG 
      WHERE FIRST_TIME >= in_start_time;

count      pls_integer := 0;
my_option  pls_integer := DBMS_LOGMNR.NEW;

BEGIN
  FOR c_log_rec IN c_log
  LOOP
    DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => c_log_rec.name, 
                            OPTIONS => my_option);
    my_option := DBMS_LOGMNR.ADDFILE;
    DBMS_OUTPUT.PUT_LINE('Added logfile ' || c_log_rec.name);
  END LOOP;
END;
/

EXECUTE my_add_logfiles(in_start_time => '13-jan-2003 14:00:00');
Step 2   Query the V$LOGMNR_LOGS view to see the list of redo log files.

This example includes the size of the redo log files in the output.

SELECT FILENAME name, LOW_TIME start_time, FILESIZE bytes 
    FROM V$LOGMNR_LOGS;

NAME                                START_TIME            BYTES
----------------------------------- --------------------  ----------------
/usr/orcl/arch1_310_482932022.dbf    13-jan-2003 14:02:35  23683584
/usr/orcl/arch1_311_482932022.dbf    13-jan-2003 14:56:35  2564096
/usr/orcl/arch1_312_482932022.dbf    13-jan-2003 15:10:43  23683584
/usr/orcl/arch1_313_482932022.dbf    13-jan-2003 15:17:52  23683584
/usr/orcl/arch1_314_482932022.dbf    13-jan-2003 15:23:10  23683584
/usr/orcl/arch1_315_482932022.dbf    13-jan-2003 15:43:22  23683584
/usr/orcl/arch1_316_482932022.dbf    13-jan-2003 16:03:10  23683584
/usr/orcl/arch1_317_482932022.dbf    13-jan-2003 16:33:43  23683584
/usr/orcl/arch1_318_482932022.dbf    13-jan-2003 17:23:10  23683584
Step 3   Adjust the list of redo log files.

Suppose you realize that you want to mine just the redo log files generated between 3 p.m. and 4 p.m.

You could use the query predicate (timestamp > '13-jan-2003 15:00:00' and timestamp < '13-jan-2003 16:00:00') to accomplish this. However, the query predicate is evaluated on each row returned by LogMiner, and the internal mining engine does not filter rows based on the query predicate. Thus, although you only wanted to get rows out of redo log files arch1_311_482932022.dbf to arch1_315_482932022.dbf, your query would result in mining all redo log files registered to the LogMiner session.

Furthermore, although you could use the query predicate and manually remove the redo log files that do not fall inside the time range of interest, the simplest solution is to specify the time range of interest in the DBMS_LOGMNR.START_LOGMNR procedure call.

Although this does not change the list of redo log files, LogMiner will mine only those redo log files that fall in the time range specified.

EXECUTE DBMS_LOGMNR.START_LOGMNR(-
   STARTTIME => '13-jan-2003 15:00:00', -
   ENDTIME   => '13-jan-2003 16:00:00', -
   OPTIONS   => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
                DBMS_LOGMNR.COMMITTED_DATA_ONLY + -
                DBMS_LOGMNR.PRINT_PRETTY_SQL);
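
For comparison, manually pruning the list (the alternative mentioned above) might look like the following sketch, which uses the DBMS_LOGMNR.REMOVE_LOGFILE procedure to drop one of the files listed in Step 2 that falls outside the time range of interest:

EXECUTE DBMS_LOGMNR.REMOVE_LOGFILE(-
   LOGFILENAME => '/usr/orcl/arch1_310_482932022.dbf');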
Step 4   Query the V$LOGMNR_CONTENTS view.
SELECT TIMESTAMP, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,
 SQL_REDO FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER = 'OE';

TIMESTAMP              XID          SQL_REDO
---------------------  -----------  --------------------------------
13-jan-2003 15:29:31   1.17.2376    update "OE"."PRODUCT_INFORMATION"
                                      set
                                        "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00')
                                      where
                                        "PRODUCT_ID" = 3399 and
                                        "WARRANTY_PERIOD" = TO_YMINTERVAL('+02-00') and
                                        ROWID = 'AAAHTKAABAAAY9TAAE';
13-jan-2003 15:29:34   1.17.2376      insert into "OE"."PRODUCT_TRACKING"
                                        values
                                        "PRODUCT_ID" = 3399,
                                        "MODIFIED_TIME" = TO_DATE('13-jan-2003 15:29:34', 
                                        'dd-mon-yyyy hh24:mi:ss'),
                                        "OLD_LIST_PRICE" = 815,
                                        "OLD_WARRANTY_PERIOD" = TO_YMINTERVAL('+02-00');

13-jan-2003 15:52:43   1.15.1756      update "OE"."PRODUCT_INFORMATION"
                                        set
                                          "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00')
                                        where
                                          "PRODUCT_ID" = 1768 and
                                          "WARRANTY_PERIOD" = TO_YMINTERVAL('+02-00') and
                                          ROWID = 'AAAHTKAABAAAY9UAAB';

13-jan-2003 15:52:43   1.15.1756      insert into "OE"."PRODUCT_TRACKING"
                                        values
                                        "PRODUCT_ID" = 1768,
                                        "MODIFIED_TIME" = TO_DATE('13-jan-2003 16:52:43', 
                                        'dd-mon-yyyy hh24:mi:ss'),
                                        "OLD_LIST_PRICE" = 715,
                                        "OLD_WARRANTY_PERIOD" = TO_YMINTERVAL('+02-00');
Step 5   End the LogMiner session.
EXECUTE DBMS_LOGMNR.END_LOGMNR();

Examples of Mining Without Specifying the List of Redo Log Files Explicitly

The previous set of examples explicitly specified the redo log file or files to be mined. However, if you are mining in the same database that generated the redo log files, then you can mine the appropriate list of redo log files by just specifying the time (or SCN) range of interest. To mine a set of redo log files without explicitly specifying them, use the DBMS_LOGMNR.CONTINUOUS_MINE option to the DBMS_LOGMNR.START_LOGMNR procedure, and specify either a time range or an SCN range of interest.

This section contains the following list of examples; these examples are best read in sequential order, because each example builds on the example or examples that precede it:

The SQL output formatting may be different on your display than that shown in these examples.

Example 1: Mining Redo Log Files in a Given Time Range

This example is similar to "Example 4: Using the LogMiner Dictionary in the Redo Log Files", except the list of redo log files are not specified explicitly. This example assumes that you want to use the data dictionary extracted to the redo log files.

Step 1   Determine the timestamp of the redo log file that contains the start of the data dictionary.
SELECT NAME, FIRST_TIME FROM V$ARCHIVED_LOG
    WHERE SEQUENCE# = (SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG 
    WHERE DICTIONARY_BEGIN = 'YES');

NAME                                          FIRST_TIME
--------------------------------------------  --------------------
/usr/oracle/data/db1arch_1_207_482701534.dbf  10-jan-2003 12:01:34
Step 2   Display all the redo log files that have been generated so far.

This step is not required, but is included to demonstrate that the CONTINUOUS_MINE option works as expected, as will be shown in Step 4.

SELECT NAME FROM V$ARCHIVED_LOG
   WHERE FIRST_TIME >= '10-jan-2003 12:01:34';

NAME
----------------------------------------------
/usr/oracle/data/db1arch_1_207_482701534.dbf
/usr/oracle/data/db1arch_1_208_482701534.dbf
/usr/oracle/data/db1arch_1_209_482701534.dbf
/usr/oracle/data/db1arch_1_210_482701534.dbf
Step 3   Start LogMiner.

Start LogMiner by specifying the dictionary to use and the COMMITTED_DATA_ONLY, PRINT_PRETTY_SQL, and CONTINUOUS_MINE options.

EXECUTE DBMS_LOGMNR.START_LOGMNR(-
   STARTTIME => '10-jan-2003 12:01:34', -
     ENDTIME => SYSDATE, -
     OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + -
                DBMS_LOGMNR.COMMITTED_DATA_ONLY + -
                DBMS_LOGMNR.PRINT_PRETTY_SQL + -
                    DBMS_LOGMNR.CONTINUOUS_MINE);
Step 4   Query the V$LOGMNR_LOGS view.

This step shows that the DBMS_LOGMNR.START_LOGMNR procedure with the CONTINUOUS_MINE option includes all of the redo log files that have been generated so far, as expected. (Compare the output in this step to the output in Step 2.)

SELECT FILENAME name FROM V$LOGMNR_LOGS;

NAME
------------------------------------------------------
/usr/oracle/data/db1arch_1_207_482701534.dbf
/usr/oracle/data/db1arch_1_208_482701534.dbf
/usr/oracle/data/db1arch_1_209_482701534.dbf
/usr/oracle/data/db1arch_1_210_482701534.dbf
Step 5   Query the V$LOGMNR_CONTENTS view.

To reduce the number of rows returned by the query, exclude all DML statements done in the SYS or SYSTEM schema. (This query specifies a timestamp to exclude transactions that were involved in the dictionary extraction.)

Note that all reconstructed SQL statements returned by the query are correctly translated.

SELECT USERNAME AS usr,(XIDUSN || '.' || XIDSLT || '.' || XIDSQN) as XID, 
   SQL_REDO FROM V$LOGMNR_CONTENTS 
   WHERE (SEG_OWNER IS NULL OR SEG_OWNER NOT IN ('SYS', 'SYSTEM')) AND
   TIMESTAMP > '10-jan-2003 15:59:53';

USR             XID         SQL_REDO
-----------     --------    -----------------------------------
SYS             1.2.1594    set transaction read write;
SYS             1.2.1594    create table oe.product_tracking (product_id number not null,
                            modified_time date,
                            old_list_price number(8,2),
                            old_warranty_period interval year(2) to month);
SYS             1.2.1594    commit;

SYS             1.18.1602   set transaction read write;
SYS             1.18.1602   create or replace trigger oe.product_tracking_trigger
                            before update on oe.product_information
                            for each row
                            when (new.list_price <> old.list_price or
                                  new.warranty_period <> old.warranty_period)
                            declare
                            begin
                            insert into oe.product_tracking values 
                               (:old.product_id, sysdate,
                                :old.list_price, :old.warranty_period);
                            end;
SYS             1.18.1602   commit;

OE              1.9.1598    update "OE"."PRODUCT_INFORMATION"
                              set
                                "WARRANTY_PERIOD" = TO_YMINTERVAL('+08-00'),
                                "LIST_PRICE" = 100
                              where
                                "PRODUCT_ID" = 1729 and
                                "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and
                                "LIST_PRICE" = 80 and
                                ROWID = 'AAAHTKAABAAAY9yAAA';
OE              1.9.1598    insert into "OE"."PRODUCT_TRACKING"
                              values
                                "PRODUCT_ID" = 1729,
                                "MODIFIED_TIME" = TO_DATE('13-jan-2003 16:07:03', 
                                'dd-mon-yyyy hh24:mi:ss'),
                                "OLD_LIST_PRICE" = 80,
                                "OLD_WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00');

OE              1.9.1598    update "OE"."PRODUCT_INFORMATION"
                              set
                                "WARRANTY_PERIOD" = TO_YMINTERVAL('+08-00'),
                                "LIST_PRICE" = 92
                              where
                                "PRODUCT_ID" = 2340 and
                                "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and
                                "LIST_PRICE" = 72 and
                                ROWID = 'AAAHTKAABAAAY9zAAA';

OE              1.9.1598    insert into "OE"."PRODUCT_TRACKING"
                              values
                                "PRODUCT_ID" = 2340,
                                "MODIFIED_TIME" = TO_DATE('13-jan-2003 16:07:07', 
                                'dd-mon-yyyy hh24:mi:ss'),
                                "OLD_LIST_PRICE" = 72,
                                "OLD_WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00');

OE              1.9.1598     commit;
Step 6   End the LogMiner session.
EXECUTE DBMS_LOGMNR.END_LOGMNR();

Example 2: Mining the Redo Log Files in a Given SCN Range

This example shows how to specify an SCN range of interest and mine the redo log files that satisfy that range. You can use LogMiner to see all committed DML statements whose effects have not yet been made permanent in the data files.

Note that in this example (unlike the other examples) it is not assumed that you have set the NLS_DATE_FORMAT parameter.
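
For comparison, the session setting assumed by the other examples in this chapter would look something like the following (the format string matches the date literals shown in those examples):

ALTER SESSION SET NLS_DATE_FORMAT = 'dd-mon-yyyy hh24:mi:ss';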

Step 1   Determine the SCN of the last checkpoint taken.
SELECT CHECKPOINT_CHANGE#, CURRENT_SCN FROM V$DATABASE;
CHECKPOINT_CHANGE#  CURRENT_SCN
------------------  ---------------
          56453576         56454208
Step 2   Start LogMiner and specify the CONTINUOUS_MINE option.
EXECUTE DBMS_LOGMNR.START_LOGMNR(-
   STARTSCN => 56453576, -
   ENDSCN   => 56454208, -
   OPTIONS  => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
               DBMS_LOGMNR.COMMITTED_DATA_ONLY + -
               DBMS_LOGMNR.PRINT_PRETTY_SQL + -
               DBMS_LOGMNR.CONTINUOUS_MINE);
Step 3   Display the list of archived redo log files added by LogMiner.
SELECT FILENAME name, LOW_SCN, NEXT_SCN FROM V$LOGMNR_LOGS;
NAME                                           LOW_SCN   NEXT_SCN
--------------------------------------------   --------  --------
/usr/oracle/data/db1arch_1_215_482701534.dbf   56316771  56453579

Note that the redo log file that LogMiner added does not contain the whole SCN range. When you specify the CONTINUOUS_MINE option, LogMiner adds only archived redo log files when you call the DBMS_LOGMNR.START_LOGMNR procedure. LogMiner will add the rest of the SCN range contained in the online redo log files automatically, as needed during the query execution. Use the following query to determine whether the redo log file added is the latest archived redo log file produced.

SELECT NAME FROM V$ARCHIVED_LOG 
   WHERE SEQUENCE# = (SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG);

NAME
-------------------------------------------- 
/usr/oracle/data/db1arch_1_215_482701534.dbf 
Step 4   Query the V$LOGMNR_CONTENTS view for changes made to the user tables.

The following query does not return the SET TRANSACTION READ WRITE and COMMIT statements associated with transaction 1.6.1911 because these statements do not have a segment owner (SEG_OWNER) associated with them.

Note that the default NLS_DATE_FORMAT, 'DD-MON-RR', is used to display the column MODIFIED_TIME of type DATE.

SELECT SCN, (XIDUSN || '.' || XIDSLT || '.' ||  XIDSQN) as XID, SQL_REDO 
    FROM V$LOGMNR_CONTENTS
    WHERE SEG_OWNER NOT IN ('SYS', 'SYSTEM');


SCN        XID        SQL_REDO
---------- ---------- -------------
56454198   1.6.1911   update "OE"."PRODUCT_INFORMATION"
                        set
                          "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00')
                        where
                          "PRODUCT_ID" = 2430 and
                          "WARRANTY_PERIOD" = TO_YMINTERVAL('+02-00') and
                          ROWID = 'AAAHTKAABAAAY9AAAC';

56454199   1.6.1911   insert into "OE"."PRODUCT_TRACKING"
                        values
                          "PRODUCT_ID" = 2430,
                          "MODIFIED_TIME" = TO_DATE('17-JAN-03', 'DD-MON-RR'),
                          "OLD_LIST_PRICE" = 175,
                          "OLD_WARRANTY_PERIOD" = TO_YMINTERVAL('+02-00');

56454204   1.6.1911    update "OE"."PRODUCT_INFORMATION"
                         set
                           "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00')
                         where
                           "PRODUCT_ID" = 2302 and
                           "WARRANTY_PERIOD" = TO_YMINTERVAL('+02-00') and
                           ROWID = 'AAAHTKAABAAAY9QAAA';
56454206   1.6.1911    insert into "OE"."PRODUCT_TRACKING"
                         values
                           "PRODUCT_ID" = 2302,
                           "MODIFIED_TIME" = TO_DATE('17-JAN-03', 'DD-MON-RR'),
                           "OLD_LIST_PRICE" = 150,
                           "OLD_WARRANTY_PERIOD" = TO_YMINTERVAL('+02-00');
Step 5   End the LogMiner session.
EXECUTE DBMS_LOGMNR.END_LOGMNR();

Example 3: Using Continuous Mining to Include Future Values in a Query

To specify that a query not finish until some future time occurs or SCN is reached, use the CONTINUOUS_MINE option and set either the ENDTIME or ENDSCN option in your call to the DBMS_LOGMNR.START_LOGMNR procedure to a time in the future or to an SCN value that has not yet been reached.

This example assumes that you want to monitor all changes made to the table hr.employees from now until 5 hours from now, and that you are using the dictionary in the online catalog.

Step 1   Start LogMiner.
EXECUTE DBMS_LOGMNR.START_LOGMNR(-
   STARTTIME => SYSDATE, -
   ENDTIME   => SYSDATE + 5/24, -
   OPTIONS   => DBMS_LOGMNR.CONTINUOUS_MINE  + -
                DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
Step 2   Query the V$LOGMNR_CONTENTS view.

This select operation will not complete until it encounters the first redo log file record that is generated after the time range of interest (5 hours from now). You can end the select operation prematurely by entering Ctrl+C.

This example specifies the SET ARRAYSIZE statement so that rows are displayed as they are entered in the redo log file. If you do not specify the SET ARRAYSIZE statement, then rows are not returned until the SQL internal buffer is full.

SET ARRAYSIZE 1;
SELECT USERNAME AS usr, SQL_REDO FROM V$LOGMNR_CONTENTS
   WHERE  SEG_OWNER = 'HR' AND TABLE_NAME = 'EMPLOYEES';
Step 3   End the LogMiner session.
EXECUTE DBMS_LOGMNR.END_LOGMNR();

Example Scenarios

The examples in this section demonstrate how to use LogMiner for typical scenarios. This section includes the following examples:

Scenario 1: Using LogMiner to Track Changes Made by a Specific User

This example shows how to see all changes made to the database in a specific time range by a single user: joedevo. Connect to the database and then take the following steps:

  1. Create the LogMiner dictionary file.

    To use LogMiner to analyze joedevo's data, you must either create a LogMiner dictionary file before any table definition changes are made to tables that joedevo uses, or use the online catalog at LogMiner startup. See "Extract a LogMiner Dictionary" for examples of creating LogMiner dictionaries. This example uses a LogMiner dictionary that has been extracted to a flat file (the orcldict.ora file specified in Step 3).
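
    A flat-file dictionary such as orcldict.ora could be created with a call like the following sketch; the dictionary location shown is a placeholder directory that must be accessible to the database (for example, through the UTL_FILE_DIR initialization parameter):

    EXECUTE DBMS_LOGMNR_D.BUILD( -
       DICTIONARY_FILENAME => 'orcldict.ora', -
       DICTIONARY_LOCATION => '/oracle/dictionary', -
       OPTIONS => DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);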

  2. Add redo log files.

    Assume that joedevo has made some changes to the database. You can now specify the names of the redo log files that you want to analyze, as follows:

    EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
       LOGFILENAME => 'log1orc1.ora', -
       OPTIONS => DBMS_LOGMNR.NEW);
    

    If desired, add additional redo log files, as follows:

    EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
       LOGFILENAME => 'log2orc1.ora', -
       OPTIONS => DBMS_LOGMNR.ADDFILE);
    
  3. Start LogMiner and limit the search to the specified time range:

    EXECUTE DBMS_LOGMNR.START_LOGMNR( -
       DICTFILENAME => 'orcldict.ora', -
       STARTTIME => TO_DATE('01-Jan-1998 08:30:00','DD-MON-YYYY HH:MI:SS'), -
       ENDTIME => TO_DATE('01-Jan-1998 08:45:00', 'DD-MON-YYYY HH:MI:SS'));
    
  4. Query the V$LOGMNR_CONTENTS view.

    At this point, the V$LOGMNR_CONTENTS view is available for queries. You decide to find all of the changes made by user joedevo to the salary table. Execute the following SELECT statement:

    SELECT SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS 
       WHERE USERNAME = 'joedevo' AND SEG_NAME = 'salary';
    

    For both the SQL_REDO and SQL_UNDO columns, two rows are returned (the format of the data display will be different on your screen). You discover that joedevo requested two operations: he deleted his old salary and then inserted a new, higher salary. You now have the data necessary to undo this operation.

    SQL_REDO                              SQL_UNDO
    --------                              --------
    delete from SALARY                    insert into SALARY(NAME, EMPNO, SAL)
    where EMPNO = 12345                    values ('JOEDEVO', 12345, 500)
    and NAME='JOEDEVO'
    and SAL=500;
    
    insert into SALARY(NAME, EMPNO, SAL)  delete from SALARY
    values('JOEDEVO',12345, 2500)         where EMPNO = 12345
                                          and NAME = 'JOEDEVO'
    2 rows selected                       and SAL = 2500;
    
  5. End the LogMiner session.

    Use the DBMS_LOGMNR.END_LOGMNR procedure to finish the LogMiner session properly:

    EXECUTE DBMS_LOGMNR.END_LOGMNR();
    

Scenario 2: Using LogMiner to Calculate Table Access Statistics

In this example, assume you manage a direct marketing database and want to determine how productive the customer contacts have been in generating revenue for a 2-week period in January. Assume that you have already created the LogMiner dictionary and added the redo log files that you want to search (as demonstrated in the previous example). Take the following steps:

  1. Start LogMiner and specify a range of times:

    EXECUTE DBMS_LOGMNR.START_LOGMNR( -
       STARTTIME => TO_DATE('07-Jan-2003 08:30:00','DD-MON-YYYY HH:MI:SS'), -
       ENDTIME => TO_DATE('21-Jan-2003 08:45:00','DD-MON-YYYY HH:MI:SS'), -
       DICTFILENAME => '/usr/local/dict.ora');
    
  2. Query the V$LOGMNR_CONTENTS view to determine which tables were modified in the time range you specified, as shown in the following example. (This query filters out system tables that traditionally have a $ in their name.)

    SELECT SEG_OWNER, SEG_NAME, COUNT(*) AS Hits FROM
       V$LOGMNR_CONTENTS WHERE SEG_NAME NOT LIKE '%$' GROUP BY
       SEG_OWNER, SEG_NAME ORDER BY Hits DESC;
    
  3. The following data is displayed. (The format of your display may be different.)

    SEG_OWNER          SEG_NAME          Hits
    ---------          --------          ----
    CUST               ACCOUNT            384
    UNIV               EXECDONOR          325
    UNIV               DONOR              234
    UNIV               MEGADONOR           32
    HR                 EMPLOYEES           12
    SYS                DONOR               12
    

    The values in the Hits column show the number of times that the named table had an insert, delete, or update operation performed on it during the 2-week period specified in the query. In this example, the cust.account table was modified the most during the specified 2-week period, and the hr.employees and sys.donor tables were modified the least during the same time period.

  4. End the LogMiner session.

    Use the DBMS_LOGMNR.END_LOGMNR procedure to finish the LogMiner session properly:

    EXECUTE DBMS_LOGMNR.END_LOGMNR();
    

Supported Datatypes, Storage Attributes, and Database and Redo Log File Versions

The following sections provide information about datatype and storage attribute support and the releases of the database and redo log files supported:

Supported Datatypes and Table Storage Attributes

LogMiner supports the following datatypes and table storage attributes. As described in information following this list, some datatypes are supported only in certain releases.

  • CHAR

  • NCHAR

  • VARCHAR2 and VARCHAR

  • NVARCHAR2

  • NUMBER

  • DATE

  • TIMESTAMP

  • TIMESTAMP WITH TIME ZONE

  • TIMESTAMP WITH LOCAL TIME ZONE

  • INTERVAL YEAR TO MONTH

  • INTERVAL DAY TO SECOND

  • RAW

  • CLOB

  • NCLOB

  • BLOB

  • LONG

  • LONG RAW

  • BINARY_FLOAT

  • BINARY_DOUBLE

  • Index-organized tables (IOTs), including those with overflows or LOB columns

  • Function-based indexes

  • Tables using basic table compression and OLTP table compression

  • XMLType data stored in CLOB format

  • XMLType data stored in object-relational format. The contents of the SQL_REDO column for XML data-related operations are never valid SQL or PL/SQL.

  • XMLType data stored as binary XML. The contents of the SQL_REDO column for XML data-related operations are never valid SQL or PL/SQL.

  • Hybrid Columnar Compression (Support depends on the underlying storage system. See Oracle Database Concepts for more information about Hybrid Columnar Compression. Compatibility must be set to 11.2.)

Support for multibyte CLOBs is available only for redo logs generated by a database with compatibility set to a value of 10.1 or higher.

Support for LOB and LONG datatypes is available only for redo logs generated by a database with compatibility set to a value of 9.2.0.0 or higher.

Support for index-organized tables without overflow segment or with no LOB columns in them is available only for redo logs generated by a database with compatibility set to 10.0.0.0 or higher. Support for index-organized tables with overflow segment or with LOB columns is available only for redo logs generated by a database with compatibility set to 10.2.0.0 or higher.

Support for XMLType data stored as binary XML is available only on Oracle Database 11g Release 2 (11.2.0.3) or higher with a redo compatibility setting of 11.2.0.3 or higher.

Support for XMLType data stored in object-relational format is available only on Oracle Database 11g Release 2 (11.2.0.3) or higher with a redo compatibility setting of 11.2.0.3 or higher.
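
Because several of these restrictions depend on the database compatibility setting, one quick way to confirm the value in effect is to query the initialization parameter, for example:

SELECT NAME, VALUE FROM V$PARAMETER WHERE NAME = 'compatible';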

Unsupported Datatypes and Table Storage Attributes

LogMiner does not support the following datatypes and table storage attributes. If a table contains columns having any of these unsupported datatypes, then the entire table is ignored by LogMiner.

  • BFILE datatype

  • Simple and nested abstract datatypes (ADTs)

  • Collections (nested tables and VARRAYs)

  • Object refs

  • SecureFiles (unless database compatibility is set to 11.2 or higher)

Supported Databases and Redo Log File Versions

LogMiner runs only on databases of release 8.1 or later, but you can use it to analyze redo log files from release 8.0 databases. However, the information that LogMiner is able to retrieve from a redo log file depends on the version of the log, not the release of the database in use. For example, redo log files for Oracle9i can be augmented to capture additional information when supplemental logging is enabled. This allows LogMiner functionality to be used to its fullest advantage. Redo log files created with older releases of Oracle will not have that additional data and may therefore have limitations on the operations and datatypes supported by LogMiner.
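
To confirm whether minimal supplemental logging is enabled on the source database, and to enable it if it is not, you can use statements such as the following:

SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;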

SecureFiles LOB Considerations

SecureFiles LOBs are supported when database compatibility is set to 11.2 or higher. Only SQL_REDO columns can be filled in for SecureFiles LOB columns; SQL_UNDO columns are not filled in.

Transparent data encryption and data compression can be enabled on SecureFiles LOB columns at the primary database.

De-duplication of SecureFiles LOB columns, fragment-based operations on SecureFiles LOB columns, and SecureFiles Database File System (DBFS) operations are not supported. Specifically, the following operations contained within the DBMS_LOB PL/SQL package are not supported on SecureFiles LOB columns:

FRAGMENT_DELETE, FRAGMENT_INSERT, FRAGMENT_MOVE, FRAGMENT_REPLACE, COPY_FROM_DBFS_LINK, MOVE_TO_DBFS_LINK, SET_DBFS_LINK, COPY_DBFS_LINK, and SETCONTENTTYPE.

If LogMiner encounters redo generated by any of these operations, then it generates rows with the OPERATION column set to UNSUPPORTED. No SQL_REDO or SQL_UNDO will be generated for these redo records.
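
A query such as the following can be used to check whether a mining session encountered any such unsupported records:

SELECT SEG_OWNER, SEG_NAME, COUNT(*) FROM V$LOGMNR_CONTENTS
   WHERE OPERATION = 'UNSUPPORTED'
   GROUP BY SEG_OWNER, SEG_NAME;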


5 Data Pump Performance

The Data Pump utilities are designed especially for very large databases. If your site has very large quantities of data versus metadata, then you should experience a dramatic increase in performance compared to the original Export and Import utilities. This chapter briefly discusses why the performance is better and also suggests specific steps you can take to enhance performance of export and import operations.

This chapter contains the following sections:

Performance of metadata extraction and database object creation in Data Pump Export and Import remains essentially equivalent to that of the original Export and Import utilities.

Data Performance Improvements for Data Pump Export and Import

The improved performance of the Data Pump Export and Import utilities is attributable to several factors, including the following:

Tuning Performance

Data Pump technology fully uses all available resources to maximize throughput and minimize elapsed job time. For this to happen, a system must be well balanced across CPU, memory, and I/O. In addition, standard performance tuning principles apply. For example, for maximum performance you should ensure that the files that are members of a dump file set reside on separate disks, because the dump files are written and read in parallel. Also, the disks should not be the same ones on which the source or target tablespaces reside.

Any performance tuning activity involves making trade-offs between performance and resource consumption.

Controlling Resource Consumption

The Data Pump Export and Import utilities enable you to dynamically increase and decrease resource consumption for each job. This is done using the PARALLEL parameter to specify a degree of parallelism for the job. (The PARALLEL parameter is the only tuning parameter that is specific to Data Pump.) For maximum throughput, do not set PARALLEL to much more than twice the number of CPUs (two workers for each CPU).


See Also:

  • "PARALLEL" for more information about the Export PARALLEL parameter

  • "PARALLEL" for more information about the Import PARALLEL parameter


As you increase the degree of parallelism, CPU usage, memory consumption, and I/O bandwidth usage also increase. You must ensure that adequate amounts of these resources are available. If necessary, you can distribute files across different disk devices or channels to get the needed I/O bandwidth.

To maximize parallelism, you must supply at least one file for each degree of parallelism. The simplest way of doing this is to use substitution variables in your file names (for example, file%u.dmp). However, depending upon your disk setup (for example, simple, non-striped disks), you might not want to put all dump files on one device. In this case, it is best to specify multiple file names using substitution variables, with each in a separate directory resolving to a separate disk. Even with fast CPUs and fast disks, the path between the CPU and the disk may be the constraining factor in the degree of parallelism that can be sustained.
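
For example, a schema-level export run with a degree of parallelism of 4 and dump files spread across two directory objects might be invoked as follows. This is a sketch only; the schema name and the directory objects dpump_dir1 and dpump_dir2 are placeholders for objects that would have to exist in your environment.

> expdp hr SCHEMAS=hr PARALLEL=4 DUMPFILE=dpump_dir1:hr1_%U.dmp,dpump_dir2:hr2_%U.dmp LOGFILE=dpump_dir1:hr_export.log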

The PARALLEL parameter is valid only in the Enterprise Edition of Oracle Database 11g.

Effects of Compression and Encryption on Performance

The use of Data Pump parameters related to compression and encryption can possibly have a negative impact upon performance of export and import operations. This is because additional CPU resources are required to perform transformations on the raw data.
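
For instance, an export that compresses both data and metadata trades additional CPU time during the export for a smaller dump file. A sketch of such an invocation follows; the schema name and directory object are placeholders, and the COMPRESSION=ALL setting requires the Oracle Advanced Compression option.

> expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_compressed.dmp COMPRESSION=ALL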

Initialization Parameters That Affect Data Pump Performance

The settings for certain initialization parameters can affect the performance of Data Pump Export and Import. In particular, you can try using the following settings to improve performance, although the effect may not be the same on all platforms:

  • DISK_ASYNCH_IO=TRUE

  • DB_BLOCK_CHECKING=FALSE

  • DB_BLOCK_CHECKSUM=FALSE

The following initialization parameters must have values set high enough to allow for maximum parallelism:

  • PROCESSES

  • SESSIONS

  • PARALLEL_MAX_SERVERS

Additionally, the SHARED_POOL_SIZE and UNDO_TABLESPACE initialization parameters should be generously sized. The exact values depend upon the size of your database.

Setting the Size Of the Buffer Cache In a Streams Environment

Oracle Data Pump uses Streams functionality to communicate between processes. If the SGA_TARGET initialization parameter is set, then the STREAMS_POOL_SIZE initialization parameter is automatically set to a reasonable value.

If the SGA_TARGET initialization parameter is not set and the STREAMS_POOL_SIZE initialization parameter is not defined, then the size of the streams pool automatically defaults to 10% of the size of the shared pool.

When the streams pool is created, the required SGA memory is taken from memory allocated to the buffer cache, reducing the size of the cache to less than what was specified by the DB_CACHE_SIZE initialization parameter. This means that if the buffer cache was configured with only the minimal required SGA, then Data Pump operations may not work properly. A minimum size of 10 MB is recommended for STREAMS_POOL_SIZE to ensure successful Data Pump operations.
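
For example, the following statement (a sketch; the value shown is an arbitrary size above the recommended 10 MB minimum, and the scope should be adjusted to your environment) reserves an explicit streams pool:

ALTER SYSTEM SET STREAMS_POOL_SIZE = 64M SCOPE = BOTH;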


Part II

SQL*Loader

The chapters in this part describe the SQL*Loader utility:

Chapter 7, "SQL*Loader Concepts"

This chapter introduces SQL*Loader and describes its features. It also introduces data loading concepts (including object support). It discusses input to SQL*Loader, database preparation, and output from SQL*Loader.

Chapter 8, "SQL*Loader Command-Line Reference"

This chapter describes the command-line syntax used by SQL*Loader. It discusses command-line arguments, suppressing SQL*Loader messages, sizing the bind array, and more.

Chapter 9, "SQL*Loader Control File Reference"

This chapter describes the control file syntax you use to configure SQL*Loader and to describe to SQL*Loader how to map your data to Oracle format. It provides detailed syntax diagrams and information about specifying data files, tables and columns, the location of data, the type and format of data to be loaded, and more.

Chapter 10, "SQL*Loader Field List Reference"

This chapter describes the field list section of a SQL*Loader control file. The field list provides information about fields being loaded, such as position, datatype, conditions, and delimiters.

Chapter 11, "Loading Objects, LOBs, and Collections"

This chapter describes how to load column objects in various formats. It also discusses how to load object tables, REF columns, LOBs, and collections.

Chapter 12, "Conventional and Direct Path Loads"

This chapter describes the differences between a conventional path load and a direct path load. A direct path load is a high-performance option that significantly reduces the time required to load large quantities of data.


9 SQL*Loader Control File Reference

This chapter describes the SQL*Loader control file. The following topics are discussed:

Control File Contents

The SQL*Loader control file is a text file that contains data definition language (DDL) instructions. DDL is used to control the following aspects of a SQL*Loader session:

See Appendix A for syntax diagrams of the SQL*Loader DDL.

To create the SQL*Loader control file, use a text editor such as vi or xemacs.

In general, the control file has three main sections, in the following order:

Example 9-1 shows a sample control file.

Example 9-1 Sample Control File

1    -- This is a sample control file
2    LOAD DATA
3    INFILE 'sample.dat'
4    BADFILE 'sample.bad'
5    DISCARDFILE 'sample.dsc'
6    APPEND
7    INTO TABLE emp
8    WHEN (57) = '.'
9    TRAILING NULLCOLS
10  (hiredate SYSDATE,
      deptno POSITION(1:2)  INTEGER EXTERNAL(2)
              NULLIF deptno=BLANKS,
       job    POSITION(7:14)  CHAR  TERMINATED BY WHITESPACE
              NULLIF job=BLANKS  "UPPER(:job)",
       mgr    POSITION(28:31) INTEGER EXTERNAL 
              TERMINATED BY WHITESPACE, NULLIF mgr=BLANKS,
       ename  POSITION(34:41) CHAR 
              TERMINATED BY WHITESPACE  "UPPER(:ename)",
       empno  POSITION(45) INTEGER EXTERNAL 
              TERMINATED BY WHITESPACE,
       sal    POSITION(51) CHAR  TERMINATED BY WHITESPACE
              "TO_NUMBER(:sal,'$99,999.99')",
       comm   INTEGER EXTERNAL  ENCLOSED BY '(' AND '%'
              ":comm * 100"
    )

In this sample control file, the numbers that appear to the left would not appear in a real control file. They are keyed in this sample to the explanatory notes in the following list:

  1. This is how comments are entered in a control file. See "Comments in the Control File".

  2. The LOAD DATA statement tells SQL*Loader that this is the beginning of a new data load. See Appendix A for syntax information.

  3. The INFILE clause specifies the name of a data file containing the data you want to load. See "Specifying Data Files".

  4. The BADFILE clause specifies the name of a file into which rejected records are placed. See "Specifying the Bad File".

  5. The DISCARDFILE clause specifies the name of a file into which discarded records are placed. See "Specifying the Discard File".

  6. The APPEND clause is one of the options you can use when loading data into a table that is not empty. See "Loading Data into Nonempty Tables".

    To load data into a table that is empty, you would use the INSERT clause. See "Loading Data into Empty Tables".

  7. The INTO TABLE clause enables you to identify tables, fields, and datatypes. It defines the relationship between records in the data file and tables in the database. See "Specifying Table Names".

  8. The WHEN clause specifies one or more field conditions. SQL*Loader decides whether to load the data based on these field conditions. See "Loading Records Based on a Condition".

  9. The TRAILING NULLCOLS clause tells SQL*Loader to treat any relatively positioned columns that are not present in the record as null columns. See "Handling Short Records with Missing Data".

  10. The remainder of the control file contains the field list, which provides information about column formats in the table being loaded. See Chapter 10 for information about that section of the control file.

Comments in the Control File

Comments can appear anywhere in the command section of the file, but they should not appear within the data. Precede any comment with two hyphens, for example:

--This is a comment

All text to the right of the double hyphen is ignored, until the end of the line.

Specifying Command-Line Parameters in the Control File

You can specify command-line parameters in the SQL*Loader control file using the OPTIONS clause. This can be useful when you typically invoke a control file with the same set of options. The OPTIONS clause precedes the LOAD DATA statement.

OPTIONS Clause

The following command-line parameters can be specified using the OPTIONS clause. These parameters are described in greater detail in Chapter 8.

BINDSIZE = n
COLUMNARRAYROWS = n
DATE_CACHE = n
DIRECT = {TRUE | FALSE} 
ERRORS = n
EXTERNAL_TABLE = {NOT_USED | GENERATE_ONLY | EXECUTE}
FILE
LOAD = n 
MULTITHREADING = {TRUE | FALSE}
PARALLEL = {TRUE | FALSE}
READSIZE = n
RESUMABLE = {TRUE | FALSE}
RESUMABLE_NAME = 'text string'
RESUMABLE_TIMEOUT = n
ROWS = n 
SILENT = {HEADER | FEEDBACK | ERRORS | DISCARDS | PARTITIONS | ALL} 
SKIP = n   
SKIP_INDEX_MAINTENANCE = {TRUE | FALSE}
SKIP_UNUSABLE_INDEXES = {TRUE | FALSE}
STREAMSIZE = n

The following is an example use of the OPTIONS clause that you could use in a SQL*Loader control file:

OPTIONS (BINDSIZE=100000, SILENT=(ERRORS, FEEDBACK) )

Note:

Parameter values specified on the command line override parameter values specified in the control file OPTIONS clause.

Specifying File Names and Object Names

In general, SQL*Loader follows the SQL standard for specifying object names (for example, table and column names). The information in this section discusses the following topics:

File Names That Conflict with SQL and SQL*Loader Reserved Words

SQL and SQL*Loader reserved words must be specified within double quotation marks. The only SQL*Loader reserved word is CONSTANT.

You must use double quotation marks if the object name contains special characters other than those recognized by SQL ($, #, _), or if the name is case sensitive.

Specifying SQL Strings

You must specify SQL strings within double quotation marks. The SQL string applies SQL operators to data fields.
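
For example, in a field definition such as the following (the field name is taken from the sample control file earlier in this chapter), the SQL string enclosed in double quotation marks is applied to the field value before it is loaded:

ename  POSITION(34:41)  CHAR  "UPPER(:ename)"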

Operating System Considerations

The following sections discuss situations in which your course of action may depend on the operating system you are using.

Specifying a Complete Path

If you encounter problems when trying to specify a complete path name, it may be due to an operating system-specific incompatibility caused by special characters in the specification. In many cases, specifying the path name within single quotation marks prevents errors.

Backslash Escape Character

In DDL syntax, you can place a double quotation mark inside a string delimited by double quotation marks by preceding it with the escape character, "\" (if the escape character is allowed on your operating system). The same rule applies when single quotation marks are required in a string delimited by single quotation marks.

For example, homedir\data"norm\mydata contains a double quotation mark. Preceding the double quotation mark with a backslash indicates that the double quotation mark is to be taken literally:

INFILE 'homedir\data\"norm\mydata'

You can also put the escape character itself into a string by entering it twice.

For example:

"so'\"far"     or  'so\'"far'     is parsed as   so'"far 
"'so\\far'"    or  '\'so\\far\''  is parsed as  'so\far' 
"so\\\\far"    or  'so\\\\far'    is parsed as   so\\far 

Note:

A double quotation mark in the initial position cannot be preceded by an escape character. Therefore, you should avoid creating strings with an initial quotation mark.

Nonportable Strings

There are two kinds of character strings in a SQL*Loader control file that are not portable between operating systems: filename and file processing option strings. When you convert to a different operating system, you will probably need to modify these strings. All other strings in a SQL*Loader control file should be portable between operating systems.

Using the Backslash as an Escape Character

If your operating system uses the backslash character to separate directories in a path name, and if the release of the Oracle database running on your operating system implements the backslash escape character for file names and other nonportable strings, then you must specify double backslashes in your path names and use single quotation marks.
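
For example, on such a system a path containing backslashes (the directory names here are hypothetical) would be written with doubled backslashes inside single quotation marks:

INFILE 'c:\\topdir\\mydir\\myfile.dat'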

Escape Character Is Sometimes Disallowed

The release of the Oracle database running on your operating system may not implement the escape character for nonportable strings. When the escape character is disallowed, a backslash is treated as a normal character, rather than as an escape character (although it is still usable in all other strings). Then path names such as the following can be specified normally:

INFILE 'topdir\mydir\myfile'

Double backslashes are not needed.

Because the backslash is not recognized as an escape character, strings within single quotation marks cannot be embedded inside another string delimited by single quotation marks. This rule also holds for double quotation marks. A string within double quotation marks cannot be embedded inside another string delimited by double quotation marks.

Identifying XMLType Tables

As of Oracle Database 10g, the XMLTYPE clause is available for use in a SQL*Loader control file. This clause is of the format XMLTYPE(field name). It is used to identify XMLType tables so that the correct SQL statement can be constructed. Example 9-2 shows how the XMLTYPE clause can be used in a SQL*Loader control file to load data into a schema-based XMLType table.

Example 9-2 Identifying XMLType Tables in the SQL*Loader Control File

The XML schema definition is as follows. It registers the XML schema, xdb_user.xsd, in the Oracle XML DB, and then creates the table, xdb_tab5.

begin dbms_xmlschema.registerSchema('xdb_user.xsd',
'<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
            xmlns:xdb="http://xmlns.oracle.com/xdb">
 <xs:element name = "Employee"
        xdb:defaultTable="EMP31B_TAB">
   <xs:complexType>
    <xs:sequence>
      <xs:element name = "EmployeeId" type = "xs:positiveInteger"/>
      <xs:element name = "Name" type = "xs:string"/>
      <xs:element name = "Salary" type = "xs:positiveInteger"/>
      <xs:element name = "DeptId" type = "xs:positiveInteger"
             xdb:SQLName="DEPTID"/>
    </xs:sequence>
   </xs:complexType>
 </xs:element>
</xs:schema>',
TRUE, TRUE, FALSE); end;
/

The table is defined as follows:

CREATE TABLE xdb_tab5 OF XMLTYPE XMLSCHEMA "xdb_user.xsd" ELEMENT "Employee";

The control file used to load data into the table, xdb_tab5, looks as follows. It loads XMLType data using the registered XML schema, xdb_user.xsd. The XMLTYPE clause is used to identify this table as an XMLType table. Either direct path or conventional mode can be used to load the data into the table.

LOAD DATA
INFILE *
INTO TABLE xdb_tab5 TRUNCATE
xmltype(xmldata)
(
  xmldata   char(4000)
)
BEGINDATA
<Employee>  <EmployeeId>111</EmployeeId>  <Name>Ravi</Name>  <Salary>100000</Salary>  <DeptId>12</DeptId></Employee>
<Employee>  <EmployeeId>112</EmployeeId>  <Name>John</Name>  <Salary>150000</Salary>  <DeptId>12</DeptId></Employee>
<Employee>  <EmployeeId>113</EmployeeId>  <Name>Michael</Name>  <Salary>75000</Salary>  <DeptId>12</DeptId></Employee>
<Employee>  <EmployeeId>114</EmployeeId>  <Name>Mark</Name>  <Salary>125000</Salary>  <DeptId>16</DeptId></Employee>
<Employee>  <EmployeeId>115</EmployeeId>  <Name>Aaron</Name>  <Salary>600000</Salary>  <DeptId>16</DeptId></Employee>

See Also:

Oracle XML DB Developer's Guide for more information about loading XML data using SQL*Loader

Specifying Data Files

To specify a data file that contains the data to be loaded, use the INFILE keyword, followed by the file name and optional file processing options string. You can specify multiple files by using multiple INFILE keywords.


Note:

You can also specify the data file from the command line, using the DATA parameter described in "Command-Line Parameters". A file name specified on the command line overrides the first INFILE clause in the control file.

If no file name is specified, then the file name defaults to the control file name with an extension or file type of .dat.

If the control file itself contains the data to be loaded, then specify an asterisk (*). This specification is described in "Identifying Data in the Control File with BEGINDATA" .


Note:

The information in this section applies only to primary data files. It does not apply to LOBFILEs or SDFs.

For information about LOBFILES, see "Loading LOB Data from LOBFILEs".

For information about SDFs, see "Secondary Data Files (SDFs)".


The syntax for INFILE is as follows:

(Syntax diagram: infile.gif)

Table 9-1 describes the parameters for the INFILE keyword.

Table 9-1 Parameters for the INFILE Keyword


INFILE

Specifies that a data file specification follows.

input_filename

Name of the file containing the data.

Any spaces or punctuation marks in the file name must be enclosed in single quotation marks. See "Specifying File Names and Object Names".

*


If your data is in the control file itself, then use an asterisk instead of the file name. If you have data in the control file and in data files, then you must specify the asterisk first in order for the data to be read.

os_file_proc_clause

This is the file-processing options string. It specifies the data file format. It also optimizes data file reads. The syntax used for this string is specific to your operating system. See "Specifying Data File Format and Buffering".


Examples of INFILE Syntax

The following list shows different ways you can specify INFILE syntax:

  • Data contained in the control file itself:

    INFILE  *
    
  • Data contained in a file named sample with a default extension of .dat:

    INFILE  sample
    
  • Data contained in a file named datafile.dat with a full path specified:

    INFILE 'c:/topdir/subdir/datafile.dat' 
    

    Note:

    File names that include spaces or punctuation marks must be enclosed in single quotation marks.

Specifying Multiple Data Files

To load data from multiple data files in one SQL*Loader run, use an INFILE clause for each data file. Data files need not have the same file processing options, although the layout of the records must be identical. For example, two files could be specified with completely different file processing options strings, and a third could consist of data in the control file.

You can also specify a separate discard file and bad file for each data file. In such a case, the separate bad files and discard files must be declared immediately after each data file name. For example, the following excerpt from a control file specifies four data files with separate bad and discard files:

INFILE  mydat1.dat  BADFILE  mydat1.bad  DISCARDFILE mydat1.dis 
INFILE  mydat2.dat 
INFILE  mydat3.dat  DISCARDFILE  mydat3.dis 
INFILE  mydat4.dat  DISCARDMAX  10

  • For mydat1.dat, both a bad file and discard file are explicitly specified. Therefore both files are created, as needed.

  • For mydat2.dat, neither a bad file nor a discard file is specified. Therefore, only the bad file is created, as needed. If created, the bad file has the default file name and extension mydat2.bad. The discard file is not created, even if rows are discarded.

  • For mydat3.dat, the default bad file is created, if needed. A discard file with the specified name (mydat3.dis) is created, as needed.

  • For mydat4.dat, the default bad file is created, if needed. Because the DISCARDMAX option is used, SQL*Loader assumes that a discard file is required and creates it with the default name mydat4.dsc.

Identifying Data in the Control File with BEGINDATA

If the data is included in the control file itself, then the INFILE clause is followed by an asterisk rather than a file name. The actual data is placed in the control file after the load configuration specifications.

Specify the BEGINDATA statement before the first data record. The syntax is:

BEGINDATA 
data
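
For example, the following minimal control file (the table and column names are hypothetical) keeps its data after the BEGINDATA statement:

LOAD DATA
INFILE *
INTO TABLE dept
FIELDS TERMINATED BY ','
(deptno, dname)
BEGINDATA
10,Accounting
20,Research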

Keep the following points in mind when using the BEGINDATA statement:

Specifying Data File Format and Buffering

When configuring SQL*Loader, you can specify an operating system-dependent file processing options string (os_file_proc_clause) in the control file to specify file format and buffering.

For example, suppose that your operating system has the following option-string syntax:

(Syntax diagram: recsize_spec.gif)

In this syntax, RECSIZE is the size of a fixed-length record, and BUFFERS is the number of buffers to use for asynchronous I/O.

To declare a file named mydata.dat as a file that contains 80-byte records and instruct SQL*Loader to use 8 I/O buffers, you would use the following control file entry:

INFILE 'mydata.dat' "RECSIZE 80 BUFFERS 8" 

Note:

This example uses the recommended convention of single quotation marks for file names and double quotation marks for everything else.


See Also:

Oracle Database Platform Guide for Microsoft Windows for information about using the os_file_proc_clause on Windows systems.

Specifying the Bad File

When SQL*Loader executes, it can create a file called a bad file or reject file in which it places records that were rejected because of formatting errors or because they caused Oracle errors. If you have specified that a bad file is to be created, then the following applies:

To specify the name of the bad file, use the BADFILE clause, followed by a file name. If you do not specify a name for the bad file, then the name defaults to the name of the data file with an extension or file type of .bad. You can also specify the bad file from the command line with the BAD parameter described in "Command-Line Parameters".

A file name specified on the command line is associated with the first INFILE clause in the control file, overriding any bad file that may have been specified as part of that clause.

The bad file is created in the same record and file format as the data file so that you can reload the data after you correct it. For data files in stream record format, the record terminator that is found in the data file is also used in the bad file.

The syntax for the bad file is as follows:

(Syntax diagram: badfile.gif)

The BADFILE clause specifies that a file name for the bad file follows.

The filename parameter specifies a valid file name specification for your platform. Any spaces or punctuation marks in the file name must be enclosed in single quotation marks.

Examples of Specifying a Bad File Name

To specify a bad file with file name sample and default file extension or file type of .bad, enter the following in the control file:

BADFILE sample 

To specify a bad file with file name bad0001 and file extension or file type of .rej, enter either of the following lines:

BADFILE bad0001.rej
BADFILE '/REJECT_DIR/bad0001.rej' 

How Bad Files Are Handled with LOBFILEs and SDFs

Data from LOBFILEs and SDFs is not written to a bad file when there are rejected rows. If there is an error loading a LOB, then the row is not rejected. Rather, the LOB column is left empty (not null, but with a length of zero (0) bytes). However, when the LOBFILE is being used to load an XML column and there is an error loading this LOB data, then the XML column is left as null.

Criteria for Rejected Records

A record can be rejected for the following reasons:

  1. Upon insertion, the record causes an Oracle error (such as invalid data for a given datatype).

  2. The record is formatted incorrectly so that SQL*Loader cannot find field boundaries.

  3. The record violates a constraint or tries to make a unique index non-unique.

If the data can be evaluated according to the WHEN clause criteria (even with unbalanced delimiters), then it is either inserted or rejected.

Neither a conventional path nor a direct path load will write a row to any table if it is rejected because of reason number 2 in the previous list.

A conventional path load will not write a row to any tables if reason number 1 or 3 in the previous list is violated for any one table. The row is rejected for that table and written to the reject file.

In a conventional path load, if the data file has a record that is being loaded into multiple tables and that record is rejected from at least one of the tables, then that record is not loaded into any of the tables.

The log file indicates the Oracle error for each rejected record. Case study 4 demonstrates rejected records. (See "SQL*Loader Case Studies" for information on how to access case studies.)

Specifying the Discard File

During execution, SQL*Loader can create a discard file for records that do not meet any of the loading criteria. The records contained in this file are called discarded records. Discarded records do not satisfy any of the WHEN clauses specified in the control file. These records differ from rejected records. Discarded records do not necessarily have any bad data. No insert is attempted on a discarded record.

A discard file is created according to the following rules:

To create a discard file from within a control file, specify any of the following: DISCARDFILE filename, DISCARDS, or DISCARDMAX.

To create a discard file from the command line, specify either DISCARD or DISCARDMAX.

You can specify the discard file directly by specifying its name, or indirectly by specifying the maximum number of discards.

The discard file is created in the same record and file format as the data file. For data files in stream record format, the same record terminator that is found in the data file is also used in the discard file.

Specifying the Discard File in the Control File

To specify the name of the file, use the DISCARDFILE clause, followed by the file name.

(Syntax diagram: discard.gif)

The DISCARDFILE clause specifies that a discard file name follows.

The filename parameter specifies a valid file name specification for your platform. Any spaces or punctuation marks in the file name must be enclosed in single quotation marks.

The default file name is the name of the data file, and the default file extension or file type is .dsc. A discard file name specified on the command line overrides one specified in the control file. If a discard file with that name already exists, then it is either overwritten or a new version is created, depending on your operating system.

Specifying the Discard File from the Command Line

See "DISCARD (file name)" for information about how to specify a discard file from the command line.

A file name specified on the command line overrides any discard file that you may have specified in the control file.

Examples of Specifying a Discard File Name

The following list shows different ways you can specify a name for the discard file from within the control file:

  • To specify a discard file with file name circular and default file extension or file type of .dsc:

    DISCARDFILE  circular 
    
  • To specify a discard file named notappl with the file extension or file type of .may:

    DISCARDFILE notappl.may 
    
  • To specify a full path to the discard file forget.me:

    DISCARDFILE  '/discard_dir/forget.me'
    

Criteria for Discarded Records

If there is no INTO TABLE clause specified for a record, then the record is discarded. This situation occurs when every INTO TABLE clause in the SQL*Loader control file has a WHEN clause and either the record fails to match any of them or all fields are null.

No records are discarded if an INTO TABLE clause is specified without a WHEN clause. An attempt is made to insert every record into such a table. Therefore, records may be rejected, but none are discarded.

Case study 7, Extracting Data from a Formatted Report, provides an example of using a discard file. (See "SQL*Loader Case Studies" for information on how to access case studies.)

How Discard Files Are Handled with LOBFILEs and SDFs

Data from LOBFILEs and SDFs is not written to a discard file when there are discarded rows.

Limiting the Number of Discarded Records

You can limit the number of records to be discarded for each data file by specifying an integer for either the DISCARDS or DISCARDMAX keyword.

When the discard limit is reached, processing of the data file terminates and continues with the next data file, if one exists.

You can specify a different number of discards for each data file. Or, if you specify the number of discards only once, then the maximum number of discards specified applies to all files.

If you specify a maximum number of discards, but no discard file name, then SQL*Loader creates a discard file with the default file name and file extension or file type.
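
For example, the following INFILE clause (the file name is hypothetical) stops processing the data file when 25 records have been discarded; because no discard file name is given, the discarded records are written to the default file mydata.dsc:

INFILE 'mydata.dat' DISCARDMAX 25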

Handling Different Character Encoding Schemes

SQL*Loader supports different character encoding schemes (called character sets, or code pages). SQL*Loader uses features of Oracle's globalization support technology to handle the various single-byte and multibyte character encoding schemes available today.

The following sections provide a brief introduction to some of the supported character encoding schemes.

Multibyte (Asian) Character Sets

Multibyte character sets support Asian languages. Data can be loaded in multibyte format, and database object names (fields, tables, and so on) can be specified with multibyte characters. In the control file, comments and object names can also use multibyte characters.

Unicode Character Sets

SQL*Loader supports loading data that is in a Unicode character set.

Unicode is a universal encoded character set that supports storage of information from most languages in a single character set. Unicode provides a unique code value for every character, regardless of the platform, program, or language. There are two different encodings for Unicode, UTF-16 and UTF-8.


Note:

In this manual, you will see the terms UTF-16 and UTF16 both used. The term UTF-16 is a general reference to UTF-16 encoding for Unicode. The term UTF16 (no hyphen) is the specific name of the character set and is what you should specify for the CHARACTERSET parameter when you want to use UTF-16 encoding. This also applies to UTF-8 and UTF8.

The UTF-16 Unicode encoding is a fixed-width multibyte encoding in which the character codes 0x0000 through 0x007F have the same meaning as the single-byte ASCII codes 0x00 through 0x7F.

The UTF-8 Unicode encoding is a variable-width multibyte encoding in which the character codes 0x00 through 0x7F have the same meaning as ASCII. A character in UTF-8 can be 1 byte, 2 bytes, or 3 bytes long.


Database Character Sets

The Oracle database uses the database character set for data stored in SQL CHAR datatypes (CHAR, VARCHAR2, CLOB, and LONG), for identifiers such as table names, and for SQL statements and PL/SQL source code. Only single-byte character sets and varying-width character sets that include either ASCII or EBCDIC characters are supported as database character sets. Multibyte fixed-width character sets (for example, AL16UTF16) are not supported as the database character set.

An alternative character set can be used in the database for data stored in SQL NCHAR datatypes (NCHAR, NVARCHAR2, and NCLOB). This alternative character set is called the database national character set. Only Unicode character sets are supported as the database national character set.

Data File Character Sets

By default, the data file is in the character set defined by the NLS_LANG parameter. The data file character sets supported with NLS_LANG are the same as those supported as database character sets. SQL*Loader supports all Oracle-supported character sets in the data file (even those not supported as database character sets).

For example, SQL*Loader supports multibyte fixed-width character sets (such as AL16UTF16 and JA16EUCFIXED) in the data file. SQL*Loader also supports UTF-16 encoding with little-endian byte ordering. However, the Oracle database supports only UTF-16 encoding with big-endian byte ordering (AL16UTF16) and only as a database national character set, not as a database character set.

The character set of the data file can be set up by using the NLS_LANG parameter or by specifying a SQL*Loader CHARACTERSET parameter.

Input Character Conversion

The default character set for all data files, if the CHARACTERSET parameter is not specified, is the session character set defined by the NLS_LANG parameter. The character set used in input data files can be specified with the CHARACTERSET parameter.

SQL*Loader can automatically convert data from the data file character set to the database character set or the database national character set, when they differ.

When data character set conversion is required, the target character set should be a superset of the source data file character set. Otherwise, characters that have no equivalent in the target character set are converted to replacement characters, often a default character such as a question mark (?). This causes loss of data.

The sizes of the database character types CHAR and VARCHAR2 can be specified in bytes (byte-length semantics) or in characters (character-length semantics). If they are specified in bytes, and data character set conversion is required, then the converted values may take more bytes than the source values if the target character set uses more bytes than the source character set for any character that is converted. This will result in the following error message being reported if the larger target value exceeds the size of the database column:

ORA-01401: inserted value too large for column

You can avoid this problem by specifying the database column size in characters and also by using character sizes in the control file to describe the data. Another way to avoid this problem is to ensure that the maximum column size is large enough, in bytes, to hold the converted value.
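
For example, a column defined with character-length semantics (the table and column names are hypothetical) reserves room for 20 characters regardless of how many bytes each converted character requires:

CREATE TABLE emp_intl (ename VARCHAR2(20 CHAR));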


Considerations When Loading Data into VARRAYs or Primary-Key-Based REFs

If you use SQL*Loader conventional path or the Oracle Call Interface (OCI) to load data into VARRAYs or into primary-key-based REFs, and the data being loaded is in a different character set than the database character set, then problems such as the following might occur:

  • Rows might be rejected because a field is too large for the database column, but in reality the field is not too large.

  • A load might be abnormally terminated without any rows being loaded, when only the field that really was too large should have been rejected.

  • Rows might be reported as loaded correctly, but the primary-key-based REF columns are returned as blank when they are selected with SQL*Plus.

To avoid these problems, set the client character set (using the NLS_LANG environment variable) to the database character set before you load the data.
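
For example, if the database character set were AL32UTF8, the client character set could be set as follows before running the load. The exact method of setting environment variables depends on your operating system; this is a UNIX shell sketch:

export NLS_LANG=AMERICAN_AMERICA.AL32UTF8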

CHARACTERSET Parameter

Specifying the CHARACTERSET parameter tells SQL*Loader the character set of the input data file. The default character set for all data files, if the CHARACTERSET parameter is not specified, is the session character set defined by the NLS_LANG parameter. Only character data (fields in the SQL*Loader datatypes CHAR, VARCHAR, VARCHARC, numeric EXTERNAL, and the datetime and interval datatypes) is affected by the character set of the data file.

The CHARACTERSET syntax is as follows:

CHARACTERSET char_set_name 

The char_set_name variable specifies the character set name. Normally, the specified name must be the name of an Oracle-supported character set.

For UTF-16 Unicode encoding, use the name UTF16 rather than AL16UTF16. AL16UTF16, which is the supported Oracle character set name for UTF-16 encoded data, is only for UTF-16 data that is in big-endian byte order. However, because you are allowed to set up data using the byte order of the system where you create the data file, the data in the data file can be either big-endian or little-endian. Therefore, a different character set name (UTF16) is used. The character set name AL16UTF16 is also supported. But if you specify AL16UTF16 for a data file that has little-endian byte order, then SQL*Loader issues a warning message and processes the data file as big-endian.

The CHARACTERSET parameter can be specified for primary data files and also for LOBFILEs and SDFs. All primary data files are assumed to be in the same character set. A CHARACTERSET parameter specified before the INFILE parameter applies to the entire list of primary data files. If the CHARACTERSET parameter is specified for primary data files, then the specified value will also be used as the default for LOBFILEs and SDFs. This default setting can be overridden by specifying the CHARACTERSET parameter with the LOBFILE or SDF specification.
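
For example, in the following control file fragment (file, table, and column names are hypothetical), the CHARACTERSET parameter appears before the INFILE clauses and therefore applies to both primary data files:

LOAD DATA
CHARACTERSET UTF8
INFILE 'data1.dat'
INFILE 'data2.dat'
APPEND
INTO TABLE emp
FIELDS TERMINATED BY ','
(empno, ename)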

The character set specified with the CHARACTERSET parameter does not apply to data in the control file (specified with INFILE). To load data in a character set other than the one specified for your session by the NLS_LANG parameter, you must place the data in a separate data file.


Control File Character Set

The SQL*Loader control file itself is assumed to be in the character set specified for your session by the NLS_LANG parameter. If the control file character set is different from the data file character set, then keep the following issue in mind. Delimiters and comparison clause values specified in the SQL*Loader control file as character strings are converted from the control file character set to the data file character set before any comparisons are made. To ensure that the specifications are correct, you may prefer to specify hexadecimal strings, rather than character string values.

If hexadecimal strings are used with a data file in the UTF-16 Unicode encoding, then the byte order is different on a big-endian versus a little-endian system. For example, "," (comma) in UTF-16 on a big-endian system is X'002c'. On a little-endian system it is X'2c00'. SQL*Loader requires that you always specify hexadecimal strings in big-endian format. If necessary, SQL*Loader swaps the bytes before making comparisons. This allows the same syntax to be used in the control file on both a big-endian and a little-endian system.

Record terminators for data files that are in stream format in the UTF-16 Unicode encoding default to "\n" in UTF-16 (that is, 0x000A on a big-endian system and 0x0A00 on a little-endian system). You can override these default settings by using the "STR 'char_str'" or the "STR x'hex_str'" specification on the INFILE line. For example, you could use either of the following to specify that 'ab' is to be used as the record terminator, instead of '\n'.

INFILE myfile.dat "STR 'ab'"

INFILE myfile.dat "STR x'00410042'"

Any data included after the BEGINDATA statement is also assumed to be in the character set specified for your session by the NLS_LANG parameter.

For the SQL*Loader datatypes (CHAR, VARCHAR, VARCHARC, DATE, and EXTERNAL numerics), SQL*Loader supports lengths of character fields that are specified in either bytes (byte-length semantics) or characters (character-length semantics). For example, the specification CHAR(10) in the control file can mean 10 bytes or 10 characters. These are equivalent if the data file uses a single-byte character set. However, they are often different if the data file uses a multibyte character set.

To avoid insertion errors caused by expansion of character strings during character set conversion, use character-length semantics in both the data file and the target database columns.

Character-Length Semantics

Byte-length semantics are the default for all data files except those that use the UTF16 character set (which uses character-length semantics by default). To override the default you can specify CHAR or CHARACTER, as shown in the following syntax:

(Syntax diagram: char_length.gif)

The LENGTH parameter is placed after the CHARACTERSET parameter in the SQL*Loader control file. The LENGTH parameter applies to the syntax specification for primary data files and also to LOBFILEs and secondary data files (SDFs). A LENGTH specification before the INFILE parameters applies to the entire list of primary data files. The LENGTH specification specified for the primary data file is used as the default for LOBFILEs and SDFs. You can override that default by specifying LENGTH with the LOBFILE or SDF specification. Unlike the CHARACTERSET parameter, the LENGTH parameter can also apply to data contained within the control file itself (that is, INFILE * syntax).

You can specify CHARACTER instead of CHAR for the LENGTH parameter.
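
For example, the following fragment (file, table, and column names are hypothetical) places the LENGTH parameter after the CHARACTERSET parameter so that character-length semantics are used for the data file:

LOAD DATA
CHARACTERSET UTF8 LENGTH CHAR
INFILE 'emp_utf8.dat'
APPEND
INTO TABLE emp
FIELDS TERMINATED BY ','
(empno, ename CHAR(10))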

If character-length semantics are being used for a SQL*Loader data file, then the following SQL*Loader datatypes will use character-length semantics:

  • CHAR

  • VARCHAR

  • VARCHARC

  • DATE

  • EXTERNAL numerics (INTEGER, FLOAT, DECIMAL, and ZONED)

For the VARCHAR datatype, the length subfield is still a binary SMALLINT length subfield, but its value indicates the length of the character string in characters.

The following datatypes use byte-length semantics even if character-length semantics are being used for the data file, because the data is binary, or is in a special binary-encoded form in the case of ZONED and DECIMAL:

  • INTEGER

  • SMALLINT

  • FLOAT

  • DOUBLE

  • BYTEINT

  • ZONED

  • DECIMAL

  • RAW

  • VARRAW

  • VARRAWC

  • GRAPHIC

  • GRAPHIC EXTERNAL

  • VARGRAPHIC

The start and end arguments to the POSITION parameter are interpreted in bytes, even if character-length semantics are in use in a data file. This is necessary to handle data files that have a mix of data of different datatypes, some of which use character-length semantics, and some of which use byte-length semantics. It is also needed to handle position with the VARCHAR datatype, which has a SMALLINT length field and then the character data. The SMALLINT length field takes up a certain number of bytes depending on the system (usually 2 bytes), but its value indicates the length of the character string in characters.

Character-length semantics in the data file can be used independent of whether character-length semantics are used for the database columns. Therefore, the data file and the database columns can use either the same or different length semantics.

Shift-sensitive Character Data

In general, loading shift-sensitive character data can be much slower than loading simple ASCII or EBCDIC data. The fastest way to load shift-sensitive character data is to use fixed-position fields without delimiters. To improve performance, remember the following points:

  • The field data must have an equal number of shift-out/shift-in bytes.

  • The field must start and end in single-byte mode.

  • It is acceptable for the first byte to be shift-out and the last byte to be shift-in.

  • The first and last characters cannot be multibyte.

  • If blanks are not preserved and multibyte-blank-checking is required, then a slower path is used. This can happen when the shift-in byte is the last byte of a field after single-byte blank stripping is performed.

Interrupted Loads

Loads are interrupted and discontinued for several reasons. A primary reason is space errors, in which SQL*Loader runs out of space for data rows or index entries. A load might also be discontinued because the maximum number of errors was exceeded, an unexpected error was returned to SQL*Loader from the server, a record was too long in the data file, or a Ctrl+C was executed.

The behavior of SQL*Loader when a load is discontinued varies depending on whether it is a conventional path load or a direct path load, and on the reason the load was interrupted. Additionally, when an interrupted load is continued, the use and value of the SKIP parameter can vary depending on the particular case. The following sections explain the possible scenarios.

Discontinued Conventional Path Loads

In a conventional path load, data is committed after all data in the bind array is loaded into all tables. If the load is discontinued, then only the rows that were processed up to the time of the last commit operation are loaded. There is no partial commit of data.

Discontinued Direct Path Loads

In a direct path load, the behavior of a discontinued load varies depending on the reason the load was discontinued:

Load Discontinued Because of Space Errors

If a load is discontinued because of space errors, then the behavior of SQL*Loader depends on whether you are loading data into multiple subpartitions.

  • Space errors when loading data into multiple subpartitions (that is, loading into a partitioned table, a composite partitioned table, or one partition of a composite partitioned table):

    If space errors occur when loading into multiple subpartitions, then the load is discontinued and no data is saved unless ROWS has been specified (in which case, all data that was previously committed will be saved). The reason for this behavior is that it is possible that rows might be loaded out of order. This is because each row is assigned (not necessarily in order) to a partition and each partition is loaded separately. If the load discontinues before all rows assigned to partitions are loaded, then the row for record "n" may have been loaded, but not the row for record "n-1". Therefore, the load cannot be continued by simply using SKIP=N.

  • Space errors when loading data into an unpartitioned table, one partition of a partitioned table, or one subpartition of a composite partitioned table:

    If there is one INTO TABLE statement in the control file, then SQL*Loader commits as many rows as were loaded before the error occurred.

    If there are multiple INTO TABLE statements in the control file, then SQL*Loader loads data already read from the data file into other tables and then commits the data.

    In either case, this behavior is independent of whether the ROWS parameter was specified. When you continue the load, you can use the SKIP parameter to skip rows that have already been loaded. In the case of multiple INTO TABLE statements, a different number of rows could have been loaded into each table, so to continue the load you would need to specify a different value for the SKIP parameter for every table. SQL*Loader only reports the value for the SKIP parameter if it is the same for all tables.

Load Discontinued Because Maximum Number of Errors Exceeded

If the maximum number of errors is exceeded, then SQL*Loader stops loading records into any table and the work done to that point is committed. This means that when you continue the load, the value you specify for the SKIP parameter may be different for different tables. SQL*Loader reports the value for the SKIP parameter only if it is the same for all tables.

Load Discontinued Because of Fatal Errors

If a fatal error is encountered, then the load is stopped and no data is saved unless ROWS was specified at the beginning of the load. In that case, all data that was previously committed is saved. SQL*Loader reports the value for the SKIP parameter only if it is the same for all tables.

Load Discontinued Because a Ctrl+C Was Issued

If SQL*Loader is in the middle of saving data when a Ctrl+C is issued, then it continues to do the save and then stops the load after the save completes. Otherwise, SQL*Loader stops the load without committing any work that was not committed already. This means that the value of the SKIP parameter will be the same for all tables.

Status of Tables and Indexes After an Interrupted Load

When a load is discontinued, any data already loaded remains in the tables, and the tables are left in a valid state. If the conventional path is used, then all indexes are left in a valid state.

If the direct path load method is used, then any indexes on the table are left in an unusable state. You can either rebuild or re-create the indexes before continuing, or after the load is restarted and completes.

Other indexes are valid if no other errors occurred. See "Indexes Left in an Unusable State" for other reasons why an index might be left in an unusable state.

Using the Log File to Determine Load Status

The SQL*Loader log file tells you the state of the tables and indexes and the number of logical records already read from the input data file. Use this information to resume the load where it left off.

Continuing Single-Table Loads

When SQL*Loader must discontinue a direct path or conventional path load before it is finished, some rows have probably already been committed or marked with savepoints. To continue the discontinued load, use the SKIP parameter to specify the number of logical records that have already been processed by the previous load. At the time the load is discontinued, the value for SKIP is written to the log file in a message similar to the following:

Specify SKIP=1001 when continuing the load.

This message specifying the value of the SKIP parameter is preceded by a message indicating why the load was discontinued.

Note that for multiple-table loads, the value of the SKIP parameter is displayed only if it is the same for all tables.
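
For example, if the log file reported SKIP=1001 for a single-table load, the load could be resumed with a command such as the following (the user name and control file name are hypothetical; SQL*Loader prompts for the password):

sqlldr scott CONTROL=emp.ctl SKIP=1001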

Assembling Logical Records from Physical Records

As of Oracle9i, user-defined record sizes larger than 64 KB are supported (see "READSIZE (read buffer size)"). This reduces the need to break up logical records into multiple physical records. However, there may still be situations in which you may want to do so. At some point, when you want to combine those multiple physical records back into one logical record, you can use one of the following clauses, depending on your data:

Using CONCATENATE to Assemble Logical Records

Use CONCATENATE when you want SQL*Loader to always combine the same number of physical records to form one logical record. In the following example, integer specifies the number of physical records to combine.

CONCATENATE  integer 

The integer value specified for CONCATENATE determines the number of physical record structures that SQL*Loader allocates for each row in the column array. In direct path loads, the default value for COLUMNARRAYROWS is large, so if you also specify a large value for CONCATENATE, then excessive memory allocation can occur. If this happens, you can improve performance by reducing the value of the COLUMNARRAYROWS parameter to lower the number of rows in a column array.

Using CONTINUEIF to Assemble Logical Records

Use CONTINUEIF if the number of physical records to be combined varies. The CONTINUEIF clause is followed by a condition that is evaluated for each physical record, as it is read. For example, two records might be combined if a pound sign (#) were in byte position 80 of the first record. If any other character were there, then the second record would not be added to the first.
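
For example, the pound-sign condition just described could be written as shown below; the CONTINUEIF clause appears in the control file after the INFILE clause and before the INTO TABLE clause:

CONTINUEIF THIS (80) = '#'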

The full syntax for CONTINUEIF adds even more flexibility:

(Syntax diagram: continueif.gif)

Table 9-2 describes the parameters for the CONTINUEIF clause.

Table 9-2 Parameters for the CONTINUEIF Clause


THIS

If the condition is true in the current record, then the next physical record is read and concatenated to the current physical record, continuing until the condition is false. If the condition is false, then the current physical record becomes the last physical record of the current logical record. THIS is the default.

NEXT

If the condition is true in the next record, then the current physical record is concatenated to the current logical record, continuing until the condition is false.

operator

The supported operators are equal (=) and not equal (!= or <>).

For the equal operator, the field and comparison string must match exactly for the condition to be true. For the not equal operator, they can differ in any character.

LAST

This test is similar to THIS, but the test is always against the last nonblank character. If the last nonblank character in the current physical record meets the test, then the next physical record is read and concatenated to the current physical record, continuing until the condition is false. If the condition is false in the current record, then the current physical record is the last physical record of the current logical record.

LAST allows only a single character-continuation field (as opposed to THIS and NEXT, which allow multiple character-continuation fields).

pos_spec

Specifies the starting and ending column numbers in the physical record.

Column numbers start with 1. Either a hyphen or a colon is acceptable (start-end or start:end).

If you omit end, then the length of the continuation field is the length of the byte string or character string. If you use end, and the length of the resulting continuation field is not the same as that of the byte string or the character string, then the shorter one is padded. Character strings are padded with blanks, hexadecimal strings with zeros.

str

A string of characters to be compared to the continuation field defined by start and end, according to the operator. The string must be enclosed in double or single quotation marks. The comparison is made character by character, blank padding on the right if necessary.

X'hex-str'

A string of bytes in hexadecimal format used in the same way as str. For example, X'1FB033' would represent the three bytes with values 1F, B0, and 33 (hexadecimal).

PRESERVE

Includes 'char_string' or X'hex_string' in the logical record. The default is to exclude them.


The positions in the CONTINUEIF clause refer to positions in each physical record. This is the only time you refer to positions in physical records. All other references are to logical records.

For CONTINUEIF THIS and CONTINUEIF LAST, if the PRESERVE parameter is not specified, then the continuation field is removed from all physical records when the logical record is assembled. That is, data values are allowed to span the records with no extra characters (continuation characters) in the middle. For example, if CONTINUEIF THIS(3:5)='***' is specified, then positions 3 through 5 are removed from all records. This means that the continuation characters are removed if they are in positions 3 through 5 of the record. It also means that the characters in positions 3 through 5 are removed from the record even if the continuation characters are not in positions 3 through 5.

For CONTINUEIF THIS and CONTINUEIF LAST, if the PRESERVE parameter is used, then the continuation field is kept in all physical records when the logical record is assembled.

CONTINUEIF LAST differs from CONTINUEIF THIS and CONTINUEIF NEXT. For CONTINUEIF LAST, where the positions of the continuation field vary from record to record, the continuation field is never removed, even if PRESERVE is not specified.

Example 9-3 through Example 9-6 show the use of CONTINUEIF THIS and CONTINUEIF NEXT, with and without the PRESERVE parameter.

Example 9-3 CONTINUEIF THIS Without the PRESERVE Parameter

Assume that you have physical records 14 bytes long and that a period represents a space:

        %%aaaaaaaa....
        %%bbbbbbbb....
        ..cccccccc....
        %%dddddddddd..
        %%eeeeeeeeee..
        ..ffffffffff..

In this example, the CONTINUEIF THIS clause does not use the PRESERVE parameter:

CONTINUEIF THIS (1:2) = '%%'

Therefore, the logical records are assembled as follows:

        aaaaaaaa....bbbbbbbb....cccccccc....
        dddddddddd..eeeeeeeeee..ffffffffff..

Note that columns 1 and 2 (for example, %% in physical record 1) are removed from the physical records when the logical records are assembled.

Example 9-4 CONTINUEIF THIS with the PRESERVE Parameter

Assume that you have the same physical records as in Example 9-3.

In this example, the CONTINUEIF THIS clause uses the PRESERVE parameter:

CONTINUEIF THIS PRESERVE (1:2) = '%%'

Therefore, the logical records are assembled as follows:

        %%aaaaaaaa....%%bbbbbbbb......cccccccc....
        %%dddddddddd..%%eeeeeeeeee....ffffffffff..

Note that columns 1 and 2 are not removed from the physical records when the logical records are assembled.

Example 9-5 CONTINUEIF NEXT Without the PRESERVE Parameter

Assume that you have physical records 14 bytes long and that a period represents a space:

        ..aaaaaaaa....
        %%bbbbbbbb....
        %%cccccccc....
        ..dddddddddd..
        %%eeeeeeeeee..
        %%ffffffffff..

In this example, the CONTINUEIF NEXT clause does not use the PRESERVE parameter:

CONTINUEIF NEXT (1:2) = '%%'

Therefore, the logical records are assembled as follows (the same results as for Example 9-3).

        aaaaaaaa....bbbbbbbb....cccccccc....
        dddddddddd..eeeeeeeeee..ffffffffff..

Example 9-6 CONTINUEIF NEXT with the PRESERVE Parameter

Assume that you have the same physical records as in Example 9-5.

In this example, the CONTINUEIF NEXT clause uses the PRESERVE parameter:

CONTINUEIF NEXT PRESERVE (1:2) = '%%'

Therefore, the logical records are assembled as follows:

        ..aaaaaaaa....%%bbbbbbbb....%%cccccccc....
        ..dddddddddd..%%eeeeeeeeee..%%ffffffffff..

See Also:

Case study 4, Loading Combined Physical Records, for an example of the CONTINUEIF clause. (See "SQL*Loader Case Studies" for information on how to access case studies.)

Loading Logical Records into Tables

This section describes the way in which you specify:

Specifying Table Names

The INTO TABLE clause of the LOAD DATA statement enables you to identify tables, fields, and datatypes. It defines the relationship between records in the data file and tables in the database. The specification of fields and datatypes is described in later sections.

INTO TABLE Clause

Among its many functions, the INTO TABLE clause enables you to specify the table into which you load data. To load multiple tables, you include one INTO TABLE clause for each table you want to load.

To begin an INTO TABLE clause, use the keywords INTO TABLE, followed by the name of the Oracle table that is to receive the data.

The syntax is as follows:

(Syntax diagram: into_table1.gif)

The table must already exist. The table name should be enclosed in double quotation marks if it is the same as any SQL or SQL*Loader reserved keyword, if it contains any special characters, or if it is case sensitive.

INTO TABLE scott."CONSTANT"
INTO TABLE scott."Constant" 
INTO TABLE scott."-CONSTANT" 

The user must have INSERT privileges for the table being loaded. If the table is not in the user's schema, then the user must either use a synonym to reference the table or include the schema name as part of the table name (for example, scott.emp refers to the table emp in the scott schema).


Note:

SQL*Loader considers the default schema to be whatever schema is current after your connection to the database completes. This means that the default schema will not necessarily be the one you specified in the connect string, if there are logon triggers present that are executed during connection to a database.

If you have a logon trigger that changes your current schema to a different one when you connect to a certain database, then SQL*Loader uses that new schema as the default.


Table-Specific Loading Method

When you are loading a table, you can use the INTO TABLE clause to specify a table-specific loading method (INSERT, APPEND, REPLACE, or TRUNCATE) that applies only to that table. That method overrides the global table-loading method. The global table-loading method is INSERT, by default, unless a different method was specified before any INTO TABLE clauses. The following sections discuss using these options to load data into empty and nonempty tables.

Loading Data into Empty Tables

If the tables you are loading into are empty, then use the INSERT option.

INSERT

This is SQL*Loader's default method. It requires the table to be empty before loading. SQL*Loader terminates with an error if the table contains rows. Case study 1, Loading Variable-Length Data, provides an example. (See "SQL*Loader Case Studies" for information on how to access case studies.)

Loading Data into Nonempty Tables

If the tables you are loading into already contain data, then you have three options:

  • APPEND

  • REPLACE

  • TRUNCATE


    Caution:

    When REPLACE or TRUNCATE is specified, the entire table is replaced, not just individual rows. After the rows are successfully deleted, a COMMIT statement is issued. You cannot recover the data that was in the table before the load, unless it was saved with Export or a comparable utility.

APPEND

If data already exists in the table, then SQL*Loader appends the new rows to it. If data does not already exist, then the new rows are simply loaded. You must have SELECT privilege to use the APPEND option. Case study 3, Loading a Delimited Free-Format File, provides an example. (See "SQL*Loader Case Studies" for information on how to access case studies.)

REPLACE

The REPLACE option executes a SQL DELETE FROM TABLE statement. All rows in the table are deleted and the new data is loaded. The table must be in your schema, or you must have DELETE privilege on the table. Case study 4, Loading Combined Physical Records, provides an example. (See "SQL*Loader Case Studies" for information on how to access case studies.)

The row deletes cause any delete triggers defined on the table to fire. If DELETE CASCADE has been specified for the table, then the cascaded deletes are carried out. For more information about cascaded deletes, see the information about data integrity in Oracle Database Concepts.

Updating Existing Rows

The REPLACE method is a table replacement, not a replacement of individual rows. SQL*Loader does not update existing records, even if they have null columns. To update existing rows, use the following procedure:

  1. Load your data into a work table.

  2. Use the SQL UPDATE statement with correlated subqueries.

  3. Drop the work table.

TRUNCATE

The TRUNCATE option executes a SQL TRUNCATE TABLE table_name REUSE STORAGE statement, which means that the table's extents will be reused. The TRUNCATE option quickly and efficiently deletes all rows from a table or cluster, to achieve the best possible performance. For the TRUNCATE statement to operate, the table's referential integrity constraints must first be disabled. If they have not been disabled, then SQL*Loader returns an error.

Once the integrity constraints have been disabled, DELETE CASCADE is no longer defined for the table. If the DELETE CASCADE functionality is needed, then the contents of the table must be manually deleted before the load begins.
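
For example, if a child table emp has a foreign key constraint named emp_dept_fk that references the table dept being loaded with TRUNCATE (both names are hypothetical), you might disable the constraint before the load and re-enable it afterward:

ALTER TABLE emp DISABLE CONSTRAINT emp_dept_fk;
-- run the SQL*Loader job whose control file specifies TRUNCATE for dept
ALTER TABLE emp ENABLE CONSTRAINT emp_dept_fk;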

The table must be in your schema, or you must have the DROP ANY TABLE privilege.


See Also:

Oracle Database SQL Language Reference for more information about the SQL statements discussed in this section

Table-Specific OPTIONS Parameter

The OPTIONS parameter can be specified for individual tables in a parallel load. (It is valid only for a parallel load.)

The syntax for the OPTIONS parameter is as follows:

[Syntax diagram: table-specific OPTIONS clause]

Loading Records Based on a Condition

You can choose to load or discard a logical record by using the WHEN clause to test a condition in the record.

The WHEN clause appears after the table name and is followed by one or more field conditions. The syntax for field_condition is as follows:

[Syntax diagram: field_condition]

For example, the following clause indicates that any record with the value "q" in the fifth column position should be loaded:

WHEN (5) = 'q' 

A WHEN clause can contain several comparisons, provided each is preceded by AND. Parentheses are optional, but should be used for clarity with multiple comparisons joined by AND. For example:

WHEN (deptno = '10') AND (job = 'SALES') 


Using the WHEN Clause with LOBFILEs and SDFs

If a record with a LOBFILE or SDF is discarded, then SQL*Loader skips the corresponding data in that LOBFILE or SDF.

Specifying Default Data Delimiters

If all data fields are terminated similarly in the data file, then you can use the FIELDS clause to indicate the default delimiters. The syntax for the fields_spec, termination_spec, and enclosure_spec clauses is as follows:

termination_spec

[Syntax diagram: termination_spec]


Note:

Terminator strings can contain one or more characters. Also, TERMINATED BY EOF applies only to loading LOBs from a LOBFILE.

enclosure_spec

[Syntax diagram: enclosure_spec]


Note:

Enclosure strings can contain one or more characters.

You can override the delimiter for any given column by specifying it after the column name. Case study 3, Loading a Delimited Free-Format File, provides an example. (See "SQL*Loader Case Studies" for information on how to access case studies.)
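
As a sketch (the table and field names are assumptions), the following fragment sets a comma as the default terminator for all fields and overrides it for the last column only:

INTO TABLE books
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  (isbn     CHAR,
   title    CHAR,
   keywords CHAR TERMINATED BY ';')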



Handling Short Records with Missing Data

When the control file definition specifies more fields for a record than are present in the record, SQL*Loader must determine whether the remaining (specified) columns should be considered null or whether an error should be generated.

If the control file definition explicitly states that a field's starting position is beyond the end of the logical record, then SQL*Loader always defines the field as null. If a field is defined with a relative position (such as dname and loc in the following example), and the record ends before the field is found, then SQL*Loader could either treat the field as null or generate an error. SQL*Loader uses the presence or absence of the TRAILING NULLCOLS clause (shown in the following syntax diagram) to determine the course of action.

[Syntax diagrams: INTO TABLE clause showing the TRAILING NULLCOLS option]

TRAILING NULLCOLS Clause

The TRAILING NULLCOLS clause tells SQL*Loader to treat any relatively positioned columns that are not present in the record as null columns.

For example, consider the following data:

10 Accounting 

Assume that the preceding data is read with the following control file and the record ends after dname:

INTO TABLE dept 
    TRAILING NULLCOLS 
( deptno CHAR TERMINATED BY " ", 
  dname  CHAR TERMINATED BY WHITESPACE, 
  loc    CHAR TERMINATED BY WHITESPACE 
) 

In this case, the remaining loc field is set to null. Without the TRAILING NULLCOLS clause, an error would be generated due to missing data.


See Also:

Case study 7, Extracting Data from a Formatted Report, for an example of using TRAILING NULLCOLS (see "SQL*Loader Case Studies" for information on how to access case studies)

Index Options

This section describes the following SQL*Loader options that control how index entries are created:

SORTED INDEXES Clause

The SORTED INDEXES clause applies to direct path loads. It tells SQL*Loader that the incoming data has already been sorted on the specified indexes, allowing SQL*Loader to optimize performance.
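
For example, in a direct path load you might declare that the input data is already ordered on a hypothetical index named empix:

INTO TABLE emp
SORTED INDEXES (empix)
  (empno POSITION(1:4)  INTEGER EXTERNAL,
   ename POSITION(6:15) CHAR)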

SINGLEROW Option

The SINGLEROW option is intended for use during a direct path load with APPEND on systems with limited memory, or when loading a small number of records into a large table. This option inserts each index entry directly into the index, one record at a time.

By default, SQL*Loader does not use SINGLEROW to append records to a table. Instead, index entries are put into a separate, temporary storage area and merged with the original index at the end of the load. This method achieves better performance and produces an optimal index, but it requires extra storage space. During the merge operation, the original index, the new index, and the space for new entries all simultaneously occupy storage space.

With the SINGLEROW option, storage space is not required for new index entries or for a new index. The resulting index may not be as optimal as a freshly sorted one, but it takes less space to produce. It also takes more time because additional UNDO information is generated for each index insert. This option is suggested for use when either of the following situations exists:

  • Available storage is limited.

  • The number of records to be loaded is small compared to the size of the table (a ratio of 1:20 or less is recommended).

Benefits of Using Multiple INTO TABLE Clauses

Multiple INTO TABLE clauses enable you to:

In the first case, it is common for the INTO TABLE clauses to refer to the same table. This section illustrates the different ways to use multiple INTO TABLE clauses and shows you how to use the POSITION parameter.


Note:

A key point when using multiple INTO TABLE clauses is that field scanning continues from where it left off when a new INTO TABLE clause is processed. The remainder of this section details important ways to make use of that behavior. It also describes alternative ways of using fixed field locations or the POSITION parameter.

Extracting Multiple Logical Records

Some data storage and transfer media have fixed-length physical records. When the data records are short, more than one can be stored in a single, physical record to use the storage space efficiently.

In this example, SQL*Loader treats a single physical record in the input file as two logical records and uses two INTO TABLE clauses to load the data into the emp table. For example, assume the data is as follows:

1119 Smith      1120 Yvonne 
1121 Albert     1130 Thomas 

The following control file extracts the logical records:

INTO TABLE emp 
     (empno POSITION(1:4)  INTEGER EXTERNAL, 
      ename POSITION(6:15) CHAR) 
INTO TABLE emp 
     (empno POSITION(17:20) INTEGER EXTERNAL, 
      ename POSITION(21:30) CHAR) 

Relative Positioning Based on Delimiters

The same record could be loaded with a different specification. The following control file uses relative positioning instead of fixed positioning. It specifies that each field is delimited by a single blank (" ") or with an undetermined number of blanks and tabs (WHITESPACE):

INTO TABLE emp 
     (empno INTEGER EXTERNAL TERMINATED BY " ", 
      ename CHAR             TERMINATED BY WHITESPACE) 
INTO TABLE emp 
     (empno INTEGER EXTERNAL TERMINATED BY " ", 
      ename CHAR             TERMINATED BY WHITESPACE) 

The important point in this example is that the second empno field is found immediately after the first ename, although it is in a separate INTO TABLE clause. Field scanning does not start over from the beginning of the record for a new INTO TABLE clause. Instead, scanning continues where it left off.

To force record scanning to start in a specific location, you use the POSITION parameter. That mechanism is described in "Distinguishing Different Input Record Formats" and in "Loading Data into Multiple Tables".

Distinguishing Different Input Record Formats

A single data file might contain records in a variety of formats. Consider the following data, in which emp and dept records are intermixed:

1 50   Manufacturing       — DEPT record 
2 1119 Smith      50       — EMP record 
2 1120 Snyder     50 
1 60   Shipping 
2 1121 Stevens    60 

A record ID field distinguishes between the two formats. Department records have a 1 in the first column, while employee records have a 2. The following control file uses exact positioning to load this data:

INTO TABLE dept 
   WHEN recid = 1 
   (recid  FILLER POSITION(1:1)  INTEGER EXTERNAL,
    deptno POSITION(3:4)  INTEGER EXTERNAL, 
    dname  POSITION(8:21) CHAR) 
INTO TABLE emp 
   WHEN recid <> 1 
   (recid  FILLER POSITION(1:1)   INTEGER EXTERNAL,
    empno  POSITION(3:6)   INTEGER EXTERNAL, 
    ename  POSITION(8:17)  CHAR, 
    deptno POSITION(19:20) INTEGER EXTERNAL) 

Relative Positioning Based on the POSITION Parameter

The records in the previous example could also be loaded as delimited data. In this case, however, it is necessary to use the POSITION parameter. The following control file could be used:

INTO TABLE dept 
   WHEN recid = 1 
   (recid  FILLER INTEGER EXTERNAL TERMINATED BY WHITESPACE, 
    deptno INTEGER EXTERNAL TERMINATED BY WHITESPACE, 
    dname  CHAR TERMINATED BY WHITESPACE) 
INTO TABLE emp 
   WHEN recid <> 1 
   (recid  FILLER POSITION(1) INTEGER EXTERNAL TERMINATED BY ' ', 
    empno  INTEGER EXTERNAL TERMINATED BY ' ', 
    ename  CHAR TERMINATED BY WHITESPACE, 
    deptno INTEGER EXTERNAL TERMINATED BY ' ') 

The POSITION parameter in the second INTO TABLE clause is necessary to load this data correctly. It causes field scanning to start over at column 1 when checking for data that matches the second format. Without it, SQL*Loader would look for the recid field after dname.

Distinguishing Different Input Row Object Subtypes

A single data file may contain records made up of row objects inherited from the same base row object type. For example, consider the following simple object type and object table definitions, in which a nonfinal base object type is defined along with two object subtypes that inherit their row objects from the base type:

CREATE TYPE person_t AS OBJECT 
 (name    VARCHAR2(30), 
  age     NUMBER(3)) not final; 

CREATE TYPE employee_t UNDER person_t 
 (empid   NUMBER(5), 
  deptno  NUMBER(4), 
  dept    VARCHAR2(30)) not final; 

CREATE TYPE student_t UNDER person_t 
 (stdid   NUMBER(5), 
  major   VARCHAR2(20)) not final; 

CREATE TABLE persons OF person_t;

The following input data file contains a mixture of these row object subtypes. A type ID field distinguishes between the three subtypes. person_t objects have a P in the first column, employee_t objects have an E, and student_t objects have an S.

P,James,31, 
P,Thomas,22, 
E,Pat,38,93645,1122,Engineering, 
P,Bill,19, 
P,Scott,55, 
S,Judy,45,27316,English, 
S,Karen,34,80356,History, 
E,Karen,61,90056,1323,Manufacturing, 
S,Pat,29,98625,Spanish, 
S,Cody,22,99743,Math, 
P,Ted,43, 
E,Judy,44,87616,1544,Accounting, 
E,Bob,50,63421,1314,Shipping, 
S,Bob,32,67420,Psychology, 
E,Cody,33,25143,1002,Human Resources,

The following control file uses relative positioning based on the POSITION parameter to load this data. Note the use of the TREAT AS clause with a specific object type name. This informs SQL*Loader that all input row objects for the object table will conform to the definition of the named object type.


Note:

Multiple subtypes cannot be loaded with the same INTO TABLE statement. Instead, you must use multiple INTO TABLE statements and have each one load a different subtype.

INTO TABLE persons 
REPLACE 
WHEN typid = 'P' TREAT AS person_t 
FIELDS TERMINATED BY "," 
 (typid   FILLER  POSITION(1) CHAR, 
  name            CHAR, 
  age             CHAR) 

INTO TABLE persons 
REPLACE 
WHEN typid = 'E' TREAT AS employee_t 
FIELDS TERMINATED BY "," 
 (typid   FILLER  POSITION(1) CHAR, 
  name            CHAR, 
  age             CHAR, 
  empid           CHAR, 
  deptno          CHAR, 
  dept            CHAR) 

INTO TABLE persons 
REPLACE 
WHEN typid = 'S' TREAT AS student_t 
FIELDS TERMINATED BY "," 
 (typid   FILLER  POSITION(1) CHAR, 
  name            CHAR, 
  age             CHAR, 
  stdid           CHAR, 
  major           CHAR)

See Also:

"Loading Column Objects" for more information about loading object types

Loading Data into Multiple Tables

By using the POSITION parameter with multiple INTO TABLE clauses, data from a single record can be loaded into multiple normalized tables. See case study 5, Loading Data into Multiple Tables, for an example. (See "SQL*Loader Case Studies" for information on how to access case studies.)

Summary

Multiple INTO TABLE clauses allow you to extract multiple logical records from a single input record and recognize different record formats in the same file.

For delimited data, proper use of the POSITION parameter is essential for achieving the expected results.

When the POSITION parameter is not used, multiple INTO TABLE clauses process different parts of the same (delimited data) input record, allowing multiple tables to be loaded from one record. When the POSITION parameter is used, multiple INTO TABLE clauses can process the same record in different ways, allowing multiple formats to be recognized in one input file.

Bind Arrays and Conventional Path Loads

SQL*Loader uses the SQL array-interface option to transfer data to the database. Multiple rows are read at one time and stored in the bind array. When SQL*Loader sends the Oracle database an INSERT command, the entire array is inserted at one time. After the rows in the bind array are inserted, a COMMIT statement is issued.

The determination of bind array size pertains to SQL*Loader's conventional path option. It does not apply to the direct path load method because a direct path load uses the direct path API, rather than Oracle's SQL interface.


See Also:

Oracle Call Interface Programmer's Guide for more information about the concepts of direct path loading

Size Requirements for Bind Arrays

The bind array must be large enough to contain a single row. If the maximum row length exceeds the size of the bind array, as specified by the BINDSIZE parameter, then SQL*Loader generates an error. Otherwise, the bind array contains as many rows as can fit within it, up to the limit set by the value of the ROWS parameter. (The maximum value for ROWS in a conventional path load is 65534.)

Although the entire bind array need not be in contiguous memory, the buffer for each field in the bind array must occupy contiguous memory. If the operating system cannot supply enough contiguous memory to store a field, then SQL*Loader generates an error.

Performance Implications of Bind Arrays

Large bind arrays minimize the number of calls to the Oracle database and maximize performance. In general, you gain large improvements in performance with each increase in the bind array size up to 100 rows. Increasing the bind array size to be greater than 100 rows generally delivers more modest improvements in performance. The size (in bytes) of 100 rows is typically a good value to use.

In general, any reasonably large size permits SQL*Loader to operate effectively. It is not usually necessary to perform the detailed calculations described in this section. Read this section when you need maximum performance or an explanation of memory usage.

Specifying Number of Rows Versus Size of Bind Array

When you specify a bind array size using the command-line parameter BINDSIZE or the OPTIONS clause in the control file, you impose an upper limit on the bind array. The bind array never exceeds that maximum.

As part of its initialization, SQL*Loader determines the size in bytes required to load a single row. If that size is too large to fit within the specified maximum, then the load terminates with an error.

SQL*Loader then multiplies that size by the number of rows for the load, whether that value was specified with the command-line parameter ROWS or the OPTIONS clause in the control file.

If that size fits within the bind array maximum, then the load continues—SQL*Loader does not try to expand the number of rows to reach the maximum bind array size. If the number of rows and the maximum bind array size are both specified, then SQL*Loader always uses the smaller value for the bind array.

If the maximum bind array size is too small to accommodate the initial number of rows, then SQL*Loader uses a smaller number of rows that fits within the maximum.
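
For example, the following command line (the user name, control file, and values are illustrative) caps the bind array at roughly 256,000 bytes and requests 64 rows per array; SQL*Loader uses whichever limit produces the smaller bind array:

sqlldr hr CONTROL=emp.ctl BINDSIZE=256000 ROWS=64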

Calculations to Determine Bind Array Size

The bind array's size is equivalent to the number of rows it contains times the maximum length of each row. The maximum length of a row equals the sum of the maximum field lengths, plus overhead, as follows:

bind array size =
    (number of rows) * (  SUM(fixed field lengths)
                        + SUM(maximum varying field lengths)
                        + ( (number of varying length fields)
                             * (size of length indicator) )
                       )

Many fields do not vary in size. These fixed-length fields are the same for each loaded row. For these fields, the maximum length of the field is the field size, in bytes, as described in "SQL*Loader Datatypes". There is no overhead for these fields.

The fields that can vary in size from row to row are:

  • CHAR

  • DATE

  • INTERVAL DAY TO SECOND

  • INTERVAL YEAR TO MONTH

  • LONG VARRAW

  • numeric EXTERNAL

  • TIME

  • TIMESTAMP

  • TIME WITH TIME ZONE

  • TIMESTAMP WITH TIME ZONE

  • VARCHAR

  • VARCHARC

  • VARGRAPHIC

  • VARRAW

  • VARRAWC

The maximum length of these datatypes is described in "SQL*Loader Datatypes". The maximum lengths describe the number of bytes that the fields can occupy in the input data record. That length also describes the amount of storage that each field occupies in the bind array, but the bind array includes additional overhead for fields that can vary in size.

When the character datatypes (CHAR, DATE, and numeric EXTERNAL) are specified with delimiters, any lengths specified for these fields are maximum lengths. When specified without delimiters, the size in the record is fixed, but the size of the inserted field may still vary, due to whitespace trimming. So internally, these datatypes are always treated as varying-length fields—even when they are fixed-length fields.

A length indicator is included for each of these fields in the bind array. The space reserved for the field in the bind array is large enough to hold the longest possible value of the field. The length indicator gives the actual length of the field for each row.


Note:

In conventional path loads, LOBFILEs are not included when allocating the size of a bind array.

Determining the Size of the Length Indicator

On most systems, the size of the length indicator is 2 bytes. On a few systems, it is 3 bytes. To determine its size, use the following control file:

OPTIONS (ROWS=1) 
LOAD DATA 
INFILE * 
APPEND 
INTO TABLE DEPT 
(deptno POSITION(1:1) CHAR(1)) 
BEGINDATA 
a 

This control file loads a 1-byte CHAR using a 1-row bind array. In this example, no data is actually loaded because a conversion error occurs when the character a is loaded into a numeric column (deptno). The bind array size shown in the log file, minus one (the length of the character field), is the value of the length indicator.


Note:

A similar technique can determine bind array size without doing any calculations. Run your control file without any data and with ROWS=1 to determine the memory requirements for a single row of data. Multiply by the number of rows you want in the bind array to determine the bind array size.

Calculating the Size of Field Buffers

Table 9-3 through Table 9-6 summarize the memory requirements for each datatype. "L" is the length specified in the control file. "P" is precision. "S" is the size of the length indicator. For more information about these values, see "SQL*Loader Datatypes".

Table 9-3 Fixed-Length Fields

Datatype       Size in Bytes (Operating System-Dependent)

INTEGER        The size of the INT datatype, in C
INTEGER(N)     N bytes
SMALLINT       The size of SHORT INT datatype, in C
FLOAT          The size of the FLOAT datatype, in C
DOUBLE         The size of the DOUBLE datatype, in C
BYTEINT        The size of UNSIGNED CHAR, in C
VARRAW         The size of UNSIGNED SHORT, plus 4096 bytes or whatever is specified as max_length
LONG VARRAW    The size of UNSIGNED INT, plus 4096 bytes or whatever is specified as max_length
VARCHARC       Composed of 2 numbers. The first specifies length, and the second (which is optional) specifies max_length (default is 4096 bytes).
VARRAWC        This datatype is for RAW data. It is composed of 2 numbers. The first specifies length, and the second (which is optional) specifies max_length (default is 4096 bytes).


Table 9-4 Nongraphic Fields

Datatype                                  Default Size   Specified Size

(packed) DECIMAL                          None           (N+1)/2, rounded up
ZONED                                     None           P
RAW                                       None           L
CHAR (no delimiters)                      1              L + S
datetime and interval (no delimiters)     None           L + S
numeric EXTERNAL (no delimiters)          None           L + S


Table 9-5 Graphic Fields

Datatype            Default Size   Length Specified with POSITION   Length Specified with DATATYPE

GRAPHIC             None           L                                2*L
GRAPHIC EXTERNAL    None           L - 2                            2*(L-2)
VARGRAPHIC          4KB*2          L+S                              (2*L)+S


Table 9-6 Variable-Length Fields

Datatype                               Default Size   Maximum Length Specified (L)

VARCHAR                                4 KB           L+S
CHAR (delimited)                       255            L+S
datetime and interval (delimited)      255            L+S
numeric EXTERNAL (delimited)           255            L+S


Minimizing Memory Requirements for Bind Arrays

Pay particular attention to the default sizes allocated for VARCHAR, VARGRAPHIC, and the delimited forms of CHAR, DATE, and numeric EXTERNAL fields. They can consume enormous amounts of memory—especially when multiplied by the number of rows in the bind array. It is best to specify the smallest possible maximum length for these fields. Consider the following example:

CHAR(10) TERMINATED BY "," 

With byte-length semantics, this example uses (10 + 2) * 64 = 768 bytes in the bind array, assuming that the length indicator is 2 bytes long and that 64 rows are loaded at a time.

With character-length semantics, the same example uses ((10 * s) + 2) * 64 bytes in the bind array, where "s" is the maximum size in bytes of a character in the data file character set.

Now consider the following example:

CHAR TERMINATED BY "," 

Regardless of whether byte-length semantics or character-length semantics are used, this example uses (255 + 2) * 64 = 16,448 bytes, because the default maximum size for a delimited field is 255 bytes. This can make a considerable difference in the number of rows that fit into the bind array.

Calculating Bind Array Size for Multiple INTO TABLE Clauses

When calculating a bind array size for a control file that has multiple INTO TABLE clauses, calculate as if the INTO TABLE clauses were not present. Imagine all of the fields listed in the control file as one, long data structure—that is, the format of a single row in the bind array.

If the same field in the data record is mentioned in multiple INTO TABLE clauses, then additional space in the bind array is required each time it is mentioned. It is especially important to minimize the buffer allocations for such fields.


Note:

Generated data is produced by the SQL*Loader functions CONSTANT, EXPRESSION, RECNUM, SYSDATE, and SEQUENCE. Such generated data does not require any space in the bind array.


22 Original Import

This chapter describes how to use the original Import utility (imp) to import dump files that were created using the original Export utility.

This chapter discusses the following topics:

What Is the Import Utility?

The Import utility reads object definitions and table data from dump files created by the original Export utility. The dump file is in an Oracle binary format that can be read only by original Import.

The version of the Import utility cannot be earlier than the version of the Export utility used to create the dump file.

Table Objects: Order of Import

Table objects are imported as they are read from the export dump file. The dump file contains objects in the following order:

  1. Type definitions

  2. Table definitions

  3. Table data

  4. Table indexes

  5. Integrity constraints, views, procedures, and triggers

  6. Bitmap, function-based, and domain indexes

The order of import is as follows: new tables are created, data is imported and indexes are built, triggers are imported, integrity constraints are enabled on the new tables, and any bitmap, function-based, and/or domain indexes are built. This sequence prevents data from being rejected due to the order in which tables are imported. This sequence also prevents redundant triggers from firing twice on the same data (once when it is originally inserted and again during the import).

Before Using Import

Before you begin using Import, be sure you take care of the following items (described in detail in the following sections):

Running catexp.sql or catalog.sql

To use Import, you must run the script catexp.sql or catalog.sql (which runs catexp.sql) after the database has been created or migrated to a newer version.

The catexp.sql or catalog.sql script needs to be run only once on a database. The script performs the following tasks to prepare the database for export and import operations:

  • Creates the necessary import views in the data dictionary

  • Creates the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles

  • Assigns all necessary privileges to the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles

  • Assigns EXP_FULL_DATABASE and IMP_FULL_DATABASE to the DBA role

  • Records the version of catexp.sql that has been installed
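
If the script has not yet been run, a DBA can run it from SQL*Plus. A typical invocation might look like the following (in SQL*Plus, the ? character expands to the Oracle home directory):

sqlplus / as sysdba
SQL> @?/rdbms/admin/catalog.sql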

Verifying Access Privileges for Import Operations

To use Import, you must have the CREATE SESSION privilege on an Oracle database. This privilege belongs to the CONNECT role established during database creation.

You can perform an import operation even if you did not create the export file. However, keep in mind that if the export file was created by a user with the EXP_FULL_DATABASE role, then you must have the IMP_FULL_DATABASE role to import it. Both of these roles are typically assigned to database administrators (DBAs).

Importing Objects Into Your Own Schema

Table 22-1 lists the privileges required to import objects into your own schema. All of these privileges initially belong to the RESOURCE role.

Table 22-1 Privileges Required to Import Objects into Your Own Schema

Object                                                      Required Privilege (Privilege Type, If Applicable)

Clusters                                                    CREATE CLUSTER (System) or UNLIMITED TABLESPACE (System). The user must also be assigned a tablespace quota.
Database links                                              CREATE DATABASE LINK (System) and CREATE SESSION (System) on remote database
Triggers on tables                                          CREATE TRIGGER (System)
Triggers on schemas                                         CREATE ANY TRIGGER (System)
Indexes                                                     CREATE INDEX (System) or UNLIMITED TABLESPACE (System). The user must also be assigned a tablespace quota.
Integrity constraints                                       ALTER TABLE (Object)
Libraries                                                   CREATE ANY LIBRARY (System)
Packages                                                    CREATE PROCEDURE (System)
Private synonyms                                            CREATE SYNONYM (System)
Sequences                                                   CREATE SEQUENCE (System)
Snapshots                                                   CREATE SNAPSHOT (System)
Stored functions                                            CREATE PROCEDURE (System)
Stored procedures                                           CREATE PROCEDURE (System)
Table data                                                  INSERT TABLE (Object)
Table definitions (including comments and audit options)    CREATE TABLE (System) or UNLIMITED TABLESPACE (System). The user must also be assigned a tablespace quota.
Views                                                       CREATE VIEW (System) and SELECT (Object) on the base table, or SELECT ANY TABLE (System)
Object types                                                CREATE TYPE (System)
Foreign function libraries                                  CREATE LIBRARY (System)
Dimensions                                                  CREATE DIMENSION (System)
Operators                                                   CREATE OPERATOR (System)
Indextypes                                                  CREATE INDEXTYPE (System)


Importing Grants

To import the privileges that a user has granted to others, the user initiating the import must either own the objects or have object privileges with the WITH GRANT OPTION. Table 22-2 shows the required conditions for the authorizations to be valid on the target system.

Table 22-2 Privileges Required to Import Grants

Grant                Conditions

Object privileges    The object must exist in the user's schema, or the user must have the object privileges with the WITH GRANT OPTION, or the user must have the IMP_FULL_DATABASE role enabled.
System privileges    User must have the SYSTEM privilege and also the WITH ADMIN OPTION.


Importing Objects Into Other Schemas

To import objects into another user's schema, you must have the IMP_FULL_DATABASE role enabled.

Importing System Objects

To import system objects from a full database export file, the IMP_FULL_DATABASE role must be enabled. The parameter FULL specifies that the following system objects are included in the import:

  • Profiles

  • Public database links

  • Public synonyms

  • Roles

  • Rollback segment definitions

  • Resource costs

  • Foreign function libraries

  • Context objects

  • System procedural objects

  • System audit options

  • System privileges

  • Tablespace definitions

  • Tablespace quotas

  • User definitions

  • Directory aliases

  • System event triggers

Processing Restrictions

The following restrictions apply when you process data with the Import utility:

  • When a type definition has evolved and data referencing that evolved type is exported, the type definition on the import system must have evolved in the same manner.

  • The table compression attribute of tables and partitions is preserved during export and import. However, the import process does not use the direct path API, hence the data will not be stored in the compressed format when imported.

Importing into Existing Tables

This section describes factors to consider when you import data into existing tables:

Manually Creating Tables Before Importing Data

When you choose to create tables manually before importing data into them from an export file, you should use either the same table definition previously used or a compatible format. For example, although you can increase the width of columns and change their order, you cannot do the following:

  • Add NOT NULL columns

  • Change the datatype of a column to an incompatible datatype (LONG to NUMBER, for example)

  • Change the definition of object types used in a table

  • Change DEFAULT column values


    Note:

    When tables are manually created before data is imported, the CREATE TABLE statement in the export dump file will fail because the table already exists. To avoid this failure and continue loading data into the table, set the Import parameter IGNORE=y. Otherwise, no data will be loaded into the table because of the table creation error.

Disabling Referential Constraints

In the normal import order, referential constraints are imported only after all tables are imported. This sequence prevents errors that could occur if a referential integrity constraint exists for data that has not yet been imported.

These errors can still occur when data is loaded into existing tables. For example, if table emp has a referential integrity constraint on the mgr column that verifies that the manager number exists in emp, then a legitimate employee row might fail the referential integrity constraint if the manager's row has not yet been imported.

When such an error occurs, Import generates an error message, bypasses the failed row, and continues importing other rows in the table. You can disable constraints manually to avoid this.

Referential constraints between tables can also cause problems. For example, if the emp table appears before the dept table in the export dump file, but a referential check exists from the emp table into the dept table, then some of the rows from the emp table may not be imported due to a referential constraint violation.

To prevent errors like these, you should disable referential integrity constraints when importing data into existing tables.

Manually Ordering the Import

When the constraints are reenabled after importing, the entire table is checked, which may take a long time for a large table. If the time required for that check is too long, then it may be beneficial to order the import manually.

To do so, perform several imports from an export file instead of one. First, import tables that are the targets of referential checks. Then, import the tables that reference them. This option works if tables do not reference each other in a circular fashion, and if a table does not reference itself.
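
As a sketch (the schema, dump file, and table names are assumptions), you might first import the parent table and then the table that references it, using IGNORE=y because the tables already exist:

imp hr FILE=expdat.dmp TABLES=(dept) IGNORE=y
imp hr FILE=expdat.dmp TABLES=(emp) IGNORE=y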

Effect of Schema and Database Triggers on Import Operations

Triggers that are defined to trigger on DDL events for a specific schema, or on DDL-related events for the database, are system triggers. These triggers can have detrimental effects on certain import operations. For example, they can prevent successful re-creation of database objects, such as tables. This causes errors to be returned that give no indication that a trigger caused the problem.

Database administrators and anyone creating system triggers should verify that such triggers do not prevent users from performing database operations for which they are authorized. To test a system trigger, take the following steps:

  1. Define the trigger.

  2. Create some database objects.

  3. Export the objects in table or user mode.

  4. Delete the objects.

  5. Import the objects.

  6. Verify that the objects have been successfully re-created.


    Note:

    A full export does not export triggers owned by schema SYS. You must manually re-create SYS triggers either before or after the full import. Oracle recommends that you re-create them after the import in case they define actions that would impede progress of the import.

Invoking Import

You can invoke Import and specify parameters by using any of the following methods:

Before you use one of these methods, be sure to read the descriptions of the available parameters. See "Import Parameters".

Command-Line Entries

You can specify all valid parameters and their values from the command line using the following syntax (you will then be prompted for a username and password):

imp PARAMETER=value

or

imp PARAMETER=(value1,value2,...,valuen)

The number of parameters cannot exceed the maximum length of a command line on the system.
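
For example, the following command line (the file and table names are illustrative) imports two tables and writes messages to a log file; you are prompted for the username and password:

imp TABLES=(emp,dept) FILE=expdat.dmp LOG=imp_emp_dept.log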

Parameter Files

You can specify all valid parameters and their values in a parameter file. Storing the parameters in a file allows them to be easily modified or reused. If you use different parameters for different databases, then you can have multiple parameter files.

Create the parameter file using any flat file text editor. The command-line option PARFILE=filename tells Import to read the parameters from the specified file rather than from the command line.

The syntax for parameter file specifications can be any of the following:

PARAMETER=value
PARAMETER=(value)
PARAMETER=(value1, value2, ...)

The following example shows a partial parameter file listing:

FULL=y
FILE=dba.dmp
GRANTS=y
INDEXES=y
CONSISTENT=y

Note:

The maximum size of the parameter file may be limited by the operating system. The name of the parameter file is subject to the file-naming conventions of the operating system.

You can add comments to the parameter file by preceding them with the pound (#) sign. Import ignores all characters to the right of the pound (#) sign.

You can specify a parameter file at the same time that you are entering parameters on the command line. In fact, you can specify the same parameter in both places. The position of the PARFILE parameter and other parameters on the command line determines which parameters take precedence. For example, assume the parameter file params.dat contains the parameter INDEXES=y and Import is invoked with the following line:

imp PARFILE=params.dat INDEXES=n

In this case, because INDEXES=n occurs after PARFILE=params.dat, INDEXES=n overrides the value of the INDEXES parameter in the parameter file.



Interactive Mode

If you prefer to be prompted for the value of each parameter, then you can simply specify imp at the command line. You will be prompted for a username and password.

Commonly used parameters are then displayed. You can accept the default value, if one is provided, or enter a different value. The command-line interactive method does not provide prompts for all functionality and is provided only for backward compatibility.

Invoking Import As SYSDBA

SYSDBA is used internally and has specialized functions; its behavior is not the same as for generalized users. Therefore, you should not typically need to invoke Import as SYSDBA, except in the following situations:

  • At the request of Oracle technical support

  • When importing a transportable tablespace set

Getting Online Help

Import provides online help. Enter imp help=y to invoke Import help.

Import Modes

The Import utility supports four modes of operation:

See Table 22-3 for a list of objects that are imported in each mode.


Caution:

When you use table mode to import tables that have columns of type ANYDATA, you may receive the following error:

ORA-22370: Incorrect usage of method. Nonexistent type.

This indicates that the ANYDATA column depends on other types that are not present in the database. You must manually create dependent types in the target database before you use table mode to import tables that use the ANYDATA type.


A user with the IMP_FULL_DATABASE role must specify one of these modes. Otherwise, an error results. If a user without the IMP_FULL_DATABASE role fails to specify one of these modes, then a user-level Import is performed.

Table 22-3 Objects Imported in Each Mode

Object                                              Table Mode / User Mode / Full Database Mode / Tablespace Mode

Analyze cluster                                     No / Yes / Yes / No
Analyze tables/statistics                           Yes / Yes / Yes / Yes
Application contexts                                No / No / Yes / No
Auditing information                                Yes / Yes / Yes / No
B-tree, bitmap, domain function-based indexes       Yes (Footnote 1) / Yes / Yes / Yes
Cluster definitions                                 No / Yes / Yes / Yes
Column and table comments                           Yes / Yes / Yes / Yes
Database links                                      No / Yes / Yes / No
Default roles                                       No / No / Yes / No
Dimensions                                          No / Yes / Yes / No
Directory aliases                                   No / No / Yes / No
External tables (without data)                      Yes / Yes / Yes / No
Foreign function libraries                          No / Yes / Yes / No
Indexes owned by users other than table owner       Yes (Privileged users only) / Yes / Yes / Yes
Index types                                         No / Yes / Yes / No
Java resources and classes                          No / Yes / Yes / No
Job queues                                          No / Yes / Yes / No
Nested table data                                   Yes / Yes / Yes / Yes
Object grants                                       Yes (Only for tables and indexes) / Yes / Yes / Yes
Object type definitions used by table               Yes / Yes / Yes / Yes
Object types                                        No / Yes / Yes / No
Operators                                           No / Yes / Yes / No
Password history                                    No / No / Yes / No
Postinstance actions and objects                    No / No / Yes / No
Postschema procedural actions and objects           No / Yes / Yes / No
Posttable actions                                   Yes / Yes / Yes / Yes
Posttable procedural actions and objects            Yes / Yes / Yes / Yes
Preschema procedural objects and actions            No / Yes / Yes / No
Pretable actions                                    Yes / Yes / Yes / Yes
Pretable procedural actions                         Yes / Yes / Yes / Yes
Private synonyms                                    No / Yes / Yes / No
Procedural objects                                  No / Yes / Yes / No
Profiles                                            No / No / Yes / No
Public synonyms                                     No / No / Yes / No
Referential integrity constraints                   Yes / Yes / Yes / No
Refresh groups                                      No / Yes / Yes / No
Resource costs                                      No / No / Yes / No
Role grants                                         No / No / Yes / No
Roles                                               No / No / Yes / No
Rollback segment definitions                        No / No / Yes / No
Security policies for table                         Yes / Yes / Yes / Yes
Sequence numbers                                    No / Yes / Yes / No
Snapshot logs                                       No / Yes / Yes / No
Snapshots and materialized views                    No / Yes / Yes / No
System privilege grants                             No / No / Yes / No
Table constraints (primary, unique, check)          Yes / Yes / Yes / Yes
Table data                                          Yes / Yes / Yes / Yes
Table definitions                                   Yes / Yes / Yes / Yes
Tablespace definitions                              No / No / Yes / No
Tablespace quotas                                   No / No / Yes / No
Triggers                                            Yes / Yes (Footnote 2) / Yes (Footnote 3) / Yes
Triggers owned by other users                       Yes (Privileged users only) / No / No / No
User definitions                                    No / No / Yes / No
User proxies                                        No / No / Yes / No
User views                                          No / Yes / Yes / No
User-stored procedures, packages, and functions     No / Yes / Yes / No


Footnote 1 Nonprivileged users can export and import only indexes they own on tables they own. They cannot export indexes they own that are on tables owned by other users, nor can they export indexes owned by other users on their own tables. Privileged users can export and import indexes on the specified users' tables, even if the indexes are owned by other users. Indexes owned by the specified user on other users' tables are not included, unless those other users are included in the list of users to export.

Footnote 2 Nonprivileged and privileged users can export and import all triggers owned by the user, even if they are on tables owned by other users.

Footnote 3 A full export does not export triggers owned by schema SYS. You must manually re-create SYS triggers either before or after the full import. Oracle recommends that you re-create them after the import in case they define actions that would impede progress of the import.

Import Parameters

This section contains descriptions of the Import command-line parameters.

BUFFER

Default: operating system-dependent

The integer specified for BUFFER is the size, in bytes, of the buffer through which data rows are transferred.

BUFFER determines the number of rows in the array inserted by Import. The following formula gives an approximation of the buffer size that inserts a given array of rows:

buffer_size = rows_in_array * maximum_row_size

For tables containing LOBs, LONG, BFILE, REF, ROWID, UROWID, or TIMESTAMP columns, rows are inserted individually. The size of the buffer must be large enough to contain the entire row, except for LOB and LONG columns. If the buffer cannot hold the longest row in a table, then Import attempts to allocate a larger buffer.

For DATE columns, two or more rows are inserted at once if the buffer is large enough.
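
For example, if the longest row in a table is roughly 2 KB and you want about 100 rows per array insert (both numbers are illustrative), then a buffer of at least 100 * 2048 = 204800 bytes is needed:

imp hr FILE=expdat.dmp TABLES=(emp) BUFFER=204800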


Note:

See your Oracle operating system-specific documentation to determine the default value for this parameter.

COMMIT

Default: n

Specifies whether Import should commit after each array insert. By default, Import commits only after loading each table, and Import performs a rollback when an error occurs, before continuing with the next object.

If a table has nested table columns or attributes, then the contents of the nested tables are imported as separate tables. Therefore, the contents of the nested tables are always committed in a transaction distinct from the transaction used to commit the outer table.

If COMMIT=n and a table is partitioned, then each partition and subpartition in the Export file is imported in a separate transaction.

For tables containing LOBs, LONG, BFILE, REF, ROWID, UROWID, or TIMESTAMP columns, array inserts are not done. If COMMIT=y, then Import commits these tables after each row.

COMPILE

Default: y

Specifies whether Import should compile packages, procedures, and functions as they are created.

If COMPILE=n, then these units are compiled on their first use. For example, packages that are used to build domain indexes are compiled when the domain indexes are created.

CONSTRAINTS

Default: y

Specifies whether table constraints are to be imported. The default is to import constraints. If you do not want constraints to be imported, then you must set the parameter value to n.

Note that primary key constraints for index-organized tables (IOTs) and object tables are always imported.

DATA_ONLY

Default: n

To import only data (no metadata) from a dump file, specify DATA_ONLY=y.

When you specify DATA_ONLY=y, any import parameters related to metadata that are entered on the command line (or in a parameter file) become invalid. This means that no metadata from the dump file will be imported.

The metadata-related parameters are the following: COMPILE, CONSTRAINTS, DATAFILES, DESTROY, GRANTS, IGNORE, INDEXES, INDEXFILE, ROWS=n, SHOW, SKIP_UNUSABLE_INDEXES, STATISTICS, STREAMS_CONFIGURATION, STREAMS_INSTANTIATION, TABLESPACES, TOID_NOVALIDATE, TRANSPORT_TABLESPACE, TTS_OWNERS.

DATAFILES

Default: none

When TRANSPORT_TABLESPACE is specified as y, use this parameter to list the data files to be transported into the database.

DESTROY

Default: n

Specifies whether the existing data files making up the database should be reused. That is, specifying DESTROY=y causes Import to include the REUSE option in the data file clause of the SQL CREATE TABLESPACE statement, which causes Import to reuse the original database's data files after deleting their contents.

Note that the export file contains the data file names used in each tablespace. If you specify DESTROY=y and attempt to create a second database on the same system (for testing or other purposes), then the Import utility will overwrite the first database's data files when it creates the tablespace. In this situation you should use the default, DESTROY=n, so that an error occurs if the data files already exist when the tablespace is created. Also, when you need to import into the original database, you will need to specify IGNORE=y to add to the existing data files without replacing them.


Caution:

If data files are stored on a raw device, then DESTROY=n does not prevent files from being overwritten.

FEEDBACK

Default: 0 (zero)

Specifies that Import should display a progress meter in the form of a period for n number of rows imported. For example, if you specify FEEDBACK=10, then Import displays a period each time 10 rows have been imported. The FEEDBACK value applies to all tables being imported; it cannot be individually set for each table.

FILE

Default: expdat.dmp

Specifies the names of the export files to import. The default extension is .dmp. Because Export supports multiple export files (see the following description of the FILESIZE parameter), you may need to specify multiple file names to be imported. For example:

imp scott IGNORE=y FILE = dat1.dmp, dat2.dmp, dat3.dmp FILESIZE=2048
 

You need not be the user who exported the export files; however, you must have read access to the files. If you were not the exporter of the export files, then you must also have the IMP_FULL_DATABASE role granted to you.

FILESIZE

Default: operating system-dependent

Lets you specify the same maximum dump file size you specified on export.


Note:

The maximum size allowed is operating system-dependent. You should verify this maximum value in your Oracle operating system-specific documentation before specifying FILESIZE.

The FILESIZE value can be specified as a number followed by KB (number of kilobytes). For example, FILESIZE=2KB is the same as FILESIZE=2048. Similarly, MB specifies megabytes (1024 * 1024) and GB specifies gigabytes (1024**3). B remains the shorthand for bytes; the number is not multiplied to obtain the final file size (FILESIZE=2048B is the same as FILESIZE=2048).

FROMUSER

Default: none

A comma-delimited list of schemas to import. This parameter is relevant only to users with the IMP_FULL_DATABASE role. The parameter enables you to import a subset of schemas from an export file containing multiple schemas (for example, a full export dump file or a multischema, user-mode export dump file).

Schema names that appear inside function-based indexes, functions, procedures, triggers, type bodies, views, and so on, are not affected by FROMUSER or TOUSER processing. Only the name of the object is affected. After the import has completed, items in any TOUSER schema should be manually checked for references to old (FROMUSER) schemas, and corrected if necessary.

You will typically use FROMUSER in conjunction with the Import parameter TOUSER, which you use to specify a list of usernames whose schemas will be targets for import (see "TOUSER"). The user that you specify with TOUSER must exist in the target database before the import operation; otherwise an error is returned.
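
For example, the following command (the dump file name is illustrative) imports the objects that scott exported into the existing schema blake:

imp SYSTEM FROMUSER=scott TOUSER=blake FILE=expdat.dmp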

If you do not specify TOUSER, then Import will do the following:

  • Import objects into the FROMUSER schema if the export file is a full dump or a multischema, user-mode export dump file

  • Create objects in the importer's schema (regardless of the presence of or absence of the FROMUSER schema on import) if the export file is a single-schema, user-mode export dump file created by an unprivileged user


    Note:

    Specifying FROMUSER=SYSTEM causes only schema objects belonging to user SYSTEM to be imported; it does not cause system objects to be imported.

FULL

Default: y

Specifies whether to import the entire export dump file.

Points to Consider for Full Database Exports and Imports

A full database export and import can be a good way to replicate or clean up a database. However, to avoid problems be sure to keep the following points in mind:

  • A full export does not export triggers owned by schema SYS. You must manually re-create SYS triggers either before or after the full import. Oracle recommends that you re-create them after the import in case they define actions that would impede progress of the import.

  • A full export also does not export the default profile. If you have modified the default profile in the source database (for example, by adding a password verification function owned by schema SYS), then you must manually pre-create the function and modify the default profile in the target database after the import completes.

  • If possible, before beginning, make a physical copy of the exported database and the database into which you intend to import. This ensures that any mistakes are reversible.

  • Before you begin the export, it is advisable to produce a report that includes the following information:

    • A list of tablespaces and data files

    • A list of rollback segments

    • A count, by user, of each object type such as tables, indexes, and so on

    This information lets you ensure that tablespaces have already been created and that the import was successful.

  • If you are creating a completely new database from an export, then remember to create an extra rollback segment in SYSTEM and to make it available in your initialization parameter file (init.ora) before proceeding with the import.

  • When you perform the import, ensure you are pointing at the correct instance. This is very important because on some UNIX systems, just the act of entering a subshell can change the database against which an import operation was performed.

  • Do not perform a full import on a system that has more than one database unless you are certain that all tablespaces have already been created. A full import creates any undefined tablespaces using the same data file names as the exported database. This can result in problems in the following situations:

    • If the data files belong to any other database, then they will become corrupted. This is especially true if the exported database is on the same system, because its data files will be reused by the database into which you are importing.

    • If the data files have names that conflict with existing operating system files.

GRANTS

Default: y

Specifies whether to import object grants.

By default, the Import utility imports any object grants that were exported. If the export was a user-mode export, then the export file contains only first-level object grants (those granted by the owner).

If the export was a full database mode export, then the export file contains all object grants, including lower-level grants (those granted by users given a privilege with the WITH GRANT OPTION). If you specify GRANTS=n, then the Import utility does not import object grants. (Note that system grants are imported even if GRANTS=n.)


Note:

Export does not export grants on data dictionary views for security reasons that affect Import. If such grants were exported, then access privileges would be changed and the importer would not be aware of this.

HELP

Default: none

Displays a description of the Import parameters. Enter imp HELP=y on the command line to invoke it.

IGNORE

Default: n

Specifies how object creation errors should be handled. If you accept the default, IGNORE=n, then Import logs or displays object creation errors before continuing.

If you specify IGNORE=y, then Import overlooks object creation errors when it attempts to create database objects, and continues without reporting the errors.

Note that only object creation errors are ignored; other errors, such as operating system, database, and SQL errors, are not ignored and may cause processing to stop.

In situations where multiple refreshes from a single export file are done with IGNORE=y, certain objects can be created multiple times (although they will have unique system-defined names). You can prevent this for certain objects (for example, constraints) by doing an import with CONSTRAINTS=n. If you do a full import with CONSTRAINTS=n, then no constraints for any tables are imported.

If a table already exists and IGNORE=y, then rows are imported into existing tables without any errors or messages being given. You might want to import data into tables that already exist in order to use new storage parameters or because you have already created the table in a cluster.

If a table already exists and IGNORE=n, then errors are reported and the table is skipped with no rows inserted. Also, objects dependent on tables, such as indexes, grants, and constraints, will not be created.


Caution:

When you import into existing tables, if no column in the table is uniquely indexed, rows could be duplicated.

INDEXES

Default: y

Specifies whether to import indexes. System-generated indexes such as LOB indexes, OID indexes, or unique constraint indexes are re-created by Import regardless of the setting of this parameter.

You can postpone all user-generated index creation until after Import completes, by specifying INDEXES=n.

If indexes for the target table already exist at the time of the import, then Import performs index maintenance when data is inserted into the table.

INDEXFILE

Default: none

Specifies a file to receive index-creation statements.

When this parameter is specified, index-creation statements for the requested mode are extracted and written to the specified file, rather than used to create indexes in the database. No database objects are imported.

If the Import parameter CONSTRAINTS is set to y, then Import also writes table constraints to the index file.

The file can then be edited (for example, to change storage parameters) and used as a SQL script to create the indexes.

To make it easier to identify the indexes defined in the file, the export file's CREATE TABLE statements and CREATE CLUSTER statements are included as comments.

Perform the following steps to use this feature:

  1. Import using the INDEXFILE parameter to create a file of index-creation statements.

  2. Edit the file, making certain to add a valid password to the connect strings.

  3. Rerun Import, specifying INDEXES=n.

    (This step imports the database objects while preventing Import from using the index definitions stored in the export file.)

  4. Execute the file of index-creation statements as a SQL script to create the index.

The INDEXFILE parameter can be used only with the FULL=y, FROMUSER, TOUSER, or TABLES parameters.
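
The following is a minimal sketch of this sequence; the dump file name (dba.dmp) and index file name (scott_indexes.sql) are hypothetical. Between the first and second commands, edit scott_indexes.sql to supply valid passwords in the connect strings:

> imp scott FILE=dba.dmp FROMUSER=scott INDEXFILE=scott_indexes.sql
> imp scott FILE=dba.dmp FROMUSER=scott INDEXES=n
> sqlplus scott @scott_indexes.sql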

LOG

Default: none

Specifies a file (for example, import.log) to receive informational and error messages. If you specify a log file, then the Import utility writes all information to the log in addition to the terminal display.

PARFILE

Default: none

Specifies a file name for a file that contains a list of Import parameters. For more information about using a parameter file, see "Parameter Files".

RECORDLENGTH

Default: operating system-dependent

Specifies the length, in bytes, of the file record. The RECORDLENGTH parameter is necessary when you must transfer the export file to another operating system that uses a different default value.

If you do not define this parameter, then it defaults to your platform-dependent value for BUFSIZ.

You can set RECORDLENGTH to any value equal to or greater than your system's BUFSIZ. (The highest value is 64 KB.) Changing the RECORDLENGTH parameter affects only the size of data that accumulates before writing to the database. It does not affect the operating system file block size.

You can also use this parameter to specify the size of the Import I/O buffer.
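
For example, to read an export file written on another platform with a 64 KB record length, you might specify a command such as the following (the file name and value are illustrative):

imp scott FILE=dba.dmp FROMUSER=scott RECORDLENGTH=65535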

RESUMABLE

Default: n

The RESUMABLE parameter is used to enable and disable resumable space allocation. Because this parameter is disabled by default, you must set RESUMABLE=y to use its associated parameters, RESUMABLE_NAME and RESUMABLE_TIMEOUT.


RESUMABLE_NAME

Default: 'User USERNAME (USERID), Session SESSIONID, Instance INSTANCEID'

The value for this parameter identifies the statement that is resumable. This value is a user-defined text string that is inserted in either the USER_RESUMABLE or DBA_RESUMABLE view to help you identify a specific resumable statement that has been suspended.

This parameter is ignored unless the RESUMABLE parameter is set to y to enable resumable space allocation.

RESUMABLE_TIMEOUT

Default: 7200 seconds (2 hours)

The value of the parameter specifies the time period during which an error must be fixed. If the error is not fixed within the timeout period, then execution of the statement is terminated.

This parameter is ignored unless the RESUMABLE parameter is set to y to enable resumable space allocation.
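
For example, the three resumable parameters might be combined as follows; the name string and timeout value are arbitrary illustrations:

imp scott FILE=scott.dmp TABLES=emp RESUMABLE=y RESUMABLE_NAME='emp_import' RESUMABLE_TIMEOUT=3600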

ROWS

Default: y

Specifies whether to import the rows of table data.

If ROWS=n, then statistics for all imported tables will be locked after the import operation is finished.

SHOW

Default: n

When SHOW=y, the contents of the export dump file are listed to the display and not imported. The SQL statements contained in the export are displayed in the order in which Import will execute them.

The SHOW parameter can be used only with the FULL=y, FROMUSER, TOUSER, or TABLES parameter.

SKIP_UNUSABLE_INDEXES

Default: the value of the Oracle database configuration parameter, SKIP_UNUSABLE_INDEXES, as specified in the initialization parameter file

Both Import and the Oracle database provide a SKIP_UNUSABLE_INDEXES parameter. The Import SKIP_UNUSABLE_INDEXES parameter is specified at the Import command line. The Oracle database SKIP_UNUSABLE_INDEXES parameter is specified as a configuration parameter in the initialization parameter file. It is important to understand how they affect each other.

If you do not specify a value for SKIP_UNUSABLE_INDEXES at the Import command line, then Import uses the database setting for the SKIP_UNUSABLE_INDEXES configuration parameter, as specified in the initialization parameter file.

If you do specify a value for SKIP_UNUSABLE_INDEXES at the Import command line, then it overrides the value of the SKIP_UNUSABLE_INDEXES configuration parameter in the initialization parameter file.

A value of y means that Import will skip building indexes that were set to the Index Unusable state (by either system or user). Other indexes (not previously set to Index Unusable) continue to be updated as rows are inserted.

This parameter enables you to postpone index maintenance on selected index partitions until after row data has been inserted. You then have the responsibility to rebuild the affected index partitions after the Import.


Note:

Indexes that are unique and marked Unusable are not allowed to skip index maintenance. Therefore, the SKIP_UNUSABLE_INDEXES parameter has no effect on unique indexes.

You can use the INDEXFILE parameter in conjunction with INDEXES=n to provide the SQL scripts for re-creating the index. If the SKIP_UNUSABLE_INDEXES parameter is not specified, then row insertions that attempt to update unusable indexes will fail.


See Also:

The ALTER SESSION statement in the Oracle Database SQL Language Reference

STATISTICS

Default: ALWAYS

Specifies what is done with the database optimizer statistics at import time.

The options are:

  • ALWAYS

    Always import database optimizer statistics regardless of whether they are questionable.

  • NONE

    Do not import or recalculate the database optimizer statistics.

  • SAFE

    Import database optimizer statistics only if they are not questionable. If they are questionable, then recalculate the optimizer statistics.

  • RECALCULATE

    Do not import the database optimizer statistics. Instead, recalculate them on import. This requires that the original export operation that created the dump file must have generated the necessary ANALYZE statements (that is, the export was not performed with STATISTICS=NONE). These ANALYZE statements are included in the dump file and used by the import operation for recalculation of the table's statistics.


STREAMS_CONFIGURATION

Default: y

Specifies whether to import any general Streams metadata that may be present in the export dump file.

STREAMS_INSTANTIATION

Default: n

Specifies whether to import Streams instantiation metadata that may be present in the export dump file. Specify y if the import is part of an instantiation in a Streams environment.

TABLES

Default: none

Specifies that the import is a table-mode import and lists the table names and partition and subpartition names to import. Table-mode import lets you import entire partitioned or nonpartitioned tables. The TABLES parameter restricts the import to the specified tables and their associated objects, as listed in Table 22-3. You can specify the following values for the TABLES parameter:

  • tablename specifies the name of the table or tables to be imported. If a table in the list is partitioned and you do not specify a partition name, then all its partitions and subpartitions are imported. To import all the exported tables, specify an asterisk (*) as the only table name parameter.

    tablename can contain any number of '%' pattern matching characters, each of which can match zero or more characters in the table names in the export file. All exported tables whose names match any of the specified patterns are selected for import. A table name in the list that consists only of pattern matching characters and no partition name results in all exported tables being imported.

  • partition_name and subpartition_name let you restrict the import to one or more specified partitions or subpartitions within a partitioned table.

The syntax you use to specify the preceding is in the form:

tablename:partition_name

tablename:subpartition_name

If you use tablename:partition_name, then the specified table must be partitioned, and partition_name must be the name of one of its partitions or subpartitions. If the specified table is not partitioned, then the partition_name is ignored and the entire table is imported.

The number of tables that can be specified at the same time is dependent on command-line limits.

As the export file is processed, each table name in the export file is compared against each table name in the list, in the order in which the table names were specified in the parameter. To avoid ambiguity and excessive processing time, specific table names should appear at the beginning of the list, and more general table names (those with patterns) should appear at the end of the list.

Although you can qualify table names with schema names (as in scott.emp) when exporting, you cannot do so when importing. In the following example, the TABLES parameter is specified incorrectly:

imp TABLES=(jones.accts, scott.emp, scott.dept)

The valid specification to import these tables is as follows:

imp FROMUSER=jones TABLES=(accts)
imp FROMUSER=scott TABLES=(emp,dept)

For a more detailed example, see "Example Import Using Pattern Matching to Import Various Tables".


Note:

Some operating systems, such as UNIX, require that you use escape characters before special characters, such as a parenthesis, so that the character is not treated as a special character. On UNIX, use a backslash (\) as the escape character, as shown in the following example:
TABLES=\(emp,dept\)

Table Name Restrictions

The following restrictions apply to table names:

  • By default, table names in a database are stored as uppercase. If you have a table name in mixed-case or lowercase, and you want to preserve case-sensitivity for the table name, then you must enclose the name in quotation marks. The name must exactly match the table name stored in the database.

    Some operating systems require that quotation marks on the command line be preceded by an escape character. The following are examples of how case-sensitivity can be preserved in the different Import modes.

    • In command-line mode:

      tables='\"Emp\"'
      
    • In interactive mode:

      Table(T) to be imported: "Emp"
      
    • In parameter file mode:

      tables='"Emp"'
      
  • Table names specified on the command line cannot include a pound (#) sign, unless the table name is enclosed in quotation marks. Similarly, in the parameter file, if a table name includes a pound (#) sign, then the Import utility interprets the rest of the line as a comment, unless the table name is enclosed in quotation marks.

    For example, if the parameter file contains the following line, then Import interprets everything on the line after emp# as a comment and does not import the tables dept and mydata:

    TABLES=(emp#, dept, mydata)
    

    However, given the following line, the Import utility imports all three tables because emp# is enclosed in quotation marks:

    TABLES=("emp#", dept, mydata)
    

    Note:

    Some operating systems require single quotation marks rather than double quotation marks, or the reverse; see your Oracle operating system-specific documentation. Different operating systems also have other restrictions on table naming.

    For example, the UNIX C shell attaches a special meaning to a dollar sign ($) or pound sign (#) (or certain other special characters). You must use escape characters to get such characters in the name past the shell and into Import.


TABLESPACES

Default: none

When TRANSPORT_TABLESPACE is specified as y, use this parameter to list the tablespaces to be transported into the database. If there is more than one tablespace in the export file, then you must specify all of them as part of the import operation.

See "TRANSPORT_TABLESPACE" for more information.

TOID_NOVALIDATE

Default: none

When you import a table that references a type, but a type of that name already exists in the database, Import attempts to verify that the preexisting type is, in fact, the type used by the table (rather than a different type that just happens to have the same name).

To do this, Import compares the type's unique identifier (TOID) with the identifier stored in the export file. Import will not import the table rows if the TOIDs do not match.

In some situations, you may not want this validation to occur on specified types (for example, if the types were created by a cartridge installation). You can use the TOID_NOVALIDATE parameter to specify types to exclude from TOID comparison.

The syntax is as follows:

TOID_NOVALIDATE=([schemaname.]typename [, ...])

For example:

imp scott TABLES=jobs TOID_NOVALIDATE=typ1
imp scott TABLES=salaries TOID_NOVALIDATE=(fred.typ0,sally.typ2,typ3)

If you do not specify a schema name for the type, then it defaults to the schema of the importing user. For example, in the first of the preceding examples, the type typ1 defaults to scott.typ1; in the second, the type typ3 defaults to scott.typ3.

Note that TOID_NOVALIDATE deals only with table column types. It has no effect on table types.

The output of a typical import with excluded types would contain entries similar to the following:

[...]
. importing IMP3's objects into IMP3
. . skipping TOID validation on type IMP2.TOIDTYP0
. . importing table                  "TOIDTAB3"          
[...]

Caution:

When you inhibit validation of the type identifier, it is your responsibility to ensure that the attribute list of the imported type matches the attribute list of the existing type. If these attribute lists do not match, then results are unpredictable.

TOUSER

Default: none

Specifies a list of user names whose schemas will be targets for Import. The user names must exist before the import operation; otherwise an error is returned. The IMP_FULL_DATABASE role is required to use this parameter. To import to a different schema than the one that originally contained the object, specify TOUSER. For example:

imp FROMUSER=scott TOUSER=joe TABLES=emp

If multiple schemas are specified, then the schema names are paired. The following example imports scott's objects into joe's schema, and fred's objects into ted's schema:

imp FROMUSER=scott,fred TOUSER=joe,ted

If the FROMUSER list is longer than the TOUSER list, then the remaining schemas will be imported into either the FROMUSER schema, or into the importer's schema, based on normal defaulting rules. You can use the following syntax to ensure that any extra objects go into the TOUSER schema:

imp FROMUSER=scott,adams TOUSER=ted,ted

Note that user ted is listed twice.


See Also:

"FROMUSER" for information about restrictions when using FROMUSER and TOUSER

TRANSPORT_TABLESPACE

Default: n

When specified as y, instructs Import to import transportable tablespace metadata from an export file.

Encrypted columns are not supported in transportable tablespace mode.


Note:

You cannot export transportable tablespaces and then import them into a database at a lower release level. The target database must be at the same or a later release level than the source database.

TTS_OWNERS

Default: none

When TRANSPORT_TABLESPACE is specified as y, use this parameter to list the users who own the data in the transportable tablespace set.

See "TRANSPORT_TABLESPACE".

USERID (username/password)

Default: none

Specifies the username, password, and an optional connect string of the user performing the import.

If you connect as user SYS, then you must also specify AS SYSDBA in the connect string. Your operating system may require you to treat AS SYSDBA as a special string, in which case the entire string would be enclosed in quotation marks.
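
For example, on many UNIX shells the quoting might look like the following; the password and file name are placeholders:

imp \'sys/password AS SYSDBA\' FULL=y FILE=dba.dmp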


VOLSIZE

Default: none

Specifies the maximum number of bytes in a dump file on each volume of tape.

The VOLSIZE parameter has a maximum value equal to the maximum value that can be stored in 64 bits on your platform.

The VOLSIZE value can be specified as a number followed by KB (number of kilobytes). For example, VOLSIZE=2KB is the same as VOLSIZE=2048. Similarly, MB specifies megabytes (1024 * 1024) and GB specifies gigabytes (1024**3). The shorthand for bytes remains B; the number is not multiplied to get the final file size (VOLSIZE=2048B is the same as VOLSIZE=2048).

Example Import Sessions

This section gives some examples of import sessions that show you how to use the parameter file and command-line methods. The examples illustrate the following scenarios:

Example Import of Selected Tables for a Specific User

In this example, using a full database export file, an administrator imports the dept and emp tables into the scott schema.

Parameter File Method

> imp PARFILE=params.dat

The params.dat file contains the following information:

FILE=dba.dmp
SHOW=n
IGNORE=n
GRANTS=y
FROMUSER=scott
TABLES=(dept,emp)

Command-Line Method

> imp FILE=dba.dmp FROMUSER=scott TABLES=(dept,emp)

Import Messages

Information is displayed about the release of Import you are using and the release of Oracle Database that you are connected to. Status messages are also displayed.

Example Import of Tables Exported by Another User

This example illustrates importing the unit and manager tables from a file exported by blake into the scott schema.

Parameter File Method

> imp PARFILE=params.dat

The params.dat file contains the following information:

FILE=blake.dmp
SHOW=n
IGNORE=n
GRANTS=y
ROWS=y
FROMUSER=blake
TOUSER=scott
TABLES=(unit,manager)

Command-Line Method

> imp FROMUSER=blake TOUSER=scott FILE=blake.dmp TABLES=(unit,manager)

Import Messages

Information is displayed about the release of Import you are using and the release of Oracle Database that you are connected to. Status messages are also displayed.

Example Import of Tables from One User to Another

In this example, a database administrator (DBA) imports all tables belonging to scott into user blake's account.

Parameter File Method

 > imp PARFILE=params.dat

The params.dat file contains the following information:

FILE=scott.dmp
FROMUSER=scott
TOUSER=blake
TABLES=(*)

Command-Line Method

> imp FILE=scott.dmp FROMUSER=scott TOUSER=blake TABLES=(*)

Import Messages

Information is displayed about the release of Import you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:

.
.
.
Warning: the objects were exported by SCOTT, not by you

import done in WE8DEC character set and AL16UTF16 NCHAR character set
. importing SCOTT's objects into BLAKE
. . importing table                        "BONUS"          0 rows imported
. . importing table                         "DEPT"          4 rows imported
. . importing table                          "EMP"         14 rows imported
. . importing table                     "SALGRADE"          5 rows imported
Import terminated successfully without warnings.

Example Import Session Using Partition-Level Import

This section describes importing a table with multiple partitions, importing a table with partitions and subpartitions, and repartitioning a table on a different column.

Example 1: A Partition-Level Import

In this example, emp is a partitioned table with three partitions: P1, P2, and P3.

A table-level export file was created using the following command:

> exp scott TABLES=emp FILE=exmpexp.dat ROWS=y

Export Messages

Information is displayed about the release of Export you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:

.
.
.
About to export specified tables via Conventional Path ...
. . exporting table                            EMP
. . exporting partition                             P1          7 rows exported
. . exporting partition                             P2         12 rows exported
. . exporting partition                             P3          3 rows exported
Export terminated successfully without warnings.

In a partition-level Import you can specify the specific partitions of an exported table that you want to import. In this example, these are P1 and P3 of table emp:

> imp scott TABLES=(emp:p1,emp:p3) FILE=exmpexp.dat ROWS=y 

Import Messages

Information is displayed about the release of Import you are using and the release of Oracle Database that you are connected to. Status messages are also displayed.

Example 2: A Partition-Level Import of a Composite Partitioned Table

This example demonstrates that the partitions and subpartitions of a composite partitioned table are imported. emp is a partitioned table with two composite partitions: P1 and P2. Partition P1 has three subpartitions: P1_SP1, P1_SP2, and P1_SP3. Partition P2 has two subpartitions: P2_SP1 and P2_SP2.

A table-level export file was created using the following command:

> exp scott TABLES=emp FILE=exmpexp.dat ROWS=y 

Export Messages

Information is displayed about the release of Export you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:


.
.
.
About to export specified tables via Conventional Path ...
. . exporting table                            EMP
. . exporting composite partition                   P1
. . exporting subpartition                      P1_SP1          2 rows exported
. . exporting subpartition                      P1_SP2         10 rows exported
. . exporting subpartition                      P1_SP3          7 rows exported
. . exporting composite partition                   P2
. . exporting subpartition                      P2_SP1          4 rows exported
. . exporting subpartition                      P2_SP2          2 rows exported
Export terminated successfully without warnings.

The following Import command results in the import of subpartitions P1_SP2 and P1_SP3 of composite partition P1, and of all subpartitions of composite partition P2, in table emp.

> imp scott TABLES=(emp:p1_sp2,emp:p1_sp3,emp:p2) FILE=exmpexp.dat ROWS=y  

Import Messages

Information is displayed about the release of Import you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:

.
.
.
. importing SCOTT's objects into SCOTT
. . importing subpartition              "EMP":"P1_SP2"         10 rows imported
. . importing subpartition              "EMP":"P1_SP3"          7 rows imported
. . importing subpartition              "EMP":"P2_SP1"          4 rows imported
. . importing subpartition              "EMP":"P2_SP2"          2 rows imported
Import terminated successfully without warnings.

Example 3: Repartitioning a Table on a Different Column

This example assumes the emp table has two partitions based on the empno column. This example repartitions the emp table on the deptno column.

Perform the following steps to repartition a table on a different column:

  1. Export the table to save the data.

  2. Drop the table from the database.

  3. Create the table again with the new partitions.

  4. Import the table data.

The following example illustrates these steps.

> exp scott tables=emp file=empexp.dat
.
.
.

About to export specified tables via Conventional Path ...
. . exporting table                            EMP
. . exporting partition                        EMP_LOW          4 rows exported
. . exporting partition                       EMP_HIGH         10 rows exported
Export terminated successfully without warnings.

SQL> connect scott
Connected.
SQL> drop table emp cascade constraints;
Statement processed.
SQL> create table emp
  2    (
  3    empno    number(4) not null,
  4    ename    varchar2(10),
  5    job      varchar2(9),
  6    mgr      number(4),
  7    hiredate date,
  8    sal      number(7,2),
  9    comm     number(7,2),
 10    deptno   number(2)
 11    )
 12 partition by range (deptno)
 13   (
 14   partition dept_low values less than (15)
 15     tablespace tbs_1,
 16   partition dept_mid values less than (25)
 17     tablespace tbs_2,
 18   partition dept_high values less than (35)
 19     tablespace tbs_3
 20   );
Statement processed.
SQL> exit

> imp scott tables=emp file=empexp.dat ignore=y
.
.
.
import done in WE8DEC character set and AL16UTF16 NCHAR character set
. importing SCOTT's objects into SCOTT
. . importing partition                "EMP":"EMP_LOW"          4 rows imported
. . importing partition               "EMP":"EMP_HIGH"         10 rows imported
Import terminated successfully without warnings.

The following SQL SELECT statements show that the data is partitioned on the deptno column:

SQL> connect scott
Connected.
SQL> select empno, deptno from emp partition (dept_low);
EMPNO      DEPTNO    
---------- ----------
      7782         10
      7839         10
      7934         10
3 rows selected.
SQL> select empno, deptno from emp partition (dept_mid);
EMPNO      DEPTNO    
---------- ----------
      7369         20
      7566         20
      7788         20
      7876         20
      7902         20
5 rows selected.
SQL> select empno, deptno from emp partition (dept_high);
EMPNO      DEPTNO    
---------- ----------
      7499         30
      7521         30
      7654         30
      7698         30
      7844         30
      7900         30
6 rows selected.
SQL> exit;

Example Import Using Pattern Matching to Import Various Tables

In this example, pattern matching is used to import various tables for user scott.

Parameter File Method

imp PARFILE=params.dat

The params.dat file contains the following information:

FILE=scott.dmp
IGNORE=n
GRANTS=y
ROWS=y
FROMUSER=scott
TABLES=(%d%,b%s)

Command-Line Method

imp FROMUSER=scott FILE=scott.dmp TABLES=(%d%,b%s)

Import Messages

Information is displayed about the release of Import you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:

.
.
.
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses JA16SJIS character set (possible charset conversion)
. importing SCOTT's objects into SCOTT
. . importing table                  "BONUS"          0 rows imported
. . importing table                   "DEPT"          4 rows imported
. . importing table               "SALGRADE"          5 rows imported
Import terminated successfully without warnings.

Exit Codes for Inspection and Display

Import provides the results of an operation immediately upon completion. Depending on the platform, the outcome may be reported in a process exit code and the results recorded in the log file. This enables you to check the outcome from the command line or script. Table 22-4 shows the exit codes that get returned for various results.

Table 22-4 Exit Codes for Import

Result                                             Exit Code

Import terminated successfully without warnings    EX_SUCC

Import terminated successfully with warnings       EX_OKWARN

Import terminated unsuccessfully                   EX_FAIL


For UNIX, the exit codes are as follows:

EX_SUCC   0
EX_OKWARN 0
EX_FAIL   1
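
For example, a UNIX shell script might test the exit code as follows. This is a sketch only; it assumes a Bourne-compatible shell, and the dump and log file names are illustrative:

#!/bin/sh
# scott is prompted for a password; a nonzero exit code indicates failure
imp scott FILE=scott.dmp TABLES=emp LOG=imp.log
if [ $? -ne 0 ]
then
  echo "Import failed; see imp.log"
fi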

Error Handling During an Import

This section describes errors that can occur when you import database objects.

Row Errors

If a row is rejected due to an integrity constraint violation or invalid data, then Import displays a warning message but continues processing the rest of the table. Some errors, such as "tablespace full," apply to all subsequent rows in the table. These errors cause Import to stop processing the current table and skip to the next table.

A "tablespace full" error can suspend the import if the RESUMABLE=y parameter is specified.

Failed Integrity Constraints

A row error is generated if a row violates one of the integrity constraints in force on your system, including NOT NULL constraints, uniqueness constraints, primary key (not null and unique) constraints, referential integrity constraints, and check constraints.

Invalid Data

Row errors can also occur when the column definition for a table in a database is different from the column definition in the export file. The error is caused by data that is too long to fit into a new table's columns, by invalid datatypes, or by any other INSERT error.

Errors Importing Database Objects

Errors can occur for many reasons when you import database objects, as described in this section. When these errors occur, import of the current database object is discontinued. Import then attempts to continue with the next database object in the export file.

Object Already Exists

If a database object to be imported already exists in the database, then an object creation error occurs. What happens next depends on the setting of the IGNORE parameter.

If IGNORE=n (the default), then the error is reported, and Import continues with the next database object. The current database object is not replaced. For tables, this behavior means that rows contained in the export file are not imported.

If IGNORE=y, then object creation errors are not reported. The database object is not replaced. If the object is a table, then rows are imported into it. Note that only object creation errors are ignored; all other errors (such as operating system, database, and SQL errors) are reported and processing may stop.


Caution:

Specifying IGNORE=y can cause duplicate rows to be entered into a table unless one or more columns of the table are specified with the UNIQUE integrity constraint. This could occur, for example, if Import were run twice.

Sequences

If sequence numbers need to be reset to the value in an export file as part of an import, then you should drop sequences. If a sequence is not dropped before the import, then it is not set to the value captured in the export file, because Import does not drop and re-create a sequence that already exists. If the sequence already exists, then the export file's CREATE SEQUENCE statement fails and the sequence is not imported.
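
For example, to reset a sequence to the value captured in the export file, you might drop it before running a user-mode import (the sequence and dump file names are hypothetical):

SQL> DROP SEQUENCE emp_seq;

> imp scott FILE=scott.dmp IGNORE=y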

Resource Errors

Resource limitations can cause objects to be skipped. When you are importing tables, for example, resource errors can occur because of internal problems or when a resource such as memory has been exhausted.

If a resource error occurs while you are importing a row, then Import stops processing the current table and skips to the next table. If you have specified COMMIT=y, then Import commits the partial import of the current table. If not, then a rollback of the current table occurs before Import continues. See the description of "COMMIT".

Domain Index Metadata

Domain indexes can have associated application-specific metadata that is imported using anonymous PL/SQL blocks. These PL/SQL blocks are executed at import time, before the CREATE INDEX statement. If a PL/SQL block causes an error, then the associated index is not created because the metadata is considered an integral part of the index.

Table-Level and Partition-Level Import

You can import tables, partitions, and subpartitions in two ways: table-level Import, which imports all data from the specified tables in an export file, and partition-level Import, which imports only the data from the specified source partitions or subpartitions.

Guidelines for Using Table-Level Import

For each specified table, table-level Import imports all rows of the table. With table-level Import:

  • All tables exported using any Export mode (except TRANSPORT_TABLESPACES) can be imported.

  • Users can import the entire (partitioned or nonpartitioned) table, partitions, or subpartitions from a table-level export file into a (partitioned or nonpartitioned) target table with the same name.

If the table does not exist, and if the exported table was partitioned, then table-level Import creates a partitioned table. If the table creation is successful, then table-level Import reads all source data from the export file into the target table. After Import, the target table contains the partition definitions of all partitions and subpartitions associated with the source table in the export file. This operation ensures that the physical and logical attributes (including partition bounds) of the source partitions are maintained on import.

Guidelines for Using Partition-Level Import

Partition-level Import can only be specified in table mode. It lets you selectively load data from specified partitions or subpartitions in an export file. Keep the following guidelines in mind when using partition-level Import.

  • Import always stores the rows according to the partitioning scheme of the target table.

  • Partition-level Import inserts only the row data from the specified source partitions or subpartitions.

  • If the target table is partitioned, then partition-level Import rejects any rows that fall above the highest partition of the target table.

  • Partition-level Import cannot import a nonpartitioned exported table. However, a partitioned table can be imported from a nonpartitioned exported table using table-level Import.

  • Partition-level Import is legal only if the source table (that is, the table called tablename at export time) was partitioned and exists in the export file.

  • If the partition or subpartition name is not a valid partition in the export file, then Import generates a warning.

  • The partition or subpartition name in the parameter refers to only the partition or subpartition in the export file, which may not contain all of the data of the table on the export source system.

  • If ROWS=y (default), and the table does not exist in the import target system, then the table is created and all rows from the source partition or subpartition are inserted into the partition or subpartition of the target table.

  • If ROWS=y (default) and IGNORE=y, but the table already existed before import, then all rows for the specified partition or subpartition in the table are inserted into the table. The rows are stored according to the existing partitioning scheme of the target table.

  • If ROWS=n, then Import does not insert data into the target table and continues to process other objects associated with the specified table and partition or subpartition in the file.

  • If the target table is nonpartitioned, then the partitions and subpartitions are imported into the entire table. Import requires IGNORE=y to import one or more partitions or subpartitions from the export file into a nonpartitioned table on the import target system.

Migrating Data Across Partitions and Tables

If you specify a partition name for a composite partition, then all subpartitions within the composite partition are used as the source.

In the following example, the partition specified by the partition name is a composite partition. All of its subpartitions will be imported:

imp SYSTEM FILE=expdat.dmp FROMUSER=scott TABLES=b:py

The following example causes row data of partitions qc and qd of table scott.e to be imported into the table scott.e:

imp scott FILE=expdat.dmp TABLES=(e:qc, e:qd) IGNORE=y

If table e does not exist in the import target database, then it is created and data is inserted into the same partitions. If table e existed on the target system before import, then the row data is inserted into the partitions whose range allows insertion. The row data can end up in partitions of names other than qc and qd.


Note:

With partition-level Import to an existing table, you must set up the target partitions or subpartitions properly and use IGNORE=y.

Controlling Index Creation and Maintenance

This section describes the behavior of Import with respect to index creation and maintenance.

Delaying Index Creation

Import provides you with the capability of delaying index creation and maintenance services until after completion of the import and insertion of exported data. Performing index creation, re-creation, or maintenance after Import completes is generally faster than updating the indexes for each row inserted by Import.

Index creation can be time consuming, and therefore can be done more efficiently after the import of all other objects has completed. You can postpone creation of indexes until after the import completes by specifying INDEXES=n. (INDEXES=y is the default.) You can then store the missing index definitions in a SQL script by running Import while using the INDEXFILE parameter. The index-creation statements that would otherwise be issued by Import are instead stored in the specified file.

After the import is complete, you must create the indexes, typically by using the contents of the file (specified with INDEXFILE) as a SQL script after specifying passwords for the connect statements.

Index Creation and Maintenance Controls

If SKIP_UNUSABLE_INDEXES=y, then the Import utility postpones maintenance on all indexes that were set to Index Unusable before the Import. Other indexes (not previously set to Index Unusable) continue to be updated as rows are inserted. This approach saves on index updates during the import of existing tables.

Delayed index maintenance may cause a violation of an existing unique integrity constraint supported by the index. The existence of a unique integrity constraint on a table does not prevent existence of duplicate keys in a table that was imported with INDEXES=n. The supporting index will be in an UNUSABLE state until the duplicates are removed and the index is rebuilt.

Example of Postponing Index Maintenance

For example, assume that partitioned table t with partitions p1 and p2 exists on the import target system. Assume that local indexes p1_ind on partition p1 and p2_ind on partition p2 exist also. Assume that partition p1 contains a much larger amount of data in the existing table t, compared with the amount of data to be inserted by the export file (expdat.dmp). Assume that the reverse is true for p2.

Consequently, performing index updates for p1_ind during table data insertion time is more efficient than at partition index rebuild time. The opposite is true for p2_ind.

Users can postpone local index maintenance for p2_ind during import by using the following steps:

  1. Issue the following SQL statement before import:

    ALTER TABLE t MODIFY PARTITION p2 UNUSABLE LOCAL INDEXES;
    
  2. Issue the following Import command:

    imp scott FILE=expdat.dmp TABLES = (t:p1, t:p2) IGNORE=y
    SKIP_UNUSABLE_INDEXES=y
    

    This example executes the ALTER SESSION SET SKIP_UNUSABLE_INDEXES=y statement before performing the import.

  3. Issue the following SQL statement after import:

    ALTER TABLE t MODIFY PARTITION p2 REBUILD UNUSABLE LOCAL INDEXES;
    

In this example, local index p1_ind on p1 will be updated when table data is inserted into partition p1 during import. Local index p2_ind on p2 will be updated at index rebuild time, after import.

Network Considerations

With Oracle Net, you can perform imports over a network. For example, if you run Import locally, then you can read data into a remote Oracle database.

To use Import with Oracle Net, include the connection qualifier string @connect_string when entering the username and password in the imp command. For the exact syntax of this clause, see the user's guide for your Oracle Net protocol.
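
For example, assuming that a net service name remotedb has been configured, the connect string might be appended as follows:

imp scott@remotedb FILE=exp.dmp TABLES=(emp,dept)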

Character Set and Globalization Support Considerations

The following sections describe the globalization support behavior of Import with respect to character set conversion of user data and data definition language (DDL).

User Data

The Export utility always exports user data, including Unicode data, in the character sets of the Export server. (Character sets are specified at database creation.) If the character sets of the source database are different from the character sets of the import database, then a single conversion is performed to automatically convert the data to the character sets of the Import server.

Effect of Character Set Sorting Order on Conversions

If the export character set has a different sorting order than the import character set, then tables that are partitioned on character columns may yield unpredictable results. For example, consider the following table definition, which is produced on a database having an ASCII character set:

CREATE TABLE partlist 
   ( 
   part     VARCHAR2(10), 
   partno   NUMBER(2) 
   ) 
PARTITION BY RANGE (part) 
  ( 
  PARTITION part_low VALUES LESS THAN ('Z') 
    TABLESPACE tbs_1, 
  PARTITION part_mid VALUES LESS THAN ('z') 
    TABLESPACE tbs_2, 
  PARTITION part_high VALUES LESS THAN (MAXVALUE) 
    TABLESPACE tbs_3 
  );

This partitioning scheme makes sense because z comes after Z in ASCII character sets.

When this table is imported into a database based upon an EBCDIC character set, all of the rows in the part_mid partition will migrate to the part_low partition because z comes before Z in EBCDIC character sets. To obtain the desired results, the owner of partlist must repartition the table following the import.

Data Definition Language (DDL)

Up to three character set conversions may be required for data definition language (DDL) during an export/import operation:

  1. Export writes export files using the character set specified in the NLS_LANG environment variable for the user session. A character set conversion is performed if the value of NLS_LANG differs from the database character set.

  2. If the export file's character set is different from the import user session character set, then Import converts the character set to its user session character set. Import can only perform this conversion for single-byte character sets. This means that for multibyte character sets, the import user session character set must be identical to the export file's character set.

  3. A final character set conversion may be performed if the target database's character set is different from the character set used by the import user session.

To minimize data loss due to character set conversions, ensure that the export database, the export user session, the import user session, and the import database all use the same character set.

Single-Byte Character Sets

Some 8-bit characters can be lost (that is, converted to 7-bit equivalents) when you import an 8-bit character set export file. This occurs if the system on which the import occurs has a native 7-bit character set, or the NLS_LANG operating system environment variable is set to a 7-bit character set. Most often, this is apparent when accented characters lose the accent mark.

To avoid this unwanted conversion, you can set the NLS_LANG operating system environment variable to be that of the export file character set.
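
For example, if the export file was created with the WE8ISO8859P1 character set (an assumption made only for illustration), then you might set the variable as follows before running Import (Bourne or Korn shell syntax):

NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1   # assumed export file character set
export NLS_LANG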

Multibyte Character Sets

During character set conversion, any characters in the export file that have no equivalent in the target character set are replaced with a default character. (The default character is defined by the target character set.) To guarantee 100% conversion, the target character set must be a superset (or equivalent) of the source character set.


Caution:

When the character set width differs between the Export server and the Import server, truncation of data can occur if conversion causes expansion of data. If truncation occurs, then Import displays a warning message.

Using Instance Affinity

You can use instance affinity to associate jobs with instances in databases you plan to export and import. Be aware that there may be some compatibility issues if you are using a combination of releases.

Considerations When Importing Database Objects

The following sections describe restrictions and points you should consider when you import particular database objects.

Importing Object Identifiers

The Oracle database assigns object identifiers to uniquely identify object types, object tables, and rows in object tables. These object identifiers are preserved by Import.

When you import a table that references a type, but a type of that name already exists in the database, Import attempts to verify that the preexisting type is, in fact, the type used by the table (rather than a different type that just happens to have the same name).

To do this, Import compares the type's unique identifier (TOID) with the identifier stored in the export file. If those match, Import then compares the type's unique hashcode with that stored in the export file. Import will not import table rows if the TOIDs or hashcodes do not match.

In some situations, you may not want this validation to occur on specified types (for example, if the types were created by a cartridge installation). You can use the parameter TOID_NOVALIDATE to specify types to exclude from the TOID and hashcode comparison. See "TOID_NOVALIDATE" for more information.


Caution:

Be very careful about using TOID_NOVALIDATE, because type validation provides an important capability that helps avoid data corruption. Be sure you are confident of your knowledge of type validation and how it works before attempting to perform an import operation with this feature disabled.

Import uses the following criteria to decide how to handle object types, object tables, and rows in object tables:

  • For object types, if IGNORE=y, the object type already exists, and the object identifiers, hashcodes, and type descriptors match, then no error is reported. If the object identifiers or hashcodes do not match and the parameter TOID_NOVALIDATE has not been set to ignore the object type, then an error is reported and any tables using the object type are not imported.

  • For object types, if IGNORE=n and the object type already exists, then an error is reported. If the object identifiers, hashcodes, or type descriptors do not match and the parameter TOID_NOVALIDATE has not been set to ignore the object type, then any tables using the object type are not imported.

  • For object tables, if IGNORE=y, the table already exists, and the object identifiers, hashcodes, and type descriptors match, then no error is reported. Rows are imported into the object table. Import of rows may fail if rows with the same object identifier already exist in the object table. If the object identifiers, hashcodes, or type descriptors do not match, and the parameter TOID_NOVALIDATE has not been set to ignore the object type, then an error is reported and the table is not imported.

  • For object tables, if IGNORE=n and the table already exists, then an error is reported and the table is not imported.

Because Import preserves object identifiers of object types and object tables, consider the following when you import objects from one schema into another schema using the FROMUSER and TOUSER parameters:

  • If the FROMUSER object types and object tables already exist on the target system, then errors occur because the object identifiers of the TOUSER object types and object tables are already in use. The FROMUSER object types and object tables must be dropped from the system before the import is started.

  • If an object table was created using the OID AS option to assign it the same object identifier as another table, then both tables cannot be imported. You can import one of the tables, but the second table receives an error because the object identifier is already in use.

Importing Existing Object Tables and Tables That Contain Object Types

Users frequently create tables before importing data to reorganize tablespace usage or to change a table's storage parameters. The tables must be created with the same definitions as were previously used, or with a compatible format (except for storage parameters). For object tables and tables that contain columns of object types, format compatibilities are more restrictive.

For object tables and for tables containing columns of objects, each object the table references has its name, structure, and version information written out to the export file. Export also includes object type information from different schemas, as needed.

Import verifies the existence of each object type required by a table before importing the table data. This verification consists of a check of the object type's name followed by a comparison of the object type's structure and version from the import system with that found in the export file.

If an object type name is found on the import system, but the structure or version do not match that from the export file, then an error message is generated and the table data is not imported.

The Import parameter TOID_NOVALIDATE can be used to disable the verification of the object type's structure and version for specific objects.

Importing Nested Tables

Inner nested tables are exported separately from the outer table. Therefore, situations may arise where data in an inner nested table might not be properly imported:

  • Suppose a table with an inner nested table is exported and then imported without dropping the table or removing rows from the table. If the IGNORE=y parameter is used, then there will be a constraint violation when inserting each row in the outer table. However, data in the inner nested table may be successfully imported, resulting in duplicate rows in the inner table.

  • If nonrecoverable errors occur inserting data in outer tables, then the rest of the data in the outer table is skipped, but the corresponding inner table rows are not skipped. This may result in inner table rows not being referenced by any row in the outer table.

  • If an insert to an inner table fails after a recoverable error, then its outer table row will already have been inserted in the outer table and data will continue to be inserted into it and any other inner tables of the containing table. This circumstance results in a partial logical row.

  • If nonrecoverable errors occur inserting data in an inner table, then Import skips the rest of that inner table's data but does not skip the outer table or other nested tables.

You should always carefully examine the log file for errors in outer tables and inner tables. To be consistent, table data may need to be modified or deleted.

Because inner nested tables are imported separately from the outer table, attempts to access data from them while importing may produce unexpected results. For example, if an outer row is accessed before its inner rows are imported, an incomplete row may be returned to the user.

Importing REF Data

REF columns and attributes may contain a hidden ROWID that points to the referenced type instance. Import does not automatically recompute these ROWIDs for the target database. You should execute the following statement to reset the ROWIDs to their proper values:

ANALYZE TABLE [schema.]table VALIDATE REF UPDATE;

See Also:

Oracle Database SQL Language Reference for more information about the ANALYZE TABLE statement

Importing BFILE Columns and Directory Aliases

Export and Import do not copy data referenced by BFILE columns and attributes from the source database to the target database. Export and Import only propagate the names of the files and the directory aliases referenced by the BFILE columns. It is the responsibility of the DBA or user to move the actual files referenced through BFILE columns and attributes.

When you import table data that contains BFILE columns, the BFILE locator is imported with the directory alias and file name that was present at export time. Import does not verify that the directory alias or file exists. If the directory alias or file does not exist, then an error occurs when the user accesses the BFILE data.

For directory aliases, if the operating system directory syntax used in the export system is not valid on the import system, then no error is reported at import time. The error occurs when the user seeks subsequent access to the file data. It is the responsibility of the DBA or user to ensure the directory alias is valid on the import system.

Importing Foreign Function Libraries

Import does not verify that the location referenced by the foreign function library is correct. If the formats for directory and file names used in the library's specification on the export file are invalid on the import system, then no error is reported at import time. Subsequent usage of the callout functions will receive an error.

It is the responsibility of the DBA or user to manually move the library and ensure the library's specification is valid on the import system.

Importing Stored Procedures, Functions, and Packages

The behavior of Import when a local stored procedure, function, or package is imported depends upon whether the COMPILE parameter is set to y or to n.

When a local stored procedure, function, or package is imported and COMPILE=y, the procedure, function, or package is recompiled upon import and retains its original timestamp specification. If the compilation is successful, then it can be accessed by remote procedures without error.

If COMPILE=n, then the procedure, function, or package is still imported, but the original timestamp is lost. The compilation takes place the next time the procedure, function, or package is used.


See Also:

"COMPILE"

Importing Java Objects

When you import Java objects into any schema, the Import utility leaves the resolver unchanged. (The resolver is the list of schemas used to resolve Java full names.) This means that after an import, all user classes are left in an invalid state until they are either implicitly or explicitly revalidated. An implicit revalidation occurs the first time the classes are referenced. An explicit revalidation occurs when the SQL statement ALTER JAVA CLASS...RESOLVE is used. Both methods result in the user classes being resolved successfully and becoming valid.

Importing External Tables

Import does not verify that the location referenced by the external table is correct. If the formats for directory and file names used in the table's specification on the export file are invalid on the import system, then no error is reported at import time. Subsequent usage of the callout functions will result in an error.

It is the responsibility of the DBA or user to manually move the table and ensure the table's specification is valid on the import system.

Importing Advanced Queue (AQ) Tables

Importing a queue table also imports any underlying queues and the related dictionary information. A queue can be imported only at the granularity level of the queue table. When a queue table is imported, export pre-table and post-table action procedures maintain the queue dictionary.

Importing LONG Columns

LONG columns can be up to 2 gigabytes in length. In importing and exporting, the LONG columns must fit into memory with the rest of each row's data. The memory used to store LONG columns, however, does not need to be contiguous, because LONG data is loaded in sections.

Import can be used to convert LONG columns to CLOB columns. To do this, first create a table specifying the new CLOB column. When Import is run, the LONG data is converted to CLOB format. The same technique can be used to convert LONG RAW columns to BLOB columns.
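
For example, assuming an exported table docs with a LONG column named body (the names are chosen only for illustration), you might pre-create the table with a CLOB column and then import into it:

SQL> CREATE TABLE docs (id NUMBER, body CLOB);

> imp scott FILE=scott.dmp TABLES=docs IGNORE=y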


Note:

Oracle recommends that you convert existing LONG columns to LOB columns. LOB columns are subject to far fewer restrictions than LONG columns. Further, LOB functionality is enhanced in every release, whereas LONG functionality has been static for several releases.

Importing LOB Columns When Triggers Are Present

As of Oracle Database 10g, LOB handling has been improved to ensure that triggers work properly and that performance remains high when LOBs are being loaded. To achieve these improvements, the Import utility automatically changes all LOBs that were empty at export time to be NULL after they are imported.

If you have applications that expect the LOBs to be empty rather than NULL, then after the import you can issue a SQL UPDATE statement for each LOB column. Depending on whether the LOB column type was a BLOB or a CLOB, the syntax would be one of the following:

UPDATE <tablename> SET <lob column> = EMPTY_BLOB() WHERE <lob column> IS NULL;
UPDATE <tablename> SET <lob column> = EMPTY_CLOB() WHERE <lob column> IS NULL;

It is important to note that once the import is performed, there is no way to distinguish between LOB columns that are NULL versus those that are empty. Therefore, if that information is important to the integrity of your data, then be sure you know which LOB columns are NULL and which are empty before you perform the import.

Importing Views

Views are exported in dependency order. In some cases, Export must determine the ordering, rather than obtaining the order from the database. In doing so, Export may not always be able to duplicate the correct ordering, resulting in compilation warnings when a view is imported, and the failure to import column comments on such views.

In particular, if viewa uses the stored procedure procb, and procb uses the view viewc, then Export cannot determine the proper ordering of viewa and viewc. If viewa is exported before viewc, and procb already exists on the import system, then viewa receives compilation warnings at import time.

Grants on views are imported even if a view has compilation errors. A view could have compilation errors if an object it depends on, such as a table, procedure, or another view, does not exist when the view is created. If a base table does not exist, then the server cannot validate that the grantor has the proper privileges on the base table with the GRANT OPTION. Access violations could occur when the view is used if the grantor does not have the proper privileges after the missing tables are created.

Importing views that contain references to tables in other schemas requires that the importer have SELECT ANY TABLE privilege. If the importer has not been granted this privilege, then the views will be imported in an uncompiled state. Note that granting the privilege to a role is insufficient. For the view to be compiled, the privilege must be granted directly to the importer.

Importing Partitioned Tables

Import attempts to create a partitioned table with the same partition or subpartition names as the exported partitioned table, including names of the form SYS_Pnnn. If a table with the same name already exists, then Import processing depends on the value of the IGNORE parameter.

Unless SKIP_UNUSABLE_INDEXES=y, inserting the exported data into the target table fails if Import cannot update a nonpartitioned index or index partition that is marked Indexes Unusable or is otherwise not suitable.

Support for Fine-Grained Access Control

To restore the fine-grained access control policies, the user who imports from an export file containing such tables must have the EXECUTE privilege on the DBMS_RLS package, so that the security policies on the tables can be reinstated.

If a user without the correct privileges attempts to import from an export file that contains tables with fine-grained access control policies, then a warning message is issued.

Snapshots and Snapshot Logs


Note:

In certain situations, particularly those involving data warehousing, snapshots may be referred to as materialized views. This section retains the term snapshot.

Snapshot Log

The snapshot log in a dump file is imported if the master table already exists for the database to which you are importing and it has a snapshot log.

When a ROWID snapshot log is exported, ROWIDs stored in the snapshot log have no meaning upon import. As a result, each ROWID snapshot's first attempt to do a fast refresh fails, generating an error indicating that a complete refresh is required.

To avoid the refresh error, do a complete refresh after importing a ROWID snapshot log. After you have done a complete refresh, subsequent fast refreshes will work properly. In contrast, when a primary key snapshot log is exported, the values of the primary keys do retain their meaning upon import. Therefore, primary key snapshots can do a fast refresh after the import.
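For example, a complete refresh of a hypothetical snapshot named scott.emp_snap could be performed with the DBMS_MVIEW package, where 'C' requests a complete refresh:

EXECUTE DBMS_MVIEW.REFRESH('scott.emp_snap', 'C');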


See Also:

Oracle Database Advanced Replication for Import-specific information about migration and compatibility and for more information about snapshots and snapshot logs

Snapshots

A snapshot that has been restored from an export file has reverted to a previous state. On import, the time of the last refresh is imported as part of the snapshot table definition. The function that calculates the next refresh time is also imported.

Each refresh leaves a signature. A fast refresh uses the log entries that date from the time of that signature to bring the snapshot up to date. When the fast refresh is complete, the signature is deleted and a new signature is created. Any log entries that are not needed to refresh other snapshots are also deleted (all log entries with times before the earliest remaining signature).

Importing a Snapshot

When you restore a snapshot from an export file, you may encounter a problem under certain circumstances.

Assume that a snapshot is refreshed at time A, exported at time B, and refreshed again at time C. Then, because of corruption or other problems, the snapshot needs to be restored by dropping the snapshot and importing it again. The newly imported version has the last refresh time recorded as time A. However, log entries needed for a fast refresh may no longer exist. If the log entries do exist (because they are needed for another snapshot that has yet to be refreshed), then they are used, and the fast refresh completes successfully. Otherwise, the fast refresh fails, generating an error that says a complete refresh is required.

Importing a Snapshot into a Different Schema

Snapshots and related items are exported with the schema name explicitly given in the DDL statements. To import them into a different schema, use the FROMUSER and TOUSER parameters. This does not apply to snapshot logs, which cannot be imported into a different schema.
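For example, the following command sketch imports snapshots that were exported from the scott schema into the blake schema; the dump file name and schema names are placeholders:

imp FILE=snapshots.dmp FROMUSER=scott TOUSER=blake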

Transportable Tablespaces

The transportable tablespace feature enables you to move a set of tablespaces from one Oracle database to another.


Note:

You cannot export transportable tablespaces and then import them into a database at a lower release level. The target database must be at the same or higher release level as the source database.

To move or copy a set of tablespaces, you must make the tablespaces read-only, manually copy the data files of these tablespaces to the target database, and use Export and Import to move the database information (metadata) stored in the data dictionary over to the target database. The transport of the data files can be done using any facility for copying flat binary files, such as the operating system copying facility, binary-mode FTP, or publishing on CD-ROMs.

After copying the data files and exporting the metadata, you can optionally put the tablespaces in read/write mode.
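The following sketch shows the general flow for a single tablespace; the tablespace name, dump file name, and data file path are placeholders. First, make the tablespace read-only and export its metadata:

ALTER TABLESPACE sales_ts READ ONLY;
exp TRANSPORT_TABLESPACE=y TABLESPACES=sales_ts FILE=sales_ts.dmp

After copying the tablespace data files to the target system, plug the tablespace in and, if you want, return it to read/write mode:

imp TRANSPORT_TABLESPACE=y FILE=sales_ts.dmp DATAFILES='/u01/oradata/sales_ts01.dbf'
ALTER TABLESPACE sales_ts READ WRITE;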

Export and Import provide the following parameters to enable movement of transportable tablespace metadata.

See "TABLESPACES" and "TRANSPORT_TABLESPACE" for information about using these parameters during an import operation.


See Also:


Storage Parameters

By default, a table is imported into its original tablespace.

If the tablespace no longer exists, or the user does not have sufficient quota in the tablespace, then the system uses the default tablespace for that user, unless the table:

  • Is partitioned

  • Is a type table

  • Contains LOB or VARRAY columns

  • Is an index-only table with an overflow segment

If the user does not have sufficient quota in the default tablespace, then the user's tables are not imported. See "Reorganizing Tablespaces" to see how you can use this to your advantage.

The OPTIMAL Parameter

The storage parameter OPTIMAL for rollback segments is not preserved during export and import.

Storage Parameters for OID Indexes and LOB Columns

Tables are exported with their current storage parameters. For object tables, the OIDINDEX is created with its current storage parameters and name, if given. For tables that contain LOB, VARRAY, or OPAQUE type columns, LOB, VARRAY, or OPAQUE type data is created with their current storage parameters.

If you alter the storage parameters of existing tables before exporting, then the tables are exported using those altered storage parameters. Note, however, that storage parameters for LOB data cannot be altered before exporting (for example, chunk size for a LOB column, whether a LOB column is CACHE or NOCACHE, and so forth).

Note that LOB data might not reside in the same tablespace as the containing table. The tablespace for that data must be read/write at the time of import or the table will not be imported.

If LOB data resides in a tablespace that does not exist at the time of import, or the user does not have the necessary quota in that tablespace, then the table will not be imported. Because there can be multiple tablespace clauses, including one for the table, Import cannot determine which tablespace clause caused the error.

Overriding Storage Parameters

Before using the Import utility to import data, you may want to create large tables with different storage parameters. If so, then you must specify IGNORE=y on the command line or in the parameter file.

Read-Only Tablespaces

Read-only tablespaces can be exported. On import, if the tablespace does not already exist in the target database, then the tablespace is created as a read/write tablespace. If you want read-only functionality, then you must manually make the tablespace read-only after the import.

If the tablespace already exists in the target database and is read-only, then you must make it read/write before the import.
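For example, for a hypothetical tablespace named hist_ts:

ALTER TABLESPACE hist_ts READ WRITE;   -- before the import, if hist_ts exists and is read-only
ALTER TABLESPACE hist_ts READ ONLY;    -- after the import, to restore read-only behavior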

Dropping a Tablespace

You can drop a tablespace by redefining the objects to use different tablespaces before the import. You can then issue the imp command and specify IGNORE=y.

In many cases, you can drop a tablespace by doing a full database export, then creating a zero-block tablespace with the same name (before logging off) as the tablespace you want to drop. During import, with IGNORE=y, the relevant CREATE TABLESPACE statement will fail and prevent the creation of the unwanted tablespace.

All objects from that tablespace will be imported into their owner's default tablespace except for partitioned tables, type tables, and tables that contain LOB or VARRAY columns or index-only tables with overflow segments. Import cannot determine which tablespace caused the error. Instead, you must first create a table and then import the table again, specifying IGNORE=y.

Objects are not imported into the default tablespace if the tablespace does not exist, or you do not have the necessary quotas for your default tablespace.

Reorganizing Tablespaces

If a user's quota allows it, the user's tables are imported into the same tablespace from which they were exported. However, if the tablespace no longer exists or the user does not have the necessary quota, then the system uses the default tablespace for that user as long as the table is unpartitioned, contains no LOB or VARRAY columns, is not a type table, and is not an index-only table with an overflow segment. This scenario can be used to move a user's tables from one tablespace to another.

For example, you need to move joe's tables from tablespace A to tablespace B after a full database export. Follow these steps:

  1. If joe has the UNLIMITED TABLESPACE privilege, then revoke it. Set joe's quota on tablespace A to zero. Also revoke all roles that might have such privileges or quotas.

    When you revoke a role, it does not have a cascade effect. Therefore, users who were granted other roles by joe will be unaffected.

  2. Export joe's tables.

  3. Drop joe's tables from tablespace A.

  4. Give joe a quota on tablespace B and make it the default tablespace for joe.

  5. Import joe's tables. (By default, Import puts joe's tables into tablespace B.)
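A sketch of the corresponding commands follows; the tablespace names tbs_a and tbs_b and the dump file name are placeholders:

REVOKE UNLIMITED TABLESPACE FROM joe;
ALTER USER joe QUOTA 0 ON tbs_a;

exp OWNER=joe FILE=joe.dmp

-- Drop joe's tables from tablespace A (individual DROP TABLE statements not shown),
-- then give joe a quota on tablespace B and make it his default tablespace.
ALTER USER joe DEFAULT TABLESPACE tbs_b QUOTA UNLIMITED ON tbs_b;

imp FILE=joe.dmp FROMUSER=joe TOUSER=joe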

Importing Statistics

If statistics are requested at export time and analyzer statistics are available for a table, then Export includes in the dump file the ANALYZE statement needed to recalculate the statistics for the table. In most circumstances, Export will also write the precalculated optimizer statistics for tables, indexes, and columns to the dump file. See the description of the Import parameter "STATISTICS".

Because of the time it takes to perform an ANALYZE statement, it is usually preferable for Import to use the precalculated optimizer statistics for a table (and its indexes and columns) rather than execute the ANALYZE statement saved by Export. By default, Import will always use the precalculated statistics that are found in the export dump file.

The Export utility flags certain precalculated statistics as questionable. The importer might want to import only unquestionable statistics, not precalculated statistics, in the following situations:

In certain situations, the importer might want to always use ANALYZE statements rather than precalculated statistics. For example, the statistics gathered from a fragmented database may not be relevant when the data is imported in a compressed form. In these cases, the importer should specify STATISTICS=RECALCULATE to force the recalculation of statistics.

If you do not want any statistics to be established by Import, then you should specify STATISTICS=NONE.
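For example, to force recalculation of statistics rather than using the precalculated values (the dump file name is a placeholder):

imp FILE=expdat.dmp FULL=y STATISTICS=RECALCULATE

To import no statistics at all:

imp FILE=expdat.dmp FULL=y STATISTICS=NONE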

Using Export and Import to Partition a Database Migration

When you use the Export and Import utilities to migrate a large database, it may be more efficient to partition the migration into multiple export and import jobs. If you decide to partition the migration, then be aware of the following advantages and disadvantages.

Advantages of Partitioning a Migration

Partitioning a migration has the following advantages:

  • Time required for the migration may be reduced, because many of the subjobs can be run in parallel.

  • The import can start as soon as the first export subjob completes, rather than waiting for the entire export to complete.

Disadvantages of Partitioning a Migration

Partitioning a migration has the following disadvantages:

  • The export and import processes become more complex.

  • Support of cross-schema references for certain types of objects may be compromised. For example, if a schema contains a table with a foreign key constraint against a table in a different schema, then you may not have the required parent records when you import the table into the dependent schema.

How to Use Export and Import to Partition a Database Migration

To perform a database migration in a partitioned manner, take the following steps:

  1. For all top-level metadata in the database, issue the following commands:

    1. exp FILE=full FULL=y CONSTRAINTS=n TRIGGERS=n ROWS=n INDEXES=n

    2. imp FILE=full FULL=y

  2. For each schema (schema_n) in the database, issue the following commands:

    1. exp OWNER=schema_n FILE=schema_n

    2. imp FILE=schema_n FROMUSER=schema_n TOUSER=schema_n IGNORE=y

All exports can be done in parallel. When the import of full.dmp completes, all remaining imports can also be done in parallel.
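For example, on a UNIX system the schema-level export subjobs might be started in parallel from the shell; the schema names are placeholders:

exp OWNER=hr FILE=hr &
exp OWNER=oe FILE=oe &
exp OWNER=sh FILE=sh &
wait

In practice, each command also needs connection credentials, supplied for example through a parameter file.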

Tuning Considerations for Import Operations

This section discusses some ways to possibly improve the performance of an import operation. The information is categorized as follows:

Changing System-Level Options

The following suggestions about system-level options may help improve performance of an import operation:

  • Create and use one large rollback segment and take all other rollback segments offline. Generally a rollback segment that is one half the size of the largest table being imported should be big enough. It can also help if the rollback segment is created with the minimum number of two extents, of equal size.


    Note:

    Oracle recommends that you use automatic undo management instead of rollback segments.

  • Put the database in NOARCHIVELOG mode until the import is complete. This will reduce the overhead of creating and managing archive logs.

  • Create several large redo files and take any small redo log files offline. This will result in fewer log switches being made.

  • If possible, have the rollback segment, table data, and redo log files all on separate disks. This will reduce I/O contention and increase throughput.

  • If possible, do not run any other jobs at the same time that may compete with the import operation for system resources.

  • Ensure that there are no statistics on dictionary tables.

  • Set TRACE_LEVEL_CLIENT=OFF in the sqlnet.ora file.

  • If possible, increase the value of DB_BLOCK_SIZE when you re-create the database. The larger the block size, the smaller the number of I/O cycles needed. This change is permanent, so be sure to carefully consider all effects it will have before making it.
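For example, the database could be placed in NOARCHIVELOG mode for the duration of the import with a sequence such as the following, run from SQL*Plus as a suitably privileged user:

SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE NOARCHIVELOG;
ALTER DATABASE OPEN;

After the import completes, reverse the change with ALTER DATABASE ARCHIVELOG; issued in the same way.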

Changing Initialization Parameters

The following suggestions about settings in your initialization parameter file may help improve performance of an import operation.

  • Set LOG_CHECKPOINT_INTERVAL to a number that is larger than the size of the redo log files. This number is in operating system blocks (512 on most UNIX systems). This reduces checkpoints to a minimum (at log switching time).

  • Increase the value of SORT_AREA_SIZE. The amount you increase it depends on other activity taking place on the system and on the amount of free memory available. (If the system begins swapping and paging, then the value is probably set too high.)

  • Increase the value for DB_BLOCK_BUFFERS and SHARED_POOL_SIZE.
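For example, an initialization parameter file used only for the duration of the import might contain entries such as the following; the values are illustrative only and depend on your system:

LOG_CHECKPOINT_INTERVAL = 1000000
SORT_AREA_SIZE = 10485760
DB_BLOCK_BUFFERS = 20000
SHARED_POOL_SIZE = 104857600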

Changing Import Options

The following suggestions about usage of import options may help improve performance. Be sure to also read the individual descriptions of all the available options in "Import Parameters".

  • Set COMMIT=N. This causes Import to commit after each object (table), not after each buffer. This is why one large rollback segment is needed. (Because rollback segments will be deprecated in future releases, Oracle recommends that you use automatic undo management instead.)

  • Specify a large value for BUFFER or RECORDLENGTH, depending on system activity, database size, and so on. A larger size reduces the number of times that the export file has to be accessed for data. Several megabytes is usually enough. Be sure to check your system for excessive paging and swapping activity, which can indicate that the buffer size is too large.

  • Consider setting INDEXES=N because indexes can be created at some point after the import, when time is not a factor. If you choose to do this, then you need to use the INDEXFILE parameter to extract the DDL for the index creation or to rerun the import with INDEXES=Y and ROWS=N.
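For example, the data might first be loaded without indexes (the file name and buffer size are placeholders):

imp FILE=expdat.dmp FULL=y COMMIT=N BUFFER=30720000 INDEXES=N

The index-creation DDL can then be extracted with a separate run that specifies INDEXFILE, and run when convenient:

imp FILE=expdat.dmp FULL=y INDEXFILE=create_indexes.sql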

Dealing with Large Amounts of LOB Data

Keep the following in mind when you are importing large amounts of LOB data:

  • Eliminating indexes significantly reduces total import time. LOB data requires special consideration during an import because the LOB locator has a primary key that cannot be explicitly dropped or ignored during an import.

  • Ensure that there is enough space available in large contiguous chunks to complete the data load.

Dealing with Large Amounts of LONG Data

Keep in mind that importing a table with a LONG column may cause a higher rate of I/O and disk usage, resulting in reduced performance of the import operation. There are no specific parameters that will improve performance during an import of large amounts of LONG data, although some of the more general tuning suggestions made in this section may help overall performance.

Using Different Releases of Export and Import

This section describes compatibility issues that relate to using different releases of Export and the Oracle database.

Whenever you are moving data between different releases of the Oracle database, the following basic rules apply:

Restrictions When Using Different Releases of Export and Import

The following restrictions apply when you are using different releases of Export and Import:

  • Export dump files can be read only by the Import utility because they are stored in a special binary format.

  • Any export dump file can be imported into a later release of the Oracle database.

  • The Import utility cannot read export dump files created by the Export utility of a later maintenance release or version. For example, a release 9.2 export dump file cannot be imported by a release 9.0.1 Import utility.

  • Whenever a lower version of the Export utility runs with a later version of the Oracle database, categories of database objects that did not exist in the earlier version are excluded from the export.

  • Export files generated by Oracle9i Export, either direct path or conventional path, are incompatible with earlier releases of Import and can be imported only with Oracle9i Import. When backward compatibility is an issue, use the earlier release or version of the Export utility against the Oracle9i database.

Examples of Using Different Releases of Export and Import

Table 22-5 shows some examples of which Export and Import releases to use when moving data between different releases of the Oracle database.

Table 22-5 Using Different Releases of Export and Import

Export from -> Import to | Use Export Release | Use Import Release
8.1.6 -> 8.1.6           | 8.1.6              | 8.1.6
8.1.5 -> 8.0.6           | 8.0.6              | 8.0.6
8.1.7 -> 8.1.6           | 8.1.6              | 8.1.6
9.0.1 -> 8.1.6           | 8.1.6              | 8.1.6
9.0.1 -> 9.0.2           | 9.0.1              | 9.0.2
9.0.2 -> 10.1.0          | 9.0.2              | 10.1.0
10.1.0 -> 9.0.2          | 9.0.2              | 9.0.2


Table 22-5 covers moving data only between the original Export and Import utilities. For Oracle Database 10g release 1 (10.1) or higher, Oracle recommends the Data Pump Export and Import utilities in most cases because these utilities provide greatly enhanced performance compared to the original Export and Import utilities.


See Also:

Oracle Database Upgrade Guide for more information about exporting and importing data between different releases, including releases higher than 10.1


What's New in Database Utilities?

This section describes new features of the Oracle Database 11g utilities, and provides pointers to additional information. For information about features that were introduced in earlier releases of Oracle Database, refer to the documentation for those releases.

Oracle Database 11g Release 2 (11.2.0.3) New Features in Oracle Database Utilities

This section lists the major new and changed features in Oracle Database 11g release 2 (11.2.0.3).

Support for both of these storage formats is available only on Oracle Database 11g Release 2 (11.2.0.3) or higher with a redo compatibility setting of 11.2.0.3 or higher. See "Supported Datatypes, Storage Attributes, and Database and Redo Log File Versions" for more information about Oracle LogMiner supported data types.

Oracle Database 11g Release 2 (11.2.0.2) New Features in Oracle Database Utilities

This section lists the major new and changed features in Oracle Database 11g release 2 (11.2.0.2).

Oracle Database 11g Release 2 (11.2.0.1) New Features in Oracle Database Utilities

This section lists the major new and changed features in Oracle Database 11g release 2 (11.2.0.1).

Data Pump Export and Data Pump Import

External Tables

Original Export

Other Utilities

New Features in Oracle Database Utilities 11g Release 1

This section lists the major new features that have been added for Oracle Database 11g release 1 (11.1).

Data Pump Export and Data Pump Import

For the Data Pump Export and Data Pump Import products, new features have been added that allow you to do the following:

Additionally, Data Pump now performs a one-time automatic restart of workers (on the same instance) that have stopped due to certain errors. For example, if someone manually stops a process, the worker is automatically restarted one time, on the same instance. If the process stops a second time, it must be manually restarted.

External Tables

For the External Tables functionality, the following new features have been added to the ORACLE_DATAPUMP access driver:

LogMiner Utility

LogMiner now provides the following additional support:

See "Supported Datatypes and Table Storage Attributes".

Automatic Diagnostic Repository Command Interpreter (ADRCI)

The Automatic Diagnostic Repository Command Interpreter (ADRCI) provides a way for you to work with the diagnostic data contained in the Automatic Diagnostic Repository (ADR). The ADR is a file-based repository for database diagnostic data, such as traces, dumps, the alert log, health monitor reports, and more. It has a unified directory structure across multiple instances and multiple products.

See Chapter 16, "ADRCI: ADR Command Interpreter" for more information.


6 The Data Pump API

The Data Pump API, DBMS_DATAPUMP, provides a high-speed mechanism to move all or part of the data and metadata for a site from one database to another. The Data Pump Export and Data Pump Import utilities are based on the Data Pump API.

This chapter provides details about how the Data Pump API works. The following topics are covered:

How Does the Client Interface to the Data Pump API Work?

The main structure used in the client interface is a job handle, which appears to the caller as an integer. Handles are created using the DBMS_DATAPUMP.OPEN or DBMS_DATAPUMP.ATTACH function. Other sessions can attach to a job to monitor and control its progress. This allows a DBA to start up a job before departing from work and then watch the progress of the job from home. Handles are session specific. The same job can create different handles in different sessions.

Job States

There is a state associated with each phase of a job, as follows:

  • Undefined - before a handle is created

  • Defining - when the handle is first created

  • Executing - when the DBMS_DATAPUMP.START_JOB procedure is executed

  • Completing - when the job has finished its work and the Data Pump processes are ending

  • Completed - when the job is completed

  • Stop Pending - when an orderly job shutdown has been requested

  • Stopping - when the job is stopping

  • Idling - the period between the time that a DBMS_DATAPUMP.ATTACH is executed to attach to a stopped job and the time that a DBMS_DATAPUMP.START_JOB is executed to restart that job

  • Not Running - when a master table exists for a job that is not running (has no Data Pump processes associated with it)

Performing DBMS_DATAPUMP.START_JOB on a job in an Idling state will return it to an Executing state.

If all users execute DBMS_DATAPUMP.DETACH to detach from a job in the Defining state, then the job will be totally removed from the database.

When a job abnormally terminates or when an instance running the job is shut down, the job is placed in the Not Running state if it was previously executing or idling. It can then be restarted by the user.
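For example, a stopped or idling job could be reattached to and restarted from another session with a short PL/SQL block such as the following; the job name EXAMPLE1 and owner SYSTEM are placeholders:

DECLARE
  h1 NUMBER;
BEGIN
  -- Attach to the existing job by name and owner, then resume it.
  h1 := DBMS_DATAPUMP.ATTACH('EXAMPLE1','SYSTEM');
  DBMS_DATAPUMP.START_JOB(h1);
  DBMS_DATAPUMP.DETACH(h1);
END;
/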

The master control process is active in the Defining, Idling, Executing, Stopping, Stop Pending, and Completing states. It is also active briefly in the Stopped and Completed states. The master table for the job exists in all states except the Undefined state. Worker processes are only active in the Executing and Stop Pending states, and briefly in the Defining state for import jobs.

Detaching while a job is in the Executing state will not halt the job, and you can re-attach to an executing job at any time to resume obtaining status information about the job.

A Detach can occur explicitly, when the DBMS_DATAPUMP.DETACH procedure is executed, or it can occur implicitly when a Data Pump API session is run down, when the Data Pump API is unable to communicate with a Data Pump job, or when the DBMS_DATAPUMP.STOP_JOB procedure is executed.

The Not Running state indicates that a master table exists outside the context of an executing job. This will occur if a job has been stopped (probably to be restarted later) or if a job has abnormally terminated. This state can also be seen momentarily during job state transitions at the beginning of a job, and at the end of a job before the master table is dropped. Note that the Not Running state is shown only in the DBA_DATAPUMP_JOBS view and the USER_DATAPUMP_JOBS view. It is never returned by the GET_STATUS procedure.
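For example, jobs in the Not Running state can be seen by querying the view directly:

SELECT owner_name, job_name, operation, job_mode, state
  FROM dba_datapump_jobs;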

Table 6-1 shows the valid job states in which DBMS_DATAPUMP procedures can be executed. The states listed are valid for both export and import jobs, unless otherwise noted.

Table 6-1 Valid Job States in Which DBMS_DATAPUMP Procedures Can Be Executed

ADD_FILE
  Valid states: Defining (valid for both export and import jobs); Executing and Idling (valid only for specifying dump files for export jobs)
  Description: Specifies a file for the dump file set, the log file, or the SQLFILE output.

ATTACH
  Valid states: Defining, Executing, Idling, Stopped, Completed, Completing, Not Running
  Description: Allows a user session to monitor a job or to restart a stopped job. The attach will fail if the dump file set or master table for the job have been deleted or altered in any way.

DATA_FILTER
  Valid states: Defining
  Description: Restricts data processed by a job.

DETACH
  Valid states: All
  Description: Disconnects a user session from a job.

GET_DUMPFILE_INFO
  Valid states: All
  Description: Retrieves dump file header information.

GET_STATUS
  Valid states: All, except Completed, Not Running, Stopped, and Undefined
  Description: Obtains the status of a job.

LOG_ENTRY
  Valid states: Defining, Executing, Idling, Stop Pending, Completing
  Description: Adds an entry to the log file.

METADATA_FILTER
  Valid states: Defining
  Description: Restricts metadata processed by a job.

METADATA_REMAP
  Valid states: Defining
  Description: Remaps metadata processed by a job.

METADATA_TRANSFORM
  Valid states: Defining
  Description: Alters metadata processed by a job.

OPEN
  Valid states: Undefined
  Description: Creates a new job.

SET_PARALLEL
  Valid states: Defining, Executing, Idling
  Description: Specifies parallelism for a job.

SET_PARAMETER
  Valid states: Defining (see Footnote 1)
  Description: Alters default processing by a job.

START_JOB
  Valid states: Defining, Idling
  Description: Begins or resumes execution of a job.

STOP_JOB
  Valid states: Defining, Executing, Idling, Stop Pending
  Description: Initiates shutdown of a job.

WAIT_FOR_JOB
  Valid states: All, except Completed, Not Running, Stopped, and Undefined
  Description: Waits for a job to end.


Footnote 1: The ENCRYPTION_PASSWORD parameter can be entered during the Idling state, as well as during the Defining state.

What Are the Basic Steps in Using the Data Pump API?

To use the Data Pump API, you use the procedures provided in the DBMS_DATAPUMP package. The following steps list the basic activities involved in using the Data Pump API. The steps are presented in the order in which the activities would generally be performed:

  1. Execute the DBMS_DATAPUMP.OPEN procedure to create a Data Pump job and its infrastructure.

  2. Define any parameters for the job.

  3. Start the job.

  4. Optionally, monitor the job until it completes.

  5. Optionally, detach from the job and reattach at a later time.

  6. Optionally, stop the job.

  7. Optionally, restart the job, if desired.

These concepts are illustrated in the examples provided in the next section.


See Also:

Oracle Database PL/SQL Packages and Types Reference for a complete description of the DBMS_DATAPUMP package

Examples of Using the Data Pump API

This section provides the following examples to help you get started using the Data Pump API:

The examples are in the form of PL/SQL scripts. If you choose to copy these scripts and run them, then you must first perform some setup using SQL*Plus, such as creating a directory object that the scripts can use and enabling server output.
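For example, a minimal setup might look like the following; the directory path is a placeholder, and the grant assumes the scripts are run as user SYSTEM:

-- Create the directory object used by the example scripts.
CREATE OR REPLACE DIRECTORY dmpdir AS '/u01/app/oracle/dmpdir';
GRANT READ, WRITE ON DIRECTORY dmpdir TO system;

-- Enable display of DBMS_OUTPUT messages in the SQL*Plus session.
SET SERVEROUTPUT ON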

Example 6-1 Performing a Simple Schema Export

The PL/SQL script in this example shows how to use the Data Pump API to perform a simple schema export of the HR schema. It shows how to create a job, start it, and monitor it. Additional information about the example is contained in the comments within the script. To keep the example simple, exceptions from any of the API calls will not be trapped. However, in a production environment, Oracle recommends that you define exception handlers and call GET_STATUS to retrieve more detailed error information when a failure occurs.

Connect as user SYSTEM to use this script.

DECLARE
  ind NUMBER;              -- Loop index
  h1 NUMBER;               -- Data Pump job handle
  percent_done NUMBER;     -- Percentage of job complete
  job_state VARCHAR2(30);  -- To keep track of job state
  le ku$_LogEntry;         -- For WIP and error messages
  js ku$_JobStatus;        -- The job status from get_status
  jd ku$_JobDesc;          -- The job description from get_status
  sts ku$_Status;          -- The status object returned by get_status
BEGIN

-- Create a (user-named) Data Pump job to do a schema export.

  h1 := DBMS_DATAPUMP.OPEN('EXPORT','SCHEMA',NULL,'EXAMPLE1','LATEST');

-- Specify a single dump file for the job (using the handle just returned)
-- and a directory object, which must already be defined and accessible
-- to the user running this procedure.

  DBMS_DATAPUMP.ADD_FILE(h1,'example1.dmp','DMPDIR');

-- A metadata filter is used to specify the schema that will be exported.

  DBMS_DATAPUMP.METADATA_FILTER(h1,'SCHEMA_EXPR','IN (''HR'')');

-- Start the job. An exception will be generated if something is not set up
-- properly. 

  DBMS_DATAPUMP.START_JOB(h1);

-- The export job should now be running. In the following loop, the job
-- is monitored until it completes. In the meantime, progress information is
-- displayed.
 
  percent_done := 0;
  job_state := 'UNDEFINED';
  while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
    dbms_datapump.get_status(h1,
           dbms_datapump.ku$_status_job_error +
           dbms_datapump.ku$_status_job_status +
           dbms_datapump.ku$_status_wip,-1,job_state,sts);
    js := sts.job_status;

-- If the percentage done changed, display the new value.

    if js.percent_done != percent_done
    then
      dbms_output.put_line('*** Job percent done = ' ||
                           to_char(js.percent_done));
      percent_done := js.percent_done;
    end if;

-- If any work-in-progress (WIP) or error messages were received for the job,
-- display them.

   if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
    then
      le := sts.wip;
    else
      if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
      then
        le := sts.error;
      else
        le := null;
      end if;
    end if;
    if le is not null
    then
      ind := le.FIRST;
      while ind is not null loop
        dbms_output.put_line(le(ind).LogText);
        ind := le.NEXT(ind);
      end loop;
    end if;
  end loop;

-- Indicate that the job finished and detach from it.

  dbms_output.put_line('Job has completed');
  dbms_output.put_line('Final job state = ' || job_state);
  dbms_datapump.detach(h1);
END;
/

Example 6-2 Importing a Dump File and Remapping All Schema Objects

The script in this example imports the dump file created in Example 6-1 (an export of the hr schema). All schema objects are remapped from the hr schema to the blake schema. To keep the example simple, exceptions from any of the API calls will not be trapped. However, in a production environment, Oracle recommends that you define exception handlers and call GET_STATUS to retrieve more detailed error information when a failure occurs.

Connect as user SYSTEM to use this script.

DECLARE
  ind NUMBER;              -- Loop index
  h1 NUMBER;               -- Data Pump job handle
  percent_done NUMBER;     -- Percentage of job complete
  job_state VARCHAR2(30);  -- To keep track of job state
  le ku$_LogEntry;         -- For WIP and error messages
  js ku$_JobStatus;        -- The job status from get_status
  jd ku$_JobDesc;          -- The job description from get_status
  sts ku$_Status;          -- The status object returned by get_status
BEGIN

-- Create a (user-named) Data Pump job to do a "full" import (everything
-- in the dump file without filtering).

  h1 := DBMS_DATAPUMP.OPEN('IMPORT','FULL',NULL,'EXAMPLE2');

-- Specify the single dump file for the job (using the handle just returned)
-- and directory object, which must already be defined and accessible
-- to the user running this procedure. This is the dump file created by
-- the export operation in the first example.

  DBMS_DATAPUMP.ADD_FILE(h1,'example1.dmp','DMPDIR');

-- A metadata remap will map all schema objects from HR to BLAKE.

  DBMS_DATAPUMP.METADATA_REMAP(h1,'REMAP_SCHEMA','HR','BLAKE');

-- If a table already exists in the destination schema, skip it (leave
-- the preexisting table alone). This is the default, but it does not hurt
-- to specify it explicitly.

  DBMS_DATAPUMP.SET_PARAMETER(h1,'TABLE_EXISTS_ACTION','SKIP');

-- Start the job. An exception is returned if something is not set up properly.

  DBMS_DATAPUMP.START_JOB(h1);

-- The import job should now be running. In the following loop, the job is 
-- monitored until it completes. In the meantime, progress information is 
-- displayed. Note: this is identical to the export example.
 
 percent_done := 0;
  job_state := 'UNDEFINED';
  while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
    dbms_datapump.get_status(h1,
           dbms_datapump.ku$_status_job_error +
           dbms_datapump.ku$_status_job_status +
           dbms_datapump.ku$_status_wip,-1,job_state,sts);
    js := sts.job_status;

-- If the percentage done changed, display the new value.

     if js.percent_done != percent_done
    then
      dbms_output.put_line('*** Job percent done = ' ||
                           to_char(js.percent_done));
      percent_done := js.percent_done;
    end if;

-- If any work-in-progress (WIP) or Error messages were received for the job,
-- display them.

       if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
    then
      le := sts.wip;
    else
      if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
      then
        le := sts.error;
      else
        le := null;
      end if;
    end if;
    if le is not null
    then
      ind := le.FIRST;
      while ind is not null loop
        dbms_output.put_line(le(ind).LogText);
        ind := le.NEXT(ind);
      end loop;
    end if;
  end loop;

-- Indicate that the job finished and gracefully detach from it. 

  dbms_output.put_line('Job has completed');
  dbms_output.put_line('Final job state = ' || job_state);
  dbms_datapump.detach(h1);
END;
/

Example 6-3 Using Exception Handling During a Simple Schema Export

The script in this example shows a simple schema export using the Data Pump API. It extends Example 6-1 to show how to use exception handling to catch the SUCCESS_WITH_INFO case, and how to use the GET_STATUS procedure to retrieve additional information about errors. If you want to get exception information about a DBMS_DATAPUMP.OPEN or DBMS_DATAPUMP.ATTACH failure, then call DBMS_DATAPUMP.GET_STATUS with a DBMS_DATAPUMP.KU$_STATUS_JOB_ERROR information mask and a NULL job handle to retrieve the error details.

Connect as user SYSTEM to use this example.

DECLARE
  ind NUMBER;              -- Loop index
  spos NUMBER;             -- String starting position
  slen NUMBER;             -- String length for output
  h1 NUMBER;               -- Data Pump job handle
  percent_done NUMBER;     -- Percentage of job complete
  job_state VARCHAR2(30);  -- To keep track of job state
  le ku$_LogEntry;         -- For WIP and error messages
  js ku$_JobStatus;        -- The job status from get_status
  jd ku$_JobDesc;          -- The job description from get_status
  sts ku$_Status;          -- The status object returned by get_status
BEGIN

-- Create a (user-named) Data Pump job to do a schema export.

  h1 := dbms_datapump.open('EXPORT','SCHEMA',NULL,'EXAMPLE3','LATEST');

-- Specify a single dump file for the job (using the handle just returned)
-- and a directory object, which must already be defined and accessible
-- to the user running this procedure.

  dbms_datapump.add_file(h1,'example3.dmp','DMPDIR');

-- A metadata filter is used to specify the schema that will be exported.

  dbms_datapump.metadata_filter(h1,'SCHEMA_EXPR','IN (''HR'')');

-- Start the job. An exception will be returned if something is not set up
-- properly. One possible exception that will be handled differently is the
-- success_with_info exception. success_with_info means the job started
-- successfully, but more information is available through get_status about
-- conditions around the start_job that the user might want to be aware of.

    begin
    dbms_datapump.start_job(h1);
    dbms_output.put_line('Data Pump job started successfully');
    exception
      when others then
        if sqlcode = dbms_datapump.success_with_info_num
        then
          dbms_output.put_line('Data Pump job started with info available:');
          dbms_datapump.get_status(h1,
                                   dbms_datapump.ku$_status_job_error,0,
                                   job_state,sts);
          if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
          then
            le := sts.error;
            if le is not null
            then
              ind := le.FIRST;
              while ind is not null loop
                dbms_output.put_line(le(ind).LogText);
                ind := le.NEXT(ind);
              end loop;
            end if;
          end if;
        else
          raise;
        end if;
  end;

-- The export job should now be running. In the following loop, we will monitor
-- the job until it completes. In the meantime, progress information is
-- displayed.
 
 percent_done := 0;
  job_state := 'UNDEFINED';
  while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
    dbms_datapump.get_status(h1,
           dbms_datapump.ku$_status_job_error +
           dbms_datapump.ku$_status_job_status +
           dbms_datapump.ku$_status_wip,-1,job_state,sts);
    js := sts.job_status;

-- If the percentage done changed, display the new value.

     if js.percent_done != percent_done
    then
      dbms_output.put_line('*** Job percent done = ' ||
                           to_char(js.percent_done));
      percent_done := js.percent_done;
    end if;

-- Display any work-in-progress (WIP) or error messages that were received for
-- the job.

      if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
    then
      le := sts.wip;
    else
      if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
      then
        le := sts.error;
      else
        le := null;
      end if;
    end if;
    if le is not null
    then
      ind := le.FIRST;
      while ind is not null loop
        dbms_output.put_line(le(ind).LogText);
        ind := le.NEXT(ind);
      end loop;
    end if;
  end loop;

-- Indicate that the job finished and detach from it.

  dbms_output.put_line('Job has completed');
  dbms_output.put_line('Final job state = ' || job_state);
  dbms_datapump.detach(h1);

-- Any exceptions that propagated to this point will be captured. The
-- details will be retrieved from get_status and displayed.

  exception
    when others then
      dbms_output.put_line('Exception in Data Pump job');
      dbms_datapump.get_status(h1,dbms_datapump.ku$_status_job_error,0,
                               job_state,sts);
      if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
      then
        le := sts.error;
        if le is not null
        then
          ind := le.FIRST;
          while ind is not null loop
            spos := 1;
            slen := length(le(ind).LogText);
            if slen > 255
            then
              slen := 255;
            end if;
            while slen > 0 loop
              dbms_output.put_line(substr(le(ind).LogText,spos,slen));
              spos := spos + 255;
              slen := length(le(ind).LogText) + 1 - spos;
            end loop;
            ind := le.NEXT(ind);
          end loop;
        end if;
      end if;
END;
/

Example 6-4 Displaying Dump File Information

The PL/SQL script in this example shows how to use the Data Pump API procedure DBMS_DATAPUMP.GET_DUMPFILE_INFO to display information about a Data Pump dump file outside the context of any Data Pump job. This example displays information contained in the example1.dmp dump file created by the sample PL/SQL script in Example 6-1.

This PL/SQL script can also be used to display information for dump files created by original Export (the exp utility) as well as by the ORACLE_DATAPUMP external tables access driver.

Connect as user SYSTEM to use this script.

SET VERIFY OFF
SET FEEDBACK OFF
 
DECLARE
  ind        NUMBER;
  fileType   NUMBER;
  value      VARCHAR2(2048);
  infoTab    KU$_DUMPFILE_INFO := KU$_DUMPFILE_INFO();
 
BEGIN
  --
  -- Get the information about the dump file into the infoTab.
  --
  BEGIN
    DBMS_DATAPUMP.GET_DUMPFILE_INFO('example1.dmp','DMPDIR',infoTab,fileType);
    DBMS_OUTPUT.PUT_LINE('---------------------------------------------');
    DBMS_OUTPUT.PUT_LINE('Information for file: example1.dmp');
 
    --
    -- Determine what type of file is being looked at.
    --
    CASE fileType
      WHEN 1 THEN
        DBMS_OUTPUT.PUT_LINE('example1.dmp is a Data Pump dump file');
      WHEN 2 THEN
        DBMS_OUTPUT.PUT_LINE('example1.dmp is an Original Export dump file');
      ELSE
        DBMS_OUTPUT.PUT_LINE('example1.dmp is not a dump file');
        DBMS_OUTPUT.PUT_LINE('---------------------------------------------');
    END CASE;
 
  EXCEPTION
    WHEN OTHERS THEN
      DBMS_OUTPUT.PUT_LINE('---------------------------------------------');
      DBMS_OUTPUT.PUT_LINE('Error retrieving information for file: ' ||
                           'example1.dmp');
      DBMS_OUTPUT.PUT_LINE(SQLERRM);
      DBMS_OUTPUT.PUT_LINE('---------------------------------------------');
      fileType := 0;
  END;
 
  --
  -- If a valid file type was returned, then loop through the infoTab and 
  -- display each item code and value returned.
  --
  IF fileType > 0
  THEN
    DBMS_OUTPUT.PUT_LINE('The information table has ' || 
                          TO_CHAR(infoTab.COUNT) || ' entries');
    DBMS_OUTPUT.PUT_LINE('---------------------------------------------');
 
    ind := infoTab.FIRST;
    WHILE ind IS NOT NULL
    LOOP
      --
      -- The following item codes return boolean values in the form
      -- of a '1' or a '0'. We'll display them as 'Yes' or 'No'.
      --
      value := NVL(infoTab(ind).value, 'NULL');
      IF infoTab(ind).item_code IN
         (DBMS_DATAPUMP.KU$_DFHDR_MASTER_PRESENT,
          DBMS_DATAPUMP.KU$_DFHDR_DIRPATH,
          DBMS_DATAPUMP.KU$_DFHDR_METADATA_COMPRESSED,
          DBMS_DATAPUMP.KU$_DFHDR_DATA_COMPRESSED,
          DBMS_DATAPUMP.KU$_DFHDR_METADATA_ENCRYPTED,
          DBMS_DATAPUMP.KU$_DFHDR_DATA_ENCRYPTED,
          DBMS_DATAPUMP.KU$_DFHDR_COLUMNS_ENCRYPTED)
      THEN
        CASE value
          WHEN '1' THEN value := 'Yes';
          WHEN '0' THEN value := 'No';
        END CASE;
      END IF;
 
      --
      -- Display each item code with an appropriate name followed by
      -- its value.
      --
      CASE infoTab(ind).item_code
        --
        -- The following item codes have been available since Oracle Database 10g
        -- Release 10.2.
        --
        WHEN DBMS_DATAPUMP.KU$_DFHDR_FILE_VERSION   THEN
          DBMS_OUTPUT.PUT_LINE('Dump File Version:         ' || value);
        WHEN DBMS_DATAPUMP.KU$_DFHDR_MASTER_PRESENT THEN
          DBMS_OUTPUT.PUT_LINE('Master Table Present:      ' || value);
        WHEN DBMS_DATAPUMP.KU$_DFHDR_GUID THEN
          DBMS_OUTPUT.PUT_LINE('Job Guid:                  ' || value);
        WHEN DBMS_DATAPUMP.KU$_DFHDR_FILE_NUMBER THEN
          DBMS_OUTPUT.PUT_LINE('Dump File Number:          ' || value);
        WHEN DBMS_DATAPUMP.KU$_DFHDR_CHARSET_ID  THEN
          DBMS_OUTPUT.PUT_LINE('Character Set ID:          ' || value);
        WHEN DBMS_DATAPUMP.KU$_DFHDR_CREATION_DATE THEN
          DBMS_OUTPUT.PUT_LINE('Creation Date:             ' || value);
        WHEN DBMS_DATAPUMP.KU$_DFHDR_FLAGS THEN
          DBMS_OUTPUT.PUT_LINE('Internal Dump Flags:       ' || value);
        WHEN DBMS_DATAPUMP.KU$_DFHDR_JOB_NAME THEN
          DBMS_OUTPUT.PUT_LINE('Job Name:                  ' || value);
        WHEN DBMS_DATAPUMP.KU$_DFHDR_PLATFORM THEN
          DBMS_OUTPUT.PUT_LINE('Platform Name:             ' || value);
        WHEN DBMS_DATAPUMP.KU$_DFHDR_INSTANCE THEN
          DBMS_OUTPUT.PUT_LINE('Instance Name:             ' || value);
        WHEN DBMS_DATAPUMP.KU$_DFHDR_LANGUAGE THEN
          DBMS_OUTPUT.PUT_LINE('Language Name:             ' || value);
        WHEN DBMS_DATAPUMP.KU$_DFHDR_BLOCKSIZE THEN
          DBMS_OUTPUT.PUT_LINE('Dump File Block Size:      ' || value);
        WHEN DBMS_DATAPUMP.KU$_DFHDR_DIRPATH THEN
          DBMS_OUTPUT.PUT_LINE('Direct Path Mode:          ' || value);
        WHEN DBMS_DATAPUMP.KU$_DFHDR_METADATA_COMPRESSED THEN
          DBMS_OUTPUT.PUT_LINE('Metadata Compressed:       ' || value);
        WHEN DBMS_DATAPUMP.KU$_DFHDR_DB_VERSION THEN
          DBMS_OUTPUT.PUT_LINE('Database Version:          ' || value);
 
        --
        -- The following item codes were introduced in Oracle Database 11g
        -- Release 11.1
        --
        WHEN DBMS_DATAPUMP.KU$_DFHDR_MASTER_PIECE_COUNT THEN
          DBMS_OUTPUT.PUT_LINE('Master Table Piece Count:  ' || value);
        WHEN DBMS_DATAPUMP.KU$_DFHDR_MASTER_PIECE_NUMBER THEN
          DBMS_OUTPUT.PUT_LINE('Master Table Piece Number: ' || value);
        WHEN DBMS_DATAPUMP.KU$_DFHDR_DATA_COMPRESSED THEN
          DBMS_OUTPUT.PUT_LINE('Table Data Compressed:     ' || value);
        WHEN DBMS_DATAPUMP.KU$_DFHDR_METADATA_ENCRYPTED THEN
          DBMS_OUTPUT.PUT_LINE('Metadata Encrypted:        ' || value);
        WHEN DBMS_DATAPUMP.KU$_DFHDR_DATA_ENCRYPTED THEN
          DBMS_OUTPUT.PUT_LINE('Table Data Encrypted:      ' || value);
        WHEN DBMS_DATAPUMP.KU$_DFHDR_COLUMNS_ENCRYPTED THEN
          DBMS_OUTPUT.PUT_LINE('TDE Columns Encrypted:     ' || value);
 
        --
        -- For the DBMS_DATAPUMP.KU$_DFHDR_ENCRYPTION_MODE item code a
        -- numeric value is returned. So examine that numeric value
        -- and display an appropriate name value for it.
        --
        WHEN DBMS_DATAPUMP.KU$_DFHDR_ENCRYPTION_MODE THEN
          CASE TO_NUMBER(value)
            WHEN DBMS_DATAPUMP.KU$_DFHDR_ENCMODE_NONE THEN
              DBMS_OUTPUT.PUT_LINE('Encryption Mode:           None');
            WHEN DBMS_DATAPUMP.KU$_DFHDR_ENCMODE_PASSWORD THEN
              DBMS_OUTPUT.PUT_LINE('Encryption Mode:           Password');
            WHEN DBMS_DATAPUMP.KU$_DFHDR_ENCMODE_DUAL THEN
              DBMS_OUTPUT.PUT_LINE('Encryption Mode:           Dual');
            WHEN DBMS_DATAPUMP.KU$_DFHDR_ENCMODE_TRANS THEN
              DBMS_OUTPUT.PUT_LINE('Encryption Mode:           Transparent');
          END CASE;
        ELSE NULL;  -- Ignore other, unrecognized dump file attributes.
      END CASE;
      ind := infoTab.NEXT(ind);
    END LOOP;
  END IF;
END;
/

7 SQL*Loader Concepts

This chapter explains the basic concepts of loading data into an Oracle database with SQL*Loader. This chapter covers the following topics:

SQL*Loader Features

SQL*Loader loads data from external files into tables of an Oracle database. It has a powerful data parsing engine that puts little limitation on the format of the data in the data file. You can use SQL*Loader to do the following:

A typical SQL*Loader session takes as input a control file, which controls the behavior of SQL*Loader, and one or more data files. The output of SQL*Loader is an Oracle database (where the data is loaded), a log file, a bad file, and potentially, a discard file. An example of the flow of a SQL*Loader session is shown in Figure 7-1.

Figure 7-1 SQL*Loader Overview

Description of Figure 7-1 follows
Description of "Figure 7-1 SQL*Loader Overview"

SQL*Loader Parameters

SQL*Loader is invoked when you specify the sqlldr command and, optionally, parameters that establish session characteristics.

In situations where you always use the same parameters for which the values seldom change, it can be more efficient to specify parameters using the following methods, rather than on the command line:

  • A parameter file, specified on the command line with the PARFILE parameter

  • The OPTIONS clause at the beginning of the SQL*Loader control file

Parameters specified on the command line override any parameter values specified in a parameter file or OPTIONS clause.
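For example, seldom-changing session defaults could be kept in a parameter file, passed on the command line as PARFILE=example.par; the contents shown are placeholders:

userid=scott
control=example.ctl
log=example.log

Alternatively, an OPTIONS clause at the top of the control file can set similar defaults:

OPTIONS (SKIP=0, ERRORS=50, DIRECT=TRUE)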


See Also:


SQL*Loader Control File

The control file is a text file written in a language that SQL*Loader understands. The control file tells SQL*Loader where to find the data, how to parse and interpret the data, where to insert the data, and more.

Although not precisely defined, a control file can be said to have three sections.

The first section contains session-wide information, for example:

The second section consists of one or more INTO TABLE blocks. Each of these blocks contains information about the table into which the data is to be loaded, such as the table name and the columns of the table.

The third section is optional and, if present, contains input data.

Some control file syntax considerations to keep in mind are:

Input Data and Data Files

SQL*Loader reads data from one or more files (or operating system equivalents of files) specified in the control file. From SQL*Loader's perspective, the data in the data file is organized as records. A particular data file can be in fixed record format, variable record format, or stream record format. The record format can be specified in the control file with the INFILE parameter. If no record format is specified, then the default is stream record format.


Note:

If data is specified inside the control file (that is, INFILE * was specified in the control file), then the data is interpreted in the stream record format with the default record terminator.

Fixed Record Format

A file is in fixed record format when all records in a data file are the same byte length. Although this format is the least flexible, it results in better performance than variable or stream format. Fixed format is also simple to specify. For example:

INFILE datafile_name "fix n"

This example specifies that SQL*Loader should interpret the particular data file as being in fixed record format where every record is n bytes long.

Example 7-1 shows a control file that specifies a data file (example.dat) to be interpreted in the fixed record format. The data file in the example contains five physical records; each record has fields that contain the number and name of an employee. Each of the five records is 11 bytes long, including spaces. For the purposes of explaining this example, periods are used to represent spaces in the records, but in the actual records there would be no periods. With that in mind, the first physical record is 396,...ty,. which is exactly eleven bytes (assuming a single-byte character set). The second record is 4922,beth, followed by the newline character (\n) which is the eleventh byte, and so on. (Newline characters are not required with the fixed record format; it is simply used here to illustrate that if used, it counts as a byte in the record length.)

Note that the length is always interpreted in bytes, even if character-length semantics are in effect for the file. This is necessary because the file could contain a mix of fields, some of which are processed with character-length semantics and others which are processed with byte-length semantics. See "Character-Length Semantics".

Example 7-1 Loading Data in Fixed Record Format

load data
infile 'example.dat'  "fix 11"
into table example
fields terminated by ',' optionally enclosed by '"'
(col1, col2)

example.dat:

396,...ty,.4922,beth,\n
68773,ben,.
1,.."dave",
5455,mike,.

Variable Record Format

A file is in variable record format when the length of each record in a character field is included at the beginning of each record in the data file. This format provides some added flexibility over the fixed record format and a performance advantage over the stream record format. For example, you can specify a data file that is to be interpreted as being in variable record format as follows:

INFILE "datafile_name" "var n"

In this example, n specifies the number of bytes in the record length field. If n is not specified, then SQL*Loader assumes a length of 5 bytes. Specifying n larger than 40 will result in an error.

Example 7-2 shows a control file specification that tells SQL*Loader to look for data in the data file example.dat and to expect variable record format where the record length fields are 3 bytes long. The example.dat data file consists of three physical records. The first is specified to be 009 (that is, 9) bytes long, the second is 010 bytes long (that is, 10, including a 1-byte newline), and the third is 012 bytes long (also including a 1-byte newline). Note that newline characters are not required with the variable record format. This example also assumes a single-byte character set for the data file.

The lengths are always interpreted in bytes, even if character-length semantics are in effect for the file. This is necessary because the file could contain a mix of fields, some processed with character-length semantics and others processed with byte-length semantics. See "Character-Length Semantics".

Example 7-2 Loading Data in Variable Record Format

load data
infile 'example.dat'  "var 3"
into table example
fields terminated by ',' optionally enclosed by '"'
(col1 char(5),
 col2 char(7))

example.dat:
009hello,cd,010world,im,
012my,name is,

Stream Record Format

A file is in stream record format when the records are not specified by size; instead SQL*Loader forms records by scanning for the record terminator. Stream record format is the most flexible format, but there can be a negative effect on performance. The specification of a data file to be interpreted as being in stream record format looks similar to the following:

INFILE datafile_name ["str terminator_string"]

The terminator_string is specified as either 'char_string' or X'hex_string' where:

  • 'char_string' is a string of characters enclosed in single or double quotation marks

  • X'hex_string' is a byte string in hexadecimal format

When the terminator_string contains special (nonprintable) characters, it should be specified as an X'hex_string'. However, some nonprintable characters can be specified in a 'char_string' by using a backslash escape sequence. For example:

  • \n indicates a line feed

  • \t indicates a horizontal tab

  • \f indicates a form feed

  • \v indicates a vertical tab

  • \r indicates a carriage return

If the character set specified with the NLS_LANG parameter for your session is different from the character set of the data file, then character strings are converted to the character set of the data file. This is done before SQL*Loader checks for the default record terminator.

Hexadecimal strings are assumed to be in the character set of the data file, so no conversion is performed.

On UNIX-based platforms, if no terminator_string is specified, then SQL*Loader defaults to the line feed character, \n.

On Windows NT, if no terminator_string is specified, then SQL*Loader uses either \n or \r\n as the record terminator, depending on which one it finds first in the data file. This means that if you know that one or more records in your data file have \n embedded in a field, but you want \r\n to be used as the record terminator, then you must specify it.

Example 7-3 illustrates loading data in stream record format where the terminator string is specified using a character string, '|\n'. The use of the backslash character allows the character string to specify the nonprintable line feed character.

Example 7-3 Loading Data in Stream Record Format

load data
infile 'example.dat'  "str '|\n'"
into table example
fields terminated by ',' optionally enclosed by '"'
(col1 char(5),
 col2 char(7))

example.dat:
hello,world,|
james,bond,|

Logical Records

SQL*Loader organizes the input data into physical records, according to the specified record format. By default a physical record is a logical record, but for added flexibility, SQL*Loader can be instructed to combine several physical records into a logical record.

SQL*Loader can be instructed to follow one of the following logical record-forming strategies: combine a fixed number of physical records to form each logical record (CONCATENATE), or combine physical records into a logical record while a continuation condition is true (CONTINUEIF). A brief control-file sketch follows.
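For illustration only (the data file, table, and column names are assumed), a control file that uses CONCATENATE to combine every two physical records into one logical record might look like this:

load data
infile 'example.dat'
concatenate 2
into table example
fields terminated by ','
(col1, col2)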

Data Fields

Once a logical record is formed, field setting on the logical record is done. Field setting is a process in which SQL*Loader uses control-file field specifications to determine which parts of logical record data correspond to which control-file fields. It is possible for two or more field specifications to claim the same data. Also, it is possible for a logical record to contain data that is not claimed by any control-file field specification.

Most control-file field specifications claim a particular part of the logical record. This mapping takes the following forms (a control-file sketch follows the list):

  • The byte position of the data field's beginning, end, or both, can be specified. This specification form is not the most flexible, but it provides high field-setting performance.

  • The strings delimiting (enclosing and/or terminating) a particular data field can be specified. A delimited data field is assumed to start where the last data field ended, unless the byte position of the start of the data field is specified.

  • The byte offset and/or the length of the data field can be specified. This way each field starts a specified number of bytes from where the last one ended and continues for a specified length.

  • Length-value datatypes can be used. In this case, the first n number of bytes of the data field contain information about how long the rest of the data field is.
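The following control-file fragment sketches these forms; the table, data file, and field names are illustrative only. Here empno is located by byte position, ename and job are delimited fields, and comments is a length-value (VARCHARC) field whose first two bytes state its length:

load data
infile 'emp.dat'
into table emp
(empno     position(1:4) integer external,
 ename     char terminated by ',',
 job       char terminated by ',' enclosed by '"',
 comments  varcharc(2,100))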

LOBFILEs and Secondary Data Files (SDFs)

LOB data can be lengthy enough that it makes sense to load it from a LOBFILE. In LOBFILEs, LOB data instances are still considered to be in fields (predetermined size, delimited, length-value), but these fields are not organized into records (the concept of a record does not exist within LOBFILEs). Therefore, the processing overhead of dealing with records is avoided. This type of organization of data is ideal for LOB loading.

For example, you might use LOBFILEs to load employee names, employee IDs, and employee resumes. You could read the employee names and IDs from the main data files and you could read the resumes, which can be quite lengthy, from LOBFILEs.

You might also use LOBFILEs to facilitate the loading of XML data. You can use XML columns to hold data that models structured and semistructured data. Such data can be quite lengthy.
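As a sketch only (the table, column, and file names are assumed), a control file that reads each resume from a separate LOBFILE named in the main data file might look like this:

load data
infile 'employees.dat'
into table employees
fields terminated by ','
(employee_id  char(6),
 last_name    char(25),
 resume_file  filler char(60),
 resume       lobfile(resume_file) terminated by eof)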

Secondary data files (SDFs) are similar in concept to primary data files. Like primary data files, SDFs are a collection of records, and each record is made up of fields. The SDFs are specified on a per control-file-field basis. Only a collection_fld_spec can name an SDF as its data source.

SDFs are specified using the SDF parameter. The SDF parameter can be followed by either the file specification string, or a FILLER field that is mapped to a data field containing one or more file specification strings.

Data Conversion and Datatype Specification

During a conventional path load, data fields in the data file are converted into columns in the database (direct path loads are conceptually similar, but the implementation is different). There are two conversion steps:

  1. SQL*Loader uses the field specifications in the control file to interpret the format of the data file, parse the input data, and populate the bind arrays that correspond to a SQL INSERT statement using that data.

  2. The Oracle database accepts the data and executes the INSERT statement to store the data in the database.

The Oracle database uses the datatype of the column to convert the data into its final, stored form. Keep in mind the distinction between a field in a data file and a column in the database. Remember also that the field datatypes defined in a SQL*Loader control file are not the same as the column datatypes.

Discarded and Rejected Records

Records read from the input file might not be inserted into the database. Such records are placed in either a bad file or a discard file.

The Bad File

The bad file contains records that were rejected, either by SQL*Loader or by the Oracle database. If you do not specify a bad file and there are rejected records, then SQL*Loader automatically creates one. It will have the same name as the data file, with a .bad extension. Some of the possible reasons for rejection are discussed in the next sections.
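For example, to name the bad file explicitly rather than accept the default name, you might invoke SQL*Loader as follows (the control, log, and bad file names are illustrative):

sqlldr scott CONTROL=emp.ctl LOG=emp.log BAD=emp.bad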

SQL*Loader Rejects

Data file records are rejected by SQL*Loader when the input format is invalid. For example, if the second enclosure delimiter is missing, or if a delimited field exceeds its maximum length, then SQL*Loader rejects the record. Rejected records are placed in the bad file.

Oracle Database Rejects

After a data file record is accepted for processing by SQL*Loader, it is sent to the Oracle database for insertion into a table as a row. If the Oracle database determines that the row is valid, then the row is inserted into the table. If the row is determined to be invalid, then the record is rejected and SQL*Loader puts it in the bad file. The row may be invalid, for example, because a key is not unique, because a required field is null, or because the field contains invalid data for the Oracle datatype.


The Discard File

As SQL*Loader executes, it may create a file called the discard file. This file is created only when it is needed, and only if you have specified that a discard file should be enabled. The discard file contains records that were filtered out of the load because they did not match any record-selection criteria specified in the control file.

The discard file therefore contains records that were not inserted into any table in the database. You can specify the maximum number of such records that the discard file can accept. Data written to any database table is not written to the discard file.
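A hypothetical invocation that enables a discard file and caps the number of discarded records might look like the following (the file names and limit are illustrative):

sqlldr scott CONTROL=emp.ctl DISCARD=emp.dsc DISCARDMAX=100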


Log File and Logging Information

When SQL*Loader begins execution, it creates a log file. If it cannot create a log file, then execution terminates. The log file contains a detailed summary of the load, including a description of any errors that occurred during the load.

Conventional Path Loads, Direct Path Loads, and External Table Loads

SQL*Loader provides the following methods to load data: conventional path loads, direct path loads, and external table loads.

Conventional Path Loads

During conventional path loads, the input records are parsed according to the field specifications, and each data field is copied to its corresponding bind array. When the bind array is full (or no more data is left to read), an array insert is executed.

SQL*Loader stores LOB fields after a bind array insert is done. Thus, if there are any errors in processing the LOB field (for example, the LOBFILE could not be found), then the LOB field is left empty. Note also that because LOB data is loaded after the array insert has been performed, BEFORE and AFTER row triggers may not work as expected for LOB columns. This is because the triggers fire before SQL*Loader has a chance to load the LOB contents into the column. For instance, suppose you are loading a LOB column, C1, with data and you want a BEFORE row trigger to examine the contents of this LOB column and derive a value to be loaded for some other column, C2, based on its examination. This is not possible because the LOB contents will not have been loaded at the time the trigger fires.

Direct Path Loads

A direct path load parses the input records according to the field specifications, converts the input field data to the column datatype, and builds a column array. The column array is passed to a block formatter, which creates data blocks in Oracle database block format. The newly formatted database blocks are written directly to the database, bypassing much of the data processing that normally takes place. Direct path load is much faster than conventional path load, but entails several restrictions.


See Also:

"Direct Path Load"

Parallel Direct Path

A parallel direct path load allows multiple direct path load sessions to concurrently load the same data segments (allows intrasegment parallelism). Parallel direct path is more restrictive than direct path.
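As an illustration (the control file names are assumed), two concurrent sessions loading the same segment in parallel direct path mode could be started as follows:

sqlldr scott CONTROL=load1.ctl DIRECT=TRUE PARALLEL=TRUE
sqlldr scott CONTROL=load2.ctl DIRECT=TRUE PARALLEL=TRUE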

External Table Loads

External tables are defined as tables that do not reside in the database, and can be in any format for which an access driver is provided. Oracle Database provides two access drivers: ORACLE_LOADER and ORACLE_DATAPUMP. By providing the database with metadata describing an external table, the database is able to expose the data in the external table as if it were data residing in a regular database table.

An external table load creates an external table for data that is contained in a data file. The load executes INSERT statements to insert the data from the data file into the target table.
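The following SQL is a sketch of the general shape of such a load; the directory object, data file name, external table definition, and target table (emp_target, assumed to already exist with matching columns) are assumptions made for illustration:

CREATE DIRECTORY ext_dir AS '/data/load';

CREATE TABLE emp_ext (
  empno  NUMBER(4),
  ename  VARCHAR2(10),
  sal    NUMBER(7,2)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('emp.dat')
);

INSERT INTO emp_target SELECT * FROM emp_ext;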

The advantages of using external table loads over conventional path and direct path loads are as follows:

  • If a data file is big enough, then an external table load attempts to load that file in parallel.

  • An external table load allows modification of the data being loaded by using SQL functions and PL/SQL functions as part of the INSERT statement that is used to create the external table.


Note:

An external table load is not supported using a named pipe on Windows NT.

Choosing External Tables Versus SQL*Loader

The record parsing of external tables and SQL*Loader is very similar, so normally there is not a major performance difference for the same record format. However, due to the different architecture of external tables and SQL*Loader, there are situations in which one method may be more appropriate than the other.

Use external tables for the best load performance in the following situations:

  • You want to transform the data as it is being loaded into the database

  • You want to use transparent parallel processing without having to split the external data first

Use SQL*Loader for the best load performance in the following situations:

  • You want to load data remotely

  • Transformations are not required on the data, and the data does not need to be loaded in parallel

  • You want to load data, and additional indexing of the staging table is required

Behavior Differences Between SQL*Loader and External Tables

This section describes important differences between loading data with external tables, using the ORACLE_LOADER access driver, as opposed to loading data with SQL*Loader conventional and direct path loads. This information does not apply to the ORACLE_DATAPUMP access driver.

Multiple Primary Input Data Files

If there are multiple primary input data files with SQL*Loader loads, then a bad file and a discard file are created for each input data file. With external table loads, there is only one bad file and one discard file for all input data files. If parallel access drivers are used for the external table load, then each access driver has its own bad file and discard file.

Syntax and Datatypes

The following are not supported with external table loads:

  • Use of CONTINUEIF or CONCATENATE to combine multiple physical records into a single logical record.

  • Loading of the following SQL*Loader datatypes: GRAPHIC, GRAPHIC EXTERNAL, and VARGRAPHIC

  • Use of the following database column types: LONGs, nested tables, VARRAYs, REFs, primary key REFs, and SIDs

Byte-Order Marks

With SQL*Loader, if a primary data file uses a Unicode character set (UTF8 or UTF16) and it also contains a byte-order mark (BOM), then the byte-order mark is written at the beginning of the corresponding bad and discard files. With external table loads, the byte-order mark is not written at the beginning of the bad and discard files.

Default Character Sets, Date Masks, and Decimal Separator

For fields in a data file, the settings of NLS environment variables on the client determine the default character set, date mask, and decimal separator. For fields in external tables, the database settings of the NLS parameters determine the default character set, date masks, and decimal separator.

Use of the Backslash Escape Character

In SQL*Loader, you can use the backslash (\) escape character to mark a single quotation mark as a single quotation mark, as follows:

FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\''

In external tables, the use of the backslash escape character within a string will raise an error. The workaround is to use double quotation marks to mark the separation string, as follows:

TERMINATED BY ',' ENCLOSED BY "'"

Loading Objects, Collections, and LOBs

You can use SQL*Loader to bulk load objects, collections, and LOBs. It is assumed that you are familiar with the concept of objects and with Oracle's implementation of object support as described in Oracle Database Concepts and in the Oracle Database Administrator's Guide.

Supported Object Types

SQL*Loader supports loading of the following two object types:

column objects

When a column of a table is of some object type, the objects in that column are referred to as column objects. Conceptually such objects are stored in their entirety in a single column position in a row. These objects do not have object identifiers and cannot be referenced.

If the object type of the column object is declared to be nonfinal, then SQL*Loader allows a derived type (or subtype) to be loaded into the column object.
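A minimal control-file sketch for loading a column object follows; the table name, attribute names, and data file are illustrative only:

load data
infile 'departments.dat'
into table departments
fields terminated by ','
(dept_no    char(3),
 dept_name  char(20),
 dept_mgr   column object
   (name    char(30),
    age     integer external(3)))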

row objects

These objects are stored in tables, known as object tables, that have columns corresponding to the attributes of the object. The object tables have an additional system-generated column, called SYS_NC_OID$, that stores system-generated unique identifiers (OIDs) for each of the objects in the table. Columns in other tables can refer to these objects by using the OIDs.

If the object type of the object table is declared to be nonfinal, then SQL*Loader allows a derived type (or subtype) to be loaded into the row object.

Supported Collection Types

SQL*Loader supports loading of the following two collection types:

Nested Tables

A nested table is a table that appears as a column in another table. All operations that can be performed on other tables can also be performed on nested tables.

VARRAYs

VARRAYs are variable sized arrays. An array is an ordered set of built-in types or objects, called elements. Each array element is of the same type and has an index, which is a number corresponding to the element's position in the VARRAY.

When creating a VARRAY type, you must specify the maximum size. Once you have declared a VARRAY type, it can be used as the datatype of a column of a relational table, as an object type attribute, or as a PL/SQL variable.
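For example, the following SQL (the type, table, and column names are illustrative) declares a VARRAY type with a maximum of five elements and then uses it as a column datatype:

CREATE TYPE phone_list_typ AS VARRAY(5) OF VARCHAR2(25);

CREATE TABLE customers (
  cust_id  NUMBER,
  phones   phone_list_typ
);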


See Also:

"Loading Collections (Nested Tables and VARRAYs)" for details on using SQL*Loader control file data definition language to load these collection types

Supported LOB Types

A LOB is a large object type. This release of SQL*Loader supports loading of four LOB types:

  • BLOB: a LOB containing unstructured binary data

  • CLOB: a LOB containing character data

  • NCLOB: a LOB containing characters in a database national character set

  • BFILE: a BLOB stored outside of the database tablespaces in a server-side operating system file

LOBs can be column datatypes, and except for NCLOB, they can be an object's attribute datatypes. LOBs can have an actual value, they can be null, or they can be "empty."


See Also:

"Loading LOBs" for details on using SQL*Loader control file data definition language to load these LOB types

Partitioned Object Support

SQL*Loader supports loading partitioned objects in the database. A partitioned object in an Oracle database is a table or index consisting of partitions (pieces) that have been grouped, typically by common logical attributes. For example, sales data for the year 2000 might be partitioned by month. The data for each month is stored in a separate partition of the sales table. Each partition is stored in a separate segment of the database and can have different physical attributes.

SQL*Loader partitioned object support enables SQL*Loader to load a single partition of a partitioned table, all partitions of a partitioned table, or a nonpartitioned table.

Application Development: Direct Path Load API

Oracle provides a direct path load API for application developers. See the Oracle Call Interface Programmer's Guide for more information.

SQL*Loader Case Studies

SQL*Loader features are illustrated in a variety of case studies. The case studies are based upon the Oracle demonstration database tables, emp and dept, owned by the user scott. (In some case studies, additional columns have been added.) The case studies are numbered 1 through 11, starting with the simplest scenario and progressing in complexity.


Note:

Files for use in the case studies are located in the $ORACLE_HOME/rdbms/demo directory. These files are installed when you install the Oracle Database 11g Examples (formerly Companion) media. See Table 7-1 for the names of the files.

The following is a summary of the case studies:

Case Study Files

Generally, each case study is comprised of the following types of files:

  • Control files (for example, ulcase5.ctl)

  • Data files (for example, ulcase5.dat)

  • Setup files (for example, ulcase5.sql)

These files are installed when you install the Oracle Database 11g Examples (formerly Companion) media. They are installed in the $ORACLE_HOME/rdbms/demo directory.

If the sample data for the case study is contained within the control file, then there will be no .dat file for that case.

Case study 2 does not require any special set up, so there is no .sql script for that case. Case study 7 requires that you run both a starting (setup) script and an ending (cleanup) script.

Table 7-1 lists the files associated with each case.

Table 7-1 Case Studies and Their Related Files

Case | .ctl | .dat | .sql
1  | ulcase1.ctl  | N/A          | ulcase1.sql
2  | ulcase2.ctl  | ulcase2.dat  | N/A
3  | ulcase3.ctl  | N/A          | ulcase3.sql
4  | ulcase4.ctl  | ulcase4.dat  | ulcase4.sql
5  | ulcase5.ctl  | ulcase5.dat  | ulcase5.sql
6  | ulcase6.ctl  | ulcase6.dat  | ulcase6.sql
7  | ulcase7.ctl  | ulcase7.dat  | ulcase7s.sql, ulcase7e.sql
8  | ulcase8.ctl  | ulcase8.dat  | ulcase8.sql
9  | ulcase9.ctl  | ulcase9.dat  | ulcase9.sql
10 | ulcase10.ctl | N/A          | ulcase10.sql
11 | ulcase11.ctl | ulcase11.dat | ulcase11.sql


Running the Case Studies

In general, you use the following steps to run the case studies (be sure you are in the $ORACLE_HOME/rdbms/demo directory, which is where the case study files are located):

  1. At the system prompt, type sqlplus and press Enter to start SQL*Plus. At the user-name prompt, enter scott. At the password prompt, enter tiger.

    The SQL prompt is displayed.

  2. At the SQL prompt, execute the SQL script for the case study. For example, to execute the SQL script for case study 1, enter the following:

    SQL> @ulcase1
    

    This prepares and populates tables for the case study and then returns you to the system prompt.

  3. At the system prompt, invoke SQL*Loader and run the case study, as follows:

    sqlldr USERID=scott CONTROL=ulcase1.ctl LOG=ulcase1.log
    

    Substitute the appropriate control file name and log file name for the CONTROL and LOG parameters and press Enter. When you are prompted for a password, type tiger and then press Enter.

Be sure to read the control file for each case study before you run it. The beginning of the control file contains information about what is being demonstrated in the case study and any other special information you need to know. For example, case study 6 requires that you add DIRECT=TRUE to the SQL*Loader command line.

Case Study Log Files

Log files for the case studies are not provided in the $ORACLE_HOME/rdbms/demo directory. This is because the log file for each case study is produced when you execute the case study, provided that you use the LOG parameter. If you do not want to produce a log file, then omit the LOG parameter from the command line.

Checking the Results of a Case Study

To check the results of running a case study, start SQL*Plus and perform a select operation from the table that was loaded in the case study. This is done, as follows:

  1. At the system prompt, type sqlplus and press Enter to start SQL*Plus. At the user-name prompt, enter scott. At the password prompt, enter tiger.

    The SQL prompt is displayed.

  2. At the SQL prompt, use the SELECT statement to select all rows from the table that the case study loaded. For example, if the table emp was loaded, then enter:

    SQL> SELECT * FROM emp;
    

    The contents of each row in the emp table will be displayed.

Original Export

21 Original Export

This chapter describes how to use the original Export utility (exp) to write data from an Oracle database into an operating system file in binary format. This file is stored outside the database, and it can be read into another Oracle database using the original Import utility.


Note:

Original Export is desupported for general use as of Oracle Database 11g. The only supported use of original Export in Oracle Database 11g is backward migration of XMLType data to Oracle Database 10g release 2 (10.2) or earlier. Therefore, Oracle recommends that you use the new Data Pump Export and Import utilities, except in the following situations which require original Export and Import:
  • You want to import files that were created using the original Export utility (exp).

  • You want to export files that will be imported using the original Import utility (imp). An example of this would be if you wanted to export data from Oracle Database 10g and then import it into an earlier database release.


The following topics are discussed in this chapter:

What is the Export Utility?

The Export utility provides a simple way for you to transfer data objects between Oracle databases, even if they reside on platforms with different hardware and software configurations.

When you run Export against an Oracle database, objects (such as tables) are extracted, followed by their related objects (such as indexes, comments, and grants), if any.

An Export file is an Oracle binary-format dump file that is typically located on disk or tape. The dump files can be transferred using FTP or physically transported (in the case of tape) to a different site. The files can then be used with the Import utility to transfer data between databases that are on systems not connected through a network. The files can also be used as backups in addition to normal backup procedures.

Export dump files can only be read by the Oracle Import utility. The version of the Import utility cannot be earlier than the version of the Export utility used to create the dump file.

You can also display the contents of an export file without actually performing an import. To do this, use the Import SHOW parameter. See "SHOW" for more information. To load data from ASCII fixed-format or delimited files, use the SQL*Loader utility.

Before Using Export

Before you begin using Export, be sure you take care of the following items, each described in detail in the following sections: running the catexp.sql or catalog.sql script, ensuring that there is sufficient disk space, and verifying your access privileges.

Running catexp.sql or catalog.sql

To use Export, you must run the script catexp.sql or catalog.sql (which runs catexp.sql) after the database has been created or migrated to a newer release.
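For example, assuming the script is in its standard location under $ORACLE_HOME/rdbms/admin, a DBA could run it from SQL*Plus as follows:

SQL> CONNECT / AS SYSDBA
SQL> @?/rdbms/admin/catexp.sql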

The catexp.sql or catalog.sql script needs to be run only once on a database. The script performs the following tasks to prepare the database for export and import operations:

  • Creates the necessary export and import views in the data dictionary

  • Creates the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles

  • Assigns all necessary privileges to the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles

  • Assigns EXP_FULL_DATABASE and IMP_FULL_DATABASE to the DBA role

  • Records the version of catexp.sql that has been installed

The EXP_FULL_DATABASE and IMP_FULL_DATABASE roles are powerful. Database administrators should use caution when granting these roles to users.

Ensuring Sufficient Disk Space for Export Operations

Before you run Export, ensure that there is sufficient disk or tape storage space to write the export file. If there is not enough space, then Export terminates with a write-failure error.

You can use table sizes to estimate the maximum space needed. You can find table sizes in the USER_SEGMENTS view of the Oracle data dictionary. The following query displays disk usage for all tables:

SELECT SUM(BYTES) FROM USER_SEGMENTS WHERE SEGMENT_TYPE='TABLE';

The result of the query does not include disk space used for data stored in LOB (large object) or VARRAY columns or in partitioned tables.


See Also:

Oracle Database Reference for more information about dictionary views

Verifying Access Privileges for Export and Import Operations

To use Export, you must have the CREATE SESSION privilege on an Oracle database. This privilege belongs to the CONNECT role established during database creation. To export tables owned by another user, you must have the EXP_FULL_DATABASE role enabled. This role is granted to all database administrators (DBAs).

If you do not have the system privileges contained in the EXP_FULL_DATABASE role, then you cannot export objects contained in another user's schema. For example, you cannot export a table in another user's schema, even if you created a synonym for it.

Several system schemas cannot be exported because they are not user schemas; they contain Oracle-managed data and metadata. Examples of schemas that are not exported include SYS, ORDSYS, and MDSYS.

Invoking Export

You can invoke Export and specify parameters by using any of the following methods: command-line entries, parameter files, or interactive mode.

Before you use one of these methods, be sure to read the descriptions of the available parameters. See "Export Parameters".

Invoking Export as SYSDBA

SYSDBA is used internally and has specialized functions; its behavior is not the same as for generalized users. Therefore, you should not typically need to invoke Export as SYSDBA except in the following situations:

  • At the request of Oracle technical support

  • When importing a transportable tablespace set

Command-Line Entries

You can specify all valid parameters and their values from the command line using the following syntax (you will then be prompted for a username and password):

exp PARAMETER=value

or

exp PARAMETER=(value1,value2,...,valuen)

The number of parameters cannot exceed the maximum length of a command line on the system.

Parameter Files

You can specify all valid parameters and their values in a parameter file. Storing the parameters in a file allows them to be easily modified or reused, and is the recommended method for invoking Export. If you use different parameters for different databases, then you can have multiple parameter files.

Create the parameter file using any flat file text editor. The command-line option PARFILE=filename tells Export to read the parameters from the specified file rather than from the command line. For example:
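Assuming a parameter file named params.dat (the file name is illustrative), the invocation would be:

exp PARFILE=params.dat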

The syntax for parameter file specifications is one of the following:

PARAMETER=value
PARAMETER=(value)
PARAMETER=(value1, value2, ...)

The following example shows a partial parameter file listing:

FULL=y
FILE=dba.dmp
GRANTS=y
INDEXES=y
CONSISTENT=y

Note:

The maximum size of the parameter file may be limited by the operating system. The name of the parameter file is subject to the file-naming conventions of the operating system.

You can add comments to the parameter file by preceding them with the pound (#) sign. Export ignores all characters to the right of the pound (#) sign.

You can specify a parameter file at the same time that you are entering parameters on the command line. In fact, you can specify the same parameter in both places. The position of the PARFILE parameter and other parameters on the command line determines which parameters take precedence. For example, assume the parameter file params.dat contains the parameter INDEXES=y and Export is invoked with the following line:

exp PARFILE=params.dat INDEXES=n

In this case, because INDEXES=n occurs after PARFILE=params.dat, INDEXES=n overrides the value of the INDEXES parameter in the parameter file.

Interactive Mode

If you prefer to be prompted for the value of each parameter, then you can simply specify exp at the command line. You will be prompted for a username and password.

Commonly used parameters are then displayed. You can accept the default value, if one is provided, or enter a different value. The command-line interactive method does not provide prompts for all functionality and is provided only for backward compatibility. If you want to use an interactive interface, then Oracle recommends that you use the Oracle Enterprise Manager Export Wizard.

Restrictions When Using Export's Interactive Method

Keep in mind the following points when you use the interactive method:

  • In user mode, Export prompts for all usernames to be included in the export before exporting any data. To indicate the end of the user list and begin the current Export session, press Enter.

  • In table mode, if you do not specify a schema prefix, then Export defaults to the exporter's schema or the schema containing the last table exported in the current session.

    For example, if beth is a privileged user exporting in table mode, then Export assumes that all tables are in the beth schema until another schema is specified. Only a privileged user (someone with the EXP_FULL_DATABASE role) can export tables in another user's schema.

  • If you specify a null table list to the prompt "Table to be exported," then the Export utility exits.

Getting Online Help

Export provides online help. Enter exp help=y on the command line to invoke Export help.

Export Modes

The Export utility supports four modes of operation: full database mode, user mode, table mode, and tablespace mode.

See Table 21-1 for a list of objects that are exported and imported in each mode.


Note:

The original Export utility does not export any table that was created with deferred segment creation and has not had a segment created for it. The most common way for a segment to be created is to store a row into the table, though other operations such as ALTER TABLE ALLOCATE EXTENTS will also create a segment. If a segment does exist for the table and the table is exported, then the SEGMENT CREATION DEFERRED clause is not included in the CREATE TABLE statement that is executed by the original Import utility.

You can use conventional path Export or direct path Export to export in any mode except tablespace mode. The differences between conventional path Export and direct path Export are described in "Conventional Path Export Versus Direct Path Export".

Table 21-1 Objects Exported in Each Mode

Object | Table Mode | User Mode | Full Database Mode | Tablespace Mode
Analyze cluster | No | Yes | Yes | No
Analyze tables/statistics | Yes | Yes | Yes | Yes
Application contexts | No | No | Yes | No
Auditing information | Yes | Yes | Yes | No
B-tree, bitmap, domain function-based indexes | Yes (Footnote 1) | Yes | Yes | Yes
Cluster definitions | No | Yes | Yes | Yes
Column and table comments | Yes | Yes | Yes | Yes
Database links | No | Yes | Yes | No
Default roles | No | No | Yes | No
Dimensions | No | Yes | Yes | No
Directory aliases | No | No | Yes | No
External tables (without data) | Yes | Yes | Yes | No
Foreign function libraries | No | Yes | Yes | No
Indexes owned by users other than table owner | Yes (Privileged users only) | Yes | Yes | Yes
Index types | No | Yes | Yes | No
Java resources and classes | No | Yes | Yes | No
Job queues | No | Yes | Yes | No
Nested table data | Yes | Yes | Yes | Yes
Object grants | Yes (Only for tables and indexes) | Yes | Yes | Yes
Object type definitions used by table | Yes | Yes | Yes | Yes
Object types | No | Yes | Yes | No
Operators | No | Yes | Yes | No
Password history | No | No | Yes | No
Postinstance actions and objects | No | No | Yes | No
Postschema procedural actions and objects | No | Yes | Yes | No
Posttable actions | Yes | Yes | Yes | Yes
Posttable procedural actions and objects | Yes | Yes | Yes | Yes
Preschema procedural objects and actions | No | Yes | Yes | No
Pretable actions | Yes | Yes | Yes | Yes
Pretable procedural actions | Yes | Yes | Yes | Yes
Private synonyms | No | Yes | Yes | No
Procedural objects | No | Yes | Yes | No
Profiles | No | No | Yes | No
Public synonyms | No | No | Yes | No
Referential integrity constraints | Yes | Yes | Yes | No
Refresh groups | No | Yes | Yes | No
Resource costs | No | No | Yes | No
Role grants | No | No | Yes | No
Roles | No | No | Yes | No
Rollback segment definitions | No | No | Yes | No
Security policies for table | Yes | Yes | Yes | Yes
Sequence numbers | No | Yes | Yes | No
Snapshot logs | No | Yes | Yes | No
Snapshots and materialized views | No | Yes | Yes | No
System privilege grants | No | No | Yes | No
Table constraints (primary, unique, check) | Yes | Yes | Yes | Yes
Table data | Yes | Yes | Yes | Yes
Table definitions | Yes | Yes | Yes | Yes
Tablespace definitions | No | No | Yes | No
Tablespace quotas | No | No | Yes | No
Triggers | Yes | Yes (Footnote 2) | Yes (Footnote 3) | Yes
Triggers owned by other users | Yes (Privileged users only) | No | No | No
User definitions | No | No | Yes | No
User proxies | No | No | Yes | No
User views | No | Yes | Yes | No
User-stored procedures, packages, and functions | No | Yes | Yes | No


Footnote 1 Nonprivileged users can export and import only indexes they own on tables they own. They cannot export indexes they own that are on tables owned by other users, nor can they export indexes owned by other users on their own tables. Privileged users can export and import indexes on the specified users' tables, even if the indexes are owned by other users. Indexes owned by the specified user on other users' tables are not included, unless those other users are included in the list of users to export.

Footnote 2 Nonprivileged and privileged users can export and import all triggers owned by the user, even if they are on tables owned by other users.

Footnote 3 A full export does not export triggers owned by schema SYS. You must manually re-create SYS triggers either before or after the full import. Oracle recommends that you re-create them after the import in case they define actions that would impede progress of the import.

Table-Level and Partition-Level Export

You can export tables, partitions, and subpartitions in the following ways:

  • Table-level Export: exports all data from the specified tables

  • Partition-level Export: exports only data from the specified source partitions or subpartitions

In all modes, partitioned data is exported in a format such that partitions or subpartitions can be imported selectively.

Table-Level Export

In table-level Export, you can export an entire table (partitioned or nonpartitioned) along with its indexes and other table-dependent objects. If the table is partitioned, then all of its partitions and subpartitions are also exported. This applies to both direct path Export and conventional path Export. You can perform a table-level export in any Export mode.

Partition-Level Export

In partition-level Export, you can export one or more specified partitions or subpartitions of a table. You can only perform a partition-level export in table mode.

For information about how to specify table-level and partition-level Exports, see "TABLES".

Export Parameters

This section contains descriptions of the Export command-line parameters.

BUFFER

Default: operating system-dependent. See your Oracle operating system-specific documentation to determine the default value for this parameter.

Specifies the size, in bytes, of the buffer used to fetch rows. As a result, this parameter determines the maximum number of rows in an array fetched by Export. Use the following formula to calculate the buffer size:

buffer_size = rows_in_array * maximum_row_size

If you specify zero, then the Export utility fetches only one row at a time.

Tables with columns of type LOB, LONG, BFILE, REF, ROWID, LOGICAL ROWID, or DATE are fetched one row at a time.


Note:

The BUFFER parameter applies only to conventional path Export. It has no effect on a direct path Export. For direct path Exports, use the RECORDLENGTH parameter to specify the size of the buffer that Export uses for writing to the export file.

Example: Calculating Buffer Size

This section shows an example of how to calculate buffer size.

The following table is created:

CREATE TABLE sample (name varchar(30), weight number);

The maximum size of the name column is 30, plus 2 bytes for the indicator. The maximum size of the weight column is 22 (the size of the internal representation for Oracle numbers), plus 2 bytes for the indicator.

Therefore, the maximum row size is 56 (30+2+22+2).

To perform array operations for 100 rows, a buffer size of 5600 should be specified.
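Putting this together, an Export invocation that uses the calculated buffer size might look like the following (the table and dump file names are illustrative):

exp scott TABLES=sample BUFFER=5600 FILE=sample.dmp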

COMPRESS

Default: y

Specifies how Export and Import manage the initial extent for table data.

The default, COMPRESS=y, causes Export to flag table data for consolidation into one initial extent upon import. If extent sizes are large (for example, because of the PCTINCREASE parameter), then the allocated space will be larger than the space required to hold the data.

If you specify COMPRESS=n, then Export uses the current storage parameters, including the values of initial extent size and next extent size. The values of the parameters may be the values specified in the CREATE TABLE or ALTER TABLE statements or the values modified by the database system. For example, the NEXT extent size value may be modified if the table grows and if the PCTINCREASE parameter is nonzero.

The COMPRESS parameter does not work with bitmapped tablespaces.


Note:

Although the actual consolidation is performed upon import, you can specify the COMPRESS parameter only when you export, not when you import. The Export utility, not the Import utility, generates the data definitions, including the storage parameter definitions. Therefore, if you specify COMPRESS=y when you export, then you can import the data in consolidated form only.


Note:

Neither LOB data nor subpartition data is compressed. Rather, values of initial extent size and next extent size at the time of export are used.

CONSISTENT

Default: n

Specifies whether Export uses the SET TRANSACTION READ ONLY statement to ensure that the data seen by Export is consistent to a single point in time and does not change during the execution of the exp command. You should specify CONSISTENT=y when you anticipate that other applications will be updating the target data after an export has started.
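For example, a user-mode export taken while other applications may be updating the tables could be invoked as follows (the dump file name is illustrative):

exp scott FILE=scott.dmp OWNER=scott CONSISTENT=y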

If you use CONSISTENT=n, then each table is usually exported in a single transaction. However, if a table contains nested tables, then the outer table and each inner table are exported as separate transactions. If a table is partitioned, then each partition is exported as a separate transaction.

Therefore, if nested tables and partitioned tables are being updated by other applications, then the data that is exported could be inconsistent. To minimize this possibility, export those tables at a time when updates are not being done.

Table 21-2 shows a sequence of events by two users: user1 exports partitions in a table and user2 updates data in that table.

Table 21-2 Sequence of Events During Updates by Two Users

Time Sequence | user1 | user2
1 | Begins export of TAB:P1 | No activity
2 | No activity | Updates TAB:P2; updates TAB:P1; commits transaction
3 | Ends export of TAB:P1 | No activity
4 | Exports TAB:P2 | No activity


If the export uses CONSISTENT=y, then none of the updates by user2 are written to the export file.

If the export uses CONSISTENT=n, then the updates to TAB:P1 are not written to the export file. However, the updates to TAB:P2 are written to the export file, because the update transaction is committed before the export of TAB:P2 begins. As a result, the user2 transaction is only partially recorded in the export file, making it inconsistent.

If you use CONSISTENT=y and the volume of updates is large, then the rollback segment usage will be large. In addition, the export of each table will be slower, because the rollback segment must be scanned for uncommitted transactions.

Keep in mind the following points about using CONSISTENT=y:

  • CONSISTENT=y is unsupported for exports that are performed when you are connected as user SYS or you are using AS SYSDBA, or both.

  • Export of certain metadata may require the use of the SYS schema within recursive SQL. In such situations, the use of CONSISTENT=y will be ignored. Oracle recommends that you avoid making metadata changes during an export process in which CONSISTENT=y is selected.

  • To minimize the time and space required for such exports, you should export tables that need to remain consistent separately from those that do not. For example, export the emp and dept tables together in a consistent export, and then export the remainder of the database in a second pass.

  • A "snapshot too old" error occurs when rollback space is used up, and space taken up by committed transactions is reused for new transactions. Reusing space in the rollback segment allows database integrity to be preserved with minimum space requirements, but it imposes a limit on the amount of time that a read-consistent image can be preserved.

    If a committed transaction has been overwritten and the information is needed for a read-consistent view of the database, then a "snapshot too old" error results.

    To avoid this error, you should minimize the time taken by a read-consistent export. (Do this by restricting the number of objects exported and, if possible, by reducing the database transaction rate.) Also, make the rollback segment as large as possible.


    Note:

    Rollback segments will be deprecated in a future Oracle database release. Oracle recommends that you use automatic undo management instead.

CONSTRAINTS

Default: y

Specifies whether the Export utility exports table constraints.

DIRECT

Default: n

Specifies the use of direct path Export.

Specifying DIRECT=y causes Export to extract data by reading the data directly, bypassing the SQL command-processing layer (evaluating buffer). This method can be much faster than a conventional path Export.

For information about direct path Exports, including security and performance considerations, see "Invoking a Direct Path Export".

FEEDBACK

Default: 0 (zero)

Specifies that Export should display a progress meter in the form of a period for n number of rows exported. For example, if you specify FEEDBACK=10, then Export displays a period each time 10 rows are exported. The FEEDBACK value applies to all tables being exported; it cannot be set individually for each table.

FILE

Default: expdat.dmp

Specifies the names of the export dump files. The default extension is .dmp, but you can specify any extension. Because Export supports multiple export files, you can specify multiple file names to be used. For example:

exp scott FILE = dat1.dmp, dat2.dmp, dat3.dmp FILESIZE=2048

When Export reaches the value you have specified for the maximum FILESIZE, Export stops writing to the current file, opens another export file with the next name specified by the FILE parameter, and continues until complete or the maximum value of FILESIZE is again reached. If you do not specify sufficient export file names to complete the export, then Export prompts you to provide additional file names.

FILESIZE

Default: Data is written to one file until the maximum size, as specified in Table 21-3, is reached.

Export supports writing to multiple export files, and Import can read from multiple export files. If you specify a value (byte limit) for the FILESIZE parameter, then Export will write only the number of bytes you specify to each dump file.

When the amount of data Export must write exceeds the maximum value you specified for FILESIZE, it will get the name of the next export file from the FILE parameter (see "FILE" for more information) or, if it has used all the names specified in the FILE parameter, then it will prompt you to provide a new export file name. If you do not specify a value for FILESIZE (note that a value of 0 is equivalent to not specifying FILESIZE), then Export will write to only one file, regardless of the number of files specified in the FILE parameter.


Note:

If the space requirements of your export file exceed the available disk space, then Export will terminate, and you will have to repeat the Export after making sufficient disk space available.

The FILESIZE parameter has a maximum value equal to the maximum value that can be stored in 64 bits.

Table 21-3 shows that the maximum size for dump files depends on the operating system you are using and on the release of the Oracle database that you are using.

Table 21-3 Maximum Size for Dump Files

Operating System | Release of Oracle Database | Maximum Size
Any | Before 8.1.5 | 2 gigabytes
32-bit | 8.1.5 | 2 gigabytes
64-bit | 8.1.5 and later | Unlimited
32-bit with 32-bit files | Any | 2 gigabytes
32-bit with 64-bit files | 8.1.6 and later | Unlimited


The maximum value that can be stored in a file is dependent on your operating system. You should verify this maximum value in your Oracle operating system-specific documentation before specifying FILESIZE. You should also ensure that the file size you specify for Export is supported on the system on which Import will run.

The FILESIZE value can also be specified as a number followed by KB (number of kilobytes). For example, FILESIZE=2KB is the same as FILESIZE=2048. Similarly, MB specifies megabytes (1024 * 1024) and GB specifies gigabytes (1024**3). B remains the shorthand for bytes; the number is not multiplied to obtain the final file size (FILESIZE=2048B is the same as FILESIZE=2048).

FLASHBACK_SCN

Default: none

Specifies the system change number (SCN) that Export will use to enable flashback. The export operation is performed with data consistent as of this specified SCN.


See Also:

Oracle Database Advanced Application Developer's Guide for more information about using flashback

The following is an example of specifying an SCN. When the export is performed, the data will be consistent as of SCN 3482971.

> exp FILE=exp.dmp FLASHBACK_SCN=3482971

FLASHBACK_TIME

Default: none

Enables you to specify a timestamp. Export finds the SCN that most closely matches the specified timestamp. This SCN is used to enable flashback. The export operation is performed with data consistent as of this SCN.

You can specify the time in any format that the DBMS_FLASHBACK.ENABLE_AT_TIME procedure accepts. This means that you can specify it in either of the following ways:

> exp FILE=exp.dmp FLASHBACK_TIME="TIMESTAMP '2006-05-01 11:00:00'"

> exp FILE=exp.dmp FLASHBACK_TIME="TO_TIMESTAMP('12-02-2005 14:35:00', 'DD-MM-YYYY HH24:MI:SS')"

Also, the old format, as shown in the following example, will continue to be accepted to ensure backward compatibility:

> exp FILE=exp.dmp FLASHBACK_TIME="'2006-05-01 11:00:00'"

FULL

Default: n

Indicates that the export is a full database mode export (that is, it exports the entire database). Specify FULL=y to export in full database mode. You need to have the EXP_FULL_DATABASE role to export in this mode.

Points to Consider for Full Database Exports and Imports

A full database export and import can be a good way to replicate or clean up a database. However, to avoid problems be sure to keep the following points in mind:

  • A full export does not export triggers owned by schema SYS. You must manually re-create SYS triggers either before or after the full import. Oracle recommends that you re-create them after the import in case they define actions that would impede progress of the import.

  • A full export also does not export the default profile. If you have modified the default profile in the source database (for example, by adding a password verification function owned by schema SYS), then you must manually pre-create the function and modify the default profile in the target database after the import completes.

  • If possible, before beginning, make a physical copy of the exported database and the database into which you intend to import. This ensures that any mistakes are reversible.

  • Before you begin the export, it is advisable to produce a report that includes the following information:

    • A list of tablespaces and data files

    • A list of rollback segments

    • A count, by user, of each object type such as tables, indexes, and so on

    This information lets you ensure that tablespaces have already been created and that the import was successful.

  • If you are creating a completely new database from an export, then remember to create an extra rollback segment in SYSTEM and to make it available in your initialization parameter file (init.ora) before proceeding with the import.

  • When you perform the import, ensure you are pointing at the correct instance. This is very important because on some UNIX systems, just the act of entering a subshell can change the database against which an import operation was performed.

  • Do not perform a full import on a system that has more than one database unless you are certain that all tablespaces have already been created. A full import creates any undefined tablespaces using the same data file names as the exported database. This can result in problems in the following situations:

    • If the data files belong to any other database, then they will become corrupted. This is especially true if the exported database is on the same system, because its data files will be reused by the database into which you are importing.

    • If the data files have names that conflict with existing operating system files.

GRANTS

Default: y

Specifies whether the Export utility exports object grants. The object grants that are exported depend on whether you use full database mode or user mode. In full database mode, all grants on a table are exported. In user mode, only those granted by the owner of the table are exported. System privilege grants are always exported.

HELP

Default: none

Displays a description of the Export parameters. Enter exp help=y on the command line to invoke it.

INDEXES

Default: y

Specifies whether the Export utility exports indexes.

LOG

Default: none

Specifies a file name (for example, export.log) to receive informational and error messages.

If you specify this parameter, then messages are logged in the log file and displayed on the terminal.

OBJECT_CONSISTENT

Default: n

Specifies whether the Export utility uses the SET TRANSACTION READ ONLY statement to ensure that the data exported is consistent to a single point in time and does not change during the export. If OBJECT_CONSISTENT is set to y, then each object is exported in its own read-only transaction, even if it is partitioned. In contrast, if you use the CONSISTENT parameter, then there is only one read-only transaction.


See Also:

"CONSISTENT"

OWNER

Default: none

Indicates that the export is a user-mode export and lists the users whose objects will be exported. If the user initiating the export is the database administrator (DBA), then multiple users can be listed.

User-mode exports can be used to back up one or more database users. For example, a DBA may want to back up the tables of deleted users for a period of time. User mode is also appropriate for users who want to back up their own data or who want to move objects from one owner to another.

PARFILE

Default: none

Specifies a file name for a file that contains a list of Export parameters. For more information about using a parameter file, see "Invoking Export".

QUERY

Default: none

This parameter enables you to select a subset of rows from a set of tables when doing a table mode export. The value of the query parameter is a string that contains a WHERE clause for a SQL SELECT statement that will be applied to all tables (or table partitions) listed in the TABLES parameter.

For example, if user scott wants to export only those employees whose job title is SALESMAN and whose salary is less than 1600, then he could do the following (this example is UNIX-based):

exp scott TABLES=emp QUERY=\"WHERE job=\'SALESMAN\' and sal \<1600\"

Note:

Because the value of the QUERY parameter contains blanks, most operating systems require that the entire string WHERE job=\'SALESMAN\' and sal\<1600 be placed in double quotation marks or marked as a literal by some method. Operating system reserved characters also need to be preceded by an escape character. See your Oracle operating system-specific documentation for information about special and reserved characters on your system.

When executing this query, Export builds a SQL SELECT statement similar to the following:

SELECT * FROM emp WHERE job='SALESMAN' and sal <1600; 
 

The values specified for the QUERY parameter are applied to all tables (or table partitions) listed in the TABLES parameter. For example, the following statement will unload rows in both emp and bonus that match the query:

exp scott TABLES=emp,bonus QUERY=\"WHERE job=\'SALESMAN\' and sal\<1600\"
 

Again, the SQL statements that Export executes are similar to the following:

SELECT * FROM emp WHERE job='SALESMAN' and sal <1600;

SELECT * FROM bonus WHERE job='SALESMAN' and sal <1600;

If a table is missing the columns specified in the QUERY clause, then an error message will be produced, and no rows will be exported for the offending table.

Restrictions When Using the QUERY Parameter

  • The QUERY parameter cannot be specified for full, user, or tablespace-mode exports.

  • The QUERY parameter must be applicable to all specified tables.

  • The QUERY parameter cannot be specified in a direct path Export (DIRECT=y).

  • The QUERY parameter cannot be specified for tables with inner nested tables.

  • You cannot determine from the contents of the export file whether the data is the result of a QUERY export.

RECORDLENGTH

Default: operating system-dependent

Specifies the length, in bytes, of the file record. The RECORDLENGTH parameter is necessary when you must transfer the export file to another operating system that uses a different default value.

If you do not define this parameter, then it defaults to your platform-dependent value for buffer size.

You can set RECORDLENGTH to any value equal to or greater than your system's buffer size. (The highest value is 64 KB.) Changing the RECORDLENGTH parameter affects only the size of data that accumulates before writing to the disk. It does not affect the operating system file block size.


Note:

You can use this parameter to specify the size of the Export I/O buffer.

RESUMABLE

Default: n

The RESUMABLE parameter is used to enable and disable resumable space allocation. Because this parameter is disabled by default, you must set RESUMABLE=y to use its associated parameters, RESUMABLE_NAME and RESUMABLE_TIMEOUT.
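A hypothetical invocation that enables resumable space allocation, names the resumable statement, and allows one hour for errors to be corrected might look like this:

exp scott FILE=exp.dmp RESUMABLE=y RESUMABLE_NAME=scott_export RESUMABLE_TIMEOUT=3600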


RESUMABLE_NAME

Default: 'User USERNAME (USERID), Session SESSIONID, Instance INSTANCEID'

The value for this parameter identifies the statement that is resumable. This value is a user-defined text string that is inserted in either the USER_RESUMABLE or DBA_RESUMABLE view to help you identify a specific resumable statement that has been suspended.

This parameter is ignored unless the RESUMABLE parameter is set to y to enable resumable space allocation.

RESUMABLE_TIMEOUT

Default: 7200 seconds (2 hours)

The value of the parameter specifies the time period during which an error must be fixed. If the error is not fixed within the timeout period, then execution of the statement is terminated.

This parameter is ignored unless the RESUMABLE parameter is set to y to enable resumable space allocation.
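For example, the following command enables resumable space allocation with a one-hour timeout and an easily identifiable statement name (a sketch; the name and timeout value are illustrative):

> exp scott FILE=scott.dmp TABLES=emp RESUMABLE=y RESUMABLE_NAME=scott_emp_export RESUMABLE_TIMEOUT=3600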

ROWS

Default: y

Specifies whether the rows of table data are exported.

STATISTICS

Default: ESTIMATE

Specifies the type of database optimizer statistics to generate when the exported data is imported. Options are ESTIMATE, COMPUTE, and NONE.

In some cases, Export places precalculated statistics in the export file, as well as the ANALYZE statements required to regenerate the statistics.

However, the precalculated optimizer statistics will not be used at export time if a table has columns with system-generated names.

The precalculated optimizer statistics are flagged as questionable at export time if:

  • There are row errors while exporting

  • The client character set or NCHAR character set does not match the server character set or NCHAR character set

  • A QUERY clause is specified

  • Only certain partitions or subpartitions are exported


    Note:

    Specifying ROWS=n does not preclude saving the precalculated statistics in the export file. This enables you to tune plan generation for queries in a nonproduction database using statistics from a production database.

TABLES

Default: none

Specifies that the export is a table-mode export and lists the table names and partition and subpartition names to export. You can specify the following when you specify the name of the table:

  • schemaname specifies the name of the user's schema from which to export the table or partition. If a schema name is not specified, then the exporter's schema is used as the default. System schema names such as ORDSYS, MDSYS, CTXSYS, LBACSYS, and ORDPLUGINS are reserved by Export.

  • tablename specifies the name of the table or tables to be exported. Table-level export lets you export entire partitioned or nonpartitioned tables. If a table in the list is partitioned and you do not specify a partition name, then all its partitions and subpartitions are exported.

    The table name can contain any number of '%' pattern-matching characters, each of which matches zero or more characters in the names of table objects in the database. All tables in the relevant schema whose names match the specified pattern are selected for export, as if the respective table names had been explicitly specified in the parameter.

  • partition_name indicates that the export is a partition-level Export. Partition-level Export lets you export one or more specified partitions or subpartitions within a table.

The syntax you use to specify the preceding is in the form:

schemaname.tablename:partition_name
schemaname.tablename:subpartition_name

If you use tablename:partition_name, then the specified table must be partitioned, and partition_name must be the name of one of its partitions or subpartitions. If the specified table is not partitioned, then the partition_name is ignored and the entire table is exported.

See "Example Export Session Using Partition-Level Export" for several examples of partition-level Exports.

Table Name Restrictions

The following restrictions apply to table names:

  • By default, table names in a database are stored as uppercase. If you have a table name in mixed-case or lowercase, and you want to preserve case-sensitivity for the table name, then you must enclose the name in quotation marks. The name must exactly match the table name stored in the database.

    Some operating systems require that quotation marks on the command line be preceded by an escape character. The following are examples of how case-sensitivity can be preserved in the different Export modes.

    • In command-line mode:

      TABLES='\"Emp\"'
      
    • In interactive mode:

      Table(T) to be exported: "Emp"
      
    • In parameter file mode:

      TABLES='"Emp"'
      
  • Table names specified on the command line cannot include a pound (#) sign, unless the table name is enclosed in quotation marks. Similarly, in the parameter file, if a table name includes a pound (#) sign, then the Export utility interprets the rest of the line as a comment, unless the table name is enclosed in quotation marks.

    For example, if the parameter file contains the following line, then Export interprets everything on the line after emp# as a comment and does not export the tables dept and mydata:

    TABLES=(emp#, dept, mydata)
    

    However, given the following line, the Export utility exports all three tables, because emp# is enclosed in quotation marks:

    TABLES=("emp#", dept, mydata)
    

    Note:

    Some operating systems require single quotation marks rather than double quotation marks, or the reverse. Different operating systems also have other restrictions on table naming.

TABLESPACES

Default: none

The TABLESPACES parameter specifies that all tables in the specified tablespace be exported to the Export dump file. This includes all tables contained in the list of tablespaces and all tables that have a partition located in the list of tablespaces. Indexes are exported with their tables, regardless of where the index is stored.

You must have the EXP_FULL_DATABASE role to use TABLESPACES to export all tables in the tablespace.

When TABLESPACES is used in conjunction with TRANSPORT_TABLESPACE=y, you can specify a limited list of tablespaces to be exported from the database to the export file.
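For example, the following command, run by a user with the EXP_FULL_DATABASE role, exports all tables stored in the tbs_1 and tbs_2 tablespaces (a sketch; the tablespace and file names are illustrative):

> exp FILE=tbs.dmp TABLESPACES=(tbs_1,tbs_2)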

TRANSPORT_TABLESPACE

Default: n

When specified as y, this parameter enables the export of transportable tablespace metadata.

Encrypted columns are not supported in transportable tablespace mode.


Note:

You cannot export transportable tablespaces and then import them into a database at a lower release level. The target database must be at the same or higher release level as the source database.

TRIGGERS

Default: y

Specifies whether the Export utility exports triggers.

TTS_FULL_CHECK

Default: n

When TTS_FULL_CHECK is set to y, Export verifies that a recovery set (the set of tablespaces to be recovered) has no dependencies (specifically, IN pointers) on objects outside the recovery set, and vice versa.

USERID (username/password)

Default: none

Specifies the username, password, and optional connect string of the user performing the export. If you omit the password, then Export will prompt you for it.

If you connect as user SYS, then you must also specify AS SYSDBA in the connect string. Your operating system may require you to treat AS SYSDBA as a special string, in which case the entire string would be enclosed in quotation marks.
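For example, on many UNIX shells a connection as SYS might be specified as follows (a minimal sketch; the exact escaping required depends on your operating system):

> exp \'SYS/password AS SYSDBA\' FULL=y FILE=full.dmp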


VOLSIZE

Default: none

Specifies the maximum number of bytes in an export file on each volume of tape.

The VOLSIZE parameter has a maximum value equal to the maximum value that can be stored in 64 bits on your platform.

The VOLSIZE value can be specified as a number followed by KB (number of kilobytes). For example, VOLSIZE=2KB is the same as VOLSIZE=2048. Similarly, MB specifies megabytes (1024 * 1024) and GB specifies gigabytes (1024**3). B remains the shorthand for bytes; the number is not multiplied to get the final file size (VOLSIZE=2048B is the same as VOLSIZE=2048).

Example Export Sessions

This section provides examples of the following types of Export sessions:

In each example, you are shown how to use both the command-line method and the parameter file method. Some examples use vertical ellipses to indicate sections of example output that were too long to include.

Example Export Session in Full Database Mode

Only users with the DBA role or the EXP_FULL_DATABASE role can export in full database mode. In this example, an entire database is exported to the file dba.dmp with all GRANTS and all data.

Parameter File Method

> exp PARFILE=params.dat

The params.dat file contains the following information:

FILE=dba.dmp
GRANTS=y
FULL=y
ROWS=y

Command-Line Method

> exp FULL=y FILE=dba.dmp GRANTS=y ROWS=y

Export Messages

Information is displayed about the release of Export you are using and the release of Oracle Database that you are connected to. Status messages are written out as the entire database is exported. A final completion message is returned when the export completes successfully, without warnings.

Example Export Session in User Mode

User-mode exports can be used to back up one or more database users. For example, a DBA may want to back up the tables of deleted users for a period of time. User mode is also appropriate for users who want to back up their own data or who want to move objects from one owner to another. In this example, user scott is exporting his own tables.

Parameter File Method

> exp scott PARFILE=params.dat

The params.dat file contains the following information:

FILE=scott.dmp
OWNER=scott
GRANTS=y
ROWS=y
COMPRESS=y

Command-Line Method

> exp scott FILE=scott.dmp OWNER=scott GRANTS=y ROWS=y COMPRESS=y 

Export Messages

Information is displayed about the release of Export you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:

.
.
. about to export SCOTT's tables via Conventional Path ...
. . exporting table                          BONUS          0 rows exported
. . exporting table                           DEPT          4 rows exported
. . exporting table                            EMP         14 rows exported
. . exporting table                       SALGRADE          5 rows exported
.
.
.
Export terminated successfully without warnings.

Example Export Sessions in Table Mode

In table mode, you can export table data or the table definitions. (If no rows are exported, then the CREATE TABLE statement is placed in the export file, with grants and indexes, if they are specified.)

A user with the EXP_FULL_DATABASE role can use table mode to export tables from any user's schema by specifying TABLES=schemaname.tablename.

If schemaname is not specified, then Export defaults to the exporter's schema name. In the following example, assuming that the export is performed by user SYSTEM, Export defaults to the SYSTEM schema for table a and table c:

> exp TABLES=(a, scott.b, c, mary.d)

A user with the EXP_FULL_DATABASE role can also export dependent objects that are owned by other users. A nonprivileged user can export only dependent objects for the specified tables that the user owns.

Exports in table mode do not include cluster definitions. As a result, the data is exported as unclustered tables. Thus, you can use table mode to uncluster tables.

Example 1: DBA Exporting Tables for Two Users

In this example, a DBA exports specified tables for two users.

Parameter File Method

> exp PARFILE=params.dat

The params.dat file contains the following information:

FILE=expdat.dmp
TABLES=(scott.emp,blake.dept)
GRANTS=y
INDEXES=y

Command-Line Method

> exp FILE=expdat.dmp TABLES=(scott.emp,blake.dept) GRANTS=y INDEXES=y

Export Messages

Information is displayed about the release of Export you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:

.
.
.
About to export specified tables via Conventional Path ...
Current user changed to SCOTT
. . exporting table                            EMP         14 rows exported
Current user changed to BLAKE
. . exporting table                           DEPT          8 rows exported
Export terminated successfully without warnings.

Example 2: User Exports Tables That He Owns

In this example, user blake exports selected tables that he owns.

Parameter File Method

> exp blake PARFILE=params.dat

The params.dat file contains the following information:

FILE=blake.dmp
TABLES=(dept,manager)
ROWS=y
COMPRESS=y

Command-Line Method

> exp blake FILE=blake.dmp TABLES=(dept, manager) ROWS=y COMPRESS=y

Export Messages

Information is displayed about the release of Export you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:

.
.
.

About to export specified tables via Conventional Path ...
. . exporting table                           DEPT          8 rows exported
. . exporting table                        MANAGER          4 rows exported
Export terminated successfully without warnings.

Example 3: Using Pattern Matching to Export Various Tables

In this example, pattern matching is used to export various tables for users scott and blake.

Parameter File Method

> exp PARFILE=params.dat

The params.dat file contains the following information:

FILE=misc.dmp
TABLES=(scott.%P%,blake.%,scott.%S%)

Command-Line Method

> exp FILE=misc.dmp TABLES=(scott.%P%,blake.%,scott.%S%)

Export Messages

Information is displayed about the release of Export you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:

.
.
.
About to export specified tables via Conventional Path ...
Current user changed to SCOTT
. . exporting table                           DEPT          4 rows exported
. . exporting table                            EMP         14 rows exported
Current user changed to BLAKE
. . exporting table                           DEPT          8 rows exported
. . exporting table                        MANAGER          4 rows exported
Current user changed to SCOTT
. . exporting table                          BONUS          0 rows exported
. . exporting table                       SALGRADE          5 rows exported
Export terminated successfully without warnings.

Example Export Session Using Partition-Level Export

In partition-level Export, you can specify the partitions and subpartitions of a table that you want to export.

Example 1: Exporting a Table Without Specifying a Partition

Assume emp is a table that is partitioned on employee name. There are two partitions, m and z. As this example shows, if you export the table without specifying a partition, then all of the partitions are exported.

Parameter File Method

> exp scott PARFILE=params.dat

The params.dat file contains the following:

TABLES=(emp)
ROWS=y

Command-Line Method

> exp scott TABLES=emp rows=y

Export Messages

Information is displayed about the release of Export you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:

.
.
.
About to export specified tables via Conventional Path ...
. . exporting table                            EMP
. . exporting partition                              M          8 rows exported
. . exporting partition                              Z          6 rows exported
Export terminated successfully without warnings.

Example 2: Exporting a Table with a Specified Partition

Assume emp is a table that is partitioned on employee name. There are two partitions, m and z. As this example shows, if you export the table and specify a partition, then only the specified partition is exported.

Parameter File Method

 > exp scott PARFILE=params.dat

The params.dat file contains the following:

TABLES=(emp:m)
ROWS=y

Command-Line Method

> exp scott TABLES=emp:m rows=y

Export Messages

Information is displayed about the release of Export you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:

.
.
.
About to export specified tables via Conventional Path ...
. . exporting table                            EMP
. . exporting partition                              M          8 rows exported
Export terminated successfully without warnings.

Example 3: Exporting a Composite Partition

Assume emp is a partitioned table with two partitions, m and z. Table emp is partitioned using the composite method. Partition m has subpartitions sp1 and sp2, and partition z has subpartitions sp3 and sp4. As the example shows, if you export the composite partition m, then all its subpartitions (sp1 and sp2) will be exported. If you export the table and specify a subpartition (sp4), then only the specified subpartition is exported.

Parameter File Method

> exp scott PARFILE=params.dat

The params.dat file contains the following:

TABLES=(emp:m,emp:sp4)
ROWS=y

Command-Line Method

> exp scott TABLES=(emp:m, emp:sp4) ROWS=y

Export Messages

Information is displayed about the release of Export you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:

.
.
.
About to export specified tables via Conventional Path ...
. . exporting table                            EMP
. . exporting composite partition                    M
. . exporting subpartition                         SP1          1 rows exported
. . exporting subpartition                         SP2          3 rows exported
. . exporting composite partition                    Z
. . exporting subpartition                         SP4          1 rows exported
Export terminated successfully without warnings.

Warning, Error, and Completion Messages

This section describes the different types of messages issued by Export and how to save them in a log file.

Log File

You can capture all Export messages in a log file, either by using the LOG parameter or, for those systems that permit it, by redirecting the output to a file. A log of detailed information is written about successful unloads and any errors that may have occurred.

Warning Messages

Export does not terminate after recoverable errors. For example, if an error occurs while exporting a table, then Export displays (or logs) an error message, skips to the next table, and continues processing. These recoverable errors are known as warnings.

Export also issues warnings when invalid objects are encountered.

For example, if a nonexistent table is specified as part of a table-mode Export, then the Export utility exports all other tables. Then it issues a warning and terminates successfully.

Nonrecoverable Error Messages

Some errors are nonrecoverable and terminate the Export session. These errors typically occur because of an internal problem or because a resource, such as memory, is not available or has been exhausted. For example, if the catexp.sql script is not executed, then Export issues the following nonrecoverable error message:

EXP-00024: Export views not installed, please notify your DBA

Completion Messages

When an export completes without errors, a message to that effect is displayed, for example:

Export terminated successfully without warnings

If one or more recoverable errors occurs but the job continues to completion, then a message similar to the following is displayed:

Export terminated successfully with warnings

If a nonrecoverable error occurs, then the job terminates immediately and displays a message stating so, for example:

Export terminated unsuccessfully

Exit Codes for Inspection and Display

Export provides the results of an operation immediately upon completion. Depending on the platform, the outcome may be reported in a process exit code and the results recorded in the log file. This enables you to check the outcome from the command line or script. Table 21-4 shows the exit codes that get returned for various results.

Table 21-4 Exit Codes for Export

Result                                           Exit Code
-----------------------------------------------  ---------
Export terminated successfully without warnings  EX_SUCC
Import terminated successfully without warnings

Export terminated successfully with warnings     EX_OKWARN
Import terminated successfully with warnings

Export terminated unsuccessfully                 EX_FAIL
Import terminated unsuccessfully


For UNIX, the exit codes are as follows:

EX_SUCC   0
EX_OKWARN 0
EX_FAIL   1
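
On such a system, a calling script can branch on the exit code. The following is a minimal sketch for a Bourne-compatible shell; the file names are illustrative, and the password is either prompted for or supplied through the parameter file:

#!/bin/sh
# Run an export and check the process exit code listed in Table 21-4
# (0 = success or success with warnings, 1 = failure).
exp scott PARFILE=params.dat LOG=scott.log
status=$?
if [ $status -eq 0 ]
then
    echo "Export completed; check scott.log for any warnings"
else
    echo "Export failed with exit code $status; see scott.log"
fi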

Conventional Path Export Versus Direct Path Export

Export provides two methods for exporting table data:

Conventional path Export uses the SQL SELECT statement to extract data from tables. Data is read from disk into a buffer cache, and rows are transferred to the evaluating buffer. The data, after passing expression evaluation, is transferred to the Export client, which then writes the data into the export file.

Direct path Export is much faster than conventional path Export because data is read from disk into the buffer cache and rows are transferred directly to the Export client. The evaluating buffer (that is, the SQL command-processing layer) is bypassed. The data is already in the format that Export expects, thus avoiding unnecessary data conversion. The data is transferred to the Export client, which then writes the data into the export file.

Invoking a Direct Path Export

To use direct path Export, specify the DIRECT=y parameter on the command line or in the parameter file. The default is DIRECT=n, which extracts the table data using the conventional path. The rest of this section discusses the following topics:

Security Considerations for Direct Path Exports

Oracle Virtual Private Database (VPD) and Oracle Label Security are not enforced during direct path Exports.

The following users are exempt from Virtual Private Database and Oracle Label Security enforcement regardless of the export mode, application, or utility used to extract data from the database:

  • The database user SYS

  • Database users granted the EXEMPT ACCESS POLICY privilege, either directly or through a database role

This means that any user who is granted the EXEMPT ACCESS POLICY privilege is completely exempt from enforcement of VPD and Oracle Label Security. This is a powerful privilege and should be carefully managed. This privilege does not affect the enforcement of traditional object privileges such as SELECT, INSERT, UPDATE, and DELETE. These privileges are enforced even if a user has been granted the EXEMPT ACCESS POLICY privilege.

Performance Considerations for Direct Path Exports

You may be able to improve performance by increasing the value of the RECORDLENGTH parameter when you invoke a direct path Export. Your exact performance gain depends upon the following factors:

  • DB_BLOCK_SIZE

  • The types of columns in your table

  • Your I/O layout (The drive receiving the export file should be separate from the disk drive where the database files reside.)

The following values are generally recommended for RECORDLENGTH:

  • Multiples of the file system I/O block size

  • Multiples of DB_BLOCK_SIZE

An export file that is created using direct path Export will take the same amount of time to import as an export file created using conventional path Export.
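For example, a direct path invocation that follows these recommendations might look like the following sketch; the RECORDLENGTH value assumes an 8 KB DB_BLOCK_SIZE and is illustrative only:

> exp scott TABLES=emp FILE=emp.dmp DIRECT=y RECORDLENGTH=65536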

Restrictions for Direct Path Exports

Keep the following restrictions in mind when you are using direct path mode:

  • To invoke a direct path Export, you must use either the command-line method or a parameter file. You cannot invoke a direct path Export using the interactive method.

  • The Export parameter BUFFER applies only to conventional path Exports. For direct path Export, use the RECORDLENGTH parameter to specify the size of the buffer that Export uses for writing to the export file.

  • You cannot use direct path when exporting in tablespace mode (TRANSPORT_TABLESPACE=y).

  • The QUERY parameter cannot be specified in a direct path Export.

  • A direct path Export can only export data when the NLS_LANG environment variable of the session invoking the export equals the database character set. If NLS_LANG is not set or if it is different than the database character set, then a warning is displayed and the export is discontinued. The default value for the NLS_LANG environment variable is AMERICAN_AMERICA.US7ASCII.

Network Considerations

This section describes factors to consider when using Export across a network.

Transporting Export Files Across a Network

Because the export file is in binary format, use a protocol that supports binary transfers to prevent corruption of the file when you transfer it across a network. For example, use FTP or a similar file transfer protocol to transmit the file in binary mode. Transmitting export files in character mode causes errors when the file is imported.

Exporting with Oracle Net

With Oracle Net, you can perform exports over a network. For example, if you run Export locally, then you can write data from a remote Oracle database into a local export file.

To use Export with Oracle Net, include the connection qualifier string @connect_string when entering the username and password in the exp command. For the exact syntax of this clause, see the user's guide for your Oracle Net protocol.
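For example, to export the emp table from a remote database identified by the net service name remotedb (an illustrative name) into a local dump file, you might enter:

> exp scott@remotedb TABLES=emp FILE=emp.dmp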

Character Set and Globalization Support Considerations

The following sections describe the globalization support behavior of Export with respect to character set conversion of user data and data definition language (DDL).

User Data

The Export utility always exports user data, including Unicode data, in the character sets of the Export server. (Character sets are specified at database creation.) If the character sets of the source database are different than the character sets of the import database, then a single conversion is performed to automatically convert the data to the character sets of the Import server.

Effect of Character Set Sorting Order on Conversions

If the export character set has a different sorting order than the import character set, then tables that are partitioned on character columns may yield unpredictable results. For example, consider the following table definition, which is produced on a database having an ASCII character set:

CREATE TABLE partlist 
   ( 
   part     VARCHAR2(10), 
   partno   NUMBER(2) 
   ) 
PARTITION BY RANGE (part) 
  ( 
  PARTITION part_low VALUES LESS THAN ('Z') 
    TABLESPACE tbs_1, 
  PARTITION part_mid VALUES LESS THAN ('z') 
    TABLESPACE tbs_2, 
  PARTITION part_high VALUES LESS THAN (MAXVALUE) 
    TABLESPACE tbs_3 
  );
This partitioning scheme makes sense because z comes after Z in ASCII character sets.

When this table is imported into a database based upon an EBCDIC character set, all of the rows in the part_mid partition will migrate to the part_low partition because z comes before Z in EBCDIC character sets. To obtain the desired results, the owner of partlist must repartition the table following the import.

Data Definition Language (DDL)

Up to three character set conversions may be required for data definition language (DDL) during an export/import operation:

  1. Export writes export files using the character set specified in the NLS_LANG environment variable for the user session. A character set conversion is performed if the value of NLS_LANG differs from the database character set.

  2. If the export file's character set is different than the import user session character set, then Import converts the character set to its user session character set. Import can only perform this conversion for single-byte character sets. This means that for multibyte character sets, the import file's character set must be identical to the export file's character set.

  3. A final character set conversion may be performed if the target database's character set is different from the character set used by the import user session.

To minimize data loss due to character set conversions, ensure that the export database, the export user session, the import user session, and the import database all use the same character set.

Single-Byte Character Sets and Export and Import

Some 8-bit characters can be lost (that is, converted to 7-bit equivalents) when you import an 8-bit character set export file. This occurs if the system on which the import occurs has a native 7-bit character set, or the NLS_LANG operating system environment variable is set to a 7-bit character set. Most often, this is apparent when accented characters lose the accent mark.

To avoid this unwanted conversion, you can set the NLS_LANG operating system environment variable to be that of the export file character set.
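For example, if the export file was created with the WE8ISO8859P1 character set, then on a UNIX system with a Bourne-compatible shell you might set the following before running Import (the character set value is illustrative):

NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
export NLS_LANG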

Multibyte Character Sets and Export and Import

During character set conversion, any characters in the export file that have no equivalent in the target character set are replaced with a default character. (The default character is defined by the target character set.) To guarantee 100% conversion, the target character set must be a superset (or equivalent) of the source character set.


Caution:

When the character set width differs between the Export server and the Import server, truncation of data can occur if conversion causes expansion of data. If truncation occurs, then Import displays a warning message.

Using Instance Affinity with Export and Import

You can use instance affinity to associate jobs with instances in databases you plan to export and import. Be aware that there may be some compatibility issues if you are using a combination of releases.

Considerations When Exporting Database Objects

The following sections describe points you should consider when you export particular database objects.

Exporting Sequences

If transactions continue to access sequence numbers during an export, then sequence numbers might be skipped. The best way to ensure that sequence numbers are not skipped is to ensure that the sequences are not accessed during the export.

Sequence numbers can be skipped only when cached sequence numbers are in use. When a cache of sequence numbers has been allocated, they are available for use in the current database. The exported value is the next sequence number (after the cached values). Sequence numbers that are cached, but unused, are lost when the sequence is imported.

Exporting LONG and LOB Datatypes

On export, LONG datatypes are fetched in sections. However, enough memory must be available to hold all of the contents of each row, including the LONG data.

LONG columns can be up to 2 gigabytes in length.

Not all of the data in a LOB column needs to be held in memory at the same time. LOB data is loaded and unloaded in sections.


Note:

Oracle also recommends that you convert existing LONG columns to LOB columns. LOB columns are subject to far fewer restrictions than LONG columns. Further, LOB functionality is enhanced in every release, whereas LONG functionality has been static for several releases.

Exporting Foreign Function Libraries

The contents of foreign function libraries are not included in the export file. Instead, only the library specification (name, location) is included in full database mode and user-mode export. You must move the library's executable files and update the library specification if the database is moved to a new location.

Exporting Offline Locally Managed Tablespaces

If the data you are exporting contains offline locally managed tablespaces, then Export will not be able to export the complete tablespace definition and will display an error message. You can still import the data; however, you must create the offline locally managed tablespaces before importing to prevent DDL commands that may reference the missing tablespaces from failing.

Exporting Directory Aliases

Directory alias definitions are included only in a full database mode export. To move a database to a new location, the database administrator must update the directory aliases to point to the new location.

Directory aliases are not included in user-mode or table-mode export. Therefore, you must ensure that the directory alias has been created on the target system before the directory alias is used.

Exporting BFILE Columns and Attributes

The export file does not hold the contents of external files referenced by BFILE columns or attributes. Instead, only the names and directory aliases for files are copied on Export and restored on Import. If you move the database to a location where the old directories cannot be used to access the included files, then the database administrator (DBA) must move the directories containing the specified files to a new location where they can be accessed.

Exporting External Tables

The contents of external tables are not included in the export file. Instead, only the table specification (name, location) is included in full database mode and user-mode export. You must manually move the external data and update the table specification if the database is moved to a new location.

Exporting Object Type Definitions

In all Export modes, the Export utility includes information about object type definitions used by the tables being exported. The information, including object name, object identifier, and object geometry, is needed to verify that the object type on the target system is consistent with the object instances contained in the export file. This ensures that the object types needed by a table are created with the same object identifier at import time.

Note, however, that in table mode, user mode, and tablespace mode, the export file does not include a full object type definition needed by a table if the user running Export does not have execute access to the object type. In this case, only enough information is written to verify that the type exists, with the same object identifier and the same geometry, on the Import target system.

The user must ensure that the proper type definitions exist on the target system, either by working with the DBA to create them, or by importing them from full database mode or user-mode exports performed by the DBA.

It is important to perform a full database mode export regularly to preserve all object type definitions. Alternatively, if object type definitions from different schemas are used, then the DBA should perform a user mode export of the appropriate set of users. For example, if table1 belonging to user scott contains a column on blake's type type1, then the DBA should perform a user mode export of both blake and scott to preserve the type definitions needed by the table.

Exporting Nested Tables

Inner nested table data is exported whenever the outer containing table is exported. Although inner nested tables can be named, they cannot be exported individually.

Exporting Advanced Queue (AQ) Tables

Queues are implemented on tables. The export and import of queues constitutes the export and import of the underlying queue tables and related dictionary tables. You can export and import queues only at queue table granularity.

When you export a queue table, both the table definition information and the queue data are exported. Because the queue table data and the table definition are exported, the user is responsible for maintaining application-level data integrity when queue table data is imported.

Exporting Synonyms

You should be cautious when exporting compiled objects that reference a name used as a synonym and as another object. Exporting and importing these objects will force a recompilation that could result in changes to the object definitions.

The following example helps to illustrate this problem:

CREATE PUBLIC SYNONYM emp FOR scott.emp;

CONNECT blake/paper;

CREATE TRIGGER t_emp BEFORE INSERT ON emp BEGIN NULL; END;

CREATE VIEW emp AS SELECT * FROM dual;

If the database in the preceding example were exported, then the reference to emp in the trigger would refer to blake's view rather than to scott's table. This would cause an error when Import tried to reestablish the t_emp trigger.

Possible Export Errors Related to Java Synonyms

If an export operation attempts to export a synonym named DBMS_JAVA when there is no corresponding DBMS_JAVA package or when Java is either not loaded or loaded incorrectly, then the export will terminate unsuccessfully. The error messages that are generated include, but are not limited to, the following: EXP-00008, ORA-00904, and ORA-29516.

If Java is enabled, then ensure that both the DBMS_JAVA synonym and DBMS_JAVA package are created and valid before rerunning the export.

If Java is not enabled, then remove Java-related objects before rerunning the export.

Support for Fine-Grained Access Control

You can export tables with fine-grained access control policies enabled. When doing so, consider the following:

  • The user who imports from an export file containing such tables must have the appropriate privileges (specifically, the EXECUTE privilege on the DBMS_RLS package so that the tables' security policies can be reinstated). If a user without the correct privileges attempts to export a table with fine-grained access policies enabled, then only those rows that the exporter is privileged to read will be exported.

  • If fine-grained access control is enabled on a SELECT statement, then conventional path Export may not export the entire table because fine-grained access may rewrite the query.

  • Only user SYS, or a user with the EXP_FULL_DATABASE role enabled or who has been granted EXEMPT ACCESS POLICY, can perform direct path Exports on tables having fine-grained access control.

Transportable Tablespaces

The transportable tablespace feature enables you to move a set of tablespaces from one Oracle database to another.


Note:

You cannot export transportable tablespaces and then import them into a database at a lower release level. The target database must be at the same or higher release level as the source database.

To move or copy a set of tablespaces, you must make the tablespaces read-only, copy the data files of these tablespaces, and use Export and Import to move the database information (metadata) stored in the data dictionary. Both the data files and the metadata export file must be copied to the target database. The transport of these files can be done using any facility for copying flat binary files, such as the operating system copying facility, binary-mode FTP, or publishing on CD-ROMs.

After copying the data files and exporting the metadata, you can optionally put the tablespaces in read/write mode.

Export and Import provide the following parameters to enable movement of transportable tablespace metadata.

See "TABLESPACES" and "TRANSPORT_TABLESPACE" for more information about using these parameters during an export operation.


Exporting From a Read-Only Database

To extract metadata from a source database, Export uses queries that contain ordering clauses (sort operations). For these queries to succeed, the user performing the export must be able to allocate sort segments. For these sort segments to be allocated in a read-only database, the user's temporary tablespace should be set to point at a temporary, locally managed tablespace.
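For example, the following statement assigns a locally managed temporary tablespace to the exporting user (a sketch; the user and tablespace names are illustrative):

SQL> ALTER USER scott TEMPORARY TABLESPACE temp;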

Using Export and Import to Partition a Database Migration

When you use the Export and Import utilities to migrate a large database, it may be more efficient to partition the migration into multiple export and import jobs. If you decide to partition the migration, then be aware of the following advantages and disadvantages.

Advantages of Partitioning a Migration

Partitioning a migration has the following advantages:

  • Time required for the migration may be reduced, because many of the subjobs can be run in parallel.

  • The import can start as soon as the first export subjob completes, rather than waiting for the entire export to complete.

Disadvantages of Partitioning a Migration

Partitioning a migration has the following disadvantages:

  • The export and import processes become more complex.

  • Support of cross-schema references for certain types of objects may be compromised. For example, if a schema contains a table with a foreign key constraint against a table in a different schema, then you may not have the required parent records when you import the table into the dependent schema.

How to Use Export and Import to Partition a Database Migration

To perform a database migration in a partitioned manner, take the following steps:

  1. For all top-level metadata in the database, issue the following commands:

    1. exp FILE=full FULL=y CONSTRAINTS=n TRIGGERS=n ROWS=n INDEXES=n

    2. imp FILE=full FULL=y

  2. For each schema (referred to here as scheman) in the database, issue the following commands:

    1. exp OWNER=scheman FILE=scheman

    2. imp FILE=scheman FROMUSER=scheman TOUSER=scheman IGNORE=y

All exports can be done in parallel. When the import of full.dmp completes, all remaining imports can also be done in parallel.

Using Different Releases of Export and Import

This section describes compatibility issues that relate to using different releases of Export and the Oracle database.

Whenever you are moving data between different releases of the Oracle database, the following basic rules apply:

Restrictions When Using Different Releases of Export and Import

The following restrictions apply when you are using different releases of Export and Import:

  • Export dump files can be read only by the Import utility because they are stored in a special binary format.

  • Any export dump file can be imported into a later release of the Oracle database.

  • The Import utility cannot read export dump files created by the Export utility of a later maintenance release. For example, a release 9.2 export dump file cannot be imported by a release 9.0.1 Import utility.

  • Whenever a lower version of the Export utility runs with a later release of the Oracle database, categories of database objects that did not exist in the earlier release are excluded from the export.

  • Export files generated by Oracle9i Export, either direct path or conventional path, are incompatible with earlier releases of Import and can be imported only with Oracle9i Import. When backward compatibility is an issue, use the earlier release or version of the Export utility against the Oracle9i database.

Examples of Using Different Releases of Export and Import

Table 21-5 shows some examples of which Export and Import releases to use when moving data between different releases of the Oracle database.

Table 21-5 Using Different Releases of Export and Import

Export from -> Import to    Use Export Release    Use Import Release
------------------------    ------------------    ------------------
8.1.6  -> 8.1.6             8.1.6                 8.1.6
8.1.5  -> 8.0.6             8.0.6                 8.0.6
8.1.7  -> 8.1.6             8.1.6                 8.1.6
9.0.1  -> 8.1.6             8.1.6                 8.1.6
9.0.1  -> 9.0.2             9.0.1                 9.0.2
9.0.2  -> 10.1.0            9.0.2                 10.1.0
10.1.0 -> 9.0.2             9.0.2                 9.0.2


Table 21-5 covers moving data only between the original Export and Import utilities. For Oracle Database 10g release 1 (10.1) or higher, Oracle recommends the Data Pump Export and Import utilities in most cases because these utilities provide greatly enhanced performance compared to the original Export and Import utilities.


See Also:

Oracle Database Upgrade Guide for more information about exporting and importing data between different releases, including releases higher than 10.1

Loading Objects, LOBs, and Collections

11 Loading Objects, LOBs, and Collections

This chapter discusses the following topics:

Loading Column Objects

Column objects in the control file are described in terms of their attributes. If the object type on which the column object is based is declared to be nonfinal, then the column object in the control file may be described in terms of the attributes, both derived and declared, of any subtype derived from the base object type. In the data file, the data corresponding to each of the attributes of a column object is in a data field similar to that corresponding to a simple relational column.


Note:

With SQL*Loader support for complex datatypes like column objects, the possibility arises that two identical field names could exist in the control file, one corresponding to a column, the other corresponding to a column object's attribute. Certain clauses can refer to fields (for example, WHEN, NULLIF, DEFAULTIF, SID, OID, REF, BFILE, and so on), causing a naming conflict if identically named fields exist in the control file.

Therefore, if you use clauses that refer to fields, then you must specify the full name. For example, if field fld1 is specified to be a COLUMN OBJECT and it contains field fld2, then when you specify fld2 in a clause such as NULLIF, you must use the full field name fld1.fld2.


The following sections show examples of loading column objects:

Loading Column Objects in Stream Record Format

Example 11-1 shows a case in which the data is in predetermined size fields. The newline character marks the end of a physical record. You can also mark the end of a physical record by using a custom record separator in the operating system file-processing clause (os_file_proc_clause).

Example 11-1 Loading Column Objects in Stream Record Format

Control File Contents

LOAD DATA
INFILE 'sample.dat'
INTO TABLE departments
   (dept_no     POSITION(01:03)    CHAR,
    dept_name   POSITION(05:15)    CHAR,
1   dept_mgr    COLUMN OBJECT
      (name     POSITION(17:33)    CHAR,
       age      POSITION(35:37)    INTEGER EXTERNAL,
       emp_id   POSITION(40:46)    INTEGER EXTERNAL) )

Data File (sample.dat)

101 Mathematics  Johny Quest       30   1024
237 Physics      Albert Einstein   65   0000

Notes

  1. This type of column object specification can be applied recursively to describe nested column objects.

Loading Column Objects in Variable Record Format

Example 11-2 shows a case in which the data is in delimited fields.

Example 11-2 Loading Column Objects in Variable Record Format

Control File Contents

LOAD DATA
1 INFILE 'sample.dat' "var 6"
INTO TABLE departments
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
2  (dept_no,
   dept_name, 
   dept_mgr       COLUMN OBJECT
      (name       CHAR(30), 
      age         INTEGER EXTERNAL(5), 
      emp_id      INTEGER EXTERNAL(5)) )

Data File (sample.dat)

3  000034101,Mathematics,Johny Q.,30,1024,
   000039237,Physics,"Albert Einstein",65,0000,

Notes

  1. The "var" string includes the number of bytes in the length field at the beginning of each record (in this example, the number is 6). If no value is specified, then the default is 5 bytes. The maximum size of a variable record is 2^32-1. Specifying larger values will result in an error.

  2. Although no positional specifications are given, the general syntax remains the same (the column object's name followed by the list of its attributes enclosed in parentheses). Also note that an omitted type specification defaults to CHAR of length 255.

  3. The first 6 bytes (italicized) specify the length of the forthcoming record. These length specifications include the newline characters, which are ignored thanks to the terminators after the emp_id field.

Loading Nested Column Objects

Example 11-3 shows a control file describing nested column objects (one column object nested in another column object).

Example 11-3 Loading Nested Column Objects

Control File Contents

LOAD DATA
INFILE 'sample.dat'
INTO TABLE departments_v2
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
   (dept_no      CHAR(5), 
   dept_name     CHAR(30), 
   dept_mgr      COLUMN OBJECT
      (name      CHAR(30), 
      age        INTEGER EXTERNAL(3),
      emp_id     INTEGER EXTERNAL(7),
1     em_contact COLUMN OBJECT
         (name      CHAR(30), 
         phone_num  CHAR(20))))

Data File (sample.dat)

101,Mathematics,Johny Q.,30,1024,"Barbie",650-251-0010,
237,Physics,"Albert Einstein",65,0000,Wife Einstein,654-3210,

Notes

  1. This entry specifies a column object nested within a column object.

Loading Column Objects with a Derived Subtype

Example 11-4 shows a case in which a nonfinal base object type has been extended to create a new derived subtype. Although the column object in the table definition is declared to be of the base object type, SQL*Loader allows any subtype to be loaded into the column object, provided that the subtype is derived from the base object type.

Example 11-4 Loading Column Objects with a Subtype

Object Type Definitions

CREATE TYPE person_type AS OBJECT
  (name     VARCHAR(30),
   ssn      NUMBER(9)) not final;

CREATE TYPE employee_type UNDER person_type
  (empid    NUMBER(5));

CREATE TABLE personnel
  (deptno   NUMBER(3),
   deptname VARCHAR(30),
   person   person_type);

Control File Contents

LOAD DATA
INFILE 'sample.dat'
INTO TABLE personnel
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
   (deptno        INTEGER EXTERNAL(3),
    deptname      CHAR,
1   person        COLUMN OBJECT TREAT AS employee_type
      (name       CHAR,
       ssn        INTEGER EXTERNAL(9),
2      empid      INTEGER EXTERNAL(5)))

Data File (sample.dat)

101,Mathematics,Johny Q.,301189453,10249,
237,Physics,"Albert Einstein",128606590,10030,

Notes

  1. The TREAT AS clause indicates that SQL*Loader should treat the column object person as if it were declared to be of the derived type employee_type, instead of its actual declared type, person_type.

  2. The empid attribute is allowed here because it is an attribute of the employee_type. If the TREAT AS clause had not been specified, then this attribute would have resulted in an error, because it is not an attribute of the column's declared type.

Specifying Null Values for Objects

Specifying null values for nonscalar datatypes is somewhat more complex than for scalar datatypes. An object can have a subset of its attributes be null, it can have all of its attributes be null (an attributively null object), or it can be null itself (an atomically null object).

Specifying Attribute Nulls

In fields corresponding to column objects, you can use the NULLIF clause to specify the field conditions under which a particular attribute should be initialized to NULL. Example 11-5 demonstrates this.

Example 11-5 Specifying Attribute Nulls Using the NULLIF Clause

Control File Contents

LOAD DATA
INFILE 'sample.dat'
INTO TABLE departments
  (dept_no      POSITION(01:03)    CHAR,
  dept_name     POSITION(05:15)    CHAR NULLIF dept_name=BLANKS,
  dept_mgr      COLUMN OBJECT
1    ( name     POSITION(17:33)    CHAR NULLIF dept_mgr.name=BLANKS,
1    age        POSITION(35:37)    INTEGER EXTERNAL NULLIF dept_mgr.age=BLANKS,
1    emp_id     POSITION(40:46)    INTEGER EXTERNAL NULLIF dept_mgr.empid=BLANKS))

Data File (sample.dat)

2  101             Johny Quest            1024
   237   Physics   Albert Einstein   65   0000

Notes

  1. The NULLIF clause corresponding to each attribute states the condition under which the attribute value should be NULL.

  2. The age attribute of the dept_mgr value is null. The dept_name value is also null.

Specifying Atomic Nulls

To specify in the control file the condition under which a particular object should take a null value (atomic null), you must follow that object's name with a NULLIF clause based on a logical combination of any of the mapped fields. For example, in Example 11-5 the mapped fields would be dept_no, dept_name, name, age, and emp_id; dept_mgr would not be a mapped field because it does not correspond (is not mapped) to any field in the data file.

Although the preceding is workable, it is not ideal when the condition under which an object should take the value of null is independent of any of the mapped fields. In such situations, you can use filler fields.

You can map a filler field to the field in the data file (indicating if a particular object is atomically null or not) and use the filler field in the field condition of the NULLIF clause of the particular object. This is shown in Example 11-6.

Example 11-6 Loading Data Using Filler Fields

Control File Contents

LOAD DATA
INFILE 'sample.dat'
INTO TABLE departments_v2
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
   (dept_no         CHAR(5),
   dept_name        CHAR(30),
1  is_null          FILLER CHAR,
2  dept_mgr         COLUMN OBJECT NULLIF is_null=BLANKS
      (name         CHAR(30) NULLIF dept_mgr.name=BLANKS, 
      age           INTEGER EXTERNAL(3) NULLIF dept_mgr.age=BLANKS,
      emp_id        INTEGER EXTERNAL(7) 
                    NULLIF dept_mgr.emp_id=BLANKS,
      em_contact    COLUMN OBJECT NULLIF is_null2=BLANKS
         (name      CHAR(30) 
                    NULLIF dept_mgr.em_contact.name=BLANKS, 
         phone_num  CHAR(20) 
                    NULLIF dept_mgr.em_contact.phone_num=BLANKS)),
1  is_null2         FILLER CHAR)       

Data File (sample.dat)

101,Mathematics,n,Johny Q.,,1024,"Barbie",608-251-0010,,
237,Physics,,"Albert Einstein",65,0000,,650-654-3210,n,

Notes

  1. The filler field (data file mapped; no corresponding column) is of type CHAR (because it is a delimited field, the CHAR defaults to CHAR(255)). Note that the NULLIF clause is not applicable to the filler field itself.

  2. Gets the value of null (atomic null) if the is_null field is blank.

Loading Column Objects with User-Defined Constructors

The Oracle database automatically supplies a default constructor for every object type. This constructor requires that all attributes of the type be specified as arguments in a call to the constructor. When a new instance of the object is created, its attributes take on the corresponding values in the argument list. This constructor is known as the attribute-value constructor. SQL*Loader uses the attribute-value constructor by default when loading column objects.

It is possible to override the attribute-value constructor by creating one or more user-defined constructors. When you create a user-defined constructor, you must supply a type body that performs the user-defined logic whenever a new instance of the object is created. A user-defined constructor may have the same argument list as the attribute-value constructor but differ in the logic that its type body implements.

When the argument list of a user-defined constructor function matches the argument list of the attribute-value constructor, there is a difference in behavior between conventional and direct path SQL*Loader. Conventional path mode results in a call to the user-defined constructor. Direct path mode results in a call to the attribute-value constructor. Example 11-7 illustrates this difference.

Example 11-7 Loading a Column Object with Constructors That Match

Object Type Definitions

CREATE TYPE person_type AS OBJECT
     (name     VARCHAR(30),
      ssn      NUMBER(9)) not final;

   CREATE TYPE employee_type UNDER person_type
     (empid    NUMBER(5),
   -- User-defined constructor that looks like an attribute-value constructor
      CONSTRUCTOR FUNCTION
        employee_type (name VARCHAR2, ssn NUMBER, empid NUMBER)
        RETURN SELF AS RESULT);

   CREATE TYPE BODY employee_type AS
     CONSTRUCTOR FUNCTION
        employee_type (name VARCHAR2, ssn NUMBER, empid NUMBER)
      RETURN SELF AS RESULT AS
   --User-defined constructor makes sure that the name attribute is uppercase.
      BEGIN
        SELF.name  := UPPER(name);
        SELF.ssn   := ssn;
        SELF.empid := empid;
        RETURN;
      END;

   CREATE TABLE personnel
     (deptno   NUMBER(3),
      deptname VARCHAR(30),
      employee employee_type);

Control File Contents

LOAD DATA
   INFILE *
   REPLACE
   INTO TABLE personnel
   FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
      (deptno        INTEGER EXTERNAL(3),
       deptname      CHAR,
       employee      COLUMN OBJECT
         (name       CHAR,
          ssn        INTEGER EXTERNAL(9),
          empid      INTEGER EXTERNAL(5)))

   BEGINDATA
1  101,Mathematics,Johny Q.,301189453,10249,
   237,Physics,"Albert Einstein",128606590,10030,

Notes

  1. When this control file is run in conventional path mode, the name fields, Johny Q. and Albert Einstein, are both loaded in uppercase. This is because the user-defined constructor is called in this mode. In contrast, when this control file is run in direct path mode, the name fields are loaded exactly as they appear in the input data. This is because the attribute-value constructor is called in this mode.

It is possible to create a user-defined constructor whose argument list does not match that of the attribute-value constructor. In this case, both conventional and direct path modes will result in a call to the attribute-value constructor. Consider the definitions in Example 11-8.

Example 11-8 Loading a Column Object with Constructors That Do Not Match

Object Type Definitions

CREATE SEQUENCE employee_ids
    START     WITH  1000
    INCREMENT BY    1;

   CREATE TYPE person_type AS OBJECT
     (name     VARCHAR(30),
      ssn      NUMBER(9)) not final;

   CREATE TYPE employee_type UNDER person_type
     (empid    NUMBER(5),
   -- User-defined constructor that does not look like an attribute-value 
   -- constructor
      CONSTRUCTOR FUNCTION
        employee_type (name VARCHAR2, ssn NUMBER)
        RETURN SELF AS RESULT);

   CREATE TYPE BODY employee_type AS
     CONSTRUCTOR FUNCTION
        employee_type (name VARCHAR2, ssn NUMBER)
      RETURN SELF AS RESULT AS
   -- This user-defined constructor makes sure that the name attribute is in
   -- lowercase and assigns the employee identifier based on a sequence.
        nextid     NUMBER;
        stmt       VARCHAR2(64);
      BEGIN

        stmt := 'SELECT employee_ids.nextval FROM DUAL';
        EXECUTE IMMEDIATE stmt INTO nextid;

        SELF.name  := LOWER(name);
        SELF.ssn   := ssn;
        SELF.empid := nextid; 
        RETURN;
      END;
   END;

   CREATE TABLE personnel
     (deptno   NUMBER(3),
      deptname VARCHAR(30),
      employee employee_type);

If the control file described in Example 11-7 is used with these definitions, then the name fields are loaded exactly as they appear in the input data (that is, in mixed case). This is because the attribute-value constructor is called in both conventional and direct path modes.

It is still possible to load this table using conventional path mode by explicitly making reference to the user-defined constructor in a SQL expression. Example 11-9 shows how this can be done.

Example 11-9 Using SQL to Load Column Objects When Constructors Do Not Match

Control File Contents

LOAD DATA
   INFILE *
   REPLACE
   INTO TABLE personnel
   FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
      (deptno        INTEGER EXTERNAL(3),
       deptname      CHAR,
       name          BOUNDFILLER CHAR,
       ssn           BOUNDFILLER INTEGER EXTERNAL(9),
1      employee      EXPRESSION "employee_type(:NAME, :SSN)")

   BEGINDATA
1  101,Mathematics,Johny Q.,301189453,
   237,Physics,"Albert Einstein",128606590,

Notes

  1. The employee column object is now loaded using a SQL expression. This expression invokes the user-defined constructor with the correct number of arguments. The name fields, Johny Q. and Albert Einstein, will both be loaded in lowercase. In addition, the employee identifiers for each row's employee column object will have taken their values from the employee_ids sequence.

If the control file in Example 11-9 is used in direct path mode, then the following error is reported:

SQL*Loader-951: Error calling once/load initialization
ORA-26052: Unsupported type 121 for SQL expression on column EMPLOYEE.

Loading Object Tables

The control file syntax required to load an object table is nearly identical to that used to load a typical relational table. Example 11-10 demonstrates loading an object table with primary-key-based object identifiers (OIDs).

Example 11-10 Loading an Object Table with Primary Key OIDs

Control File Contents

LOAD DATA
INFILE 'sample.dat'
DISCARDFILE 'sample.dsc'
BADFILE 'sample.bad'
REPLACE
INTO TABLE employees 
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
   (name    CHAR(30)                NULLIF name=BLANKS,
   age      INTEGER EXTERNAL(3)     NULLIF age=BLANKS,
   emp_id   INTEGER EXTERNAL(5))

Data File (sample.dat)

Johny Quest, 18, 007,
Speed Racer, 16, 000,

By looking only at the preceding control file you might not be able to determine if the table being loaded was an object table with system-generated OIDs, an object table with primary-key-based OIDs, or a relational table.

You may want to load data that already contains system-generated OIDs and to specify that instead of generating new OIDs, the existing OIDs in the data file should be used. To do this, you would follow the INTO TABLE clause with the OID clause:

OID (fieldname)

In this clause, fieldname is the name of one of the fields (typically a filler field) from the field specification list that is mapped to a data field that contains the system-generated OIDs. SQL*Loader assumes that the OIDs provided are in the correct format and that they preserve OID global uniqueness. Therefore, to ensure uniqueness, you should use the Oracle OID generator to generate the OIDs to be loaded.
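
For illustration only (this is an assumption, not a requirement stated by the OID clause), a 32-digit hexadecimal identifier of the kind shown in Example 11-11 can be generated on the database with the SYS_GUID function:

-- SYS_GUID returns a 16-byte RAW globally unique identifier;
-- RAWTOHEX renders it as a 32-digit hexadecimal string.
SELECT RAWTOHEX(SYS_GUID()) AS s_oid FROM DUAL;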

The OID clause can only be used for system-generated OIDs, not primary-key-based OIDs.

Example 11-11 demonstrates loading system-generated OIDs with the row objects.

Example 11-11 Loading OIDs

Control File Contents

   LOAD DATA
   INFILE 'sample.dat'
   INTO TABLE employees_v2 
1  OID (s_oid)
   FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
      (name    CHAR(30)                NULLIF name=BLANKS,
      age      INTEGER EXTERNAL(3)    NULLIF age=BLANKS,
      emp_id   INTEGER EXTERNAL(5),
2     s_oid    FILLER CHAR(32))

Data File (sample.dat)

3  Johny Quest, 18, 007, 21E978406D3E41FCE03400400B403BC3,
   Speed Racer, 16, 000, 21E978406D4441FCE03400400B403BC3,

Notes

  1. The OID clause specifies that the s_oid loader field contains the OID. The parentheses are required.

  2. If s_oid does not contain a valid hexadecimal number, then the particular record is rejected.

  3. The OID in the data file is a character string and is interpreted as a 32-digit hexadecimal number. The 32-digit hexadecimal number is later converted into a 16-byte RAW and stored in the object table.

Loading Object Tables with a Subtype

If an object table's row object is based on a nonfinal type, then SQL*Loader allows for any derived subtype to be loaded into the object table. As previously mentioned, the syntax required to load an object table with a derived subtype is almost identical to that used for a typical relational table. However, in this case, the actual subtype to be used must be named, so that SQL*Loader can determine if it is a valid subtype for the object table. This concept is illustrated in Example 11-12.

Example 11-12 Loading an Object Table with a Subtype

Object Type Definitions

CREATE TYPE employees_type AS OBJECT
  (name     VARCHAR2(30),
   age      NUMBER(3),
   emp_id   NUMBER(5)) not final;

CREATE TYPE hourly_emps_type UNDER employees_type
  (hours    NUMBER(3));

CREATE TABLE employees_v3 of employees_type;

Control File Contents

   LOAD DATA

   INFILE 'sample.dat'
   INTO TABLE employees_v3
1  TREAT AS hourly_emps_type
   FIELDS TERMINATED BY ','
     (name     CHAR(30),
      age      INTEGER EXTERNAL(3),
      emp_id   INTEGER EXTERNAL(5),
2     hours    INTEGER EXTERNAL(2))

Data File (sample.dat)

   Johny Quest, 18, 007, 32,
   Speed Racer, 16, 000, 20,

Notes

  1. The TREAT AS clause indicates that SQL*Loader should treat the object table as if it were declared to be of type hourly_emps_type, instead of its actual declared type, employee_type.

  2. The hours attribute is allowed here because it is an attribute of the hourly_emps_type. If the TREAT AS clause had not been specified, then this attribute would have resulted in an error, because it is not an attribute of the object table's declared type.

Loading REF Columns

SQL*Loader can load system-generated OID REF columns, primary-key-based REF columns, and unscoped REF columns that allow primary keys. For each of these, the way in which table names are specified is important, as described in the following section.

Specifying Table Names in a REF Clause


Note:

The information in this section applies only to environments in which the releases of both SQL*Loader and Oracle Database are 11g release 1 (11.1) or later. It does not apply to environments in which SQL*Loader, Oracle Database, or both are at an earlier release.

In the SQL*Loader control file, the description of the field corresponding to a REF column consists of the column name followed by a REF clause. The REF clause takes as arguments the table name and any attributes applicable to the type of REF column being loaded. The table names can either be specified dynamically (using filler fields) or as constants. The table name can also be specified with or without the schema name.

Whether the table name in the REF clause is given as a constant or by using a filler field, it is interpreted as case-sensitive. This can result in the following situations:

  • If user SCOTT creates a table named table2 in lowercase without quotation marks around the table name, then it can be used in a REF clause in any of the following ways:

    • REF(constant 'TABLE2', ...)

    • REF(constant '"TABLE2"', ...)

    • REF(constant 'SCOTT.TABLE2', ...)

  • If user SCOTT creates a table named "Table2" using quotation marks around a mixed-case name, then it can be used in a REF clause in any of the following ways:

    • REF(constant 'Table2', ...)

    • REF(constant '"Table2"', ...)

    • REF(constant 'SCOTT.Table2', ...)

In both of those situations, if constant is replaced with a filler field, then the same values as shown in the examples will also work if they are placed in the data section.

System-Generated OID REF Columns

SQL*Loader assumes, when loading system-generated REF columns, that the actual OIDs from which the REF columns are to be constructed are in the data file with the rest of the data. The description of the field corresponding to a REF column consists of the column name followed by the REF clause.

The REF clause takes as arguments the table name and an OID. Note that the arguments can be specified either as constants or dynamically (using filler fields). See "ref_spec" for the appropriate syntax. Example 11-13 demonstrates loading system-generated OID REF columns.

Example 11-13 Loading System-Generated REF Columns

Control File Contents

LOAD DATA
INFILE 'sample.dat'
INTO TABLE departments_alt_v2
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  (dept_no      CHAR(5),
   dept_name    CHAR(30),
1 dept_mgr     REF(t_name, s_oid),
   s_oid        FILLER CHAR(32),
   t_name       FILLER CHAR(30))

Data File (sample.dat)

22345, QuestWorld, 21E978406D3E41FCE03400400B403BC3, EMPLOYEES_V2,
23423, Geography, 21E978406D4441FCE03400400B403BC3, EMPLOYEES_V2,

Notes

  1. If the specified table does not exist, then the record is rejected. The dept_mgr field itself does not map to any field in the data file.

Primary Key REF Columns

To load a primary key REF column, the SQL*Loader control-file field description must provide the column name followed by a REF clause. The REF clause takes for arguments a comma-delimited list of field names and constant values. The first argument is the table name, followed by arguments that specify the primary key OID on which the REF column to be loaded is based. See "ref_spec" for the appropriate syntax.

SQL*Loader assumes that the ordering of the arguments matches the relative ordering of the columns making up the primary key OID in the referenced table. Example 11-14 demonstrates loading primary key REF columns.

Example 11-14 Loading Primary Key REF Columns

Control File Contents

LOAD DATA
INFILE 'sample.dat'
INTO TABLE departments_alt
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
 (dept_no       CHAR(5),
 dept_name      CHAR(30),
 dept_mgr       REF(CONSTANT 'EMPLOYEES', emp_id),
 emp_id         FILLER CHAR(32))

Data File (sample.dat)

22345, QuestWorld, 007,
23423, Geography, 000,

Unscoped REF Columns That Allow Primary Keys

An unscoped REF column that allows primary keys can reference both system-generated and primary key REFs. The syntax for loading into such a REF column is the same as if you were loading into a system-generated OID REF column or into a primary-key-based REF column. See Example 11-13, "Loading System-Generated REF Columns" and Example 11-14, "Loading Primary Key REF Columns".

The following restrictions apply when loading into an unscoped REF column that allows primary keys:

  • Only one type of REF can be referenced by this column during a single-table load, either system-generated or primary key, but not both. If you try to reference both types, then the data row will be rejected with an error message indicating that the referenced table name is invalid.

  • If you are loading unscoped primary key REFs to this column, then only one object table can be referenced during a single-table load. That is, to load unscoped primary key REFs, some pointing to object table X and some pointing to object table Y, you would have to do one of the following:

    • Perform two single-table loads.

    • Perform a single load using multiple INTO TABLE clauses for which the WHEN clause keys off some aspect of the data, such as the object table name for the unscoped primary key REF. For example:

      LOAD DATA 
      INFILE 'data.dat' 
      
      INTO TABLE orders_apk 
      APPEND 
      when CUST_TBL = "CUSTOMERS_PK" 
      fields terminated by "," 
      ( 
        order_no   position(1)  char, 
        cust_tbl FILLER     char, 
        cust_no  FILLER     char, 
        cust   REF (cust_tbl, cust_no) NULLIF order_no='0' 
      ) 
      
      INTO TABLE orders_apk 
      APPEND 
      when CUST_TBL = "CUSTOMERS_PK2" 
      fields terminated by "," 
      ( 
        order_no  position(1)  char, 
        cust_tbl FILLER     char, 
        cust_no  FILLER     char, 
        cust   REF (cust_tbl, cust_no) NULLIF order_no='0' 
      ) 
      

    If you do not use either of these methods, then the data row will be rejected with an error message indicating that the referenced table name is invalid.

  • Unscoped primary key REFs in collections are not supported by SQL*Loader.

  • If you are loading system-generated REFs into this REF column, then any limitations described in "System-Generated OID REF Columns" also apply here.

  • If you are loading primary key REFs into this REF column, then any limitations described in "Primary Key REF Columns" also apply here.


    Note:

    For an unscoped REF column that allows primary keys, SQL*Loader takes the first valid object table parsed (either from the REF directive or from the data rows) and uses that object table's OID type to determine the REF type that can be referenced in that single-table load.

Loading LOBs

A LOB is a large object type. SQL*Loader supports the following types of LOBs:

  • BLOB: an internal LOB containing unstructured binary data

  • CLOB: an internal LOB containing character data

  • NCLOB: an internal LOB containing characters from a national character set

  • BFILE: a BLOB stored outside of the database tablespaces in a server-side operating system file

LOBs can be column datatypes, and except for NCLOB, they can be an object's attribute datatypes. LOBs can have actual values, they can be null, or they can be empty. SQL*Loader creates an empty LOB when there is a 0-length field to store in the LOB. (This differs from other datatypes, for which SQL*Loader sets the column to NULL for any 0-length string.) This means that the only way to load NULL values into a LOB column is to use the NULLIF clause.
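
For example, the following control file fragment (a minimal sketch modeled on the person_table examples later in this section) uses NULLIF so that a blank resume field produces a null LOB rather than an empty one:

LOAD DATA
INFILE 'sample.dat'
INTO TABLE person_table
FIELDS TERMINATED BY ','
   (name       CHAR(25),
    "RESUME"   CHAR(507) NULLIF "RESUME"=BLANKS)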

XML columns are columns declared to be of type SYS.XMLTYPE. SQL*Loader treats XML columns as if they were CLOBs. All of the methods described in the following sections for loading LOB data from the primary data file or from LOBFILEs are applicable to loading XML columns.
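
For example, the following sketch assumes a hypothetical purchase_orders table with an XMLTYPE column named po_document and loads each document from a separate file named in the data file:

LOAD DATA
INFILE 'po.dat'
INTO TABLE purchase_orders
FIELDS TERMINATED BY ','
   (po_id        CHAR(10),
    xml_fname    FILLER CHAR(80),
    po_document  LOBFILE(xml_fname) TERMINATED BY EOF)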


Note:

You cannot specify a SQL string for LOB fields. This is true even if you specify LOBFILE_spec.

Because LOBs can be quite large, SQL*Loader can load LOB data from either a primary data file (in line with the rest of the data) or from LOBFILEs, as described in the following sections:


See Also:

Oracle Database SQL Language Reference for more information about large object (LOB) data types

Loading LOB Data from a Primary Data File

To load internal LOBs (BLOBs, CLOBs, and NCLOBs) or XML columns from a primary data file, you can use the following standard SQL*Loader formats:

  • Predetermined size fields

  • Delimited fields

  • Length-value pair fields

Each of these formats is described in the following sections.

LOB Data in Predetermined Size Fields

This is a very fast and conceptually simple format in which to load LOBs, as shown in Example 11-15.


Note:

Because the LOBs you are loading may not be of equal size, you can use whitespace to pad the LOB data to make the LOBs all of equal length within a particular data field.

To load LOBs using this format, you should use either CHAR or RAW as the loading datatype.

Example 11-15 Loading LOB Data in Predetermined Size Fields

Control File Contents

LOAD DATA 
INFILE 'sample.dat' "fix 501"
INTO TABLE person_table
   (name       POSITION(01:21)       CHAR,
1  "RESUME"    POSITION(23:500)      CHAR   DEFAULTIF "RESUME"=BLANKS)

Data File (sample.dat)

Julia Nayer      Julia Nayer
             500 Example Parkway
             jnayer@us.example.com ...

Notes

  1. Because the DEFAULTIF clause is used, if the data field containing the resume is empty, then the result is an empty LOB rather than a null LOB. However, if a NULLIF clause had been used instead of DEFAULTIF, then the empty data field would be null.

    You can use SQL*Loader datatypes other than CHAR to load LOBs. For example, when loading BLOBs, you would probably want to use the RAW datatype.

LOB Data in Delimited Fields

This format handles LOBs of different sizes within the same column (data file field) without a problem. However, this added flexibility can affect performance because SQL*Loader must scan through the data, looking for the delimiter string.

As with single-character delimiters, when you specify string delimiters, you should consider the character set of the data file. When the character set of the data file is different from that of the control file, you can specify the delimiters in hexadecimal notation (that is, X'hexadecimal string'). If the delimiters are specified in hexadecimal notation, then the specification must consist of characters that are valid in the character set of the input data file. In contrast, if hexadecimal notation is not used, then the delimiter specification is considered to be in the client's (that is, the control file's) character set. In this case, the delimiter is converted into the data file's character set before SQL*Loader searches for the delimiter in the data file.
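
For instance, the following fragment (a sketch only; the table layout mirrors Example 11-16) specifies the enclosure delimiters '<lob>' and '</lob>' in hexadecimal notation, so they are interpreted in the character set of the data file:

-- The hexadecimal values shown assume an ASCII-based data file character set.
LOAD DATA
INFILE 'sample.dat'
INTO TABLE person_table
FIELDS TERMINATED BY ','
   (name       CHAR(25),
    "RESUME"   CHAR(507) ENCLOSED BY X'3C6C6F623E' AND X'3C2F6C6F623E')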

Note the following:

  • Stutter syntax is supported with string delimiters (that is, the closing enclosure delimiter can be stuttered).

  • Leading whitespaces in the initial multicharacter enclosure delimiter are not allowed.

  • If a field is terminated by WHITESPACE, then the leading whitespaces are trimmed.


    Note:

    SQL*Loader defaults to 255 bytes when moving CLOB data, but a value of up to 2 gigabytes can be specified. For a delimited field, if a length is specified, then that length is used as a maximum. If no maximum is specified, then it defaults to 255 bytes. For a CHAR field that is delimited and is also greater than 255 bytes, you must specify a maximum length. See "CHAR" for more information about the CHAR datatype.

Example 11-16 shows an example of loading LOB data in delimited fields.

Example 11-16 Loading LOB Data in Delimited Fields

Control File Contents

LOAD DATA 
INFILE 'sample.dat' "str '|'"
INTO TABLE person_table
FIELDS TERMINATED BY ','
   (name        CHAR(25),
1  "RESUME"     CHAR(507) ENCLOSED BY '<startlob>' AND '<endlob>')

Data File (sample.dat)

Julia Nayer,<startlob>        Julia Nayer
                          500 Example Parkway
                          jnayer@us.example.com ...   <endlob>
2  |Bruce Ernst, .......

Notes

  1. <startlob> and <endlob> are the enclosure strings. With the default byte-length semantics, the maximum length for a LOB that can be read using CHAR(507) is 507 bytes. If character-length semantics were used, then the maximum would be 507 characters. See "Character-Length Semantics".

  2. If the record separator '|' had been placed right after <endlob> and followed with the newline character, then the newline would have been interpreted as part of the next record. An alternative would be to make the newline part of the record separator (for example, '|\n' or, in hexadecimal notation, X'7C0A').

LOB Data in Length-Value Pair Fields

You can use VARCHAR, VARCHARC, or VARRAW datatypes to load LOB data organized in length-value pair fields. This method of loading provides better performance than using delimited fields, but can reduce flexibility (for example, you must know the LOB length for each LOB before loading). Example 11-17 demonstrates loading LOB data in length-value pair fields.

Example 11-17 Loading LOB Data in Length-Value Pair Fields

Control File Contents

  LOAD DATA 
1 INFILE 'sample.dat' "str '<endrec>\n'"
  INTO TABLE person_table
  FIELDS TERMINATED BY ','
     (name       CHAR(25),
2    "RESUME"    VARCHARC(3,500))

Data File (sample.dat)

  Julia Nayer,479                Julia Nayer
                             500 Example Parkway
                             jnayer@us.example.com
                                    ... <endrec>
3    Bruce Ernst,000<endrec>

Notes

  1. If the backslash escape character is not supported, then the string used as a record separator in the example could be expressed in hexadecimal notation.

  2. "RESUME" is a field that corresponds to a CLOB column. In the control file, it is a VARCHARC, whose length field is 3 bytes long and whose maximum size is 500 bytes (with byte-length semantics). If character-length semantics were used, then the length would be 3 characters and the maximum size would be 500 characters. See "Character-Length Semantics".

  3. The length subfield of the VARCHARC is 0 (the value subfield is empty). Consequently, the LOB instance is initialized to empty.

Loading LOB Data from LOBFILEs

LOB data can be lengthy enough so that it makes sense to load it from a LOBFILE instead of from a primary data file. In LOBFILEs, LOB data instances are still considered to be in fields (predetermined size, delimited, length-value), but these fields are not organized into records (the concept of a record does not exist within LOBFILEs). Therefore, the processing overhead of dealing with records is avoided. This type of organization of data is ideal for LOB loading.

There is no requirement that a LOB from a LOBFILE fit in memory. SQL*Loader reads LOBFILEs in 64 KB chunks.

In LOBFILEs the data can be in any of the following types of fields:

  • A single LOB field into which the entire contents of a file can be read

  • Predetermined size fields (fixed-length fields)

  • Delimited fields (that is, TERMINATED BY or ENCLOSED BY)

    The clause PRESERVE BLANKS is not applicable to fields read from a LOBFILE.

  • Length-value pair fields (variable-length fields)

    To load data from this type of field, use the VARRAW, VARCHAR, or VARCHARC SQL*Loader datatypes.

See "Examples of Loading LOB Data from LOBFILEs" for examples of using each of these field types. All of the previously mentioned field types can be used to load XML columns.

See "lobfile_spec" for LOBFILE syntax.

Dynamic Versus Static LOBFILE Specifications

You can specify LOBFILEs either statically (the name of the file is specified in the control file) or dynamically (a FILLER field is used as the source of the file name). In either case, if the LOBFILE is not terminated by EOF, then when the end of the LOBFILE is reached, the file is closed and further attempts to read data from that file produce results equivalent to reading data from an empty field.

However, if you have a LOBFILE that is terminated by EOF, then the entire file is always returned on each attempt to read data from that file.

You should not specify the same LOBFILE as the source of two different fields. If you do, then the two fields typically read the data independently.

Examples of Loading LOB Data from LOBFILEs

This section contains examples of loading data from different types of fields in LOBFILEs.

One LOB per File

In Example 11-18, each LOBFILE is the source of a single LOB. To load LOB data that is organized in this way, the column or field name is followed by the LOBFILE datatype specifications.

Example 11-18 Loading LOB DATA with One LOB per LOBFILE

Control File Contents

LOAD DATA 
INFILE 'sample.dat'
   INTO TABLE person_table
   FIELDS TERMINATED BY ','
   (name      CHAR(20),
1  ext_fname    FILLER CHAR(40),
2  "RESUME"     LOBFILE(ext_fname) TERMINATED BY EOF)

Data File (sample.dat)

Johny Quest,jqresume.txt,
Speed Racer,'/private/sracer/srresume.txt',

Secondary Data File (jqresume.txt)

             Johny Quest
         500 Oracle Parkway
            ...

Secondary Data File (srresume.txt)

         Speed Racer
     400 Oracle Parkway
        ...

Notes

  1. The filler field is mapped to the 40-byte data field, which is read using the SQL*Loader CHAR datatype. This assumes the use of default byte-length semantics. If character-length semantics were used, then the field would be mapped to a 40-character data field.

  2. SQL*Loader gets the LOBFILE name from the ext_fname filler field. It then loads the data from the LOBFILE (using the CHAR datatype) from the first byte to the EOF character. If no existing LOBFILE is specified, then the "RESUME" field is initialized to empty.

Predetermined Size LOBs

In Example 11-19, you specify the size of the LOBs to be loaded into a particular column in the control file. During the load, SQL*Loader assumes that any LOB data loaded into that particular column is of the specified size. The predetermined size of the fields allows the data-parser to perform optimally. However, it is often difficult to guarantee that all LOBs are the same size.

Example 11-19 Loading LOB Data Using Predetermined Size LOBs

Control File Contents

LOAD DATA 
INFILE 'sample.dat'
INTO TABLE person_table
FIELDS TERMINATED BY ','
   (name     CHAR(20),
1  "RESUME"    LOBFILE(CONSTANT '/usr/private/jquest/jqresume.txt')
               CHAR(2000))

Data File (sample.dat)

Johny Quest,
Speed Racer,

Secondary Data File (jqresume.txt)

             Johny Quest
         500 Oracle Parkway
            ...
             Speed Racer
         400 Oracle Parkway
            ...

Notes

  1. This entry specifies that SQL*Loader load 2000 bytes of data from the jqresume.txt LOBFILE, using the CHAR datatype, starting with the byte following the byte loaded last during the current loading session. This assumes the use of the default byte-length semantics. If character-length semantics were used, then SQL*Loader would load 2000 characters of data, starting from the first character after the last-loaded character. See "Character-Length Semantics".

Delimited LOBs

In Example 11-20, the LOB data instances in the LOBFILE are delimited. In this format, loading different size LOBs into the same column is not a problem. However, this added flexibility can affect performance, because SQL*Loader must scan through the data, looking for the delimiter string.

Example 11-20 Loading LOB Data Using Delimited LOBs

Control File Contents

LOAD DATA 
INFILE 'sample.dat'
INTO TABLE person_table
FIELDS TERMINATED BY ','
   (name     CHAR(20),
1  "RESUME"    LOBFILE( CONSTANT 'jqresume') CHAR(2000) 
               TERMINATED BY "<endlob>\n")

Data File (sample.dat)

Johny Quest,
Speed Racer,

Secondary Data File (jqresume.txt)

             Johny Quest
         500 Oracle Parkway
            ... <endlob>
             Speed Racer
         400 Oracle Parkway
            ... <endlob>

Notes

  1. Because a maximum length of 2000 is specified for CHAR, SQL*Loader knows what to expect as the maximum length of the field, which can result in memory usage optimization. If you choose to specify a maximum length, then you should be sure not to underestimate its value. The TERMINATED BY clause specifies the string that terminates the LOBs. Alternatively, you could use the ENCLOSED BY clause. The ENCLOSED BY clause allows a bit more flexibility as to the relative positioning of the LOBs in the LOBFILE (the LOBs in the LOBFILE need not be sequential).

Length-Value Pair Specified LOBs

In Example 11-21 each LOB in the LOBFILE is preceded by its length. You could use VARCHAR, VARCHARC, or VARRAW datatypes to load LOB data organized in this way.

This method of loading can provide better performance over delimited LOBs, but at the expense of some flexibility (for example, you must know the LOB length for each LOB before loading).

Example 11-21 Loading LOB Data Using Length-Value Pair Specified LOBs

Control File Contents

LOAD DATA 
INFILE 'sample.dat'
INTO TABLE person_table
FIELDS TERMINATED BY ','
   (name          CHAR(20),
1  "RESUME"       LOBFILE(CONSTANT 'jqresume') VARCHARC(4,2000))

Data File (sample.dat)

Johny Quest,
Speed Racer,

Secondary Data File (jqresume.txt)

2      0501Johny Quest
       500 Oracle Parkway
          ... 
3      0000   

Notes

  1. The entry VARCHARC(4,2000) tells SQL*Loader that the LOBs in the LOBFILE are in length-value pair format and that the first 4 bytes should be interpreted as the length. The value of 2000 tells SQL*Loader that the maximum size of the field is 2000 bytes. This assumes the use of the default byte-length semantics. If character-length semantics were used, then the first 4 characters would be interpreted as the length in characters. The maximum size of the field would be 2000 characters. See "Character-Length Semantics".

  2. The entry 0501 preceding Johny Quest tells SQL*Loader that the LOB consists of the next 501 characters.

  3. This entry specifies an empty (not null) LOB.

Considerations When Loading LOBs from LOBFILEs

Keep in mind the following when you load data using LOBFILEs:

  • Only LOBs and XML columns can be loaded from LOBFILEs.

  • The failure to load a particular LOB does not result in the rejection of the record containing that LOB. Instead, you will have a record that contains an empty LOB. In the case of an XML column, a null value will be inserted if there is a failure loading the LOB.

  • It is not necessary to specify the maximum length of a field corresponding to a LOB column. If a maximum length is specified, then SQL*Loader uses it as a hint to optimize memory usage. Therefore, it is important that the maximum length specification does not understate the true maximum length.

  • You cannot supply a position specification (pos_spec) when loading data from a LOBFILE.

  • NULLIF or DEFAULTIF field conditions cannot be based on fields read from LOBFILEs.

  • If a nonexistent LOBFILE is specified as a data source for a particular field, then that field is initialized to empty. If the concept of empty does not apply to the particular field type, then the field is initialized to null.

  • Table-level delimiters are not inherited by fields that are read from a LOBFILE.

  • When loading an XML column or referencing a LOB column in a SQL expression in conventional path mode, SQL*Loader must process the LOB data as a temporary LOB. To ensure the best load performance possible in these cases, refer to the guidelines concerning temporary LOB performance in Oracle Database SecureFiles and Large Objects Developer's Guide.

Loading BFILE Columns

The BFILE datatype stores unstructured binary data in operating system files outside the database. A BFILE column or attribute stores a file locator that points to the external file containing the data. The file to be loaded as a BFILE does not have to exist at the time of loading; it can be created later. SQL*Loader assumes that the necessary directory objects have already been created (a logical alias name for a physical directory on the server's file system). For more information, see the Oracle Database SecureFiles and Large Objects Developer's Guide.
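
For example, the scott_dir1 alias used in Example 11-22 might be created as follows (a sketch; the operating system path is hypothetical):

CREATE DIRECTORY scott_dir1 AS '/home/scott/pictures';
GRANT READ ON DIRECTORY scott_dir1 TO scott;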

A control file field corresponding to a BFILE column consists of a column name followed by the BFILE clause. The BFILE clause takes as arguments a directory object (the server_directory alias) name followed by a BFILE name. Both arguments can be provided as string constants, or they can be dynamically loaded through some other field. See the Oracle Database SQL Language Reference for more information.

In the next two examples of loading BFILEs, Example 11-22 has only the file name specified dynamically, while Example 11-23 demonstrates specifying both the BFILE and the directory object dynamically.

Example 11-22 Loading Data Using BFILEs: Only File Name Specified Dynamically

Control File Contents

LOAD DATA
INFILE sample.dat
INTO TABLE planets
FIELDS TERMINATED BY ','
   (pl_id    CHAR(3), 
   pl_name   CHAR(20),
   fname     FILLER CHAR(30),
1  pl_pict   BFILE(CONSTANT "scott_dir1", fname))

Data File (sample.dat)

1,Mercury,mercury.jpeg,
2,Venus,venus.jpeg,
3,Earth,earth.jpeg,

Notes

  1. The directory name is in quotation marks; therefore, the string is used as is and is not capitalized.

Example 11-23 Loading Data Using BFILEs: File Name and Directory Specified Dynamically

Control File Contents

LOAD DATA
INFILE sample.dat
INTO TABLE planets
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
   (pl_id    NUMBER(4), 
   pl_name   CHAR(20), 
   fname     FILLER CHAR(30),
1  dname     FILLER CHAR(20),
   pl_pict   BFILE(dname, fname) )

Data File (sample.dat)

1, Mercury, mercury.jpeg, scott_dir1,
2, Venus, venus.jpeg, scott_dir1,
3, Earth, earth.jpeg, scott_dir2,

Notes

  1. dname is mapped to the data file field containing the directory name corresponding to the file being loaded.

Loading Collections (Nested Tables and VARRAYs)

Like LOBs, collections can be loaded either from a primary data file (data inline) or from secondary data files (data out of line). See "Secondary Data Files (SDFs)" for details about SDFs.

When you load collection data, a mechanism must exist by which SQL*Loader can tell when the data belonging to a particular collection instance has ended. You can achieve this in two ways:

In the control file, collections are described similarly to column objects. See "Loading Column Objects". There are some differences:

Restrictions in Nested Tables and VARRAYs

The following restrictions exist for nested tables and VARRAYs:

  • A field_list cannot contain a collection_fld_spec.

  • A col_obj_spec nested within a VARRAY cannot contain a collection_fld_spec.

  • The column_name specified as part of the field_list must be the same as the column_name preceding the VARRAY parameter.

Also, be aware that if you are loading into a table containing nested tables, then SQL*Loader will not automatically split the load into multiple loads and generate a set ID.

Example 11-24 demonstrates loading a VARRAY and a nested table.

Example 11-24 Loading a VARRAY and a Nested Table

Control File Contents

   LOAD DATA
   INFILE 'sample.dat' "str '\n' "
   INTO TABLE dept
   REPLACE
   FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
   (
     dept_no       CHAR(3),
     dname         CHAR(25) NULLIF dname=BLANKS,
1    emps          VARRAY TERMINATED BY ':'
     (
       emps        COLUMN OBJECT
       (
         name      CHAR(30),
         age       INTEGER EXTERNAL(3),
2        emp_id    CHAR(7) NULLIF emps.emps.emp_id=BLANKS
     )
   ),
3   proj_cnt      FILLER CHAR(3),
4   projects      NESTED TABLE SDF (CONSTANT "pr.txt" "fix 57") COUNT (proj_cnt)
  (
    projects    COLUMN OBJECT
    (
      project_id        POSITION (1:5) INTEGER EXTERNAL(5),
      project_name      POSITION (7:30) CHAR 
                        NULLIF projects.projects.project_name = BLANKS
    )
  )
)

Data File (sample.dat)

 101,MATH,"Napier",28,2828,"Euclid", 123,9999:0
 210,"Topological Transforms",:2

Secondary Data File (SDF) (pr.txt)

21034 Topological Transforms
77777 Impossible Proof

Notes

  1. The TERMINATED BY clause specifies the VARRAY instance terminator (note that no COUNT clause is used).

  2. Full name field references (using dot notation) resolve the field name conflict created by the presence of this filler field.

  3. proj_cnt is a filler field used as an argument to the COUNT clause.

  4. This entry specifies the following:

    • An SDF called pr.txt as the source of data. It also specifies a fixed-record format within the SDF.

    • If COUNT is 0, then the collection is initialized to empty. Another way to initialize a collection to empty is to use a DEFAULTIF clause. The main field name corresponding to the nested table field description is the same as the field name of its nested nonfiller-field, specifically, the name of the column object field description.

Secondary Data Files (SDFs)

Secondary data files (SDFs) are similar in concept to primary data files. Like primary data files, SDFs are a collection of records, and each record is made up of fields. The SDFs are specified on a per control-file-field basis. They are useful when you load large nested tables and VARRAYs.


Note:

Only a collection_fld_spec can name an SDF as its data source.

SDFs are specified using the SDF parameter. The SDF parameter can be followed by either the file specification string, or a FILLER field that is mapped to a data field containing one or more file specification strings.

As for a primary data file, the following can be specified for each SDF:

  • The record format (fixed, stream, or variable). Also, if stream record format is used, then you can specify the record separator.

  • The record size.

  • The character set for an SDF can be specified using the CHARACTERSET clause (see "Handling Different Character Encoding Schemes").

  • A default delimiter (using the delimiter specification) for the fields that inherit a particular SDF specification (all member fields or attributes of the collection that contain the SDF specification, with the exception of the fields containing their own LOBFILE specification).

Also note the following regarding SDFs:

  • If a nonexistent SDF is specified as a data source for a particular field, then that field is initialized to empty. If the concept of empty does not apply to the particular field type, then the field is initialized to null.

  • Table-level delimiters are not inherited by fields that are read from an SDF.

  • To load SDFs larger than 64 KB, you must use the READSIZE parameter to specify a larger physical record size. You can specify the READSIZE parameter either from the command line or as part of an OPTIONS clause.
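
    For example, the following command line raises the read buffer to 20 MB (the control file name and the size are illustrative); the same value could instead be supplied in an OPTIONS (READSIZE=20000000) clause at the top of the control file:

    sqlldr USERID=scott CONTROL=dept.ctl READSIZE=20000000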

Dynamic Versus Static SDF Specifications

You can specify SDFs either statically (you specify the actual name of the file) or dynamically (you use a FILLER field as the source of the file name). In either case, when the EOF of an SDF is reached, the file is closed and further attempts at reading data from that particular file produce results equivalent to reading data from an empty field.

In a dynamic secondary file specification, this behavior is slightly different. Whenever the specification changes to reference a new file, the old file is closed, and the data is read from the beginning of the newly referenced file.

The dynamic switching of the data source files has a resetting effect. For example, when SQL*Loader switches from the current file to a previously opened file, the previously opened file is reopened, and the data is read from the beginning of the file.

You should not specify the same SDF as the source of two different fields. If you do, then the two fields will typically read the data independently.

Loading a Parent Table Separately from Its Child Table

When you load a table that contains a nested table column, it may be possible to load the parent table separately from the child table. You can load the parent and child tables independently if the SIDs (system-generated or user-defined) are already known at the time of the load (that is, the SIDs are in the data file with the data).

Example 11-25 illustrates how to load a parent table with user-provided SIDs.

Example 11-25 Loading a Parent Table with User-Provided SIDs

Control File Contents

   LOAD DATA
   INFILE 'sample.dat' "str '|\n' "
   INTO TABLE dept
   FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
   TRAILING NULLCOLS
   ( dept_no   CHAR(3),
   dname       CHAR(20) NULLIF dname=BLANKS ,
   mysid       FILLER CHAR(32),
1  projects    SID(mysid))

Data File (sample.dat)

101,Math,21E978407D4441FCE03400400B403BC3,|
210,"Topology",21E978408D4441FCE03400400B403BC3,|

Notes

  1. mysid is a filler field that is mapped to a data file field containing the actual set IDs and is supplied as an argument to the SID clause.

Example 11-26 illustrates how to load a child table (the nested table storage table) with user-provided SIDs.

Example 11-26 Loading a Child Table with User-Provided SIDs

Control File Contents

   LOAD DATA
   INFILE 'sample.dat'
   INTO TABLE dept
   FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
   TRAILING NULLCOLS
1  SID(sidsrc)
   (project_id     INTEGER EXTERNAL(5),
   project_name   CHAR(20) NULLIF project_name=BLANKS,
   sidsrc FILLER  CHAR(32))

Data File (sample.dat)

21034, "Topological Transforms", 21E978407D4441FCE03400400B403BC3,
77777, "Impossible Proof", 21E978408D4441FCE03400400B403BC3,

Notes

  1. The table-level SID clause tells SQL*Loader that it is loading the storage table for nested tables. sidsrc is the filler field name that is the source of the real set IDs.

Memory Issues When Loading VARRAY Columns

The following list describes some issues to keep in mind when you load VARRAY columns:

  • VARRAYs are created in the client's memory before they are loaded into the database. Each element of a VARRAY requires 4 bytes of client memory before it can be loaded into the database. Therefore, when you load a VARRAY with a thousand elements, you will require at least 4000 bytes of client memory for each VARRAY instance before you can load the VARRAYs into the database. In many cases, SQL*Loader requires two to three times that amount of memory to successfully construct and load a VARRAY.

  • The BINDSIZE parameter specifies the amount of memory allocated by SQL*Loader for loading records. Given the value specified for BINDSIZE, SQL*Loader takes into consideration the size of each field being loaded, and determines the number of rows it can load in one transaction. The larger the number of rows, the fewer transactions, resulting in better performance. But if the amount of memory on your system is limited, then at the expense of performance, you can specify a lower value for ROWS than SQL*Loader calculated.

  • Loading very large VARRAYs or a large number of smaller VARRAYs could cause you to run out of memory during the load. If this happens, then specify a smaller value for BINDSIZE or ROWS and retry the load.
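
    For example, a load that runs out of memory while constructing VARRAYs could be retried with smaller values, such as the following (the control file name and the values are illustrative):

    sqlldr USERID=scott CONTROL=dept.ctl BINDSIZE=1000000 ROWS=100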


18 DBNEWID Utility

DBNEWID is a database utility that can change the internal database identifier (DBID) and the database name (DBNAME) for an operational database.

This chapter contains the following sections:

What Is the DBNEWID Utility?

Before the introduction of the DBNEWID utility, you could manually create a copy of a database and give it a new database name (DBNAME) by re-creating the control file. However, you could not give the database a new identifier (DBID). The DBID is an internal, unique identifier for a database. Because Recovery Manager (RMAN) distinguishes databases by DBID, you could not register a seed database and a manually copied database together in the same RMAN repository. The DBNEWID utility solves this problem by allowing you to change any of the following:

  • Only the DBID of a database

  • Only the DBNAME of a database

  • Both the DBNAME and the DBID of a database

Ramifications of Changing the DBID and DBNAME

Changing the DBID of a database is a serious procedure. When the DBID of a database is changed, all previous backups and archived logs of the database become unusable. This is similar to creating a database except that the data is already in the data files. After you change the DBID, backups and archive logs that were created before the change can no longer be used because they still have the original DBID, which does not match the current DBID. You must open the database with the RESETLOGS option, which re-creates the online redo logs and resets their sequence to 1 (see Oracle Database Administrator's Guide). Consequently, you should make a backup of the whole database immediately after changing the DBID.

Changing the DBNAME without changing the DBID does not require you to open with the RESETLOGS option, so database backups and archived logs are not invalidated. However, changing the DBNAME does have consequences. You must change the DB_NAME initialization parameter after a database name change to reflect the new name. Also, you may have to re-create the Oracle password file. If you restore an old backup of the control file (before the name change), then you should use the initialization parameter file and password file from before the database name change.


Note:

Do not change the DBID or DBNAME of a database if you are using a capture process to capture changes to the database. See Oracle Streams Concepts and Administration for more information about capture processes.

Considerations for Global Database Names

If you are dealing with a database in a distributed database system, then each database should have a unique global database name. The DBNEWID utility does not change global database names. This can only be done with the SQL ALTER DATABASE statement, for which the syntax is as follows:

ALTER DATABASE RENAME GLOBAL_NAME TO newname.domain;

The global database name is made up of a database name and a domain, which are determined by the DB_NAME and DB_DOMAIN initialization parameters when the database is first created.

The following example changes the database name to sales in the domain us.example.com:

ALTER DATABASE RENAME GLOBAL_NAME TO sales.us.example.com

You would do this after you finished using DBNEWID to change the database name.


See Also:

Oracle Database Administrator's Guide for more information about global database names

Changing the DBID and DBNAME of a Database

This section contains these topics:

Changing the DBID and Database Name

The following steps describe how to change the DBID of a database. Optionally, you can change the database name as well.

  1. Ensure that you have a recoverable whole database backup.

  2. Ensure that the target database is mounted but not open, and that it was shut down consistently before mounting. For example:

    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    
  3. Invoke the DBNEWID utility on the command line, specifying a valid user (TARGET) that has the SYSDBA privilege (you will be prompted for a password):

    % nid TARGET=SYS
    

    To change the database name in addition to the DBID, also specify the DBNAME parameter on the command line (you will be prompted for a password). The following example changes the database name to test_db:

    % nid TARGET=SYS DBNAME=test_db
    

    The DBNEWID utility performs validations in the headers of the data files and control files before attempting I/O to the files. If validation is successful, then DBNEWID prompts you to confirm the operation (unless you specify a log file, in which case it does not prompt), changes the DBID (and the DBNAME, if specified, as in this example) for each data file, including offline normal and read-only data files, shuts down the database, and then exits. The following is an example of what the output for this would look like:

    .
    .
    .
    Connected to database PROD (DBID=86997811)
    .
    .
    .
    Control Files in database:
        /oracle/TEST_DB/data/cf1.dbf
        /oracle/TEST_DB/data/cf2.dbf
    
    The following datafiles are offline clean:
        /oracle/TEST_DB/data/tbs_61.dbf (23)
        /oracle/TEST_DB/data/tbs_62.dbf (24)
        /oracle/TEST_DB/data/temp3.dbf (3)
    These files must be writable by this utility.
    
    The following datafiles are read-only:
        /oracle/TEST_DB/data/tbs_51.dbf (15)
        /oracle/TEST_DB/data/tbs_52.dbf (16)
        /oracle/TEST_DB/data/tbs_53.dbf (22)
    These files must be writable by this utility.
    
    Changing database ID from 86997811 to 1250654267
    Changing database name from PROD to TEST_DB
        Control File /oracle/TEST_DB/data/cf1.dbf - modified
        Control File /oracle/TEST_DB/data/cf2.dbf - modified
        Datafile /oracle/TEST_DB/data/tbs_01.dbf - dbid changed, wrote new name
        Datafile /oracle/TEST_DB/data/tbs_ax1.dbf - dbid changed, wrote new name
        Datafile /oracle/TEST_DB/data/tbs_02.dbf - dbid changed, wrote new name
        Datafile /oracle/TEST_DB/data/tbs_11.dbf - dbid changed, wrote new name
        Datafile /oracle/TEST_DB/data/tbs_12.dbf - dbid changed, wrote new name
        Datafile /oracle/TEST_DB/data/temp1.dbf - dbid changed, wrote new name
        Control File /oracle/TEST_DB/data/cf1.dbf - dbid changed, wrote new name
        Control File /oracle/TEST_DB/data/cf2.dbf - dbid changed, wrote new name
        Instance shut down
    
    Database name changed to TEST_DB.
    Modify parameter file and generate a new password file before restarting.
    Database ID for database TEST_DB changed to 1250654267.
    All previous backups and archived redo logs for this database are unusable.
    Database has been shutdown, open database with RESETLOGS option.
    Successfully changed database name and ID.
    DBNEWID - Completed successfully.
    

    If validation is not successful, then DBNEWID terminates and leaves the target database intact, as shown in the following sample output. You can open the database, fix the error, and then either resume the DBNEWID operation or continue using the database without changing its DBID.

    .
    .
    .
    Connected to database PROD (DBID=86997811)
    .
    .
    .  
    Control Files in database:
        /oracle/TEST_DB/data/cf1.dbf
        /oracle/TEST_DB/data/cf2.dbf
     
    The following datafiles are offline clean:
        /oracle/TEST_DB/data/tbs_61.dbf (23)
        /oracle/TEST_DB/data/tbs_62.dbf (24)
        /oracle/TEST_DB/data/temp3.dbf (3)
    These files must be writable by this utility.
     
    The following datafiles are read-only:
        /oracle/TEST_DB/data/tbs_51.dbf (15)
        /oracle/TEST_DB/data/tbs_52.dbf (16)
        /oracle/TEST_DB/data/tbs_53.dbf (22)
    These files must be writable by this utility.
     
    The following datafiles are offline immediate:
        /oracle/TEST_DB/data/tbs_71.dbf (25)
        /oracle/TEST_DB/data/tbs_72.dbf (26)
     
    NID-00122: Database should have no offline immediate datafiles
      
    Change of database name failed during validation - database is intact.
    DBNEWID - Completed with validation errors.
    
  4. Mount the database. For example:

    STARTUP MOUNT
    
  5. Open the database in RESETLOGS mode and resume normal use. For example:

    ALTER DATABASE OPEN RESETLOGS;
    

    Make a new database backup. Because you reset the online redo logs, the old backups and archived logs are no longer usable in the current incarnation of the database.
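
    For example, a whole database backup could be taken with Recovery Manager (a sketch only; use whatever backup method your site has standardized on):

    % rman TARGET /
    RMAN> BACKUP DATABASE;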

Changing Only the Database ID

To change the database ID without changing the database name, follow the steps in "Changing the DBID and Database Name", but in Step 3 do not specify the optional database name (DBNAME). The following is an example of the type of output that is generated when only the database ID is changed.

.
.
.
Connected to database PROD (DBID=86997811)
.
.
.  
Control Files in database:
    /oracle/TEST_DB/data/cf1.dbf
    /oracle/TEST_DB/data/cf2.dbf
 
The following datafiles are offline clean:
    /oracle/TEST_DB/data/tbs_61.dbf (23)
    /oracle/TEST_DB/data/tbs_62.dbf (24)
    /oracle/TEST_DB/data/temp3.dbf (3)
These files must be writable by this utility.
 
The following datafiles are read-only:
    /oracle/TEST_DB/data/tbs_51.dbf (15)
    /oracle/TEST_DB/data/tbs_52.dbf (16)
    /oracle/TEST_DB/data/tbs_53.dbf (22)
These files must be writable by this utility.
 
Changing database ID from 86997811 to 4004383693
    Control File /oracle/TEST_DB/data/cf1.dbf - modified
    Control File /oracle/TEST_DB/data/cf2.dbf - modified
    Datafile /oracle/TEST_DB/data/tbs_01.dbf - dbid changed
    Datafile /oracle/TEST_DB/data/tbs_ax1.dbf - dbid changed
    Datafile /oracle/TEST_DB/data/tbs_02.dbf - dbid changed
    Datafile /oracle/TEST_DB/data/tbs_11.dbf - dbid changed
    Datafile /oracle/TEST_DB/data/tbs_12.dbf - dbid changed
    Datafile /oracle/TEST_DB/data/temp1.dbf - dbid changed
    Control File /oracle/TEST_DB/data/cf1.dbf - dbid changed
    Control File /oracle/TEST_DB/data/cf2.dbf - dbid changed
    Instance shut down
 
Database ID for database TEST_DB changed to 4004383693.
All previous backups and archived redo logs for this database are unusable.
Database has been shutdown, open database with RESETLOGS option.
Successfully changed database ID.
DBNEWID - Completed successfully.

Changing Only the Database Name

The following steps describe how to change the database name without changing the DBID.

  1. Ensure that you have a recoverable whole database backup.

  2. Ensure that the target database is mounted but not open, and that it was shut down consistently before mounting. For example:

    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    
  3. Invoke the utility on the command line, specifying a valid user with the SYSDBA privilege (you will be prompted for a password). You must specify both the DBNAME and SETNAME parameters. This example changes the name to test_db:

    % nid TARGET=SYS DBNAME=test_db SETNAME=YES
    

    DBNEWID performs validations in the headers of the control files (not the data files) before attempting I/O to the files. If validation is successful, then DBNEWID prompts for confirmation, changes the database name in the control files, shuts down the database and exits. The following is an example of what the output for this would look like:

    .
    .
    .
    Control Files in database:
        /oracle/TEST_DB/data/cf1.dbf
        /oracle/TEST_DB/data/cf2.dbf
    
    The following datafiles are offline clean:
        /oracle/TEST_DB/data/tbs_61.dbf (23)
        /oracle/TEST_DB/data/tbs_62.dbf (24)
        /oracle/TEST_DB/data/temp3.dbf (3)
    These files must be writable by this utility.
    
    The following datafiles are read-only:
        /oracle/TEST_DB/data/tbs_51.dbf (15)
        /oracle/TEST_DB/data/tbs_52.dbf (16)
        /oracle/TEST_DB/data/tbs_53.dbf (22)
    These files must be writable by this utility.
    
    Changing database name from PROD to TEST_DB
        Control File /oracle/TEST_DB/data/cf1.dbf - modified
        Control File /oracle/TEST_DB/data/cf2.dbf - modified
        Datafile /oracle/TEST_DB/data/tbs_01.dbf - wrote new name
        Datafile /oracle/TEST_DB/data/tbs_ax1.dbf - wrote new name
        Datafile /oracle/TEST_DB/data/tbs_02.dbf - wrote new name
        Datafile /oracle/TEST_DB/data/tbs_11.dbf - wrote new name
        Datafile /oracle/TEST_DB/data/tbs_12.dbf - wrote new name
        Datafile /oracle/TEST_DB/data/temp1.dbf - wrote new name
        Control File /oracle/TEST_DB/data/cf1.dbf - wrote new name
        Control File /oracle/TEST_DB/data/cf2.dbf - wrote new name
        Instance shut down
    
    Database name changed to TEST_DB.
    Modify parameter file and generate a new password file before restarting.
    Successfully changed database name.
    DBNEWID - Completed successfully.
    

    If validation is not successful, then DBNEWID terminates and leaves the target database intact. You can open the database, fix the error, and then either resume the DBNEWID operation or continue using the database without changing the database name. (For an example of what the output looks like for an unsuccessful validation, see Step 3 in "Changing the DBID and Database Name".)

  4. Set the DB_NAME initialization parameter in the initialization parameter file (PFILE) to the new database name.


    Note:

    The DBNEWID utility does not change the server parameter file (SPFILE). Therefore, if you use SPFILE to start your Oracle database, then you must re-create the initialization parameter file from the server parameter file, remove the server parameter file, change the DB_NAME in the initialization parameter file, and then re-create the server parameter file.
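
    For example, from SQL*Plus (a sketch; file locations are platform specific):

    CREATE PFILE FROM SPFILE;
    -- Edit DB_NAME in the generated PFILE and remove or rename the old SPFILE, then:
    CREATE SPFILE FROM PFILE;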

  5. Create a new password file.
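
    For example, using the orapwd utility (a sketch; the file name and password are illustrative):

    % orapwd FILE=$ORACLE_HOME/dbs/orapwtest_db PASSWORD=secret ENTRIES=10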

  6. Start up the database and resume normal use. For example:

    STARTUP
    

    Because you have changed only the database name, and not the database ID, it is not necessary to use the RESETLOGS option when you open the database. This means that all previous backups are still usable.

Troubleshooting DBNEWID

If the DBNEWID utility succeeds in its validation stage but detects an error while performing the requested change, then the utility stops and leaves the database in the middle of the change. In this case, you cannot open the database until the DBNEWID operation is either completed or reverted. DBNEWID displays messages indicating the status of the operation.

Before continuing or reverting, fix the underlying cause of the error. Sometimes the only solution is to restore the whole database from a recent backup and perform recovery to the point in time before DBNEWID was started. This underscores the importance of having a recent backup available before running DBNEWID.

If you choose to continue with the change, then re-execute your original command. The DBNEWID utility resumes and attempts to continue the change until all data files and control files have the new value or values. At this point, the database is shut down. You should mount it before opening it with the RESETLOGS option.

If you choose to revert a DBNEWID operation, and if the reversion succeeds, then DBNEWID reverts all performed changes and leaves the database in a mounted state.
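A reversion is requested with the REVERT parameter, described in "DBNEWID Syntax"; for example, with hypothetical credentials:

% nid TARGET=SYS/password REVERT=YES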

If DBNEWID is run against a release 10.1 or later Oracle database, then a summary of the operation is written to the alert file. For example, for a change of database name and database ID, you might see something similar to the following:

*** DBNEWID utility started ***
DBID will be changed from 86997811 to new DBID of 1250452230 for
database PROD
DBNAME will be changed from PROD to new DBNAME of TEST_DB
Starting datafile conversion
Setting recovery target incarnation to 1
Datafile conversion complete
Database name changed to TEST_DB.
Modify parameter file and generate a new password file before restarting.
Database ID for database TEST_DB changed to 1250452230.
All previous backups and archived redo logs for this database are unusable.
Database has been shutdown, open with RESETLOGS option.
Successfully changed database name and ID.
*** DBNEWID utility finished successfully ***

For a change of just the database name, the alert file might show something similar to the following:

*** DBNEWID utility started ***
DBNAME will be changed from PROD to new DBNAME of TEST_DB
Starting datafile conversion
Datafile conversion complete
Database name changed to TEST_DB.
Modify parameter file and generate a new password file before restarting.
Successfully changed database name.
*** DBNEWID utility finished successfully ***
 
If a DBNEWID operation fails, then the alert log also records the failure:

*** DBNEWID utility started ***
DBID will be changed from 86997811 to new DBID of 86966847 for database
AV3
Change of database ID failed.
Must finish change or REVERT changes before attempting any database
operation.
*** DBNEWID utility finished with errors ***

DBNEWID Syntax

The following diagrams show the syntax for the DBNEWID utility.

[Syntax diagram for the DBNEWID utility (nid.gif)]
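As a plain-text approximation of the syntax diagram, based on the parameters described in Table 18-1, the DBNEWID command line can be summarized as follows:

nid TARGET=username/password[@service_name]
    [DBNAME=new_db_name [SETNAME=YES | NO]]
    [REVERT=YES | NO]
    [LOGFILE=logfile [APPEND=YES | NO]]
    [HELP=YES | NO]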

Parameters

Table 18-1 describes the parameters in the DBNEWID syntax.

Table 18-1 Parameters for the DBNEWID Utility

Parameter     Description

TARGET

Specifies the username and password used to connect to the database. The user must have the SYSDBA privilege. If you are using operating system authentication, then you can connect with the slash (/). If the $ORACLE_HOME and $ORACLE_SID variables are not set correctly in the environment, then you can specify a secure (IPC or BEQ) service to connect to the target database. A target database must be specified in all invocations of the DBNEWID utility.

REVERT

Specify YES to indicate that a failed change of DBID should be reverted (default is NO). The utility signals an error if no change DBID operation is in progress on the target database. A successfully completed change of DBID cannot be reverted. REVERT=YES is valid only when a DBID change failed.

DBNAME=new_db_name

Changes the database name of the database. You can change the DBID and the DBNAME of a database at the same time. To change only the DBNAME, also specify the SETNAME parameter.

SETNAME

Specify YES to indicate that DBNEWID should change the database name of the database but should not change the DBID (default is NO). When you specify SETNAME=YES, the utility writes only to the target database control files.

LOGFILE=logfile

Specifies that DBNEWID should write its messages to the specified file. By default the utility overwrites the previous log. If you specify a log file, then DBNEWID does not prompt for confirmation.

APPEND

Specify YES to append log output to the existing log file (default is NO).

HELP

Specify YES to print a list of the DBNEWID syntax options (default is NO).
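For example, a database name change that does not change the DBID, run without confirmation prompts because a log file is specified, might be invoked as follows (the connect string and log file name are hypothetical):

% nid TARGET=SYS/password@test_db DBNAME=TEST_DB SETNAME=YES LOGFILE=nid.log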


Restrictions and Usage Notes

The DBNEWID utility has the following restrictions:

  • To change the DBID of a database, the database must be mounted and must have been shut down consistently before mounting. In the case of an Oracle Real Application Clusters database, the database must be mounted in NOPARALLEL mode.

  • You must open the database with the RESETLOGS option after changing the DBID. However, you do not have to open with the RESETLOGS option after changing only the database name.

  • No other process should be running against the database when DBNEWID is executing. If another session shuts down and starts the database, then DBNEWID terminates unsuccessfully.

  • All online data files should be consistent without needing recovery.

  • Normal offline data files should be accessible and writable. If this is not the case, then you must drop these files before invoking the DBNEWID utility.

  • All read-only tablespaces must be accessible and made writable at the operating system level before invoking DBNEWID. If these tablespaces cannot be made writable (for example, they are on a CD-ROM), then you must unplug the tablespaces using the transportable tablespace feature and then plug them back into the database before invoking the DBNEWID utility (see the Oracle Database Administrator's Guide).

  • The DBNEWID utility does not change global database names. See "Considerations for Global Database Names".

Additional Restrictions for Releases Earlier Than Oracle Database 10g

The following additional restrictions apply if the DBNEWID utility is run against an Oracle Database release earlier than 10.1:

  • The nid executable file should be owned and run by the Oracle owner because it needs direct access to the data files and control files. If another user runs the utility, then set the user ID to the owner of the data files and control files.

  • The DBNEWID utility must access the data files of the database directly through a local connection. Although DBNEWID can accept a net service name, it cannot change the DBID of a nonlocal database.

PK6s1sPKN:AOEBPS/title.htmX Oracle Database Utilities, 11g Release 2 (11.2)

Oracle® Database

Utilities

11g Release 2 (11.2)

E22490-05

December 2012


Oracle Database Utilities, 11g Release 2 (11.2)

E22490-05

Copyright © 1996, 2012, Oracle and/or its affiliates. All rights reserved.

Primary Author:  Kathy Rich

Contributors:   Lee Barton, Ellen Batbouta, Janet Blowney, Steve DiPirro, Bill Fisher, Steve Fogel, Dean Gagne, John Kalogeropoulos, Jonathan Klein, Cindy Lim, Brian McCarthy, Rod Payne, Rich Phillips, Mike Sakayeda, Francisco Sanchez, Marilyn Saunders, Jim Stenoish, Randy Urbano, Hui-ling Yu

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.

The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable:

U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the programs. No other rights are granted to the U.S. Government.

This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.

This software or hardware and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.

Oracle Data Pump

Part I

Oracle Data Pump

This part contains the following chapters:

Data Pump Legacy Mode

4 Data Pump Legacy Mode

If you use original Export (exp) and Import (imp), then you may have scripts you have been using for many years. To ease the transition to the newer Data Pump Export and Import utilities, Data Pump provides a legacy mode which allows you to continue to use your existing scripts with Data Pump.

Data Pump enters legacy mode once it determines a parameter unique to original Export or Import is present, either on the command line or in a script. As Data Pump processes the parameter, the analogous Data Pump Export or Data Pump Import parameter is displayed. Oracle strongly recommends that you view the new syntax and make script changes as time permits.
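For example, the following command uses the original Export parameters OWNER, FILE, and LOG (the values shown are hypothetical), so Data Pump Export enters legacy mode:

expdp hr FILE=hr.dmp LOG=hr.log OWNER=hr

Based on the mappings in Table 4-1, Data Pump treats this approximately as the following command, subject to the file-location rules described later in this chapter:

expdp hr DUMPFILE=hr.dmp LOGFILE=hr.log SCHEMAS=hr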


Note:

Data Pump Export and Import only handle dump files and log files in the Data Pump format. They never create or read dump files compatible with original Export or Import. If you have a dump file created with original Export, then you must use original Import to import the data into the database.

This chapter contains the following sections:

Parameter Mappings

This section describes how original Export and Import parameters map to the Data Pump Export and Import parameters that supply similar functionality.

Using Original Export Parameters with Data Pump

Data Pump Export accepts original Export parameters when they map to a corresponding Data Pump parameter. Table 4-1 describes how Data Pump Export interprets original Export parameters. Parameters that have the same name and functionality in both original Export and Data Pump Export are not included in this table.

Table 4-1 How Data Pump Export Handles Original Export Parameters

Original Export Parameter     Action Taken by Data Pump Export Parameter
BUFFER

This parameter is ignored.

COMPRESS

This parameter is ignored. In original Export, the COMPRESS parameter affected how the initial extent was managed. Setting COMPRESS=n caused original Export to use current storage parameters for the initial and next extent.

The Data Pump Export COMPRESSION parameter is used to specify how data is compressed in the dump file, and is not related to the original Export COMPRESS parameter.

CONSISTENT

Data Pump Export determines the current time and uses FLASHBACK_TIME.

CONSTRAINTS

If original Export used CONSTRAINTS=n, then Data Pump Export uses EXCLUDE=CONSTRAINTS.

The default behavior is to include constraints as part of the export.

DIRECT 

This parameter is ignored. Data Pump Export automatically chooses the best export method.

FEEDBACK

The Data Pump Export STATUS=30 command is used. Note that this is not a direct mapping because the STATUS command returns the status of the export job, as well as the rows being processed.

In original Export, feedback was given after a certain number of rows, as specified with the FEEDBACK command. In Data Pump Export, the status is given every so many seconds, as specified by STATUS.

FILE

Data Pump Export attempts to determine the path that was specified or defaulted to for the FILE parameter, and also to determine whether a directory object exists to which the schema has read and write access.

See "Management of File Locations in Data Pump Legacy Mode" for more information about how Data Pump handles the original Export FILE parameter.

GRANTS

If original Export used GRANTS=n, then Data Pump Export uses EXCLUDE=GRANT.

If original Export used GRANTS=y, then the parameter is ignored and does not need to be remapped because that is the Data Pump Export default behavior.

INDEXES

If original Export used INDEXES=n, then Data Pump Export uses the EXCLUDE=INDEX parameter.

If original Export used INDEXES=y, then the parameter is ignored and does not need to be remapped because that is the Data Pump Export default behavior.

LOG

Data Pump Export attempts to determine the path that was specified or defaulted to for the LOG parameter, and also to determine whether a directory object exists to which the schema has read and write access.

See "Management of File Locations in Data Pump Legacy Mode" for more information about how Data Pump handles the original Export LOG parameter.

The contents of the log file will be those of a Data Pump Export operation. See "Log Files" for information about log file location and content.

OBJECT_CONSISTENT

This parameter is ignored because Data Pump Export processing ensures that each object is in a consistent state when being exported.

OWNER

The Data Pump SCHEMAS parameter is used.

RECORDLENGTH

This parameter is ignored because Data Pump Export automatically takes care of buffer sizing.

RESUMABLE

This parameter is ignored because Data Pump Export automatically provides this functionality to users who have been granted the EXP_FULL_DATABASE role.

RESUMABLE_NAME

This parameter is ignored because Data Pump Export automatically provides this functionality to users who have been granted the EXP_FULL_DATABASE role.

RESUMABLE_TIMEOUT

This parameter is ignored because Data Pump Export automatically provides this functionality to users who have been granted the EXP_FULL_DATABASE role.

ROWS

If original Export used ROWS=y, then Data Pump Export uses the CONTENT=ALL parameter.

If original Export used ROWS=n, then Data Pump Export uses the CONTENT=METADATA_ONLY parameter.

STATISTICS

This parameter is ignored because statistics are always saved for tables as part of a Data Pump export operation.

TABLESPACES

If original Export also specified TRANSPORT_TABLESPACE=n, then Data Pump Export ignores the TABLESPACES parameter.

If original Export also specified TRANSPORT_TABLESPACE=y, then Data Pump Export takes the names listed for the TABLESPACES parameter and uses them on the Data Pump Export TRANSPORT_TABLESPACES parameter.

TRANSPORT_TABLESPACE

If original Export used TRANSPORT_TABLESPACE=n (the default), then Data Pump Export uses the TABLESPACES parameter.

If original Export used TRANSPORT_TABLESPACE=y, then Data Pump Export uses the TRANSPORT_TABLESPACES parameter and only the metadata is exported.

TRIGGERS

If original Export used TRIGGERS=n, then Data Pump Export uses the EXCLUDE=TRIGGER parameter.

If original Export used TRIGGERS=y, then the parameter is ignored and does not need to be remapped because that is the Data Pump Export default behavior.

TTS_FULL_CHECK

If original Export used TTS_FULL_CHECK=y, then Data Pump Export uses the TRANSPORT_FULL_CHECK parameter.

If original Export used TTS_FULL_CHECK=n, then the parameter is ignored and does not need to be remapped because that is the Data Pump Export default behavior.

VOLSIZE

When the original Export VOLSIZE parameter is used, it means the location specified for the dump file is a tape device. The Data Pump Export dump file format does not support tape devices. Therefore, this operation terminates with an error.


Using Original Import Parameters with Data Pump

Data Pump Import accepts original Import parameters when they map to a corresponding Data Pump parameter. Table 4-2 describes how Data Pump Import interprets original Import parameters. Parameters that have the same name and functionality in both original Import and Data Pump Import are not included in this table.

Table 4-2 How Data Pump Import Handles Original Import Parameters

Original Import Parameter     Action Taken by Data Pump Import Parameter
BUFFER

This parameter is ignored.

CHARSET

This parameter was desupported several releases ago and should no longer be used. It will cause the Data Pump Import operation to abort.

COMMIT

This parameter is ignored. Data Pump Import automatically performs a commit after each table is processed.

COMPILE

This parameter is ignored. Data Pump Import compiles procedures after they are created. A recompile can be executed if necessary for dependency reasons.

CONSTRAINTS

If original Import used CONSTRAINTS=n, then Data Pump Import uses the EXCLUDE=CONSTRAINT parameter.

If original Import used CONSTRAINTS=y, then the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior.

DATAFILES

The Data Pump Import TRANSPORT_DATAFILES parameter is used.

DESTROY

If original Import used DESTROY=y, then Data Pump Import uses the REUSE_DATAFILES=y parameter.

If original Import used DESTROY=n, then the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior.

FEEDBACK

The Data Pump Import STATUS=30 command is used. Note that this is not a direct mapping because the STATUS command returns the status of the import job, as well as the rows being processed.

In original Import, feedback was given after a certain number of rows, as specified with the FEEDBACK command. In Data Pump Import, the status is given every so many seconds, as specified by STATUS.

FILE

Data Pump Import attempts to determine the path that was specified or defaulted to for the FILE parameter, and also to determine whether a directory object exists to which the schema has read and write access.

See "Management of File Locations in Data Pump Legacy Mode" for more information about how Data Pump handles the original Import FILE parameter.

FILESIZE

This parameter is ignored because the information is already contained in the Data Pump dump file set.

FROMUSER

The Data Pump Import SCHEMAS parameter is used. If FROMUSER was used without TOUSER also being used, then import schemas that have the IMP_FULL_DATABASE role cause Data Pump Import to attempt to create the schema and then import that schema's objects. Import schemas that do not have the IMP_FULL_DATABASE role can only import their own schema from the dump file set.

GRANTS

If original Import used GRANTS=n, then Data Pump Import uses the EXCLUDE=OBJECT_GRANT parameter.

If original Import used GRANTS=y, then the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior.

IGNORE

If original Import used IGNORE=y, then Data Pump Import uses the TABLE_EXISTS_ACTION=APPEND parameter. This causes the processing of table data to continue.

If original Import used IGNORE=n, then the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior.

INDEXES

If original Import used INDEXES=n, then Data Pump Import uses the EXCLUDE=INDEX parameter.

If original Import used INDEXES=y, then the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior.

INDEXFILE

The Data Pump Import SQLFILE={directory-object:}filename and INCLUDE=INDEX parameters are used.

The same attempts to locate a directory object that are described for the FILE parameter are also made for the INDEXFILE parameter.

If no directory object was specified on the original Import, then Data Pump Import uses the directory object specified with the DIRECTORY parameter.

LOG

Data Pump Import attempts to determine the path that was specified or defaulted to for the LOG parameter, and also to determine whether a directory object exists to which the schema has read and write access.

See "Management of File Locations in Data Pump Legacy Mode" for more information about how Data Pump handles the original Import LOG parameter.

The contents of the log file will be those of a Data Pump Import operation. See "Log Files" for information about log file location and content.

RECORDLENGTH

This parameter is ignored because Data Pump handles issues about record length internally.

RESUMABLE

This parameter is ignored because this functionality is automatically provided for users who have been granted the IMP_FULL_DATABASE role.

RESUMABLE_NAME

This parameter is ignored because this functionality is automatically provided for users who have been granted the IMP_FULL_DATABASE role.

RESUMABLE_TIMEOUT

This parameter is ignored because this functionality is automatically provided for users who have been granted the IMP_FULL_DATABASE role.

ROWS

If original Import used ROWS=n, then Data Pump Import uses the CONTENT=METADATA_ONLY parameter.

If original Import used ROWS=y, then Data Pump Import uses the CONTENT=ALL parameter.

SHOW

If SHOW=y is specified, then the Data Pump Import SQLFILE=[directory_object:]file_name parameter is used to write the DDL for the import operation to a file. Only the DDL (not the entire contents of the dump file) is written to the specified file. (Note that the output is not shown on the screen as it was in original Import.)

The name of the file will be the file name specified on the DUMPFILE parameter (or on the original Import FILE parameter, which is remapped to DUMPFILE). If multiple dump file names are listed, then the first file name in the list is used. The file will be located in the directory object location specified on the DIRECTORY parameter or the directory object included on the DUMPFILE parameter. (Directory objects specified on the DUMPFILE parameter take precedence.)

STATISTICS

This parameter is ignored because statistics are always saved for tables as part of a Data Pump Import operation.

STREAMS_CONFIGURATION

This parameter is ignored because Data Pump Import automatically determines it; it does not need to be specified.

STREAMS_INSTANTIATION

This parameter is ignored because Data Pump Import automatically determines it; it does not need to be specified.

TABLESPACES

If original Import also specified TRANSPORT_TABLESPACE=n (the default), then Data Pump Import ignores the TABLESPACES parameter.

If original Import also specified TRANSPORT_TABLESPACE=y, then Data Pump Import takes the names supplied for this TABLESPACES parameter and applies them to the Data Pump Import TRANSPORT_TABLESPACES parameter.

TOID_NOVALIDATE

This parameter is ignored. OIDs are no longer used for type validation.

TOUSER

The Data Pump Import REMAP_SCHEMA parameter is used. There may be more objects imported than with original Import. Also, Data Pump Import may create the target schema if it does not already exist.

The FROMUSER parameter must also have been specified in original Import or the operation will fail.

TRANSPORT_TABLESPACE

The TRANSPORT_TABLESPACE parameter is ignored, but if you also specified the DATAFILES parameter, then the import job continues to load the metadata. If the DATAFILES parameter is not specified, then an ORA-39002: invalid operation error message is returned.

TTS_OWNERS 

This parameter is ignored because this information is automatically stored in the Data Pump dump file set.

VOLSIZE

When the original Import VOLSIZE parameter is used, it means the location specified for the dump file is a tape device. The Data Pump Import dump file format does not support tape devices. Therefore, this operation terminates with an error.


Management of File Locations in Data Pump Legacy Mode

Original Export and Import and Data Pump Export and Import differ on where dump files and log files can be written to and read from because the original version is client-based and Data Pump is server-based.

Original Export and Import use the FILE and LOG parameters to specify dump file and log file names, respectively. These file names always refer to files local to the client system and they may also contain a path specification.

Data Pump Export and Import use the DUMPFILE and LOGFILE parameters to specify dump file and log file names, respectively. These file names always refer to files local to the server system and cannot contain any path information. Instead, a directory object is used to indirectly specify path information. The path value defined by the directory object must be accessible to the server. The directory object is specified for a Data Pump job through the DIRECTORY parameter. It is also possible to prepend a directory object to the file names passed to the DUMPFILE and LOGFILE parameters. For privileged users, Data Pump supports the use of a default directory object if one is not specified on the command line. This default directory object, DATA_PUMP_DIR, is set up at installation time.

If Data Pump legacy mode is enabled and the original Export FILE=filespec parameter and/or LOG=filespec parameter are present on the command line, then the following rules of precedence are used to determine a file's location:


Note:

If the FILE parameter and LOG parameter are both present on the command line, then the rules of precedence are applied separately to each parameter.

Also, when a mix of original Export/Import and Data Pump Export/Import parameters are used, separate rules apply to them. For example, suppose you have the following command:

expdp system FILE=/user/disk/foo.dmp LOGFILE=foo.log DIRECTORY=dpump_dir

The Data Pump legacy mode file management rules, as explained in this section, would apply to the FILE parameter. The normal (that is, non-legacy mode) Data Pump file management rules, as described in "Default Locations for Dump, Log, and SQL Files", would apply to the LOGFILE parameter.


  1. If a path location is specified as part of the file specification, then Data Pump attempts to look for a directory object accessible to the schema executing the export job whose path location matches the path location of the file specification. If such a directory object cannot be found, then an error is returned. For example, assume that a server-based directory object named USER_DUMP_FILES has been defined with a path value of '/disk1/user1/dumpfiles/' and that read and write access to this directory object has been granted to the hr schema. The following command causes Data Pump to look for a server-based directory object whose path value contains '/disk1/user1/dumpfiles/' and to which the hr schema has been granted read and write access:

    expdp hr FILE=/disk1/user1/dumpfiles/hrdata.dmp
    

    In this case, Data Pump uses the directory object USER_DUMP_FILES. The path value, in this example '/disk1/user1/dumpfiles/', must refer to a path on the server system that is accessible to the Oracle Database.

    If a path location is specified as part of the file specification, then any directory object provided using the DIRECTORY parameter is ignored. For example, if the following command is issued, then Data Pump does not use the DPUMP_DIR directory object for the file parameter, but instead looks for a server-based directory object whose path value contains '/disk1/user1/dumpfiles/' and to which the hr schema has been granted read and write access:

    expdp hr FILE=/disk1/user1/dumpfiles/hrdata.dmp DIRECTORY=dpump_dir
    
  2. If no path location is specified as part of the file specification, then the directory object named by the DIRECTORY parameter is used. For example, if the following command is issued, then Data Pump applies the path location defined for the DPUMP_DIR directory object to the hrdata.dmp file:

    expdp hr FILE=hrdata.dmp DIRECTORY=dpump_dir
    
  3. If no path location is specified as part of the file specification and no directory object is named by the DIRECTORY parameter, then Data Pump does the following, in the order shown:

    a. Data Pump looks for the existence of a directory object of the form DATA_PUMP_DIR_schema_name, where schema_name is the schema that is executing the Data Pump job. For example, the following command would cause Data Pump to look for the existence of a server-based directory object named DATA_PUMP_DIR_HR:

      expdp hr FILE=hrdata.dmp
      

      The hr schema also must have been granted read and write access to this directory object. If such a directory object does not exist, then the process moves to step b.

    b. Data Pump looks for the existence of the client-based environment variable DATA_PUMP_DIR. For instance, assume that a server-based directory object named DUMP_FILES1 has been defined and the hr schema has been granted read and write access to it. Then on the client system, the environment variable DATA_PUMP_DIR can be set to point to DUMP_FILES1 as follows:

      setenv DATA_PUMP_DIR DUMP_FILES1
      expdp hr FILE=hrdata.dmp
      

      Data Pump then uses the server-based directory object DUMP_FILES1 for the hrdata.dmp file.

      If a client-based environment variable DATA_PUMP_DIR does not exist, then the process moves to step c.

    c. If the schema that is executing the Data Pump job has DBA privileges, then the default Data Pump directory object, DATA_PUMP_DIR, is used. This default directory object is established at installation time. For example, the following command causes Data Pump to attempt to use the default DATA_PUMP_DIR directory object, assuming that system has DBA privileges:

      expdp system FILE=hrdata.dmp
      

See Also:

"Default Locations for Dump, Log, and SQL Files" for information about Data Pump file management rules of precedence under normal Data Pump conditions (that is, non-legacy mode)

Adjusting Existing Scripts for Data Pump Log Files and Errors

Data Pump legacy mode requires that you review and update your existing scripts written for original Export and Import because of differences in file format and error reporting.

Log Files

Data Pump Export and Import do not generate log files in the same format as those created by original Export and Import. Any scripts you have that parse the output of original Export and Import must be updated to handle the log file format used by Data Pump Export and Import. For example, the message Successfully Terminated does not appear in Data Pump log files.

Error Cases

Data Pump Export and Import may not produce the same errors as those generated by original Export and Import. For example, if a parameter that is ignored by Data Pump Export would have had an out-of-range value in original Export, then an informational message is written to the log file stating that the parameter is being ignored. No value checking is performed; therefore, no error message is generated.

Exit Status

Data Pump Export and Import have enhanced exit status values to allow scripts to better determine the success or failure of export and import jobs. Any scripts that look at the exit status should be reviewed and updated, if necessary.
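For example, a script fragment that tests the exit status rather than parsing log text might look like the following sketch; the parameter values are hypothetical:

expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp LOGFILE=hr.log
if [ $? -ne 0 ]; then
    echo "Data Pump Export reported errors or warnings; check hr.log"
fi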

Conventional and Direct Path Loads

12 Conventional and Direct Path Loads

This chapter describes SQL*Loader's conventional and direct path load methods. The following topics are discussed:

For an example of using the direct path load method, see case study 6, Loading Data Using the Direct Path Load Method. The other cases use the conventional path load method. (See "SQL*Loader Case Studies" for information on how to access case studies.)

Data Loading Methods

SQL*Loader provides two methods for loading data:

A conventional path load executes SQL INSERT statements to populate tables in an Oracle database. A direct path load eliminates much of the Oracle database overhead by formatting Oracle data blocks and writing the data blocks directly to the database files. A direct load does not compete with other users for database resources, so it can usually load data at near disk speed. Considerations inherent to direct path loads, such as restrictions, security, and backup implications, are discussed in this chapter.

The tables to be loaded must already exist in the database. SQL*Loader never creates tables. It loads existing tables that either already contain data or are empty.

The following privileges are required for a load:

Figure 12-1 shows how conventional and direct path loads perform database writes.

Figure 12-1 Database Writes on SQL*Loader Direct Path and Conventional Path


Loading ROWID Columns

In both conventional path and direct path, you can specify a text value for a ROWID column. (This is the same text you get when you perform a SELECT ROWID FROM table_name operation.) The character string interpretation of the ROWID is converted into the ROWID type for a column in a table.

Conventional Path Load

Conventional path load (the default) uses the SQL INSERT statement and a bind array buffer to load data into database tables. This method is used by all Oracle tools and applications.

When SQL*Loader performs a conventional path load, it competes equally with all other processes for buffer resources. This can slow the load significantly. Extra overhead is added as SQL statements are generated, passed to Oracle, and executed.

The Oracle database looks for partially filled blocks and attempts to fill them on each insert. Although appropriate during normal use, this can slow bulk loads dramatically.

Conventional Path Load of a Single Partition

By definition, a conventional path load uses SQL INSERT statements. During a conventional path load of a single partition, SQL*Loader uses the partition-extended syntax of the INSERT statement, which has the following form:

INSERT INTO TABLE T PARTITION (P) VALUES ... 

The SQL layer of the Oracle kernel determines if the row being inserted maps to the specified partition. If the row does not map to the partition, then the row is rejected, and the SQL*Loader log file records an appropriate error message.

When to Use a Conventional Path Load

If load speed is most important to you, then you should use direct path load because it is faster than conventional path load. However, certain restrictions on direct path loads may require you to use a conventional path load. You should use a conventional path load in the following situations:

  • When accessing an indexed table concurrently with the load, or when applying inserts or updates to a nonindexed table concurrently with the load

    To use a direct path load (except for parallel loads), SQL*Loader must have exclusive write access to the table and exclusive read/write access to any indexes.

  • When loading data into a clustered table

    A direct path load does not support loading of clustered tables.

  • When loading a relatively small number of rows into a large indexed table

    During a direct path load, the existing index is copied when it is merged with the new index keys. If the existing index is very large and the number of new keys is very small, then the index copy time can offset the time saved by a direct path load.

  • When loading a relatively small number of rows into a large table with referential and column-check integrity constraints

    Because these constraints cannot be applied to rows loaded on the direct path, they are disabled for the duration of the load. Then they are applied to the whole table when the load completes. The costs could outweigh the savings for a very large table and a small number of new rows.

  • When loading records and you want to ensure that a record is rejected under any of the following circumstances:

    • If the record, upon insertion, causes an Oracle error

    • If the record is formatted incorrectly, so that SQL*Loader cannot find field boundaries

    • If the record violates a constraint or tries to make a unique index non-unique

Direct Path Load

Instead of filling a bind array buffer and passing it to the Oracle database with a SQL INSERT statement, a direct path load uses the direct path API to pass the data to be loaded to the load engine in the server. The load engine builds a column array structure from the data passed to it.

The direct path load engine uses the column array structure to format Oracle data blocks and build index keys. The newly formatted database blocks are written directly to the database (multiple blocks per I/O request using asynchronous writes if the host platform supports asynchronous I/O).

Internally, multiple buffers are used for the formatted blocks. While one buffer is being filled, one or more buffers are being written if asynchronous I/O is available on the host platform. Overlapping computation with I/O increases load performance.

Data Conversion During Direct Path Loads

During a direct path load, data conversion occurs on the client side rather than on the server side. This means that NLS parameters in the initialization parameter file (server-side language handle) will not be used. To override this behavior, you can specify a format mask in the SQL*Loader control file that is equivalent to the setting of the NLS parameter in the initialization parameter file, or set the appropriate environment variable. For example, to specify a date format for a field, you can either set the date format in the SQL*Loader control file as shown in Example 12-1 or set an NLS_DATE_FORMAT environment variable as shown in Example 12-2.

Example 12-1 Setting the Date Format in the SQL*Loader Control File

LOAD DATA
INFILE 'data.dat'
INSERT INTO TABLE emp
FIELDS TERMINATED BY "|"
(
EMPNO NUMBER(4) NOT NULL,
ENAME CHAR(10),
JOB CHAR(9),
MGR NUMBER(4),
HIREDATE DATE 'YYYYMMDD',
SAL NUMBER(7,2),
COMM NUMBER(7,2),
DEPTNO NUMBER(2)
)

Example 12-2 Setting an NLS_DATE_FORMAT Environment Variable

On UNIX Bourne or Korn shell:

% NLS_DATE_FORMAT='YYYYMMDD'
% export NLS_DATE_FORMAT

On UNIX csh:

%setenv NLS_DATE_FORMAT='YYYYMMDD'

Direct Path Load of a Partitioned or Subpartitioned Table

When loading a partitioned or subpartitioned table, SQL*Loader partitions the rows and maintains indexes (which can also be partitioned). Note that a direct path load of a partitioned or subpartitioned table can be quite resource-intensive for tables with many partitions or subpartitions.


Note:

If you are performing a direct path load into multiple partitions and a space error occurs, then the load is rolled back to the last commit point. If there was no commit point, then the entire load is rolled back. This ensures that no data encountered after the space error is written out to a different partition.

You can use the ROWS parameter to specify the frequency of the commit points. If the ROWS parameter is not specified, then the entire load is rolled back.


Direct Path Load of a Single Partition or Subpartition

When loading a single partition of a partitioned or subpartitioned table, SQL*Loader partitions the rows and rejects any rows that do not map to the partition or subpartition specified in the SQL*Loader control file. Local index partitions that correspond to the data partition or subpartition being loaded are maintained by SQL*Loader. Global indexes are not maintained on single partition or subpartition direct path loads. During a direct path load of a single partition, SQL*Loader uses the partition-extended syntax of the LOAD statement, which has either of the following forms:

LOAD INTO TABLE T PARTITION (P) VALUES ... 

LOAD INTO TABLE T SUBPARTITION (P) VALUES ... 

While you are loading a partition of a partitioned or subpartitioned table, you are also allowed to perform DML operations on, and direct path loads of, other partitions in the table.

Although a direct path load minimizes database processing, several calls to the Oracle database are required at the beginning and end of the load to initialize and finish the load, respectively. Also, certain DML locks are required during load initialization and are released when the load completes. The following operations occur during the load: index keys are built and put into a sort, and space management routines are used to get new extents when needed and to adjust the upper boundary (high-water mark) for a data savepoint. See "Using Data Saves to Protect Against Data Loss" for information about adjusting the upper boundary.

Advantages of a Direct Path Load

A direct path load is faster than the conventional path for the following reasons:

  • Partial blocks are not used, so no reads are needed to find them, and fewer writes are performed.

  • SQL*Loader need not execute any SQL INSERT statements; therefore, the processing load on the Oracle database is reduced.

  • A direct path load calls on Oracle to lock tables and indexes at the start of the load and releases them when the load is finished. A conventional path load calls Oracle once for each array of rows to process a SQL INSERT statement.

  • A direct path load uses multiblock asynchronous I/O for writes to the database files.

  • During a direct path load, processes perform their own write I/O, instead of using Oracle's buffer cache. This minimizes contention with other Oracle users.

  • The sorted indexes option available during direct path loads enables you to presort data using high-performance sort routines that are native to your system or installation.

  • When a table to be loaded is empty, the presorting option eliminates the sort and merge phases of index-building. The index is filled in as data arrives.

  • Protection against instance failure does not require redo log file entries during direct path loads. Therefore, no time is required to log the load when:

    • The Oracle database has the SQL NOARCHIVELOG parameter enabled

    • The SQL*Loader UNRECOVERABLE clause is enabled

    • The object being loaded has the SQL NOLOGGING parameter set

    See "Instance Recovery and Direct Path Loads".

Restrictions on Using Direct Path Loads

The following conditions must be satisfied for you to use the direct path load method:

  • Tables are not clustered.

  • Segments to be loaded do not have any active transactions pending.

    To check for this condition, use the Oracle Enterprise Manager command MONITOR TABLE to find the object ID for the tables you want to load. Then use the command MONITOR LOCK to see if there are any locks on the tables.

  • For releases of the database earlier than Oracle9i, you can perform a SQL*Loader direct path load only when the client and server are the same release. This also means that you cannot perform a direct path load of Oracle9i data into a database of an earlier release. For example, you cannot use direct path load to load data from a release 9.0.1 database into a release 8.1.7 database.

    Beginning with Oracle9i, you can perform a SQL*Loader direct path load when the client and server are different releases. However, both releases must be at least release 9.0.1 and the client release must be the same as or lower than the server release. For example, you can perform a direct path load from a release 9.0.1 database into a release 9.2 database. However, you cannot use direct path load to load data from a release 10.0.0 database into a release 9.2 database.

  • Tables to be loaded in direct path mode do not have VPD policies active on INSERT.

The following features are not available with direct path load:

  • Loading a parent table together with a child table

  • Loading BFILE columns

  • Use of CREATE SEQUENCE during the load. Because a direct path load does not generate INSERT statements, no SQL is issued to fetch the next sequence value.

Restrictions on a Direct Path Load of a Single Partition

In addition to the previously listed restrictions, loading a single partition has the following restrictions:

  • The table that the partition is a member of cannot have any global indexes defined on it.

  • Enabled referential and check constraints on the table that the partition is a member of are not allowed.

  • Enabled triggers are not allowed.

When to Use a Direct Path Load

If none of the previous restrictions apply, then you should use a direct path load when:

  • You have a large amount of data to load quickly. A direct path load can quickly load and index large amounts of data. It can also load data into either an empty or nonempty table.

  • You want to load data in parallel for maximum performance. See "Parallel Data Loading Models".

Integrity Constraints

All integrity constraints are enforced during direct path loads, although not necessarily at the same time. NOT NULL constraints are enforced during the load. Records that fail these constraints are rejected.

UNIQUE constraints are enforced both during and after the load. A record that violates a UNIQUE constraint is not rejected (the record is not available in memory when the constraint violation is detected).

Integrity constraints that depend on other rows or tables, such as referential constraints, are disabled before the direct path load and must be reenabled afterwards. If REENABLE is specified, then SQL*Loader can reenable them automatically at the end of the load. When the constraints are reenabled, the entire table is checked. Any rows that fail this check are reported in the specified error log. See "Direct Loads, Integrity Constraints, and Triggers".

Field Defaults on the Direct Path

Default column specifications defined in the database are not available when you use direct path loading. Fields for which default values are desired must be specified with the DEFAULTIF clause. If a DEFAULTIF clause is not specified and the field is NULL, then a null value is inserted into the database.
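For example, a control file fragment might supply a default for a field that is blank in the data file; the table name, field names, and positions shown are hypothetical:

INTO TABLE emp
(empno  POSITION(1:4)   INTEGER EXTERNAL,
 comm   POSITION(50:57) INTEGER EXTERNAL DEFAULTIF comm=BLANKS)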

Loading into Synonyms

You can load data into a synonym for a table during a direct path load, but the synonym must point directly to either a table or a view on a simple table. Note the following restrictions:

  • Direct path mode cannot be used if the view is on a table that has user-defined types or XML data.

  • In direct path mode, a view cannot be loaded using a SQL*Loader control file that contains SQL expressions.

Using Direct Path Load

This section explains how to use the SQL*Loader direct path load method.

Setting Up for Direct Path Loads

To prepare the database for direct path loads, you must run the setup script, catldr.sql, to create the necessary views. You need to run this script only once for each database into which you plan to do direct path loads. You can run this script during database installation if you know then that you will be doing direct loads.
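For example, the script can be run from SQL*Plus while connected with sufficient privileges; the location shown assumes a standard installation layout:

@$ORACLE_HOME/rdbms/admin/catldr.sql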

Specifying a Direct Path Load

To start SQL*Loader in direct path load mode, set the DIRECT parameter to true on the command line or in the parameter file, if used, in the format:

DIRECT=true
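For example, on the command line (the user and control file name are hypothetical):

sqlldr USERID=hr CONTROL=emp.ctl DIRECT=true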

See Also:


Building Indexes

You can improve performance of direct path loads by using temporary storage. After each block is formatted, the new index keys are put in a sort (temporary) segment. The old index and the new keys are merged at load finish time to create the new index. The old index, sort (temporary) segment, and new index segment all require storage until the merge is complete. Then the old index and temporary segment are removed.

During a conventional path load, every time a row is inserted the index is updated. This method does not require temporary storage space, but it does add processing time.

Improving Performance

To improve performance on systems with limited memory, use the SINGLEROW parameter. For more information, see "SINGLEROW Option".


Note:

If, during a direct load, you have specified that the data is to be presorted and the existing index is empty, then a temporary segment is not required, and no merge occurs—the keys are put directly into the index. See "Optimizing Performance of Direct Path Loads" for more information.

When multiple indexes are built, the temporary segments corresponding to each index exist simultaneously, in addition to the old indexes. The new keys are then merged with the old indexes, one index at a time. As each new index is created, the old index and the corresponding temporary segment are removed.


See Also:

Oracle Database Administrator's Guide for information about how to estimate index size and set storage parameters

Temporary Segment Storage Requirements

To estimate the amount of temporary segment space needed for storing the new index keys (in bytes), use the following formula:

1.3 * key_storage

In this formula, key storage is defined as follows:

key_storage = (number_of_rows) *
     ( 10 + sum_of_column_sizes + number_of_columns )

The columns included in this formula are the columns in the index. There is one length byte per column, and 10 bytes per row are used for a ROWID and additional overhead.
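As a hypothetical illustration, loading 100,000 rows into a table indexed on two columns whose sizes total 12 bytes would require approximately:

key_storage = 100,000 * (10 + 12 + 2) = 2,400,000 bytes
1.3 * key_storage = 3,120,000 bytes (approximately 3.1 MB of temporary segment space)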

The constant 1.3 reflects the average amount of extra space needed for sorting. This value is appropriate for most randomly ordered data. If the data arrives in exactly opposite order, then twice the key-storage space is required for sorting, and the value of this constant would be 2.0. That is the worst case.

If the data is fully sorted, then only enough space to store the index entries is required, and the value of this constant would be 1.0. See "Presorting Data for Faster Indexing" for more information.

Indexes Left in an Unusable State

SQL*Loader leaves indexes in an Index Unusable state when the data segment being loaded becomes more up-to-date than the index segments that index it.

Any SQL statement that tries to use an index that is in an Index Unusable state returns an error. The following conditions cause a direct path load to leave an index or a partition of a partitioned index in an Index Unusable state:

  • SQL*Loader runs out of space for the index and cannot update the index.

  • The data is not in the order specified by the SORTED INDEXES clause.

  • There is an instance failure, or the Oracle shadow process fails while building the index.

  • There are duplicate keys in a unique index.

  • Data savepoints are being used, and the load fails or is terminated by a keyboard interrupt after a data savepoint occurred.

To determine if an index is in an Index Unusable state, you can execute a simple query:

SELECT INDEX_NAME, STATUS
   FROM USER_INDEXES 
   WHERE TABLE_NAME = 'tablename';

If you are not the owner of the table, then search ALL_INDEXES or DBA_INDEXES instead of USER_INDEXES.

To determine if an index partition is in an unusable state, you can execute the following query:

SELECT INDEX_NAME, 
       PARTITION_NAME,
       STATUS FROM USER_IND_PARTITIONS
       WHERE STATUS != 'VALID';

If you are not the owner of the table, then search ALL_IND_PARTITIONS and DBA_IND_PARTITIONS instead of USER_IND_PARTITIONS.

Using Data Saves to Protect Against Data Loss

You can use data saves to protect against loss of data due to instance failure. All data loaded up to the last savepoint is protected against instance failure. To continue the load after an instance failure, determine how many rows from the input file were processed before the failure, then use the SKIP parameter to skip those processed rows.
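For example, if 50,000 rows had been processed before the failure, the load might be continued as follows; the user, control file name, and row count are hypothetical:

sqlldr hr CONTROL=emp.ctl DIRECT=true SKIP=50000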

If there are any indexes on the table, drop them before continuing the load, and then re-create them after the load. See "Data Recovery During Direct Path Loads" for more information about media and instance recovery.


Note:

Indexes are not protected by a data save, because SQL*Loader does not build indexes until after data loading completes. (The only time indexes are built during the load is when presorted data is loaded into an empty table, but these indexes are also unprotected.)

Using the ROWS Parameter

The ROWS parameter determines when data saves occur during a direct path load. The value you specify for ROWS is the number of rows you want SQL*Loader to read from the input file before saving inserts in the database.

A data save is an expensive operation. The value for ROWS should be set high enough so that a data save occurs once every 15 minutes or longer. The intent is to provide an upper boundary (high-water mark) on the amount of work that is lost when an instance failure occurs during a long-running direct path load. Setting the value of ROWS to a small number adversely affects performance and data block space utilization.

Data Save Versus Commit

In a conventional load, ROWS is the number of rows to read before a commit operation. A direct load data save is similar to a conventional load commit, but it is not identical.

The similarities are as follows:

  • A data save will make the rows visible to other users.

  • Rows cannot be rolled back after a data save.

The major difference is that in a direct path load data save, the indexes will be unusable (in Index Unusable state) until the load completes.

Data Recovery During Direct Path Loads

SQL*Loader provides full support for data recovery when using the direct path load method. There are two main types of recovery:

  • Media - recovery from the loss of a database file. You must be operating in ARCHIVELOG mode to recover after you lose a database file.

  • Instance - recovery from a system failure in which in-memory data was changed but lost due to the failure before it was written to disk. The Oracle database can always recover from instance failures, even when redo logs are not archived.


    See Also:

    Oracle Database Administrator's Guide for more information about recovery

Media Recovery and Direct Path Loads

If redo log file archiving is enabled (you are operating in ARCHIVELOG mode), then SQL*Loader logs loaded data when using the direct path, making media recovery possible. If redo log archiving is not enabled (you are operating in NOARCHIVELOG mode), then media recovery is not possible.

To recover a database file that was lost while it was being loaded, use the same method that you use to recover data loaded with the conventional path:

  1. Restore the most recent backup of the affected database file.

  2. Recover the tablespace using the RECOVER command.


    See Also:

    Oracle Database Backup and Recovery User's Guide for more information about the RMAN RECOVER command
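Under those assumptions, the two steps might look like the following RMAN sketch; the data file path and tablespace name are hypothetical, and the exact sequence (for example, taking the file offline first) depends on whether the database is open:

RMAN> RESTORE DATAFILE '/oracle/TEST_DB/data/tbs_01.dbf';
RMAN> RECOVER TABLESPACE users;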

Instance Recovery and Direct Path Loads

Because SQL*Loader writes directly to the database files, all rows inserted up to the last data save will automatically be present in the database files if the instance is restarted. Changes do not need to be recorded in the redo log file to make instance recovery possible.

If an instance failure occurs, then the indexes being built may be left in an Index Unusable state. Indexes that are Unusable must be rebuilt before you can use the table or partition. See "Indexes Left in an Unusable State" for information about how to determine if an index has been left in Index Unusable state.

Loading Long Data Fields

Data that is longer than SQL*Loader's maximum buffer size can be loaded on the direct path by using LOBs. You can improve performance when doing this by using a large STREAMSIZE value.

You could also load data that is longer than the maximum buffer size by using the PIECED parameter, as described in the next section, but Oracle highly recommends that you use LOBs instead.

Loading Data As PIECED

The PIECED parameter can be used to load data in sections, if the data is in the last column of the logical record.

Declaring a column as PIECED informs the direct path loader that a LONG field might be split across multiple physical records (pieces). In such cases, SQL*Loader processes each piece of the LONG field as it is found in the physical record and makes no attempt to materialize the LONG field before storing it; however, all the pieces are read before the record is processed.

The following restrictions apply when you declare a column as PIECED:

  • This option is only valid on the direct path.

  • Only one field per table may be PIECED.

  • The PIECED field must be the last field in the logical record.

  • The PIECED field may not be used in any WHEN, NULLIF, or DEFAULTIF clauses.

  • The PIECED field's region in the logical record must not overlap with any other field's region.

  • The database column corresponding to the PIECED field may not be part of an index.

  • It may not be possible to load a rejected record from the bad file if it contains a PIECED field.

    For example, a PIECED field could span three records. SQL*Loader loads the piece from the first record and then reuses the buffer for the second piece. After loading the second piece, the buffer is reused for the third record. If an error is discovered, then only the third record is placed in the bad file because the first two records no longer exist in the buffer. As a result, the record in the bad file would not be valid.

Optimizing Performance of Direct Path Loads

You can control the time and temporary storage used during direct path loads.

To minimize time:

  • Preallocate storage space

  • Presort the data

  • Perform infrequent data saves

  • Minimize use of the redo log

  • Specify the number of column array rows and the size of the stream buffer

  • Specify a date cache value

To minimize space:

  • When sorting data before the load, sort data on the index that requires the most temporary segment space

  • Avoid index maintenance during the load

Preallocating Storage for Faster Loading

SQL*Loader automatically adds extents to the table if necessary, but this process takes time. For faster loads into a new table, allocate the required extents when the table is created.

To calculate the space required by a table, see the information about managing database files in the Oracle Database Administrator's Guide. Then use the INITIAL or MINEXTENTS clause in the SQL CREATE TABLE statement to allocate the required space.
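
For example, the following CREATE TABLE sketch (the table definition and storage values are hypothetical and should be replaced with your own calculations) preallocates the space for the load at table creation time:

CREATE TABLE emp_load (
  empno  NUMBER(4),
  ename  VARCHAR2(10),
  deptno NUMBER(2)
)
STORAGE (INITIAL 500M NEXT 100M MINEXTENTS 1 PCTINCREASE 0);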

Another approach is to size extents large enough so that extent allocation is infrequent.

Presorting Data for Faster Indexing

You can improve the performance of direct path loads by presorting your data on indexed columns. Presorting minimizes temporary storage requirements during the load. Presorting also enables you to take advantage of high-performance sorting routines that are optimized for your operating system or application.

If the data is presorted and the existing index is not empty, then presorting minimizes the amount of temporary segment space needed for the new keys. The sort routine appends each new key to the key list.

Instead of requiring extra space for sorting, only space for the keys is needed. To calculate the amount of storage needed, use a sort factor of 1.0 instead of 1.3. For more information about estimating storage requirements, see "Temporary Segment Storage Requirements".

If presorting is specified and the existing index is empty, then maximum efficiency is achieved. The new keys are simply inserted into the index. Instead of having a temporary segment and new index existing simultaneously with the empty, old index, only the new index exists. So, temporary storage is not required, and time is saved.

SORTED INDEXES Clause

The SORTED INDEXES clause identifies the indexes on which the data is presorted. This clause is allowed only for direct path loads. See case study 6, Loading Data Using the Direct Path Load Method, for an example. (See "SQL*Loader Case Studies" for information on how to access case studies.)

Generally, you specify only one index in the SORTED INDEXES clause, because data that is sorted for one index is not usually in the right order for another index. When the data is in the same order for multiple indexes, however, all indexes can be specified at once.

All indexes listed in the SORTED INDEXES clause must be created before you start the direct path load.
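
For example, the following control file sketch declares that the incoming data is already sorted on the hypothetical index emp_loc_name_idx (the field layout is also hypothetical):

LOAD DATA
INFILE 'emp_sorted.dat'
APPEND
INTO TABLE emp
SORTED INDEXES (emp_loc_name_idx)
(city      POSITION(01:15) CHAR,
 last_name POSITION(16:35) CHAR)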

Unsorted Data

If you specify an index in the SORTED INDEXES clause, and the data is not sorted for that index, then the index is left in an Index Unusable state at the end of the load. The data is present, but any attempt to use the index results in an error. Any index that is left in an Index Unusable state must be rebuilt after the load.

Multiple-Column Indexes

If you specify a multiple-column index in the SORTED INDEXES clause, then the data should be sorted so that it is ordered first on the first column in the index, next on the second column in the index, and so on.

For example, if the first column of the index is city, and the second column is last name, then the data should be ordered by last name within each city, as in the following list:

Albuquerque      Adams
Albuquerque      Hartstein
Albuquerque      Klein
...         ...
Boston           Andrews
Boston           Bobrowski
Boston           Heigham
...              ...

Choosing the Best Sort Order

For the best overall performance of direct path loads, you should presort the data based on the index that requires the most temporary segment space. For example, if the primary key is one numeric column, and the secondary key consists of three text columns, then you can minimize both sort time and storage requirements by presorting on the secondary key.

To determine the index that requires the most storage space, use the following procedure (a worked example appears after the steps):

  1. For each index, add up the widths of all columns in that index.

  2. For a single-table load, pick the index with the largest overall width.

  3. For each table in a multiple-table load, identify the index with the largest overall width. If the same number of rows are to be loaded into each table (which is usually the case), then again pick the index with the largest overall width.

  4. If a different number of rows are to be loaded into the indexed tables in a multiple-table load, then multiply the width of each index identified in Step 3 by the number of rows that are to be loaded into that index, and pick the index with the largest result.
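
For example, consider a hypothetical two-table load in which the index chosen for table A has a total column width of 12 bytes and table A is to receive 1,000,000 rows (about 12 MB of key data), while the index chosen for table B has a total column width of 40 bytes and table B is to receive 200,000 rows (about 8 MB of key data). Because 12 MB is the larger result, presorting on the table A index minimizes temporary segment space.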

Infrequent Data Saves

Frequent data saves resulting from a small ROWS value adversely affect the performance of a direct path load. A small ROWS value can also result in wasted data block space because the last data block is not written to after a save, even if the data block is not full.

Because direct path loads can be many times faster than conventional loads, the value of ROWS should be considerably higher for a direct load than it would be for a conventional load.

During a data save, loading stops until all of SQL*Loader's buffers are successfully written. You should select the largest value for ROWS that is consistent with safety. It is a good idea to determine the average time to load a row by loading a few thousand rows. Then you can use that value to select a good value for ROWS.

For example, if you can load 20,000 rows per minute, and you do not want to repeat more than 10 minutes of work after an interruption, then set ROWS to be 200,000 (20,000 rows/minute * 10 minutes).

Minimizing Use of the Redo Log

One way to speed a direct load dramatically is to minimize use of the redo log. There are three ways to do this. You can disable archiving, you can specify that the load is unrecoverable, or you can set the SQL NOLOGGING parameter for the objects being loaded. This section discusses all methods.

Disabling Archiving

If archiving is disabled, then direct path loads do not generate full image redo. Use the SQL ARCHIVELOG and NOARCHIVELOG parameters to set the archiving mode. See the Oracle Database Administrator's Guide for more information about archiving.

Specifying the SQL*Loader UNRECOVERABLE Clause

To save time and space in the redo log file, use the SQL*Loader UNRECOVERABLE clause in the control file when you load data. An unrecoverable load does not record loaded data in the redo log file; instead, it generates invalidation redo.

The UNRECOVERABLE clause applies to all objects loaded during the load session (both data and index segments). Therefore, media recovery is disabled for the loaded table, although database changes by other users may continue to be logged.


Note:

Because the data load is not logged, you may want to make a backup of the data after loading.

If media recovery becomes necessary on data that was loaded with the UNRECOVERABLE clause, then the data blocks that were loaded are marked as logically corrupted.

To recover the data, drop and re-create the data. It is a good idea to do backups immediately after the load to preserve the otherwise unrecoverable data.

By default, a direct path load is RECOVERABLE.

The following is an example of specifying the UNRECOVERABLE clause in the control file:

UNRECOVERABLE
LOAD DATA
INFILE 'sample.dat'
INTO TABLE emp
(ename VARCHAR2(10), empno NUMBER(4));

Setting the SQL NOLOGGING Parameter

If a data or index segment has the SQL NOLOGGING parameter set, then full image redo logging is disabled for that segment (invalidation redo is generated). Use of the NOLOGGING parameter allows a finer degree of control over the objects that are not logged.
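
For example, the following SQL statements (the table and index names are hypothetical) turn off full image redo logging for a table and one of its indexes before a direct path load:

ALTER TABLE emp NOLOGGING;
ALTER INDEX emp_ename_ix NOLOGGING;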

Specifying the Number of Column Array Rows and Size of Stream Buffers

The number of column array rows determines the number of rows loaded before the stream buffer is built. The STREAMSIZE parameter specifies the size (in bytes) of the data stream sent from the client to the server.

Use the COLUMNARRAYROWS parameter to specify a value for the number of column array rows. Note that when VARRAYs are loaded using direct path, the COLUMNARRAYROWS parameter defaults to 100 to avoid client object cache thrashing.

Use the STREAMSIZE parameter to specify the size for direct path stream buffers.
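
For example, the following command line (the control file name and parameter values are hypothetical starting points, not recommendations) sets both parameters for a direct path load:

sqlldr USERID=scott CONTROL=load.ctl DIRECT=true COLUMNARRAYROWS=10000 STREAMSIZE=512000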

The optimal values for these parameters vary, depending on the system, input datatypes, and Oracle column datatypes used. When you are using optimal values for your particular configuration, the elapsed time in the SQL*Loader log file should go down.

To see a list of default values for these and other parameters, invoke SQL*Loader without any parameters, as described in "Invoking SQL*Loader".


Note:

You should monitor process paging activity, because if paging becomes excessive, then performance can be significantly degraded. You may need to lower the values for READSIZE, STREAMSIZE, and COLUMNARRAYROWS to avoid excessive paging.

It can be particularly useful to specify the number of column array rows and size of the stream buffer when you perform direct path loads on multiple-CPU systems. See "Optimizing Direct Path Loads on Multiple-CPU Systems" for more information.

Specifying a Value for the Date Cache

If you are performing a direct path load in which the same date or timestamp values are loaded many times, then a large percentage of total load time can end up being used for converting date and timestamp data. This is especially true if multiple date columns are being loaded. In such a case, it may be possible to improve performance by using the SQL*Loader date cache.

The date cache reduces the number of date conversions done when many duplicate values are present in the input data. It enables you to specify the number of unique dates anticipated during the load.

The date cache is enabled by default. To completely disable the date cache, set it to 0.

The default date cache size is 1000 elements. If the default is used and the number of unique input values loaded exceeds 1000, then the date cache is automatically disabled for that table. This prevents excessive and unnecessary lookup times that could affect performance. However, if instead of using the default, you specify a nonzero value for the date cache and it is exceeded, then the date cache is not disabled. Instead, any input data that exceeded the maximum is explicitly converted using the appropriate conversion routines.
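
For example, the following command line (the control file name and cache size are hypothetical) sizes the date cache for a load expected to contain roughly 4000 unique date values:

sqlldr USERID=scott CONTROL=load.ctl DIRECT=true DATE_CACHE=5000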

The date cache can be associated with only one table. No date cache sharing can take place across tables. A date cache is created for a table only if all of the following conditions are true:

  • The DATE_CACHE parameter is not set to 0

  • One or more date values, timestamp values, or both are being loaded that require datatype conversion in order to be stored in the table

  • The load is a direct path load

Date cache statistics are written to the log file. You can use those statistics to improve direct path load performance as follows:

  • If the number of cache entries is less than the cache size and there are no cache misses, then the cache size could safely be set to a smaller value.

  • If the number of cache hits (entries for which there are duplicate values) is small and the number of cache misses is large, then the cache size should be increased. Be aware that if the cache size is increased too much, then it may cause other problems, such as excessive paging or too much memory usage.

  • If most of the input date values are unique, then the date cache will not enhance performance and therefore should not be used.


    Note:

    Date cache statistics are not written to the SQL*Loader log file if the cache was active by default and disabled because the maximum was exceeded.

If increasing the cache size does not improve performance, then revert to the default behavior or set the cache size to 0. The overall performance improvement also depends on the datatypes of the other columns being loaded. Improvement will be greater for cases in which the total number of date columns loaded is large compared to other types of data loaded.


See Also:

"DATE_CACHE"

Optimizing Direct Path Loads on Multiple-CPU Systems

If you are performing direct path loads on a multiple-CPU system, then SQL*Loader uses multithreading by default. A multiple-CPU system in this case is defined as a single system that has two or more CPUs.

Multithreaded loading means that, when possible, conversion of the column arrays to stream buffers and stream buffer loading are performed in parallel. This optimization works best when:

The status of this process is recorded in the SQL*Loader log file, as shown in the following sample portion of a log:

Total stream buffers loaded by SQL*Loader main thread:         47
Total stream buffers loaded by SQL*Loader load thread:        180
Column array rows:                                           1000
Stream buffer bytes:                                       256000

In this example, the SQL*Loader load thread has offloaded the SQL*Loader main thread, allowing the main thread to build the next stream buffer while the load thread loads the current stream on the server.

The goal is to have the load thread perform as many stream buffer loads as possible. This can be accomplished by increasing the number of column array rows, decreasing the stream buffer size, or both. You can monitor the elapsed time in the SQL*Loader log file to determine whether your changes are having the desired effect. See "Specifying the Number of Column Array Rows and Size of Stream Buffers" for more information.

On single-CPU systems, optimization is turned off by default. When the server is on another system, performance may improve if you manually turn on multithreading.

To turn the multithreading option on or off, use the MULTITHREADING parameter at the SQL*Loader command line or specify it in your SQL*Loader control file.
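
For example, the following command line (the control file name is hypothetical) explicitly enables multithreading for a direct path load from a single-CPU client to a server on another system:

sqlldr USERID=scott CONTROL=load.ctl DIRECT=true MULTITHREADING=true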


See Also:

Oracle Call Interface Programmer's Guide for more information about the concepts of direct path loading

Avoiding Index Maintenance

For both the conventional path and the direct path, SQL*Loader maintains all existing indexes for a table.

To avoid index maintenance, use one of the following methods:

By avoiding index maintenance, you minimize the amount of space required during a direct path load, in the following ways:

Avoiding index maintenance is quite reasonable when the number of rows to be loaded is large compared to the size of the table. But if relatively few rows are added to a large table, then the time required to resort the indexes may be excessive. In such cases, it is usually better to use the conventional path load method, or to use the SINGLEROW parameter of SQL*Loader. For more information, see "SINGLEROW Option".

Direct Loads, Integrity Constraints, and Triggers

With the conventional path load method, arrays of rows are inserted with standard SQL INSERT statements—integrity constraints and insert triggers are automatically applied. But when you load data with the direct path, SQL*Loader disables some integrity constraints and all database triggers. This section discusses the implications of using direct path loads with respect to these features.

Integrity Constraints

During a direct path load, some integrity constraints are automatically disabled. Others are not. For a description of the constraints, see the information about maintaining data integrity in the Oracle Database Advanced Application Developer's Guide.

Enabled Constraints

During a direct path load, the constraints that remain enabled are as follows:

  • NOT NULL

  • UNIQUE

  • PRIMARY KEY (unique-constraints on not-null columns)

NOT NULL constraints are checked at column array build time. Any row that violates the NOT NULL constraint is rejected.

Even though UNIQUE constraints remain enabled during direct path loads, any rows that violate those constraints are loaded anyway (this is different than in conventional path in which such rows would be rejected). When indexes are rebuilt at the end of the direct path load, UNIQUE constraints are verified and if a violation is detected, then the index will be left in an Index Unusable state. See "Indexes Left in an Unusable State".

Disabled Constraints

During a direct path load, the following constraints are automatically disabled by default:

  • CHECK constraints

  • Referential constraints (FOREIGN KEY)

You can override the automatic disabling of CHECK constraints by specifying the EVALUATE CHECK_CONSTRAINTS clause. SQL*Loader will then evaluate CHECK constraints during a direct path load. Any row that violates the CHECK constraint is rejected. The following example shows the use of the EVALUATE CHECK_CONSTRAINTS clause in a SQL*Loader control file:

LOAD DATA 
INFILE * 
APPEND 
INTO TABLE emp 
EVALUATE CHECK_CONSTRAINTS 
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' 
(c1 CHAR(10) ,c2) 
BEGINDATA 
Jones,10
Smith,20 
Brown,30
Taylor,40

Reenable Constraints

When the load completes, the integrity constraints will be reenabled automatically if the REENABLE clause is specified. The syntax for the REENABLE clause is as follows:

(Syntax diagrams for the REENABLE clause: into_table4.gif, into_table5.gif)

The optional parameter DISABLED_CONSTRAINTS is provided for readability. If the EXCEPTIONS clause is included, then the table must already exist and you must be able to insert into it. This table contains the ROWIDs of all rows that violated one of the integrity constraints. It also contains the name of the constraint that was violated. See Oracle Database SQL Language Reference for instructions on how to create an exceptions table.
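
For example, the clause might appear in the INTO TABLE clause of a control file as follows (a sketch only; the exceptions table name emp_exceptions is hypothetical and must already exist). The general form is REENABLE, optionally followed by DISABLED_CONSTRAINTS and an EXCEPTIONS table name:

INTO TABLE emp
REENABLE DISABLED_CONSTRAINTS EXCEPTIONS emp_exceptions
(empno POSITION(01:04) INTEGER EXTERNAL,
 ename POSITION(05:14) CHAR)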

The SQL*Loader log file describes the constraints that were disabled, the ones that were reenabled, and what error, if any, prevented reenabling or validating of each constraint. It also contains the name of the exceptions table specified for each loaded table.

If the REENABLE clause is not used, then the constraints must be reenabled manually, at which time all rows in the table are verified. If the Oracle database finds any errors in the new data, then error messages are produced. The names of violated constraints and the ROWIDs of the bad data are placed in an exceptions table, if one is specified.

If the REENABLE clause is used, then SQL*Loader automatically reenables the constraint and verifies all new rows. If no errors are found in the new data, then SQL*Loader automatically marks the constraint as validated. If any errors are found in the new data, then error messages are written to the log file and SQL*Loader marks the status of the constraint as ENABLE NOVALIDATE. The names of violated constraints and the ROWIDs of the bad data are placed in an exceptions table, if one is specified.


Note:

Normally, when a table constraint is left in an ENABLE NOVALIDATE state, new data can be inserted into the table but no new invalid data may be inserted. However, SQL*Loader direct path load does not enforce this rule. Thus, if subsequent direct path loads are performed with invalid data, then the invalid data will be inserted but the same error reporting and exception table processing as described previously will take place. In this scenario the exception table may contain duplicate entries if it is not cleared out before each load. Duplicate entries can easily be filtered out by performing a query such as the following:
SELECT UNIQUE * FROM exceptions_table; 


Note:

Because referential integrity must be reverified for the entire table, performance may be improved by using the conventional path, instead of the direct path, when a small number of rows are to be loaded into a very large table.

Database Insert Triggers

Table insert triggers are also disabled when a direct path load begins. After the rows are loaded and indexes rebuilt, any triggers that were disabled are automatically reenabled. The log file lists all triggers that were disabled for the load. There should not be any errors reenabling triggers.

Unlike integrity constraints, insert triggers are not reapplied to the whole table when they are enabled. As a result, insert triggers do not fire for any rows loaded on the direct path. When using the direct path, the application must ensure that any behavior associated with insert triggers is carried out for the new rows.

Replacing Insert Triggers with Integrity Constraints

Applications commonly use insert triggers to implement integrity constraints. Most of the triggers that these applications use are simple enough that they can be replaced with Oracle's automatic integrity constraints.

When Automatic Constraints Cannot Be Used

Sometimes an insert trigger cannot be replaced with Oracle's automatic integrity constraints. For example, if an integrity check is implemented with a table lookup in an insert trigger, then automatic check constraints cannot be used, because the automatic constraints can only reference constants and columns in the current row. This section describes two methods for duplicating the effects of such a trigger.

Preparation

Before either method can be used, the table must be prepared. Use the following general guidelines to prepare the table:

  1. Before the load, add a 1-byte or 1-character column to the table that marks rows as "old data" or "new data."

  2. Let the value of null for this column signify "old data" because null columns do not take up space.

  3. When loading, flag all loaded rows as "new data" with SQL*Loader's CONSTANT parameter.

After following this procedure, all newly loaded rows are identified, making it possible to operate on the new data without affecting the old rows.
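
For example, the following control file sketch (the column name load_flag and the field layout are hypothetical) uses the CONSTANT parameter to flag every loaded row as new data:

LOAD DATA
INFILE 'new_emp.dat'
APPEND
INTO TABLE emp
(ename     POSITION(01:10) CHAR,
 deptno    POSITION(11:12) INTEGER EXTERNAL,
 load_flag CONSTANT 'N')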

Using an Update Trigger

Generally, you can use a database update trigger to duplicate the effects of an insert trigger. This method is the simplest. It can be used whenever the insert trigger does not raise any exceptions.

  1. Create an update trigger that duplicates the effects of the insert trigger.

    Copy the trigger. Change all occurrences of "new.column_name" to "old.column_name".

  2. Replace the current update trigger, if it exists, with the new one.

  3. Update the table, changing the "new data" flag to null, thereby firing the update trigger.

  4. Restore the original update trigger, if there was one.

Depending on the behavior of the trigger, it may be necessary to have exclusive update access to the table during this operation, so that other users do not inadvertently apply the trigger to rows they modify.
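
The following PL/SQL sketch outlines step 1, assuming the hypothetical emp table and load_flag column described in "Preparation"; the trigger body must be copied from your original insert trigger:

CREATE OR REPLACE TRIGGER emp_new_data_trg
AFTER UPDATE OF load_flag ON emp
FOR EACH ROW
WHEN (old.load_flag = 'N' AND new.load_flag IS NULL)
BEGIN
  -- Body copied from the original insert trigger, with each
  -- reference to new.column_name changed to old.column_name.
  NULL;
END;
/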

Duplicating the Effects of Exception Conditions

If the insert trigger can raise an exception, then more work is required to duplicate its effects. Raising an exception would prevent the row from being inserted into the table. To duplicate that effect with an update trigger, it is necessary to mark the loaded row for deletion.

The "new data" column cannot be used as a delete flag, because an update trigger cannot modify the columns that caused it to fire. So another column must be added to the table. This column marks the row for deletion. A null value means the row is valid. Whenever the insert trigger would raise an exception, the update trigger can mark the row as invalid by setting a flag in the additional column.

In summary, when an insert trigger can raise an exception condition, its effects can be duplicated by an update trigger, provided:

  • Two columns (which are usually null) are added to the table

  • The table can be updated exclusively (if necessary)

Using a Stored Procedure

The following procedure always works, but it is more complex to implement. It can be used when the insert trigger raises exceptions. It does not require a second additional column; and, because it does not replace the update trigger, it can be used without exclusive access to the table.

  1. Do the following to create a stored procedure that duplicates the effects of the insert trigger (a minimal sketch appears after these steps):

    1. Declare a cursor for the table, selecting all new rows.

    2. Open the cursor and fetch rows, one at a time, in a processing loop.

    3. Perform the operations contained in the insert trigger.

    4. If the operations succeed, then change the "new data" flag to null.

    5. If the operations fail, then change the "new data" flag to "bad data."

  2. Execute the stored procedure using an administration tool such as SQL*Plus.

  3. After running the procedure, check the table for any rows marked "bad data."

  4. Update or remove the bad rows.

  5. Reenable the insert trigger.
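
The following PL/SQL sketch outlines the stored procedure described in step 1, assuming the hypothetical emp table and load_flag column described in "Preparation" and using 'B' as the "bad data" flag; the actual insert trigger logic must be supplied where indicated (a cursor FOR loop is used as a compact equivalent of an explicit OPEN and FETCH loop):

CREATE OR REPLACE PROCEDURE apply_insert_trigger_logic AS
BEGIN
  -- Select all new rows (those flagged 'N' by the load).
  FOR r IN (SELECT rowid AS rid FROM emp WHERE load_flag = 'N') LOOP
    BEGIN
      -- Perform the operations contained in the original insert trigger here.
      UPDATE emp SET load_flag = NULL WHERE rowid = r.rid;   -- operations succeeded
    EXCEPTION
      WHEN OTHERS THEN
        UPDATE emp SET load_flag = 'B' WHERE rowid = r.rid;  -- mark row as "bad data"
    END;
  END LOOP;
END;
/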

Permanently Disabled Triggers and Constraints

SQL*Loader needs to acquire several locks on the table to be loaded to disable triggers and constraints. If a competing process is enabling triggers or constraints at the same time that SQL*Loader is trying to disable them for that table, then SQL*Loader may not be able to acquire exclusive access to the table.

SQL*Loader attempts to handle this situation as gracefully as possible. It attempts to reenable disabled triggers and constraints before exiting. However, the same table-locking problem that made it impossible for SQL*Loader to continue may also have made it impossible for SQL*Loader to finish enabling triggers and constraints. In such cases, triggers and constraints will remain disabled until they are manually enabled.

Although such a situation is unlikely, it is possible. The best way to prevent it is to ensure that no applications are running that could enable triggers or constraints for the table while the direct load is in progress.

If a direct load is terminated due to failure to acquire the proper locks, then carefully check the log. It will show every trigger and constraint that was disabled, and each attempt to reenable them. Any triggers or constraints that were not reenabled by SQL*Loader should be manually enabled with the ENABLE clause of the ALTER TABLE statement described in Oracle Database SQL Language Reference.

Increasing Performance with Concurrent Conventional Path Loads

If triggers or integrity constraints pose a problem, but you want faster loading, then you should consider using concurrent conventional path loads. That is, use multiple load sessions executing concurrently on a multiple-CPU system. Split the input data files into separate files on logical record boundaries, and then load each such input data file with a conventional path load session. The resulting load has the following attributes:

  • It is faster than a single conventional load on a multiple-CPU system, but probably not as fast as a direct load.

  • Triggers fire, integrity constraints are applied to the loaded rows, and indexes are maintained using the standard DML execution logic.

Parallel Data Loading Models

This section discusses three basic models of concurrency that you can use to minimize the elapsed time required for data loading:

  • Concurrent conventional path loads

  • Intersegment concurrency with the direct path load method

  • Intrasegment concurrency with the direct path load method

Concurrent Conventional Path Loads

Using multiple conventional path load sessions executing concurrently is discussed in "Increasing Performance with Concurrent Conventional Path Loads". You can use this technique to load the same or different objects concurrently with no restrictions.

Intersegment Concurrency with Direct Path

Intersegment concurrency can be used for concurrent loading of different objects. You can apply this technique to concurrent direct path loading of different tables, or to concurrent direct path loading of different partitions of the same table.

When you direct path load a single partition, consider the following items:

  • Local indexes can be maintained by the load.

  • Global indexes cannot be maintained by the load.

  • Referential integrity and CHECK constraints must be disabled.

  • Triggers must be disabled.

  • The input data should be partitioned (otherwise many records will be rejected, which adversely affects performance).

Intrasegment Concurrency with Direct Path

SQL*Loader permits multiple, concurrent sessions to perform a direct path load into the same table, or into the same partition of a partitioned table. Multiple SQL*Loader sessions improve the performance of a direct path load given the available resources on your system.

This method of data loading is enabled by setting both the DIRECT and the PARALLEL parameters to true, and is often referred to as a parallel direct path load.

It is important to realize that parallelism is user managed. Setting the PARALLEL parameter to true only allows multiple concurrent direct path load sessions.

Restrictions on Parallel Direct Path Loads

The following restrictions are enforced on parallel direct path loads:

  • Neither local nor global indexes can be maintained by the load.

  • Rows can only be appended. REPLACE, TRUNCATE, and INSERT cannot be used (this is due to the individual loads not being coordinated). If you must truncate a table before a parallel load, then you must do it manually.

Additionally, the following objects must be disabled on parallel direct path loads. You do not have to take any action to disable them. SQL*Loader disables them before the load begins and re-enables them after the load completes:

  • Referential integrity constraints

  • Triggers

  • CHECK constraints, unless the ENABLE_CHECK_CONSTRAINTS control file option is used

If a parallel direct path load is being applied to a single partition, then you should partition the data first (otherwise, the overhead of record rejection due to a partition mismatch slows down the load).

Initiating Multiple SQL*Loader Sessions

Each SQL*Loader session takes a different data file as input. In all sessions executing a direct load on the same table, you must set PARALLEL to true. The syntax is:

(Syntax diagram for the PARALLEL parameter: parallel.gif)

PARALLEL can be specified on the command line or in a parameter file. It can also be specified in the control file with the OPTIONS clause.

For example, to invoke three SQL*Loader direct path load sessions on the same table, you would execute each of the following commands at the operating system prompt. After entering each command, you will be prompted for a password.

sqlldr USERID=scott CONTROL=load1.ctl DIRECT=TRUE PARALLEL=true
sqlldr USERID=scott CONTROL=load2.ctl DIRECT=TRUE PARALLEL=true
sqlldr USERID=scott CONTROL=load3.ctl DIRECT=TRUE PARALLEL=true

The previous commands must be executed in separate sessions, or if permitted on your operating system, as separate background jobs. Note the use of multiple control files. This enables you to be flexible in specifying the files to use for the direct path load.


Note:

Indexes are not maintained during a parallel load. Any indexes must be created or re-created manually after the load completes. You can use the parallel index creation or parallel index rebuild feature to speed the building of large indexes after a parallel load.

When you perform a parallel load, SQL*Loader creates temporary segments for each concurrent session and then merges the segments upon completion. The segment created from the merge is then added to the existing segment in the database above the segment's high-water mark. The last extent used of each segment for each loader session is trimmed of any free space before being combined with the other extents of the SQL*Loader session.

Parameters for Parallel Direct Path Loads

When you perform parallel direct path loads, there are options available for specifying attributes of the temporary segment to be allocated by the loader. These options are specified with the FILE and STORAGE parameters. These parameters are valid only for parallel loads.

Using the FILE Parameter to Specify Temporary Segments

To allow for maximum I/O throughput, Oracle recommends that each concurrent direct path load session use files located on different disks. In the SQL*Loader control file, use the FILE parameter of the OPTIONS clause to specify the file name of any valid data file in the tablespace of the object (table or partition) being loaded.

For example:

LOAD DATA
INFILE 'load1.dat'
INSERT INTO TABLE emp
OPTIONS(FILE='/dat/data1.dat')
(empno POSITION(01:04) INTEGER EXTERNAL NULLIF empno=BLANKS
...

You could also specify the FILE parameter on the command line of each concurrent SQL*Loader session, but then it would apply globally to all objects being loaded with that session.

Using the FILE Parameter

The FILE parameter in the Oracle database has the following restrictions for parallel direct path loads:

  • For nonpartitioned tables: The specified file must be in the tablespace of the table being loaded.

  • For partitioned tables, single-partition load: The specified file must be in the tablespace of the partition being loaded.

  • For partitioned tables, full-table load: The specified file must be in the tablespace of all partitions being loaded; that is, all partitions must be in the same tablespace.

Using the STORAGE Parameter

You can use the STORAGE parameter to specify the storage attributes of the temporary segments allocated for a parallel direct path load. If the STORAGE parameter is not used, then the storage attributes of the segment containing the object (table, partition) being loaded are used. Also, when the STORAGE parameter is not specified, SQL*Loader uses a default of 2 KB for EXTENTS.

For example, the following OPTIONS clause could be used to specify STORAGE parameters:

OPTIONS (STORAGE=(INITIAL 100M NEXT 100M PCTINCREASE 0))

You can use the STORAGE parameter only in the SQL*Loader control file, and not on the command line. Use of the STORAGE parameter to specify anything other than PCTINCREASE of 0, and INITIAL or NEXT values is strongly discouraged and may be silently ignored.

Enabling Constraints After a Parallel Direct Path Load

Constraints and triggers must be enabled manually after all data loading is complete.

Because each SQL*Loader session can attempt to reenable constraints on a table after a direct path load, there is a danger that one session may attempt to reenable a constraint before another session is finished loading data. In this case, the first session to complete the load will be unable to enable the constraint because the remaining sessions possess share locks on the table.

Because there is a danger that some constraints might not be reenabled after a direct path load, you should check the status of the constraint after completing the load to ensure that it was enabled properly.

PRIMARY KEY and UNIQUE KEY Constraints

PRIMARY KEY and UNIQUE KEY constraints create indexes on a table when they are enabled, and subsequently can take a significantly long time to enable after a direct path loading session if the table is very large. You should consider enabling these constraints manually after a load (and not specifying the automatic enable feature). This enables you to manually create the required indexes in parallel to save time before enabling the constraint.
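
For example, the following SQL sketch (the table, column, index, and constraint names, and the degree of parallelism, are hypothetical) builds the supporting index in parallel and then enables the constraint, which can use the existing index:

CREATE UNIQUE INDEX emp_pk_ix ON emp (empno) PARALLEL 8 NOLOGGING;
ALTER TABLE emp ENABLE CONSTRAINT emp_pk;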

General Performance Improvement Hints

If you have control over the format of the data to be loaded, then you can use the following hints to improve load performance (a control file sketch combining several of them appears after this list):

  • Make logical record processing efficient.

    • Use one-to-one mapping of physical records to logical records (avoid using CONTINUEIF and CONCATENATE).

    • Make it easy for the software to identify physical record boundaries. Use the file processing option string "FIX nnn" or "VAR". If you use the default (stream mode), then on most platforms (for example, UNIX and NT) the loader must scan each physical record for the record terminator (newline character).

  • Make field setting efficient. Field setting is the process of mapping fields in the data file to their corresponding columns in the table being loaded. The mapping function is controlled by the description of the fields in the control file. Field setting (along with data conversion) is the biggest consumer of CPU cycles for most loads.

    • Avoid delimited fields; use positional fields. If you use delimited fields, then the loader must scan the input data to find the delimiters. If you use positional fields, then field setting becomes simple pointer arithmetic (very fast).

    • Do not trim whitespace if you do not need to (use PRESERVE BLANKS).

  • Make conversions efficient. SQL*Loader performs character set conversion and datatype conversion for you. Of course, the quickest conversion is no conversion.

    • Use single-byte character sets if you can.

    • Avoid character set conversions if you can. SQL*Loader supports four character sets:

      • Client character set (NLS_LANG of the client sqlldr process)

      • Data file character set (usually the same as the client character set)

      • Database character set

      • Database national character set

      Performance is optimized if all character sets are the same. For direct path loads, it is best if the data file character set and the database character set are the same. If the character sets are the same, then character set conversion buffers are not allocated.

  • Use direct path loads.

  • Use the SORTED INDEXES clause.

  • Avoid unnecessary NULLIF and DEFAULTIF clauses. Each clause must be evaluated on each column that has a clause associated with it for every row loaded.

  • Use parallel direct path loads and parallel index creation when you can.

  • Be aware of the effect on performance when you have large values for both the CONCATENATE clause and the COLUMNARRAYROWS clause. See "Using CONCATENATE to Assemble Logical Records".
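
The following control file sketch combines several of these hints: it uses the direct path, fixed-length records, positional fields, and the SORTED INDEXES clause. The record layout, FIX length, and index name are hypothetical and must match your actual data file, which is assumed to be presorted on the empno column:

OPTIONS (DIRECT=TRUE)
LOAD DATA
INFILE 'emp_fixed.dat' "FIX 23"
APPEND
INTO TABLE emp
SORTED INDEXES (emp_pk_ix)
(empno POSITION(01:04) INTEGER EXTERNAL,
 ename POSITION(05:14) CHAR,
 sal   POSITION(15:22) INTEGER EXTERNAL)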


8 SQL*Loader Command-Line Reference

This chapter describes the command-line parameters used to invoke SQL*Loader. The following topics are discussed:

Invoking SQL*Loader

This section describes how to invoke SQL*Loader and specify parameters. It contains the following sections:

Specifying Parameters on the Command Line

When you invoke SQL*Loader, you specify parameters to establish session characteristics. You can separate the parameters by commas, if you want to. In the following example, the CONTROL parameter is specified by keyword (parameter=value), and SQL*Loader prompts for the username and password:

> sqlldr CONTROL=ulcase1.ctl
Username: scott
Password: password
 

Specifying by position means that you enter a value, but not the parameter name. In the following example, the username scott is provided and then the name of the control file, ulcase1.ctl. You are prompted for the password:

> sqlldr scott ulcase1.ctl
Password: password
 

Once a keyword specification is used, no positional specification is allowed after that. For example, the following command line would result in an error even though the position of ulcase1.log is correct:

> sqlldr scott CONTROL=ulcase1.ctl ulcase1.log

If you invoke SQL*Loader without specifying any parameters, then SQL*Loader displays a help screen that lists the available parameters and their default values.


See Also:

"Command-Line Parameters" for descriptions of all the command-line parameters

Alternative Ways to Specify Parameters

If the length of the command line exceeds the size of the maximum command line on your system, then you can put certain command-line parameters in the control file by using the OPTIONS clause.

You can also group parameters together in a parameter file. You specify the name of this file on the command line using the PARFILE parameter when you invoke SQL*Loader.

These alternative ways of specifying parameters are useful when you often use the same parameters with the same values.

Parameter values specified on the command line override parameter values specified in either a parameter file or in the OPTIONS clause.
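
For example, the following control file sketch (the parameter values and file names are hypothetical) places commonly used command-line parameters in the OPTIONS clause at the top of the control file:

OPTIONS (DIRECT=TRUE, ERRORS=50, ROWS=100000)
LOAD DATA
INFILE 'emp.dat'
APPEND
INTO TABLE emp
(empno POSITION(01:04) INTEGER EXTERNAL,
 ename POSITION(05:14) CHAR)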

Loading Data Across a Network

To use SQL*Loader to load data across a network connection, you can specify a connect identifier in the connect string when you invoke the SQL*Loader utility. This identifier can specify a database instance that is different from the current instance identified by the current Oracle System ID (SID). The connect identifier can be an Oracle Net connect descriptor or a net service name (usually defined in the tnsnames.ora file) that maps to a connect descriptor. Use of a connect identifier requires that you have Oracle Net Listener running (to start the default listener, enter lsnrctl start). The following example invokes SQL*Loader for user scott using the connect identifier inst1:

> sqlldr CONTROL=ulcase1.ctl
Username: scott@inst1
Password: password

The local SQL*Loader client connects to the database instance defined by the connect identifier inst1 (a net service name), and loads the data, as specified in the ulcase1.ctl control file.


See Also:

Oracle Database Net Services Administrator's Guide for more information about connect identifiers and Oracle Net Listener

Command-Line Parameters

This section describes each SQL*Loader command-line parameter. The defaults and maximum values listed for these parameters are for UNIX-based systems. They may be different on your operating system. Refer to your Oracle operating system-specific documentation for more information.

BAD (bad file)

Default: The name of the data file, with an extension of .bad.

BAD specifies the name of the bad file created by SQL*Loader to store records that cause errors during insert or that are improperly formatted. If you do not specify a file name, then the default is used. A bad file is not automatically created if there are no rejected records.

A bad file name specified on the command line becomes the bad file associated with the first INFILE statement in the control file.

Note that the name of the bad file can also be specified in the SQL*Loader control file, using the BADFILE clause. If the bad file name is specified in the control file as well as on the command line, then the command line value takes precedence.


See Also:

"Specifying the Bad File" for information about the format of bad files

BINDSIZE (maximum size)

Default: To see the default value for this parameter, invoke SQL*Loader without any parameters, as described in "Invoking SQL*Loader".

BINDSIZE specifies the maximum size (bytes) of the bind array. The size of the bind array given by BINDSIZE overrides the default size (which is system dependent) and any size determined by ROWS.

COLUMNARRAYROWS

Default: To see the default value for this parameter, invoke SQL*Loader without any parameters, as described in "Invoking SQL*Loader".

Specifies the number of rows to allocate for direct path column arrays. The value for this parameter is not calculated by SQL*Loader. You must either specify it or accept the default.

CONTROL (control file)

Default: none

CONTROL specifies the name of the SQL*Loader control file that describes how to load the data. If a file extension or file type is not specified, then it defaults to .ctl. If the file name is omitted, then SQL*Loader prompts you for it.

If the name of your SQL*Loader control file contains special characters, then your operating system may require that they be preceded by an escape character. Also, if your operating system uses backslashes in its file system paths, then you may need to use multiple escape characters or to enclose the path in quotation marks. See your Oracle operating system-specific documentation for more information.


See Also:

Chapter 9 for a detailed description of the SQL*Loader control file

DATA (data file)

Default: The name of the control file, with an extension of .dat.

DATA specifies the name of the data file containing the data to be loaded. If you do not specify a file extension or file type, then the default is .dat.

If you specify a data file on the command line and also specify data files in the control file with INFILE, then the data specified on the command line is processed first. The first data file specified in the control file is ignored. All other data files specified in the control file are processed.

If you specify a file processing option when loading data from the control file, then a warning message will be issued.

DATE_CACHE

Default: Enabled (for 1000 elements). To completely disable the date cache feature, set it to 0.

The date cache is used to store the results of conversions from text strings to internal date format. The cache is useful because the cost of looking up dates is much less than converting from text format to date format. If the same dates occur repeatedly in the data file, then using the date cache can improve the speed of a direct path load.

DATE_CACHE specifies the date cache size (in entries). For example, DATE_CACHE=5000 specifies that each date cache created can contain a maximum of 5000 unique date entries. Every table has its own date cache, if one is needed. A date cache is created only if at least one date or timestamp value is loaded that requires datatype conversion in order to be stored in the table.

The date cache feature is only available for direct path loads. It is enabled by default. The default date cache size is 1000 elements. If the default size is used and the number of unique input values loaded exceeds 1000, then the date cache feature is automatically disabled for that table. However, if you override the default and specify a nonzero date cache size and that size is exceeded, then the cache is not disabled.

You can use the date cache statistics (entries, hits, and misses) contained in the log file to tune the size of the cache for future similar loads.

DIRECT (data path)

Default: false

DIRECT specifies the data path, that is, the load method to use, either conventional path or direct path. A value of true specifies a direct path load. A value of false specifies a conventional path load.

DISCARD (file name)

Default: The name of the data file, with an extension of .dsc.

DISCARD specifies a discard file (optional) to be created by SQL*Loader to store records that are neither inserted into a table nor rejected.

A discard file specified on the command line becomes the discard file associated with the first INFILE statement in the control file. If the discard file is also specified in the control file, then the command-line value overrides it.


See Also:

"Discarded and Rejected Records" for information about the format of discard files

DISCARDMAX (integer)

Default: ALL

DISCARDMAX specifies the number of discard records to allow before data loading is terminated. To stop on the first discarded record, specify one (1).

ERRORS (errors to allow)

Default: To see the default value for this parameter, invoke SQL*Loader without any parameters, as described in "Invoking SQL*Loader".

ERRORS specifies the maximum number of insert errors to allow. If the number of errors exceeds the value specified for ERRORS, then SQL*Loader terminates the load. To permit no errors at all, set ERRORS=0. To specify that all errors be allowed, use a very high number.

On a single-table load, SQL*Loader terminates the load when errors exceed this error limit. Any data inserted up to that point, however, is committed.

SQL*Loader maintains the consistency of records across all tables. Therefore, multitable loads do not terminate immediately if errors exceed the error limit. When SQL*Loader encounters the maximum number of errors for a multitable load, it continues to load rows to ensure that valid rows previously loaded into tables are loaded into all tables and rejected rows are filtered out of all tables.

In all cases, SQL*Loader writes erroneous records to the bad file.

EXTERNAL_TABLE

Default: NOT_USED

EXTERNAL_TABLE instructs SQL*Loader whether to load data using the external tables option. There are three possible values:

  • NOT_USED - the default value. It means the load is performed using either conventional or direct path mode.

  • GENERATE_ONLY - places all the SQL statements needed to do the load using external tables, as described in the control file, in the SQL*Loader log file. These SQL statements can be edited and customized. The actual load can be done later without the use of SQL*Loader by executing these statements in SQL*Plus.

  • EXECUTE - attempts to execute the SQL statements that are needed to do the load using external tables. However, if any of the SQL statements returns an error, then the attempt to load stops. Statements are placed in the log file as they are executed. This means that if a SQL statement returns an error, then the remaining SQL statements required for the load will not be placed in the log file.

    If you use EXTERNAL_TABLE=EXECUTE and also use the SEQUENCE parameter in your SQL*Loader control file, then SQL*Loader creates a database sequence, loads the table using that sequence, and then deletes the sequence. The results of doing the load this way will be different than if the load were done with conventional or direct path. (For more information about creating sequences, see CREATE SEQUENCE in Oracle Database SQL Language Reference.)


Note:

When the EXTERNAL_TABLE parameter is specified, any datetime data types (for example, TIMESTAMP) in a SQL*Loader control file are automatically converted to a CHAR data type and use the external tables date_format_spec clause. See "date_format_spec".

Note that the external table option uses directory objects in the database to indicate where all data files are stored and to indicate where output files, such as bad files and discard files, are created. You must have READ access to the directory objects containing the data files, and you must have WRITE access to the directory objects where the output files are created. If there are no existing directory objects for the location of a data file or output file, then SQL*Loader will generate the SQL statement to create one. Therefore, when the EXECUTE option is specified, you must have the CREATE ANY DIRECTORY privilege. If you want the directory object to be deleted at the end of the load, then you must also have the DELETE ANY DIRECTORY privilege.


Note:

The EXTERNAL_TABLE=EXECUTE qualifier tells SQL*Loader to create an external table that can be used to load data and then execute the INSERT statement to load the data. All files in the external table must be identified as being in a directory object. SQL*Loader attempts to use directory objects that already exist and that you have privileges to access. However, if SQL*Loader does not find the matching directory object, then it attempts to create a temporary directory object. If you do not have privileges to create new directory objects, then the operation fails.

To work around this, use EXTERNAL_TABLE=GENERATE_ONLY to create the SQL statements that SQL*Loader would try to execute. Extract those SQL statements and change references to directory objects to be the directory object that you have privileges to access. Then, execute those SQL statements.


When using a multitable load, SQL*Loader does the following:

  1. Creates a table in the database that describes all fields in the data file that will be loaded into any table.

  2. Creates an INSERT statement to load this table from an external table description of the data.

  3. Executes one INSERT statement for every table in the control file.

To see an example of this, run case study 5, but add the EXTERNAL_TABLE=GENERATE_ONLY parameter. To guarantee unique names in the external table, SQL*Loader uses generated names for all fields. This is because the field names may not be unique across the different tables in the control file.
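
For example, the following command line (the control file name follows the case study naming convention) writes the external-table SQL statements to the log file without loading any data:

sqlldr USERID=scott CONTROL=ulcase5.ctl EXTERNAL_TABLE=GENERATE_ONLY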

Restrictions When Using EXTERNAL_TABLE

The following restrictions apply when you use the EXTERNAL_TABLE qualifier:

  • Julian dates cannot be used when you insert data into a database table from an external table through SQL*Loader. To work around this, use TO_DATE and TO_CHAR to convert the Julian date format, as shown in the following example:

    TO_CHAR(TO_DATE(:COL1, 'MM-DD-YYYY'), 'J')
    
  • Built-in functions and SQL strings cannot be used for object elements when you insert data into a database table from an external table.

FILE (tablespace file to load into)

Default: none

FILE specifies the database file to allocate extents from. It is used only for direct path parallel loads. By varying the value of the FILE parameter for different SQL*Loader processes, data can be loaded onto a system with minimal disk contention.

LOAD (number of records to load)

Default: All records are loaded.

LOAD specifies the maximum number of logical records to load (after skipping the specified number of records). No error occurs if fewer than the maximum number of records are found.

LOG (log file)

Default: The name of the control file, with an extension of .log.

LOG specifies the log file that SQL*Loader will create to store logging information about the loading process.

MULTITHREADING

Default: true on multiple-CPU systems, false on single-CPU systems

This parameter is available only for direct path loads.

By default, the multithreading option is always enabled (set to true) on multiple-CPU systems. In this case, the definition of a multiple-CPU system is a single system that has more than one CPU.

On single-CPU systems, multithreading is set to false by default. To use multithreading between two single-CPU systems, you must enable multithreading; it will not be on by default. This will allow stream building on the client system to be done in parallel with stream loading on the server system.

Multithreading functionality is operating system-dependent. Not all operating systems support multithreading.

NO_INDEX_ERRORS

Default: none

When NO_INDEX_ERRORS is specified on the command line, indexes will not be set unusable at any time during the load. If any index errors are detected, then the load is aborted. That is, no rows are loaded and the indexes are left as is.

The NO_INDEX_ERRORS parameter is valid only for direct path loads. If specified for conventional path loads, then it is ignored.

PARALLEL (parallel load)

Default: false

PARALLEL specifies whether direct loads can operate in multiple concurrent sessions to load data into the same table.

PARFILE (parameter file)

Default: none

PARFILE specifies the name of a file that contains commonly used command-line parameters. For example, a parameter file named daily_report.par might have the following contents:

USERID=scott
CONTROL=daily_report.ctl
ERRORS=9999
LOG=daily_report.log

For security reasons, you should not include your USERID password in a parameter file. SQL*Loader will prompt you for the password after you specify the parameter file at the command line, for example:

sqlldr PARFILE=daily_report.par
Password: password

Note:

Although it is not usually important, on some systems it may be necessary to have no spaces around the equal sign (=) in the parameter specifications.

READSIZE (read buffer size)

Default: To see the default value for this parameter, invoke SQL*Loader without any parameters, as described in "Invoking SQL*Loader".

The READSIZE parameter is used only when reading data from data files. When reading records from a control file, a value of 64 kilobytes (KB) is always used as the READSIZE.

The READSIZE parameter lets you specify (in bytes) the size of the read buffer, if you choose not to use the default. The maximum size allowed is platform dependent.

In the conventional path method, the bind array is limited by the size of the read buffer. Therefore, the advantage of a larger read buffer is that more data can be read before a commit operation is required.

For example, setting READSIZE to 1000000 enables SQL*Loader to perform reads from the external data file in chunks of 1,000,000 bytes before a commit is required.


Note:

If the READSIZE value specified is smaller than the BINDSIZE value, then the READSIZE value will be increased.

The READSIZE parameter has no effect on LOBs. The size of the LOB read buffer is fixed at 64 kilobytes (KB).

See "BINDSIZE (maximum size)".

RESUMABLE

Default: false

The RESUMABLE parameter is used to enable and disable resumable space allocation. Because this parameter is disabled by default, you must set RESUMABLE=true to use its associated parameters, RESUMABLE_NAME and RESUMABLE_TIMEOUT.

RESUMABLE_NAME

Default: 'User USERNAME (USERID), Session SESSIONID, Instance INSTANCEID'

The value for this parameter identifies the statement that is resumable. This value is a user-defined text string that is inserted in either the USER_RESUMABLE or DBA_RESUMABLE view to help you identify a specific resumable statement that has been suspended.

This parameter is ignored unless the RESUMABLE parameter is set to true to enable resumable space allocation.

RESUMABLE_TIMEOUT

Default: 7200 seconds (2 hours)

The value of this parameter specifies the time period during which an error must be fixed. If the error is not fixed within the timeout period, then execution of the statement is terminated.

This parameter is ignored unless the RESUMABLE parameter is set to true to enable resumable space allocation.
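A minimal sketch of enabling resumable space allocation for a load, shown here as parameter file contents (the resumable name and timeout value are hypothetical):

CONTROL=daily_report.ctl
LOG=daily_report.log
RESUMABLE=true
RESUMABLE_NAME=daily_report_load
RESUMABLE_TIMEOUT=3600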

ROWS (rows per commit)

Default: To see the default value for this parameter, invoke SQL*Loader without any parameters, as described in "Invoking SQL*Loader".

Keep in mind that if you specify a low value for ROWS and then attempt to compress data using table compression, then your compression ratio will probably be degraded. Oracle recommends that you either specify a high value or accept the default value when compressing data.

Conventional path loads only: The ROWS parameter specifies the number of rows in the bind array. The maximum number of rows is 65534. See "Bind Arrays and Conventional Path Loads".

Direct path loads only: The ROWS parameter identifies the number of rows you want to read from the data file before a data save. The default is to read all rows and save data once at the end of the load. See "Using Data Saves to Protect Against Data Loss". The actual number of rows loaded into a table on a save is approximately the value of ROWS minus the number of discarded and rejected records since the last save.


Note:

The ROWS parameter is ignored for direct path loads when data is loaded into an Index Organized Table (IOT) or into a table containing VARRAYs, XML columns, or LOBs. This means that the load will still take place, but no save points will be done.
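For example, in a direct path load the following command (the control file name is hypothetical) requests a data save after approximately every 10000 rows read:

sqlldr scott CONTROL=emp.ctl DIRECT=true ROWS=10000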

SILENT (feedback mode)

When SQL*Loader begins, information about the SQL*Loader version being used appears on the screen and is placed in the log file. As SQL*Loader executes, you also see feedback messages on the screen, for example:

Commit point reached - logical record count 20

SQL*Loader may also display data error messages similar to the following:

Record 4: Rejected - Error on table EMP
ORA-00001: unique constraint <name> violated

You can suppress these messages by specifying SILENT with one or more values.

For example, you can suppress the header and feedback messages that normally appear on the screen with the following command-line argument:

SILENT=(HEADER, FEEDBACK)

Use the appropriate values to suppress one or more of the following:

  • HEADER - Suppresses the SQL*Loader header messages that normally appear on the screen. Header messages still appear in the log file.

  • FEEDBACK - Suppresses the "commit point reached" feedback messages that normally appear on the screen.

  • ERRORS - Suppresses the data error messages in the log file that occur when a record generates an Oracle error that causes it to be written to the bad file. A count of rejected records still appears.

  • DISCARDS - Suppresses the messages in the log file for each record written to the discard file.

  • PARTITIONS - Disables writing the per-partition statistics to the log file during a direct load of a partitioned table.

  • ALL - Implements all of the suppression values: HEADER, FEEDBACK, ERRORS, DISCARDS, and PARTITIONS.

SKIP (records to skip)

Default: No records are skipped.

SKIP specifies the number of logical records from the beginning of the file that should not be loaded.

Use this parameter to continue a load that has been interrupted for some reason. It applies to all conventional loads, to single-table direct loads, and to multiple-table direct loads when the same number of records was loaded into each table. It does not apply to multiple-table direct loads when a different number of records was loaded into each table.

If a WHEN clause is also present and the load involves secondary data, then the secondary data is skipped only if the WHEN clause succeeds for the record in the primary data file.
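For example, if a previous run committed 20000 logical records before it was interrupted, the load could be continued as follows (the control file name is hypothetical):

sqlldr scott CONTROL=emp.ctl SKIP=20000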

SKIP_INDEX_MAINTENANCE

Default: false

The SKIP_INDEX_MAINTENANCE parameter stops index maintenance for direct path loads but does not apply to conventional path loads. It causes the index partitions that would have had index keys added to them to be marked Index Unusable instead, because the index segment is inconsistent with respect to the data it indexes. Index segments that are not affected by the load retain the Index Unusable state they had before the load.

The SKIP_INDEX_MAINTENANCE parameter:

  • Applies to both local and global indexes

  • Can be used (with the PARALLEL parameter) to do parallel loads on an object that has indexes

  • Can be used (with the PARTITION parameter on the INTO TABLE clause) to do a single partition load to a table that has global indexes

  • Puts a list (in the SQL*Loader log file) of the indexes and index partitions that the load set into Index Unusable state

SKIP_UNUSABLE_INDEXES

Default: The value of the Oracle database configuration parameter, SKIP_UNUSABLE_INDEXES, as specified in the initialization parameter file. The default database setting is TRUE.

Both SQL*Loader and the Oracle database provide a SKIP_UNUSABLE_INDEXES parameter. The SQL*Loader SKIP_UNUSABLE_INDEXES parameter is specified at the SQL*Loader command line. The Oracle database SKIP_UNUSABLE_INDEXES parameter is specified as a configuration parameter in the initialization parameter file. It is important to understand how they affect each other.

If you specify a value for SKIP_UNUSABLE_INDEXES at the SQL*Loader command line, then it overrides the value of the SKIP_UNUSABLE_INDEXES configuration parameter in the initialization parameter file.

If you do not specify a value for SKIP_UNUSABLE_INDEXES at the SQL*Loader command line, then SQL*Loader uses the database setting for the SKIP_UNUSABLE_INDEXES configuration parameter, as specified in the initialization parameter file. If the initialization parameter file does not specify a database setting for SKIP_UNUSABLE_INDEXES, then the default database setting is TRUE.

A value of TRUE for SKIP_UNUSABLE_INDEXES means that if an index in an Index Unusable state is encountered, it is skipped and the load operation continues. This allows SQL*Loader to load a table with indexes that are in an Unusable state prior to the beginning of the load. Indexes that are not in an Unusable state at load time will be maintained by SQL*Loader. Indexes that are in an Unusable state at load time will not be maintained but will remain in an Unusable state at load completion.


Note:

Indexes that are unique and marked Unusable are not allowed to skip index maintenance. This rule is enforced by DML operations, and enforced by the direct path load to be consistent with DML.

The SKIP_UNUSABLE_INDEXES parameter applies to both conventional and direct path loads.
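For example, to override the database setting from the SQL*Loader command line (the control file name is hypothetical):

sqlldr scott CONTROL=emp.ctl SKIP_UNUSABLE_INDEXES=false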

STREAMSIZE

Default: To see the default value for this parameter, invoke SQL*Loader without any parameters, as described in "Invoking SQL*Loader".

Specifies the size, in bytes, for direct path streams.

USERID (username/password)

Default: none

USERID is used to provide your Oracle username and password. If it is omitted, then you are prompted for it. If only a slash is used, then USERID defaults to your operating system login.

If you connect as user SYS, then you must also specify AS SYSDBA in the connect string.


Note:

Because the string, AS SYSDBA, contains a blank, some operating systems may require that the entire connect string be placed in quotation marks or marked as a literal by some method. Some operating systems also require that quotation marks on the command line be preceded by an escape character, such as backslashes.

See your Oracle operating system-specific documentation for information about special and reserved characters on your system.
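As a hedged illustration for a UNIX-like shell (quoting and escape requirements vary by platform, and the control file name is hypothetical), a connection as user SYS might be specified as:

sqlldr \'SYS/password AS SYSDBA\' CONTROL=emp.ctl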


Exit Codes for Inspection and Display

SQL*Loader reports the results of a run immediately upon completion. In addition to recording the results in a log file, SQL*Loader may also report the outcome in a process exit code, which lets you check the outcome of a SQL*Loader invocation from the command line or a script. Table 8-1 shows the exit codes for various results.

Table 8-1 Exit Codes for SQL*Loader

Result                                                          Exit Code
--------------------------------------------------------------  ---------
All rows loaded successfully                                    EX_SUCC
All or some rows rejected                                       EX_WARN
All or some rows discarded                                      EX_WARN
Discontinued load                                               EX_WARN
Command-line or syntax errors                                   EX_FAIL
Oracle errors nonrecoverable for SQL*Loader                     EX_FAIL
Operating system errors (such as file open/close and malloc)   EX_FAIL


For UNIX, the exit codes are as follows:

EX_SUCC 0
EX_FAIL 1
EX_WARN 2
EX_FTL  3

For Windows NT, the exit codes are as follows:

EX_SUCC 0
EX_FAIL 1
EX_WARN 2
EX_FTL  4

If SQL*Loader returns any exit code other than zero, then you should consult your system log files and SQL*Loader log files for more detailed diagnostic information.

In UNIX, you can check the exit code from the shell to determine the outcome of a load.
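The following Bourne-shell sketch (file names are hypothetical) checks the exit code using the UNIX values listed above:

#!/bin/sh
sqlldr scott CONTROL=daily_report.ctl LOG=daily_report.log
rc=$?
case $rc in
  0) echo "EX_SUCC: all rows loaded successfully" ;;
  2) echo "EX_WARN: rows rejected, rows discarded, or load discontinued" ;;
  *) echo "EX_FAIL or EX_FTL (exit code $rc): check the SQL*Loader and system log files" ;;
esac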


15 The ORACLE_DATAPUMP Access Driver

This chapter describes the ORACLE_DATAPUMP access driver, which provides a set of access parameters unique to external tables of the type ORACLE_DATAPUMP. You can use the access parameters to modify the default behavior of the access driver. The information you provide through the access driver ensures that data from the data source is processed so that it matches the definition of the external table.

The following topics are discussed in this chapter:

To use the information in this chapter, you must have some knowledge of the file format and record format (including character sets and field datatypes) of the data files on your platform. You must also know enough about SQL to be able to create an external table and perform queries against it.


Notes:

  • It is sometimes difficult to describe syntax without using other syntax that is not documented until later in the chapter. If it is not clear what some syntax is supposed to do, then you might want to skip ahead and read about that particular element.

  • When identifiers (for example, column or table names) are specified in the external table access parameters, certain values are considered to be reserved words by the access parameter parser. If a reserved word is used as an identifier, then it must be enclosed in double quotation marks. See "Reserved Words for the ORACLE_DATAPUMP Access Driver".


access_parameters Clause

When you create the external table, you can specify certain parameters in an access_parameters clause. This clause is optional, as are its individual parameters. For example, you could specify LOGFILE, but not VERSION, or vice versa. The syntax for the access_parameters clause is as follows.


Note:

These access parameters are collectively referred to as the opaque_format_spec in the SQL CREATE TABLE...ORGANIZATION EXTERNAL statement.


See Also:


[Syntax diagram for the access_parameters clause of the ORACLE_DATAPUMP access driver (et_oracle_datapump.gif)]

comments

Comments are lines that begin with two hyphens followed by text. Comments must be placed before any access parameters, for example:

--This is a comment.
--This is another comment.
NOLOG

All text to the right of the double hyphen is ignored, until the end of the line.

COMPRESSION

Default: DISABLED

Purpose

Specifies whether to compress data before it is written to the dump file set.

Syntax and Description

COMPRESSION=[ENABLED | DISABLED]

If ENABLED is specified, then all data is compressed for the entire unload operation.

If DISABLED is specified, then no data is compressed for the unload operation.

Example

In the following example, the COMPRESSION parameter is set to ENABLED. Therefore, all data written to the dept.dmp dump file will be in compressed format.

CREATE TABLE deptXTec3
 ORGANIZATION EXTERNAL (TYPE ORACLE_DATAPUMP DEFAULT DIRECTORY def_dir1
 ACCESS PARAMETERS (COMPRESSION ENABLED) LOCATION ('dept.dmp'));

ENCRYPTION

Default: DISABLED

Purpose

Specifies whether to encrypt data before it is written to the dump file set.

Syntax and Description

ENCRYPTION=[ENABLED | DISABLED]

If ENABLED is specified, then all data is written to the dump file set in encrypted format.

If DISABLED is specified, then no data is written to the dump file set in encrypted format.

Restrictions

This parameter is used only for export operations.

Example

In the following example, the ENCRYPTION parameter is set to ENABLED. Therefore, all data written to the dept.dmp file will be in encrypted format.

CREATE TABLE deptXTec3
 ORGANIZATION EXTERNAL (TYPE ORACLE_DATAPUMP DEFAULT DIRECTORY def_dir1
 ACCESS PARAMETERS (ENCRYPTION ENABLED) LOCATION ('dept.dmp')); 

LOGFILE | NOLOGFILE

Default: If LOGFILE is not specified, then a log file is created in the default directory and the name of the log file is generated from the table name and the process ID with an extension of .log. If a log file already exists by the same name, then the access driver reopens that log file and appends the new log information to the end.

Purpose

LOGFILE specifies the name of the log file that contains any messages generated while the dump file was being accessed. NOLOGFILE prevents the creation of a log file.

Syntax and Description

NOLOGFILE

or

LOGFILE=[directory_object:]logfile_name

If a directory object is not specified as part of the log file name, then the directory object specified by the DEFAULT DIRECTORY attribute is used. If a directory object is not specified and no default directory was specified, then an error is returned. See "File Names for LOGFILE" for information about using substitution variables to create unique file names during parallel loads or unloads.

Example

In the following example, the dump file, dept_dmp, is in the directory identified by the directory object, load_dir, but the log file, deptxt.log, is in the directory identified by the directory object, log_dir.

CREATE TABLE dept_xt (dept_no INT, dept_name CHAR(20), location CHAR(20))
ORGANIZATION EXTERNAL (TYPE ORACLE_DATAPUMP DEFAULT DIRECTORY load_dir 
ACCESS PARAMETERS (LOGFILE log_dir:deptxt) LOCATION ('dept_dmp'));

File Names for LOGFILE

The access driver does some symbol substitution to help make file names unique in the case of parallel loads. The symbol substitutions supported are as follows:

  • %p is replaced by the process ID of the current process. For example, if the process ID of the access driver is 12345, then exttab_%p.log becomes exttab_12345.log.

  • %a is replaced by the agent number of the current process. The agent number is the unique number assigned to each parallel process accessing the external table. This number is padded to the left with zeros to fill three characters. For example, if the third parallel agent is creating a file and exttab_%a.log was specified as the file name, then the agent would create a file named exttab_003.log.

  • %% is replaced by %. If there is a need to have a percent sign in the file name, then this symbol substitution must be used.

If the % character is followed by anything other than one of the characters in the preceding list, then an error is returned.

If %p or %a is not used to create unique file names for output files and an external table is being accessed in parallel, then output files may be corrupted or agents may be unable to write to the files.

If no extension is supplied for the file, then a default extension of .log is used. If the name generated is not a valid file name, then an error is returned and no data is loaded or unloaded.
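For example, the following statement (modeled on the earlier examples; the table and dump file names are hypothetical) uses %a so that each parallel agent writes to its own log file, such as deptxt_001.log and deptxt_002.log:

CREATE TABLE dept_xt_par (dept_no INT, dept_name CHAR(20), location CHAR(20))
ORGANIZATION EXTERNAL (TYPE ORACLE_DATAPUMP DEFAULT DIRECTORY load_dir
ACCESS PARAMETERS (LOGFILE log_dir:deptxt_%a) LOCATION ('dept1.dmp', 'dept2.dmp'))
PARALLEL 2;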

VERSION Clause

The VERSION clause is used to specify the minimum release of Oracle Database that will be reading the dump file. If you specify 11.1, then both Oracle Database 11g release 1 (11.1) and release 2 (11.2) databases can read the dump file. If you specify 11.2, then only Oracle Database 11g release 2 (11.2) databases can read the dump file.

The default value is COMPATIBLE.

Effects of Using the SQL ENCRYPT Clause

If you specify the SQL ENCRYPT clause when you create an external table, then keep the following in mind:

  • The columns for which you specify the ENCRYPT clause will be encrypted before being written into the dump file.

  • If you move the dump file to another database, then the same encryption password must be used for both the encrypted columns in the dump file and for the external table used to read the dump file.

  • If you do not specify a password for the correct encrypted columns in the external table on the second database, then an error is returned. If you do not specify the correct password, then garbage data is written to the dump file.

  • The dump file that is produced must be at release 10.2 or higher. Otherwise, an error is returned.


See Also:

Oracle Database SQL Language Reference for more information about using the ENCRYPT clause on a CREATE TABLE statement

Unloading and Loading Data with the ORACLE_DATAPUMP Access Driver

As part of creating an external table with a SQL CREATE TABLE AS SELECT statement, the ORACLE_DATAPUMP access driver can write data to a dump file. The data in the file is written in a binary format that can only be read by the ORACLE_DATAPUMP access driver. Once the dump file is created, it cannot be modified (that is, no data manipulation language (DML) operations can be performed on it). However, the file can be read any number of times and used as the dump file for another external table in the same database or in a different database.

The following steps use the sample schema, oe, to show an extended example of how you can use the ORACLE_DATAPUMP access driver to unload and load data. (The example assumes that the directory object def_dir1 already exists, and that user oe has read and write access to it.)

  1. An external table will populate a file with data only as part of creating the external table with the AS SELECT clause. The following example creates an external table named inventories_xt and populates the dump file for the external table with the data from table inventories in the oe schema.

    SQL> CREATE TABLE inventories_xt
      2  ORGANIZATION EXTERNAL
      3  (
      4    TYPE ORACLE_DATAPUMP
      5    DEFAULT DIRECTORY def_dir1
      6    LOCATION ('inv_xt.dmp')
      7  )
      8  AS SELECT * FROM inventories;
    
    Table created.
    
  2. Describe both inventories and the new external table, as follows. They should both match.

    SQL> DESCRIBE inventories
     Name                                      Null?    Type
     ---------------------------------------- --------- ----------------
     PRODUCT_ID                                NOT NULL NUMBER(6)
     WAREHOUSE_ID                              NOT NULL NUMBER(3)
     QUANTITY_ON_HAND                          NOT NULL NUMBER(8)
    
    SQL> DESCRIBE inventories_xt
     Name                                      Null?    Type
     ----------------------------------------- -------- -----------------
     PRODUCT_ID                                NOT NULL NUMBER(6)
     WAREHOUSE_ID                              NOT NULL NUMBER(3)
     QUANTITY_ON_HAND                          NOT NULL NUMBER(8)
    
  3. Now that the external table is created, it can be queried just like any other table. For example, select the count of records in the external table, as follows:

    SQL> SELECT COUNT(*) FROM inventories_xt;
    
      COUNT(*)
    ----------
          1112
    
  4. Compare the data in the external table against the data in inventories. There should be no differences.

    SQL> SELECT * FROM inventories MINUS SELECT * FROM inventories_xt;
    
    no rows selected
    
  5. After an external table has been created and the dump file populated by the CREATE TABLE AS SELECT statement, no rows may be added, updated, or deleted from the external table. Any attempt to modify the data in the external table will fail with an error.

    The following example shows an attempt to use data manipulation language (DML) on an existing external table. This will return an error, as shown.

    SQL> DELETE FROM inventories_xt WHERE warehouse_id = 5;
    DELETE FROM inventories_xt WHERE warehouse_id = 5
                *
    ERROR at line 1:
    ORA-30657: operation not supported on external organized table
    
  6. The dump file created for the external table can now be moved and used as the dump file for another external table in the same database or different database. Note that when you create an external table that uses an existing file, there is no AS SELECT clause for the CREATE TABLE statement.

    SQL> CREATE TABLE inventories_xt2
      2  (
      3    product_id          NUMBER(6),
      4    warehouse_id        NUMBER(3),
      5    quantity_on_hand    NUMBER(8)
      6  )
      7  ORGANIZATION EXTERNAL
      8  (
      9    TYPE ORACLE_DATAPUMP
     10    DEFAULT DIRECTORY def_dir1
     11    LOCATION ('inv_xt.dmp')
     12  );
    
    Table created.
    
  7. Compare the data for the new external table against the data in the inventories table. The product_id field will be converted to a compatible datatype before the comparison is done. There should be no differences.

    SQL> SELECT * FROM inventories MINUS SELECT * FROM inventories_xt2;
    
    no rows selected
    
  8. Create an external table with three dump files and with a degree of parallelism of three.

    SQL> CREATE TABLE inventories_xt3
      2  ORGANIZATION EXTERNAL
      3  (
      4    TYPE ORACLE_DATAPUMP
      5    DEFAULT DIRECTORY def_dir1
      6    LOCATION ('inv_xt1.dmp', 'inv_xt2.dmp', 'inv_xt3.dmp')
      7  )
      8  PARALLEL 3
      9  AS SELECT * FROM inventories;
    
    Table created.
    
  9. Compare the data unload against inventories. There should be no differences.

    SQL> SELECT * FROM inventories MINUS SELECT * FROM inventories_xt3;
    
    no rows selected
    
  10. Create an external table containing some rows from table inventories.

    SQL> CREATE TABLE inv_part_xt
      2  ORGANIZATION EXTERNAL
      3  (
      4  TYPE ORACLE_DATAPUMP
      5  DEFAULT DIRECTORY def_dir1
      6  LOCATION ('inv_p1_xt.dmp')
      7  )
      8  AS SELECT * FROM inventories WHERE warehouse_id < 5;
     
    Table created.
    
  11. Create another external table containing the rest of the rows from inventories.

    SQL> drop table inv_part_xt;
     
    Table dropped.
     
    SQL> 
    SQL> CREATE TABLE inv_part_xt
      2  ORGANIZATION EXTERNAL
      3  (
      4  TYPE ORACLE_DATAPUMP
      5  DEFAULT DIRECTORY def_dir1
      6  LOCATION ('inv_p2_xt.dmp')
      7  )
      8  AS SELECT * FROM inventories WHERE warehouse_id >= 5;
     
    Table created.
    
  12. Create an external table that uses the two dump files created in Steps 10 and 11.

    SQL> CREATE TABLE inv_part_all_xt
      2  (
      3  product_id NUMBER(6),
      4  warehouse_id NUMBER(3),
      5  quantity_on_hand NUMBER(8)
      6  )
      7  ORGANIZATION EXTERNAL
      8  (
      9  TYPE ORACLE_DATAPUMP
     10  DEFAULT DIRECTORY def_dir1
     11  LOCATION ('inv_p1_xt.dmp','inv_p2_xt.dmp')
     12  );
     
    Table created.
    
  13. Compare the new external table to the inventories table. There should be no differences. This is because the two dump files used to create the external table have the same metadata (for example, the same table name inv_part_xt and the same column information).

    SQL> SELECT * FROM inventories MINUS SELECT * FROM inv_part_all_xt;
    
    no rows selected
    

Parallel Loading and Unloading

The dump file must be on a disk big enough to hold all the data being written. If there is insufficient space for all of the data, then an error is returned for the CREATE TABLE AS SELECT statement. One way to alleviate the problem is to create multiple files in multiple directory objects (assuming those directories are on different disks) when executing the CREATE TABLE AS SELECT statement. Multiple files can be created by specifying multiple locations in the form directory:file in the LOCATION clause and by specifying the PARALLEL clause. Each parallel I/O server process that is created to populate the external table writes to its own file. The number of files in the LOCATION clause should match the degree of parallelization because each I/O server process requires its own files. Any extra files that are specified will be ignored. If there are not enough files for the degree of parallelization specified, then the degree of parallelization is lowered to match the number of files in the LOCATION clause.

Here is an example of unloading the inventories table into three files.

SQL> CREATE TABLE inventories_XT_3
  2  ORGANIZATION EXTERNAL
  3  (
  4    TYPE ORACLE_DATAPUMP
  5    DEFAULT DIRECTORY def_dir1
  6    LOCATION ('inv_xt1.dmp', 'inv_xt2.dmp', 'inv_xt3.dmp')
  7  )
  8  PARALLEL 3
  9  AS SELECT * FROM oe.inventories;

Table created.

When the ORACLE_DATAPUMP access driver is used to load data, parallel processes can read multiple dump files or even chunks of the same dump file concurrently. Thus, data can be loaded in parallel even if there is only one dump file, as long as that file is large enough to contain multiple file offsets. The degree of parallelization is not tied to the number of files in the LOCATION clause when reading from ORACLE_DATAPUMP external tables.

Combining Dump Files

Dump files populated by different external tables can all be specified in the LOCATION clause of another external table. For example, data from different production databases can be unloaded into separate files, and then those files can all be included in an external table defined in a data warehouse. This provides an easy way of aggregating data from multiple sources. The only restriction is that the metadata for all of the external tables be exactly the same. This means that the character set, time zone, schema name, table name, and column names must all match. Also, the columns must be defined in the same order, and their datatypes must be exactly alike. This means that after you create the first external table you must drop it so that you can use the same table name for the second external table. This ensures that the metadata listed in the two dump files is the same and they can be used together to create the same external table.

SQL> CREATE TABLE inv_part_1_xt
  2  ORGANIZATION EXTERNAL
  3  (
  4    TYPE ORACLE_DATAPUMP
  5    DEFAULT DIRECTORY def_dir1
  6    LOCATION ('inv_p1_xt.dmp')
  7  )
  8  AS SELECT * FROM oe.inventories WHERE warehouse_id < 5;

Table created.

SQL> DROP TABLE inv_part_1_xt;

SQL> CREATE TABLE inv_part_1_xt
  2  ORGANIZATION EXTERNAL
  3  (
  4    TYPE ORACLE_DATAPUMP
  5    DEFAULT directory def_dir1
  6    LOCATION ('inv_p2_xt.dmp')
  7  )
  8  AS SELECT * FROM oe.inventories WHERE warehouse_id >= 5;

Table created.

SQL> CREATE TABLE inv_part_all_xt
  2  (
  3    PRODUCT_ID          NUMBER(6),
  4    WAREHOUSE_ID        NUMBER(3),
  5    QUANTITY_ON_HAND    NUMBER(8)
  6  )
  7  ORGANIZATION EXTERNAL
  8  (
  9    TYPE ORACLE_DATAPUMP
 10    DEFAULT DIRECTORY def_dir1
 11    LOCATION ('inv_p1_xt.dmp','inv_p2_xt.dmp')
 12  );

Table created.

SQL> SELECT * FROM inv_part_all_xt MINUS SELECT * FROM oe.inventories;

no rows selected

Supported Datatypes

You may encounter the following situations when you use external tables to move data between databases:

  • The database character set and the database national character set may be different between the two platforms.

  • The endianness of the platforms for the two databases may be different.

The ORACLE_DATAPUMP access driver automatically resolves some of these situations.

The following datatypes are automatically converted during loads and unloads:

  • Character (CHAR, NCHAR, VARCHAR2, NVARCHAR2)

  • RAW

  • NUMBER

  • Date/Time

  • BLOB

  • CLOB and NCLOB

  • ROWID and UROWID

If you attempt to use a datatype that is not supported for external tables, then you receive an error. This is demonstrated in the following example, in which the unsupported datatype, LONG, is used:

SQL> CREATE TABLE bad_datatype_xt
  2  (
  3    product_id             NUMBER(6),
  4    language_id            VARCHAR2(3),
  5    translated_name        NVARCHAR2(50),
  6    translated_description LONG
  7  )
  8  ORGANIZATION EXTERNAL
  9  (
 10    TYPE ORACLE_DATAPUMP
 11    DEFAULT DIRECTORY def_dir1
 12    LOCATION ('proddesc.dmp')
 13  );
  translated_description LONG
  *
ERROR at line 6:
ORA-30656: column type not supported on external organized table

Unsupported Datatypes

An external table supports a subset of all possible datatypes for columns. In particular, it supports character datatypes (except LONG), the RAW datatype, all numeric datatypes, and all date, timestamp, and interval datatypes.

This section describes how you can use the ORACLE_DATAPUMP access driver to unload and reload data for some of the unsupported datatypes, specifically:

  • BFILE

  • LONG and LONG RAW

  • Final object types

  • Tables of final object types

Unloading and Loading BFILE Datatypes

The BFILE datatype has two pieces of information stored in it: the directory object for the file and the name of the file within that directory object.

You can unload BFILE columns using the ORACLE_DATAPUMP access driver by storing the directory object name and the file name in two columns in the external table. The procedure DBMS_LOB.FILEGETNAME will return both parts of the name. However, because this is a procedure, it cannot be used in a SELECT statement. Instead, two functions are needed. The first will return the name of the directory object, and the second will return the name of the file.

The steps in the following extended example demonstrate the unloading and loading of BFILE datatypes.

  1. Create a function to extract the directory object for a BFILE column. Note that if the column is NULL, then NULL is returned.

    SQL> CREATE FUNCTION get_dir_name (bf BFILE) RETURN VARCHAR2 IS
      2  DIR_ALIAS VARCHAR2(255);
      3  FILE_NAME VARCHAR2(255);
      4  BEGIN
      5    IF bf is NULL
      6    THEN
      7      RETURN NULL;
      8    ELSE
      9      DBMS_LOB.FILEGETNAME (bf, dir_alias, file_name);
     10      RETURN dir_alias;
     11    END IF;
     12  END;
     13  /
    
    Function created.
    
  2. Create a function to extract the file name for a BFILE column.

    SQL> CREATE FUNCTION get_file_name (bf BFILE) RETURN VARCHAR2 is
      2  dir_alias VARCHAR2(255);
      3  file_name VARCHAR2(255);
      4  BEGIN
      5    IF bf is NULL
      6    THEN
      7      RETURN NULL;
      8    ELSE
      9      DBMS_LOB.FILEGETNAME (bf, dir_alias, file_name);
     10      RETURN file_name;
     11    END IF;
     12  END;
     13  /
    
    Function created.
    
  3. You can then add a row with a NULL value for the BFILE column, as follows:

    SQL> INSERT INTO PRINT_MEDIA (product_id, ad_id, ad_graphic)
      2  VALUES (3515, 12001, NULL);
    
    1 row created.
    

    You can use the newly created functions to populate an external table. Note that the functions should set columns ad_graphic_dir and ad_graphic_file to NULL if the BFILE column is NULL.

  4. Create an external table to contain the data from the print_media table. Use the get_dir_name and get_file_name functions to get the components of the BFILE column.

    SQL> CREATE TABLE print_media_xt
      2  ORGANIZATION EXTERNAL
      3  (
      4    TYPE oracle_datapump
      5    DEFAULT DIRECTORY def_dir1
      6    LOCATION ('pm_xt.dmp')
      7  ) AS
      8  SELECT product_id, ad_id,
      9         get_dir_name (ad_graphic) ad_graphic_dir,
     10         get_file_name(ad_graphic) ad_graphic_file
     11  FROM print_media;
    
    Table created.
    
  5. Create a function to load a BFILE column from the data that is in the external table. This function will return NULL if the ad_graphic_dir column in the external table is NULL.

    SQL> CREATE FUNCTION get_bfile (dir VARCHAR2, file VARCHAR2) RETURN
    BFILE is
      2  bf BFILE;
      3  BEGIN
      4    IF dir IS NULL
      5    THEN
      6      RETURN NULL;
      7    ELSE
      8      RETURN BFILENAME(dir,file);
      9    END IF;
     10  END;
     11  /
    
    Function created.
    
  6. The get_bfile function can be used to populate a new table containing a BFILE column.

    SQL> CREATE TABLE print_media_int AS
      2  SELECT product_id, ad_id,
      3         get_bfile (ad_graphic_dir, ad_graphic_file) ad_graphic
      4  FROM print_media_xt;
    
    Table created.
    
  7. The data in the columns of the newly loaded table should match the data in the columns of the print_media table.

    SQL> SELECT product_id, ad_id,
      2         get_dir_name(ad_graphic),
      3         get_file_name(ad_graphic)
      4  FROM print_media_int
      5  MINUS
      6  SELECT product_id, ad_id,
      7         get_dir_name(ad_graphic),
      8         get_file_name(ad_graphic)
      9  FROM print_media;
    
    no rows selected
    

Unloading LONG and LONG RAW Datatypes

The ORACLE_DATAPUMP access driver can be used to unload LONG and LONG RAW columns, but that data can only be loaded back into LOB fields. The steps in the following extended example demonstrate the unloading of LONG and LONG RAW datatypes.

  1. If a table to be unloaded contains a LONG or LONG RAW column, then define the corresponding columns in the external table as CLOB for LONG columns or BLOB for LONG RAW columns.

    SQL> CREATE TABLE long_tab
      2  (
      3    key                   SMALLINT,
      4    description           LONG
      5  );
    
    Table created.
    
    SQL> INSERT INTO long_tab VALUES (1, 'Description Text');
    
    1 row created.
    
  2. Now, an external table can be created that contains a CLOB column to contain the data from the LONG column. Note that when loading the external table, the TO_LOB operator is used to convert the LONG column into a CLOB.

    SQL> CREATE TABLE long_tab_xt
      2  ORGANIZATION EXTERNAL
      3  (
      4    TYPE ORACLE_DATAPUMP
      5    DEFAULT DIRECTORY def_dir1
      6    LOCATION ('long_tab_xt.dmp')
      7  )
      8  AS SELECT key, TO_LOB(description) description FROM long_tab;
    
    Table created.
    
  3. The data in the external table can be used to create another table exactly like the one that was unloaded except the new table will contain a LOB column instead of a LONG column.

    SQL> CREATE TABLE lob_tab
      2  AS SELECT * from long_tab_xt;
    
    Table created.
    
  4. Verify that the table was created correctly.

    SQL> SELECT * FROM lob_tab;
    
           KEY  DESCRIPTION
    ----------------------------------------------------------------------------
             1  Description Text
    

Unloading and Loading Columns Containing Final Object Types

Final column objects are populated into an external table by moving each attribute in the object type into a column in the external table. In addition, the external table needs a new column to track whether the column object is atomically NULL. The following steps demonstrate the unloading and loading of columns containing final object types.

  1. In the following example, the warehouse column in the external table is used to track whether the warehouse column in the source table is atomically NULL.

    SQL> CREATE TABLE inventories_obj_xt
      2  ORGANIZATION EXTERNAL
      3  (
      4    TYPE ORACLE_DATAPUMP
      5    DEFAULT DIRECTORY def_dir1
      6    LOCATION ('inv_obj_xt.dmp')
      7  )
      8  AS
      9  SELECT oi.product_id,
     10         DECODE (oi.warehouse, NULL, 0, 1) warehouse,
     11         oi.warehouse.location_id location_id,
     12         oi.warehouse.warehouse_id warehouse_id,
     13         oi.warehouse.warehouse_name warehouse_name,
     14         oi.quantity_on_hand
     15  FROM oc_inventories oi;
    
    Table created.
    

    The columns in the external table containing the attributes of the object type can now be used as arguments to the type constructor function when loading a column of that type. Note that the warehouse column in the external table is used to determine whether to call the constructor function for the object or set the column to NULL.

  2. Load a new internal table that looks exactly like the oc_inventories view. (The use of the WHERE 1=0 clause creates a new table that looks exactly like the old table but does not copy any data from the old table into the new table.)

    SQL> CREATE TABLE oc_inventories_2 AS SELECT * FROM oc_inventories
    WHERE 1 = 0;
    
    Table created.
    
    SQL> INSERT INTO oc_inventories_2
      2  SELECT product_id,
      3         DECODE (warehouse, 0, NULL,
      4                 warehouse_typ(warehouse_id, warehouse_name,
      5                 location_id)), quantity_on_hand
      6  FROM inventories_obj_xt;
    
    1112 rows created.
    

Tables of Final Object Types

Object tables have an object identifier that uniquely identifies every row in the table. The following situations can occur:

  • If there is no need to unload and reload the object identifier, then the external table only needs to contain fields for the attributes of the type for the object table.

  • If the object identifier (OID) needs to be unloaded and reloaded and the OID for the table is one or more fields in the table, (also known as primary-key-based OIDs), then the external table has one column for every attribute of the type for the table.

  • If the OID needs to be unloaded and the OID for the table is system-generated, then the procedure is more complicated. In addition to the attributes of the type, another column needs to be created to hold the system-generated OID.

The steps in the following example demonstrate this last situation.

  1. Create a table of a type with system-generated OIDs:

    SQL> CREATE TYPE person AS OBJECT (name varchar2(20)) NOT FINAL
      2  /
    
    Type created.
    
    SQL> CREATE TABLE people OF person;
    
    Table created.
    
    SQL> INSERT INTO people VALUES ('Euclid');
    
    1 row created.
    
  2. Create an external table in which the column OID is used to hold the column containing the system-generated OID.

    SQL> CREATE TABLE people_xt
      2  ORGANIZATION EXTERNAL
      3  (
      4    TYPE ORACLE_DATAPUMP
      5    DEFAULT DIRECTORY def_dir1
      6    LOCATION ('people.dmp')
      7  )
      8  AS SELECT SYS_NC_OID$ oid, name FROM people;
    
    Table created.
    
  3. Create another table of the same type with system-generated OIDs. Then, execute an INSERT statement to load the new table with data unloaded from the old table.

    SQL> CREATE TABLE people2 OF person;
    
    Table created.
    
    SQL> 
    SQL> INSERT INTO people2 (SYS_NC_OID$, SYS_NC_ROWINFO$)
      2  SELECT oid, person(name) FROM people_xt;
    
    1 row created.
    
    SQL> 
    SQL> SELECT SYS_NC_OID$, name FROM people
      2  MINUS
      3  SELECT SYS_NC_OID$, name FROM people2;
    
    no rows selected
    

Performance Hints When Using the ORACLE_DATAPUMP Access Driver

When you monitor performance, the most important measurement is the elapsed time for a load. Other important measurements are CPU usage, memory usage, and I/O rates.

You can alter performance by increasing or decreasing the degree of parallelism. The degree of parallelism indicates the number of access drivers that can be started to process the data files. It lets you choose between a slower load that uses few resources and a faster load that uses more of the available resources. The access driver cannot automatically tune itself, because it cannot determine how many resources you want to dedicate to the access driver.

An additional consideration is that the access drivers use large I/O buffers for better performance. On databases with shared servers, all memory used by the access drivers comes out of the system global area (SGA). For this reason, you should be careful when using external tables on shared servers.
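As a hedged sketch (using the inventories_xt3 table created earlier, and assuming the standard ALTER TABLE parallel clause is used to adjust the table's default degree of parallelism):

-- Increase the degree of parallelism used when the external table is read.
ALTER TABLE inventories_xt3 PARALLEL 4;

-- Disable parallel access to minimize resource usage.
ALTER TABLE inventories_xt3 NOPARALLEL;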

Restrictions When Using the ORACLE_DATAPUMP Access Driver

The ORACLE_DATAPUMP access driver has the following restrictions:

  • Handling of byte-order marks during a load: In an external table load for which the data file character set is UTF8 or UTF16, it is not possible to suppress checking for byte-order marks. Suppression of byte-order mark checking is necessary only if the beginning of the data file contains binary data that matches the byte-order mark encoding. (It is possible to suppress byte-order mark checking with SQL*Loader loads.) Note that checking for a byte-order mark does not mean that a byte-order mark must be present in the data file. If no byte-order mark is present, then the byte order of the server platform is used.

  • The external tables feature does not support the use of the backslash (\) escape character within strings. See "Use of the Backslash Escape Character".

  • When identifiers (for example, column or table names) are specified in the external table access parameters, certain values are considered to be reserved words by the access parameter parser. If a reserved word is used as an identifier, then it must be enclosed in double quotation marks.

Reserved Words for the ORACLE_DATAPUMP Access Driver

When identifiers (for example, column or table names) are specified in the external table access parameters, certain values are considered to be reserved words by the access parameter parser. If a reserved word is used as an identifier, then it must be enclosed in double quotation marks. The following are the reserved words for the ORACLE_DATAPUMP access driver:

  • BADFILE

  • COMPATIBLE

  • COMPRESSION

  • DATAPUMP

  • DEBUG

  • ENCRYPTION

  • INTERNAL

  • JOB

  • LATEST

  • LOGFILE

  • NOBADFILE

  • NOLOGFILE

  • PARALLEL

  • TABLE

  • VERSION

  • WORKERID


20 Using the Metadata APIs

This chapter describes use of the Metadata APIs, DBMS_METADATA and DBMS_METADATA_DIFF.

The DBMS_METADATA API enables you to do the following:

The DBMS_METADATA_DIFF API lets you compare objects between databases to identify metadata changes over time in objects of the same type.

The following topics are discussed in this chapter:

Why Use the DBMS_METADATA API?

Over time, as you have used the Oracle database, you may have developed your own code for extracting metadata from the dictionary, manipulating the metadata (adding columns, changing column datatypes, and so on) and then converting the metadata to DDL so that you could re-create the object on the same or another database. Keeping that code updated to support new dictionary features has probably proven to be challenging.

The DBMS_METADATA API eliminates the need for you to write and maintain your own code for metadata extraction. It provides a centralized facility for the extraction, manipulation, and re-creation of dictionary metadata. And it supports all dictionary objects at their most current level.

Although the DBMS_METADATA API can dramatically decrease the amount of custom code you are writing and maintaining, it does not involve any changes to your normal database procedures. The DBMS_METADATA API is installed in the same way as data dictionary views, by running catproc.sql to invoke a SQL script at database installation time. Once it is installed, it is available whenever the instance is operational, even in restricted mode.

The DBMS_METADATA API does not require you to make any source code changes when you change database releases because it is upwardly compatible across different Oracle releases. XML documents retrieved by one release can be processed by the submit interface on the same or later release. For example, XML documents retrieved by an Oracle9i database can be submitted to Oracle Database 10g.

Overview of the DBMS_METADATA API

For the purposes of the DBMS_METADATA API, every entity in the database is modeled as an object that belongs to an object type. For example, the table scott.emp is an object and its object type is TABLE. When you fetch an object's metadata you must specify the object type.

To fetch a particular object or set of objects within an object type, you specify a filter. Different filters are defined for each object type. For example, two of the filters defined for the TABLE object type are SCHEMA and NAME. They allow you to say, for example, that you want the table whose schema is scott and whose name is emp.

The DBMS_METADATA API makes use of XML (Extensible Markup Language) and XSLT (Extensible Stylesheet Language Transformation). The DBMS_METADATA API represents object metadata as XML because it is a universal format that can be easily parsed and transformed. The DBMS_METADATA API uses XSLT to transform XML documents into either other XML documents or into SQL DDL.

You can use the DBMS_METADATA API to specify one or more transforms (XSLT scripts) to be applied to the XML when the metadata is fetched (or when it is resubmitted). The API provides some predefined transforms, including one named DDL that transforms the XML document into SQL creation DDL.

You can then specify conditions on the transform by using transform parameters. You can also specify optional parse items to access specific attributes of an object's metadata. For more details about all of these options and examples of their implementation, see the following sections:

Using the DBMS_METADATA API to Retrieve an Object's Metadata

The retrieval interface of the DBMS_METADATA API lets you specify the kind of object to be retrieved. This can be either a particular object type (such as a table, index, or procedure) or a heterogeneous collection of object types that form a logical unit (such as a database export or schema export). By default, metadata that you fetch is returned in an XML document.


Note:

To access objects that are not in your own schema you must have the SELECT_CATALOG_ROLE role. However, roles are disabled within many PL/SQL objects (stored procedures, functions, definer's rights APIs). Therefore, if you are writing a PL/SQL program that will access objects in another schema (or, in general, any objects for which you need the SELECT_CATALOG_ROLE role), then you must put the code in an invoker's rights API.

You can use the programmatic interface for casual browsing, or you can use it to develop applications. You would use the browsing interface if you simply wanted to make ad hoc queries of the system metadata. You would use the programmatic interface when you want to extract dictionary metadata as part of an application. In such cases, the procedures provided by the DBMS_METADATA API can be used in place of SQL scripts and customized code that you may be currently using to do the same thing.

Typical Steps Used for Basic Metadata Retrieval

When you retrieve metadata, you use the DBMS_METADATA PL/SQL API. The following examples illustrate the programmatic and browsing interfaces.


See Also:


Example 20-1 provides a basic demonstration of how you might use the DBMS_METADATA programmatic interface to retrieve metadata for one table. It creates a function named get_table_md, which returns the metadata for one table.

Example 20-1 Using the DBMS_METADATA Programmatic Interface to Retrieve Data

  1. Create a DBMS_METADATA program that creates a function named get_table_md, which will return the metadata for one table, timecards, in the hr schema. The content of such a program looks as follows. (For this example, name the program metadata_program.sql.)

    CREATE OR REPLACE FUNCTION get_table_md RETURN CLOB IS
    -- Define local variables.
    h NUMBER; --handle returned by OPEN
    th NUMBER; -- handle returned by ADD_TRANSFORM
    doc CLOB;
    BEGIN
    
    -- Specify the object type.
    h := DBMS_METADATA.OPEN('TABLE');
    
    -- Use filters to specify the particular object desired.
    DBMS_METADATA.SET_FILTER(h,'SCHEMA','HR');
    DBMS_METADATA.SET_FILTER(h,'NAME','TIMECARDS');
    
     -- Request that the metadata be transformed into creation DDL.
    th := DBMS_METADATA.ADD_TRANSFORM(h,'DDL');
    
     -- Fetch the object.
    doc := DBMS_METADATA.FETCH_CLOB(h);
    
     -- Release resources.
    DBMS_METADATA.CLOSE(h);
    RETURN doc;
    END;
    / 
    
  2. Connect as user hr.

  3. Run the program to create the get_table_md function:

    SQL> @metadata_program

  4. Use the newly created get_table_md function in a select operation. To generate complete, uninterrupted output, set the PAGESIZE to 0 and set LONG to some large number, as shown, before executing your query:

    SQL> SET PAGESIZE 0
    SQL> SET LONG 1000000
    SQL> SELECT get_table_md FROM dual;
    
  5. The output, which shows the metadata for the timecards table in the hr schema, looks similar to the following:

      CREATE TABLE "HR"."TIMECARDS"
       (    "EMPLOYEE_ID" NUMBER(6,0),
            "WEEK" NUMBER(2,0),
            "JOB_ID" VARCHAR2(10),
            "HOURS_WORKED" NUMBER(4,2),
             FOREIGN KEY ("EMPLOYEE_ID")
              REFERENCES "HR"."EMPLOYEES" ("EMPLOYEE_ID") ENABLE
       ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "EXAMPLE"
    

You can use the browsing interface and get the same results, as shown in Example 20-2.

Example 20-2 Using the DBMS_METADATA Browsing Interface to Retrieve Data

SQL> SET PAGESIZE 0
SQL> SET LONG 1000000
SQL> SELECT DBMS_METADATA.GET_DDL('TABLE','TIMECARDS','HR') FROM dual;

The results will be the same as shown in step 5 for Example 20-1.

Retrieving Multiple Objects

In Example 20-1, the FETCH_CLOB procedure was called only once, because it was known that there was only one object. However, you can also retrieve multiple objects, for example, all the tables in schema scott. To do this, you need to use the following construct:

  LOOP
    doc := DBMS_METADATA.FETCH_CLOB(h);
    --
    -- When there are no more objects to be retrieved, FETCH_CLOB returns NULL.
    --
    EXIT WHEN doc IS NULL;
  END LOOP;

Example 20-3 demonstrates use of this construct and retrieving multiple objects. Connect as user scott for this example. The password is tiger.

Example 20-3 Retrieving Multiple Objects

  1. Create a table named my_metadata and a procedure named get_tables_md, as follows. Because more than one object is retrieved, the metadata is stored in a table and queried at the end.

    DROP TABLE my_metadata;
    CREATE TABLE my_metadata (md clob);
    CREATE OR REPLACE PROCEDURE get_tables_md IS
    -- Define local variables
    h       NUMBER;         -- handle returned by 'OPEN'
    th      NUMBER;         -- handle returned by 'ADD_TRANSFORM'
    doc     CLOB;           -- metadata is returned in a CLOB
    BEGIN
    
     -- Specify the object type.
     h := DBMS_METADATA.OPEN('TABLE');
    
     -- Use filters to specify the schema.
     DBMS_METADATA.SET_FILTER(h,'SCHEMA','SCOTT');
    
     -- Request that the metadata be transformed into creation DDL.
     th := DBMS_METADATA.ADD_TRANSFORM(h,'DDL');
    
     -- Fetch the objects.
     LOOP
       doc := DBMS_METADATA.FETCH_CLOB(h);
    
      -- When there are no more objects to be retrieved, FETCH_CLOB returns NULL.
       EXIT WHEN doc IS NULL;
    
       -- Store the metadata in a table.
       INSERT INTO my_metadata(md) VALUES (doc);
       COMMIT;
     END LOOP;
     
     -- Release resources.
     DBMS_METADATA.CLOSE(h);
    END;
    /
    
  2. Execute the procedure:

    EXECUTE get_tables_md;
    
  3. Query the my_metadata table to see what was retrieved:

    SET LONG 9000000
    SET PAGES 0
    SELECT * FROM my_metadata;
    

Placing Conditions on Transforms

You can use transform parameters to specify conditions on the transforms you add. To do this, you use the SET_TRANSFORM_PARAM procedure. For example, if you have added the DDL transform for a TABLE object, then you can specify the SEGMENT_ATTRIBUTES transform parameter to indicate that you do not want segment attributes (physical, storage, logging, and so on) to appear in the DDL. The default is that segment attributes do appear in the DDL.

Example 20-4 shows use of the SET_TRANSFORM_PARAM procedure.

Example 20-4 Placing Conditions on Transforms

  1. Create a function named get_table_md, as follows:

    CREATE OR REPLACE FUNCTION get_table_md RETURN CLOB IS
     -- Define local variables.
     h    NUMBER;   -- handle returned by 'OPEN'
     th   NUMBER;   -- handle returned by 'ADD_TRANSFORM'
     doc  CLOB;
    BEGIN
    
     -- Specify the object type. 
     h := DBMS_METADATA.OPEN('TABLE');
    
     -- Use filters to specify the particular object desired.
     DBMS_METADATA.SET_FILTER(h,'SCHEMA','HR');
     DBMS_METADATA.SET_FILTER(h,'NAME','TIMECARDS');
    
     -- Request that the metadata be transformed into creation DDL.
     th := dbms_metadata.add_transform(h,'DDL');
    
     -- Specify that segment attributes are not to be returned.
     -- Note that this call uses the TRANSFORM handle, not the OPEN handle.
    DBMS_METADATA.SET_TRANSFORM_PARAM(th,'SEGMENT_ATTRIBUTES',false);
    
     -- Fetch the object.
     doc := DBMS_METADATA.FETCH_CLOB(h);
    
     -- Release resources.
     DBMS_METADATA.CLOSE(h);
    
     RETURN doc;
    END;
    /
    
  2. Perform the following query:

    SQL> SELECT get_table_md FROM dual;
    

    The output looks similar to the following:

      CREATE TABLE "HR"."TIMECARDS"
       (    "EMPLOYEE_ID" NUMBER(6,0),
            "WEEK" NUMBER(2,0),
            "JOB_ID" VARCHAR2(10),
            "HOURS_WORKED" NUMBER(4,2),
             FOREIGN KEY ("EMPLOYEE_ID")
              REFERENCES "HR"."EMPLOYEES" ("EMPLOYEE_ID") ENABLE
       )
    

The examples shown up to this point have used a single transform, the DDL transform. The DBMS_METADATA API also enables you to specify multiple transforms, with the output of the first being the input to the next and so on.

Oracle supplies a transform called MODIFY that modifies an XML document. You can do things like change schema names or tablespace names. To do this, you use remap parameters and the SET_REMAP_PARAM procedure.

Example 20-5 shows a sample use of the SET_REMAP_PARAM procedure. It first adds the MODIFY transform and specifies remap parameters to change the schema name from hr to scott. It then adds the DDL transform. The output of the MODIFY transform is an XML document that becomes the input to the DDL transform. The end result is the creation DDL for the timecards table with all instances of schema hr changed to scott.

Example 20-5 Modifying an XML Document

  1. Create a function named remap_schema:

    CREATE OR REPLACE FUNCTION remap_schema RETURN CLOB IS
    -- Define local variables.
    h NUMBER; --handle returned by OPEN
    th NUMBER; -- handle returned by ADD_TRANSFORM
    doc CLOB;
    BEGIN
    
    -- Specify the object type.
    h := DBMS_METADATA.OPEN('TABLE');
    
    -- Use filters to specify the particular object desired.
    DBMS_METADATA.SET_FILTER(h,'SCHEMA','HR');
    DBMS_METADATA.SET_FILTER(h,'NAME','TIMECARDS');
    
    -- Request that the schema name be modified.
    th := DBMS_METADATA.ADD_TRANSFORM(h,'MODIFY');
    DBMS_METADATA.SET_REMAP_PARAM(th,'REMAP_SCHEMA','HR','SCOTT');
    
    -- Request that the metadata be transformed into creation DDL.
    th := DBMS_METADATA.ADD_TRANSFORM(h,'DDL');
    
    -- Specify that segment attributes are not to be returned.
    DBMS_METADATA.SET_TRANSFORM_PARAM(th,'SEGMENT_ATTRIBUTES',false);
    
    -- Fetch the object.
    doc := DBMS_METADATA.FETCH_CLOB(h);
    
    -- Release resources.
    DBMS_METADATA.CLOSE(h);
    RETURN doc;
    END;
    / 
    
  2. Perform the following query:

    SELECT remap_schema FROM dual;
    

    The output looks similar to the following:

      CREATE TABLE "SCOTT"."TIMECARDS"
       (    "EMPLOYEE_ID" NUMBER(6,0),
            "WEEK" NUMBER(2,0),
            "JOB_ID" VARCHAR2(10),
            "HOURS_WORKED" NUMBER(4,2),
             FOREIGN KEY ("EMPLOYEE_ID")
              REFERENCES "SCOTT"."EMPLOYEES" ("EMPLOYEE_ID") ENABLE
       )
    

    If you are familiar with XSLT, then you can add your own user-written transforms to process the XML.

Accessing Specific Metadata Attributes

It is often desirable to access specific attributes of an object's metadata, such as its name or schema. You could get this information by parsing the returned metadata, but the DBMS_METADATA API provides another mechanism: you can specify parse items, which are specific attributes that are parsed out of the metadata and returned in a separate data structure. To do this, you use the SET_PARSE_ITEM procedure.

Example 20-6 fetches all tables in a schema. For each table, a parse item is used to get its name. The name is then used to get all indexes on the table. The example illustrates the use of the FETCH_DDL function, which returns metadata in a sys.ku$_ddls object.

This example assumes you are connected to a schema that contains some tables and indexes. It also creates a table named my_metadata.

Example 20-6 Using Parse Items to Access Specific Metadata Attributes

  1. Create a table named my_metadata and a procedure named get_tables_and_indexes, as follows:

    DROP TABLE my_metadata;
    CREATE TABLE my_metadata (
      object_type   VARCHAR2(30),
      name          VARCHAR2(30),
      md            CLOB);
    CREATE OR REPLACE PROCEDURE get_tables_and_indexes IS
    -- Define local variables.
    h1      NUMBER;         -- handle returned by OPEN for tables
    h2      NUMBER;         -- handle returned by OPEN for indexes
    th1     NUMBER;         -- handle returned by ADD_TRANSFORM for tables
    th2     NUMBER;         -- handle returned by ADD_TRANSFORM for indexes
    doc     sys.ku$_ddls;   -- metadata is returned in sys.ku$_ddls,
                            --  a nested table of sys.ku$_ddl objects
    ddl     CLOB;           -- creation DDL for an object
    pi      sys.ku$_parsed_items;   -- parse items are returned in this object
                                    -- which is contained in sys.ku$_ddl
    objname VARCHAR2(30);   -- the parsed object name
    idxddls sys.ku$_ddls;   -- metadata is returned in sys.ku$_ddls,
                            --  a nested table of sys.ku$_ddl objects
    idxname VARCHAR2(30);   -- the parsed index name
    BEGIN
     -- This procedure has an outer loop that fetches tables,
     -- and an inner loop that fetches indexes.
     
     -- Specify the object type: TABLE.
     h1 := DBMS_METADATA.OPEN('TABLE');
     
     -- Request that the table name be returned as a parse item.
     DBMS_METADATA.SET_PARSE_ITEM(h1,'NAME');
     
     -- Request that the metadata be transformed into creation DDL.
     th1 := DBMS_METADATA.ADD_TRANSFORM(h1,'DDL');
     
     -- Specify that segment attributes are not to be returned.
     DBMS_METADATA.SET_TRANSFORM_PARAM(th1,'SEGMENT_ATTRIBUTES',false);
     
     -- Set up the outer loop: fetch the TABLE objects.
     LOOP
       doc := dbms_metadata.fetch_ddl(h1);
     
    -- When there are no more objects to be retrieved, FETCH_DDL returns NULL.
       EXIT WHEN doc IS NULL;
     
    -- Loop through the rows of the ku$_ddls nested table.
       FOR i IN doc.FIRST..doc.LAST LOOP
         ddl := doc(i).ddlText;
         pi := doc(i).parsedItems;
         -- Loop through the returned parse items.
         IF pi IS NOT NULL AND pi.COUNT > 0 THEN
           FOR j IN pi.FIRST..pi.LAST LOOP
             IF pi(j).item='NAME' THEN
               objname := pi(j).value;
             END IF;
           END LOOP;
         END IF;
         -- Insert information about this object into our table.
         INSERT INTO my_metadata(object_type, name, md)
           VALUES ('TABLE',objname,ddl);
         COMMIT;
       END LOOP;
     
       -- Now fetch indexes using the parsed table name as
       --  a BASE_OBJECT_NAME filter.
     
       -- Specify the object type.
       h2 := DBMS_METADATA.OPEN('INDEX');
     
       -- The base object is the table retrieved in the outer loop.
       DBMS_METADATA.SET_FILTER(h2,'BASE_OBJECT_NAME',objname);
     
       -- Exclude system-generated indexes.
       DBMS_METADATA.SET_FILTER(h2,'SYSTEM_GENERATED',false);
     
       -- Request that the index name be returned as a parse item.
       DBMS_METADATA.SET_PARSE_ITEM(h2,'NAME');
     
       -- Request that the metadata be transformed into creation DDL.
       th2 := DBMS_METADATA.ADD_TRANSFORM(h2,'DDL');
     
       -- Specify that segment attributes are not to be returned.
       DBMS_METADATA.SET_TRANSFORM_PARAM(th2,'SEGMENT_ATTRIBUTES',false);
      
     
       LOOP
        idxddls := dbms_metadata.fetch_ddl(h2);
     
        -- When there are no more objects to  be retrieved, FETCH_DDL returns NULL.
        EXIT WHEN idxddls IS NULL;
     
          FOR i in idxddls.FIRST..idxddls.LAST LOOP
            ddl := idxddls(i).ddlText;
            pi  := idxddls(i).parsedItems;
            -- Loop through the returned parse items.
            IF pi IS NOT NULL AND pi.COUNT > 0 THEN
              FOR j IN pi.FIRST..pi.LAST LOOP
                IF pi(j).item='NAME' THEN
                  idxname := pi(j).value;
                END IF;
              END LOOP;
             END IF;
       
             -- Store the metadata in our table.
              INSERT INTO my_metadata(object_type, name, md)
                VALUES ('INDEX',idxname,ddl);
             COMMIT;
           END LOOP;  -- for loop
      END LOOP;
      DBMS_METADATA.CLOSE(h2);
     END LOOP;
     DBMS_METADATA.CLOSE(h1);
    END;
    /
    
  2. Execute the procedure:

    EXECUTE get_tables_and_indexes;
    
  3. Perform the following query to see what was retrieved:

    SET LONG 9000000
    SET PAGES 0
    SELECT * FROM my_metadata;
    

Using the DBMS_METADATA API to Re-Create a Retrieved Object

When you fetch metadata for an object, you may want to use it to re-create the object in a different database or schema.

You may not be ready to make remapping decisions when you fetch the metadata. You may want to defer these decisions until later. To accomplish this, you fetch the metadata as XML and store it in a file or table. Later you can use the submit interface to re-create the object.

The submit interface is similar in form to the retrieval interface. It has an OPENW procedure in which you specify the object type of the object to be created. You can specify transforms, transform parameters, and parse items. You can call the CONVERT function to convert the XML to DDL, or you can call the PUT function to both convert XML to DDL and submit the DDL to create the object.
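
The following minimal sketch illustrates the CONVERT path. It assumes, hypothetically, that previously fetched metadata XML has been stored in a table named xml_store with a CLOB column named md; CONVERT transforms the stored XML into creation DDL without submitting anything to the database.

    DECLARE
     h    NUMBER;   -- handle returned by OPENW
     th   NUMBER;   -- handle returned by ADD_TRANSFORM
     xml  CLOB;     -- stored metadata XML
     ddl  CLOB;     -- creation DDL produced by CONVERT
    BEGIN
     -- Retrieve one stored XML document (xml_store is a hypothetical table).
     SELECT md INTO xml FROM xml_store WHERE ROWNUM = 1;

     -- Open a write context and add the DDL transform.
     h  := DBMS_METADATA.OPENW('TABLE');
     th := DBMS_METADATA.ADD_TRANSFORM(h,'DDL');

     -- Convert the XML to DDL; nothing is executed against the database.
     DBMS_LOB.CREATETEMPORARY(ddl, TRUE);
     DBMS_METADATA.CONVERT(h, xml, ddl);
     DBMS_METADATA.CLOSE(h);

     DBMS_OUTPUT.PUT_LINE(DBMS_LOB.SUBSTR(ddl, 200, 1));
    END;
    /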


See Also:

Table 20-3 for descriptions of DBMS_METADATA procedures and functions used in the submit interface

Example 20-7 fetches the XML for a table in one schema, and then uses the submit interface to re-create the table in another schema.

Example 20-7 Using the Submit Interface to Re-Create a Retrieved Object

  1. Connect as a privileged user:

    CONNECT system
    Enter password: password
    
  2. Create an invoker's rights package to hold the procedure because access to objects in another schema requires the SELECT_CATALOG_ROLE role. In a definer's rights PL/SQL object (such as a procedure or function), roles are disabled.

    CREATE OR REPLACE PACKAGE example_pkg AUTHID current_user IS
      PROCEDURE move_table(
            table_name  in VARCHAR2,
            from_schema in VARCHAR2,
            to_schema   in VARCHAR2 );
    END example_pkg;
    /
    CREATE OR REPLACE PACKAGE BODY example_pkg IS
    PROCEDURE move_table(
            table_name  in VARCHAR2,
            from_schema in VARCHAR2,
            to_schema   in VARCHAR2 ) IS
    
    -- Define local variables.
    h1      NUMBER;         -- handle returned by OPEN
    h2      NUMBER;         -- handle returned by OPENW
    th1     NUMBER;         -- handle returned by ADD_TRANSFORM for MODIFY
    th2     NUMBER;         -- handle returned by ADD_TRANSFORM for DDL
    xml     CLOB;           -- XML document
    errs    sys.ku$_SubmitResults := sys.ku$_SubmitResults();
    err     sys.ku$_SubmitResult;
    result  BOOLEAN;
    BEGIN
    
    -- Specify the object type.
    h1 := DBMS_METADATA.OPEN('TABLE');
    
    -- Use filters to specify the name and schema of the table.
    DBMS_METADATA.SET_FILTER(h1,'NAME',table_name);
    DBMS_METADATA.SET_FILTER(h1,'SCHEMA',from_schema);
    
    -- Fetch the XML.
    xml := DBMS_METADATA.FETCH_CLOB(h1);
    IF xml IS NULL THEN
        DBMS_OUTPUT.PUT_LINE('Table ' || from_schema || '.' || table_name
    || ' not found');
        RETURN;
      END IF;
    
    -- Release resources.
    DBMS_METADATA.CLOSE(h1);
    
    -- Use the submit interface to re-create the object in another schema.
    
    -- Specify the object type using OPENW (instead of OPEN).
    h2 := DBMS_METADATA.OPENW('TABLE');
    
    -- First, add the MODIFY transform.
    th1 := DBMS_METADATA.ADD_TRANSFORM(h2,'MODIFY');
    
    -- Specify the desired modification: remap the schema name.
    DBMS_METADATA.SET_REMAP_PARAM(th1,'REMAP_SCHEMA',from_schema,to_schema);
    
    -- Now add the DDL transform so that the modified XML can be
    --  transformed into creation DDL.
    th2 := DBMS_METADATA.ADD_TRANSFORM(h2,'DDL');
    
    -- Call PUT to re-create the object.
    result := DBMS_METADATA.PUT(h2,xml,0,errs);
    
    DBMS_METADATA.CLOSE(h2);
      IF NOT result THEN
        -- Process the error information.
        FOR i IN errs.FIRST..errs.LAST LOOP
          err := errs(i);
          FOR j IN err.errorLines.FIRST..err.errorLines.LAST LOOP
            dbms_output.put_line(err.errorLines(j).errorText);
          END LOOP;
        END LOOP;
      END IF;
    END;
    END example_pkg;
    /
    
  3. Now create a table named my_example in the schema SCOTT:

    CONNECT scott
    Enter password:
    -- The password is tiger.
    
    DROP TABLE my_example;
    CREATE TABLE my_example (a NUMBER, b VARCHAR2(30));
    
    CONNECT system
    Enter password: password
    
    SET LONG 9000000
    SET PAGESIZE 0
    SET SERVEROUTPUT ON SIZE 100000
    
  4. Copy the my_example table to the SYSTEM schema:

    DROP TABLE my_example;
    EXECUTE example_pkg.move_table('MY_EXAMPLE','SCOTT','SYSTEM');
    
  5. Perform the following query to verify that it worked:

    SELECT DBMS_METADATA.GET_DDL('TABLE','MY_EXAMPLE') FROM dual;
    

Using the DBMS_METADATA API to Retrieve Collections of Different Object Types

There may be times when you need to retrieve collections of objects in which the objects are of different types, but comprise a logical unit. For example, you might need to retrieve all the objects in a database or a schema, or a table and all its dependent indexes, constraints, grants, audits, and so on. To make such a retrieval possible, the DBMS_METADATA API provides several heterogeneous object types. A heterogeneous object type is an ordered set of object types.

Oracle supplies several heterogeneous object types, including TABLE_EXPORT (a table and its dependent objects) and SCHEMA_EXPORT (a schema and the objects it contains).

These object types were developed for use by the Data Pump Export utility, but you can use them in your own applications.

You can use only the programmatic retrieval interface (OPEN, FETCH, CLOSE) with these types, not the browsing interface or the submit interface.

You can specify filters for heterogeneous object types, just as you do for the homogeneous types. For example, you can specify the SCHEMA and NAME filters for TABLE_EXPORT, or the SCHEMA filter for SCHEMA_EXPORT.
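
For example, the following minimal sketch (which assumes the HR sample schema and its EMPLOYEES table are present) opens the TABLE_EXPORT type with SCHEMA and NAME filters. Each FETCH_CLOB call returns the DDL for one member object: the table itself, then dependent objects such as grants, indexes, and constraints.

    DECLARE
     h    NUMBER;   -- handle returned by OPEN
     th   NUMBER;   -- handle returned by ADD_TRANSFORM
     doc  CLOB;
    BEGIN
     h := DBMS_METADATA.OPEN('TABLE_EXPORT');

     -- Filters on a heterogeneous type work as they do for homogeneous types.
     DBMS_METADATA.SET_FILTER(h,'SCHEMA','HR');
     DBMS_METADATA.SET_FILTER(h,'NAME','EMPLOYEES');

     th := DBMS_METADATA.ADD_TRANSFORM(h,'DDL');

     LOOP
       doc := DBMS_METADATA.FETCH_CLOB(h);
       EXIT WHEN doc IS NULL;
       -- Print just the start of each object's DDL.
       DBMS_OUTPUT.PUT_LINE(DBMS_LOB.SUBSTR(doc, 80, 1));
     END LOOP;

     DBMS_METADATA.CLOSE(h);
    END;
    /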

Example 20-8 shows how to retrieve the object types in the scott schema. Connect as user scott. The password is tiger.

Example 20-8 Retrieving Heterogeneous Object Types

  1. Create a table named my_metadata to store the retrieved objects, and a procedure named get_schema_md to fetch them:

    DROP TABLE my_metadata;
    CREATE TABLE my_metadata (md CLOB);
    CREATE OR REPLACE PROCEDURE get_schema_md IS
    
    -- Define local variables.
    h       NUMBER;         -- handle returned by OPEN
    th      NUMBER;         -- handle returned by ADD_TRANSFORM
    doc     CLOB;           -- metadata is returned in a CLOB
    BEGIN
    
    -- Specify the object type.
     h := DBMS_METADATA.OPEN('SCHEMA_EXPORT');
    
     -- Use filters to specify the schema.
     DBMS_METADATA.SET_FILTER(h,'SCHEMA','SCOTT');
    
     -- Request that the metadata be transformed into creation DDL.
     th := DBMS_METADATA.ADD_TRANSFORM(h,'DDL');
    
     -- Fetch the objects.
     LOOP
       doc := DBMS_METADATA.FETCH_CLOB(h);
    
       -- When there are no more objects to be retrieved, FETCH_CLOB returns NULL.
       EXIT WHEN doc IS NULL;
    
       -- Store the metadata in the table.
       INSERT INTO my_metadata(md) VALUES (doc);
       COMMIT;
     END LOOP;
     
     -- Release resources.
     DBMS_METADATA.CLOSE(h);
    END;
    /
    
  2. Execute the procedure:

    EXECUTE get_schema_md;
    
  3. Perform the following query to see what was retrieved:

    SET LONG 9000000
    SET PAGESIZE 0
    SELECT * FROM my_metadata;
    

In this example, objects are returned ordered by object type; for example, all tables are returned, then all grants on tables, then all indexes on tables, and so on. The order is, generally speaking, a valid creation order. Thus, if you take the objects in the order in which they were returned and use the submit interface to re-create them in the same order in another schema or database, then there will usually be no errors. (The exceptions usually involve circular references; for example, if package A contains a call to package B, and package B contains a call to package A, then one of the packages will need to be recompiled a second time.)

Filtering the Return of Heterogeneous Object Types

If you want finer control of the objects returned, then you can use the SET_FILTER procedure and specify that the filter apply only to a specific member type. You do this by specifying the path name of the member type as the fourth parameter to SET_FILTER. In addition, you can use the EXCLUDE_PATH_EXPR filter to exclude all objects of an object type. For a list of valid path names, see the TABLE_EXPORT_OBJECTS catalog view.

Example 20-9 shows how you can use SET_FILTER to specify finer control on the objects returned. Connect as user scott. The password is tiger.

Example 20-9 Filtering the Return of Heterogeneous Object Types

  1. Create a table named my_metadata to store the retrieved objects, and create a procedure named get_schema_md2:

    DROP TABLE my_metadata;
    CREATE TABLE my_metadata (md CLOB);
    CREATE OR REPLACE PROCEDURE get_schema_md2 IS
    
    -- Define local variables.
    h       NUMBER;         -- handle returned by 'OPEN'
    th      NUMBER;         -- handle returned by 'ADD_TRANSFORM'
    doc     CLOB;           -- metadata is returned in a CLOB
    BEGIN
    
     -- Specify the object type.
     h := DBMS_METADATA.OPEN('SCHEMA_EXPORT');
    
     -- Use filters to specify the schema.
     DBMS_METADATA.SET_FILTER(h,'SCHEMA','SCOTT');
    
     -- Use the fourth parameter to SET_FILTER to specify a filter
     -- that applies to a specific member object type.
     DBMS_METADATA.SET_FILTER(h,'NAME_EXPR','!=''MY_METADATA''','TABLE');
    
     -- Use the EXCLUDE_PATH_EXPR filter to exclude procedures.
     DBMS_METADATA.SET_FILTER(h,'EXCLUDE_PATH_EXPR','=''PROCEDURE''');
    
     -- Request that the metadata be transformed into creation DDL.
     th := DBMS_METADATA.ADD_TRANSFORM(h,'DDL');
    
     -- Use the fourth parameter to SET_TRANSFORM_PARAM to specify a parameter
     --  that applies to a specific member object type.
    DBMS_METADATA.SET_TRANSFORM_PARAM(th,'SEGMENT_ATTRIBUTES',false,'TABLE');
    
     -- Fetch the objects.
     LOOP
       doc := dbms_metadata.fetch_clob(h);
    
       -- When there are no more objects to be retrieved, FETCH_CLOB returns NULL.
       EXIT WHEN doc IS NULL;
    
       -- Store the metadata in the table.
       INSERT INTO my_metadata(md) VALUES (doc);
       COMMIT;
     END LOOP;
     
     -- Release resources.
     DBMS_METADATA.CLOSE(h);
    END;
    /
    
  2. Execute the procedure:

    EXECUTE get_schema_md2;
    
  3. Perform the following query to see what was retrieved:

    SET LONG 9000000
    SET PAGESIZE 0
    SELECT * FROM my_metadata;
    

Using the DBMS_METADATA_DIFF API to Compare Object Metadata

This section provides an example that uses the retrieval, comparison, and submit interfaces of DBMS_METADATA and DBMS_METADATA_DIFF to fetch metadata for two tables, compare the metadata, and generate ALTER statements which make one table like the other. For simplicity, function variants are used throughout the example.

Example 20-10 Comparing Object Metadata

  1. Create two tables, TAB1 and TAB2:

    SQL> CREATE TABLE TAB1
      2     (    "EMPNO" NUMBER(4,0),
      3          "ENAME" VARCHAR2(10),
      4          "JOB" VARCHAR2(9),
      5          "DEPTNO" NUMBER(2,0)
      6     ) ;
     
    Table created.
     
    SQL> CREATE TABLE TAB2
      2     (    "EMPNO" NUMBER(4,0) PRIMARY KEY ENABLE,
      3          "ENAME" VARCHAR2(20),
      4          "MGR" NUMBER(4,0),
      5          "DEPTNO" NUMBER(2,0)
      6     ) ;
     
    Table created.
     
    

    Note the differences between TAB1 and TAB2:

    • The table names are different

    • TAB2 has a primary key constraint; TAB1 does not

    • The length of the ENAME column is different in each table

    • TAB1 has a JOB column; TAB2 does not

    • TAB2 has a MGR column; TAB1 does not

  2. Create a function to return the table metadata in SXML format. The following are some key points to keep in mind about SXML when you are using the DBMS_METADATA_DIFF API:

    • SXML is an XML representation of object metadata.

    • The SXML returned is not the same as the XML returned by DBMS_METADATA.GET_XML, which is complex and opaque and contains binary values, instance-specific values, and so on.

    • SXML looks like a direct translation of SQL creation DDL into XML. The tag names and structure correspond to names in the Oracle Database SQL Language Reference.

    • SXML is designed to support editing and comparison.

    To keep this example simple, a transform parameter is used to suppress physical properties:

    SQL> CREATE OR REPLACE FUNCTION get_table_sxml(name IN VARCHAR2) RETURN CLOB IS
      2   open_handle NUMBER;
      3   transform_handle NUMBER;
      4   doc CLOB;
      5  BEGIN
      6   open_handle := DBMS_METADATA.OPEN('TABLE');
      7   DBMS_METADATA.SET_FILTER(open_handle,'NAME',name);
      8   --
      9   -- Use the 'SXML' transform to convert XML to SXML
     10   --
     11   transform_handle := DBMS_METADATA.ADD_TRANSFORM(open_handle,'SXML');
     12   --
     13   -- Use this transform parameter to suppress physical properties
     14   --
     15   DBMS_METADATA.SET_TRANSFORM_PARAM(transform_handle,'PHYSICAL_PROPERTIES',
     16                                     FALSE);
     17   doc := DBMS_METADATA.FETCH_CLOB(open_handle);
     18   DBMS_METADATA.CLOSE(open_handle);
     19   RETURN doc;
     20  END;
     21  /
     
    Function created.
     
    
  3. Use the get_table_sxml function to fetch the table SXML for the two tables:

    SQL> SELECT get_table_sxml('TAB1') FROM dual;
     
      <TABLE xmlns="http://xmlns.oracle.com/ku" version="1.0">
       <SCHEMA>SCOTT</SCHEMA>
       <NAME>TAB1</NAME>
       <RELATIONAL_TABLE>
          <COL_LIST>
             <COL_LIST_ITEM>
                <NAME>EMPNO</NAME>
                <DATATYPE>NUMBER</DATATYPE>
                <PRECISION>4</PRECISION>
                <SCALE>0</SCALE>
             </COL_LIST_ITEM>
             <COL_LIST_ITEM>
                <NAME>ENAME</NAME>
                <DATATYPE>VARCHAR2</DATATYPE>
                <LENGTH>10</LENGTH>
             </COL_LIST_ITEM>
             <COL_LIST_ITEM>
                <NAME>JOB</NAME>
                <DATATYPE>VARCHAR2</DATATYPE>
                <LENGTH>9</LENGTH>
             </COL_LIST_ITEM>
             <COL_LIST_ITEM>
                <NAME>DEPTNO</NAME>
                <DATATYPE>NUMBER</DATATYPE>
                <PRECISION>2</PRECISION>
                <SCALE>0</SCALE>
             </COL_LIST_ITEM>
          </COL_LIST>
       </RELATIONAL_TABLE>
    </TABLE> 
      
    1 row selected.
     
    SQL> SELECT get_table_sxml('TAB2') FROM dual;
     
      <TABLE xmlns="http://xmlns.oracle.com/ku" version="1.0">
       <SCHEMA>SCOTT</SCHEMA>
       <NAME>TAB2</NAME>
       <RELATIONAL_TABLE>
          <COL_LIST>
             <COL_LIST_ITEM>
                <NAME>EMPNO</NAME>
                <DATATYPE>NUMBER</DATATYPE>
                <PRECISION>4</PRECISION>
                <SCALE>0</SCALE>
             </COL_LIST_ITEM>
             <COL_LIST_ITEM>
                <NAME>ENAME</NAME>
                <DATATYPE>VARCHAR2</DATATYPE>
                <LENGTH>20</LENGTH>
             </COL_LIST_ITEM>
             <COL_LIST_ITEM>
                <NAME>MGR</NAME>
                <DATATYPE>NUMBER</DATATYPE>
                <PRECISION>4</PRECISION>
                <SCALE>0</SCALE>
             </COL_LIST_ITEM>
             <COL_LIST_ITEM>
                <NAME>DEPTNO</NAME>
                <DATATYPE>NUMBER</DATATYPE>
                <PRECISION>2</PRECISION>
                <SCALE>0</SCALE>
             </COL_LIST_ITEM>
          </COL_LIST>
          <PRIMARY_KEY_CONSTRAINT_LIST>
             <PRIMARY_KEY_CONSTRAINT_LIST_ITEM>
                <COL_LIST>
                   <COL_LIST_ITEM>
                      <NAME>EMPNO</NAME>
                   </COL_LIST_ITEM>
                </COL_LIST>
             </PRIMARY_KEY_CONSTRAINT_LIST_ITEM>
          </PRIMARY_KEY_CONSTRAINT_LIST>
       </RELATIONAL_TABLE>
    </TABLE> 
     
    1 row selected.
     
    
  4. Compare the results using the DBMS_METADATA browsing APIs:

    SQL> SELECT dbms_metadata.get_sxml('TABLE','TAB1') FROM dual;
    SQL> SELECT dbms_metadata.get_sxml('TABLE','TAB2') FROM dual;
    
  5. Create a function using the DBMS_METADATA_DIFF API to compare the metadata for the two tables. In this function, the get_table_sxml function that was just defined in step 2 is used.

    SQL> CREATE OR REPLACE FUNCTION compare_table_sxml(name1 IN VARCHAR2,
      2                                          name2 IN VARCHAR2) RETURN CLOB IS
      3   doc1 CLOB;
      4   doc2 CLOB;
      5   diffdoc CLOB;
      6   openc_handle NUMBER;
      7  BEGIN
      8   --
      9   -- Fetch the SXML for the two tables
     10   --
     11   doc1 := get_table_sxml(name1);
     12   doc2 := get_table_sxml(name2);
     13   --
     14   -- Specify the object type in the OPENC call
     15   --
     16   openc_handle := DBMS_METADATA_DIFF.OPENC('TABLE');
     17   --
     18   -- Add each document
     19   --
     20   DBMS_METADATA_DIFF.ADD_DOCUMENT(openc_handle,doc1);
     21   DBMS_METADATA_DIFF.ADD_DOCUMENT(openc_handle,doc2);
     22   --
     23   -- Fetch the SXML difference document
     24   --
     25   diffdoc := DBMS_METADATA_DIFF.FETCH_CLOB(openc_handle);
     26   DBMS_METADATA_DIFF.CLOSE(openc_handle);
     27   RETURN diffdoc;
     28  END;
     29  /
     
    Function created.
    
  6. Use the function to fetch the SXML difference document for the two tables:

    SQL> SELECT compare_table_sxml('TAB1','TAB2') FROM dual;
    
    <TABLE xmlns="http://xmlns.oracle.com/ku" version="1.0">
      <SCHEMA>SCOTT</SCHEMA>
      <NAME value1="TAB1">TAB2</NAME>
      <RELATIONAL_TABLE>
        <COL_LIST>
          <COL_LIST_ITEM>
            <NAME>EMPNO</NAME>
            <DATATYPE>NUMBER</DATATYPE>
            <PRECISION>4</PRECISION>
            <SCALE>0</SCALE>
          </COL_LIST_ITEM>
          <COL_LIST_ITEM>
            <NAME>ENAME</NAME>
            <DATATYPE>VARCHAR2</DATATYPE>
            <LENGTH value1="10">20</LENGTH>
          </COL_LIST_ITEM>
          <COL_LIST_ITEM src="1">
            <NAME>JOB</NAME>
            <DATATYPE>VARCHAR2</DATATYPE>
            <LENGTH>9</LENGTH>
          </COL_LIST_ITEM>
          <COL_LIST_ITEM>
            <NAME>DEPTNO</NAME>
            <DATATYPE>NUMBER</DATATYPE>
            <PRECISION>2</PRECISION>
            <SCALE>0</SCALE>
          </COL_LIST_ITEM>
          <COL_LIST_ITEM src="2">
            <NAME>MGR</NAME>
            <DATATYPE>NUMBER</DATATYPE>
            <PRECISION>4</PRECISION>
            <SCALE>0</SCALE>
          </COL_LIST_ITEM>
        </COL_LIST>
        <PRIMARY_KEY_CONSTRAINT_LIST src="2">
          <PRIMARY_KEY_CONSTRAINT_LIST_ITEM>
            <COL_LIST>
              <COL_LIST_ITEM>
                <NAME>EMPNO</NAME>
              </COL_LIST_ITEM>
            </COL_LIST>
          </PRIMARY_KEY_CONSTRAINT_LIST_ITEM>
        </PRIMARY_KEY_CONSTRAINT_LIST>
      </RELATIONAL_TABLE>
    </TABLE>
     
    1 row selected.
     
    

    The SXML difference document shows the union of the two SXML documents, with the XML attributes value1 and src identifying the differences. When an element exists in only one document, it is marked with src. Thus, <COL_LIST_ITEM src="1"> means that this element is in the first document (TAB1) but not in the second. When an element is present in both documents but with different values, the element's value is the value in the second document, and the value1 attribute gives its value in the first. For example, <LENGTH value1="10">20</LENGTH> means that the length is 10 in TAB1 (the first document) and 20 in TAB2.

  7. Compare the result using the DBMS_METADATA_DIFF browsing APIs:

    SQL> SELECT dbms_metadata_diff.compare_sxml('TABLE','TAB1','TAB2') FROM dual;
    
  8. Create a function using the DBMS_METADATA.CONVERT API to generate an ALTERXML document. This is an XML document containing ALTER statements to make one object like another. You can also use parse items to get information about the individual ALTER statements. (This example uses the functions defined thus far.)

    SQL> CREATE OR REPLACE FUNCTION get_table_alterxml(name1 IN VARCHAR2,
      2                                           name2 IN VARCHAR2) RETURN CLOB IS
      3   diffdoc CLOB;
      4   openw_handle NUMBER;
      5   transform_handle NUMBER;
      6   alterxml CLOB;
      7  BEGIN
      8   --
      9   -- Use the function just defined to get the difference document
     10   --
     11   diffdoc := compare_table_sxml(name1,name2);
     12   --
     13   -- Specify the object type in the OPENW call
     14   --
     15   openw_handle := DBMS_METADATA.OPENW('TABLE');
     16   --
     17   -- Use the ALTERXML transform to generate the ALTER_XML document
     18   --
     19   transform_handle := DBMS_METADATA.ADD_TRANSFORM(openw_handle,'ALTERXML');
     20   --
     21   -- Request parse items
     22   --
     23   DBMS_METADATA.SET_PARSE_ITEM(openw_handle,'CLAUSE_TYPE');
     24   DBMS_METADATA.SET_PARSE_ITEM(openw_handle,'NAME');
     25   DBMS_METADATA.SET_PARSE_ITEM(openw_handle,'COLUMN_ATTRIBUTE');
     26   --
     27   -- Create a temporary LOB
     28   --
     29   DBMS_LOB.CREATETEMPORARY(alterxml, TRUE );
     30   --
     31   -- Call CONVERT to do the transform
     32   --
     33   DBMS_METADATA.CONVERT(openw_handle,diffdoc,alterxml);
     34   --
     35   -- Close context and return the result
     36   --
     37   DBMS_METADATA.CLOSE(openw_handle);
     38   RETURN alterxml;
     39  END;
     40  /
     
    Function created.
    
  9. Use the function to fetch the ALTER_XML document:

    SQL> SELECT get_table_alterxml('TAB1','TAB2') FROM dual;
     
    <ALTER_XML xmlns="http://xmlns.oracle.com/ku" version="1.0">
       <OBJECT_TYPE>TABLE</OBJECT_TYPE>
       <OBJECT1>
          <SCHEMA>SCOTT</SCHEMA>
          <NAME>TAB1</NAME>
       </OBJECT1>
       <OBJECT2>
          <SCHEMA>SCOTT</SCHEMA>
          <NAME>TAB2</NAME>
       </OBJECT2>
       <ALTER_LIST>
          <ALTER_LIST_ITEM>
             <PARSE_LIST>
                <PARSE_LIST_ITEM>
                   <ITEM>NAME</ITEM>
                   <VALUE>MGR</VALUE>
                </PARSE_LIST_ITEM>
                <PARSE_LIST_ITEM>
                   <ITEM>CLAUSE_TYPE</ITEM>
                   <VALUE>ADD_COLUMN</VALUE>
                </PARSE_LIST_ITEM>
             </PARSE_LIST>
             <SQL_LIST>
                <SQL_LIST_ITEM>
                   <TEXT>ALTER TABLE "SCOTT"."TAB1" ADD ("MGR" NUMBER(4,0))</TEXT>
                </SQL_LIST_ITEM>
             </SQL_LIST>
          </ALTER_LIST_ITEM>
          <ALTER_LIST_ITEM>
             <PARSE_LIST>
                <PARSE_LIST_ITEM>
                   <ITEM>NAME</ITEM>
                   <VALUE>JOB</VALUE>
                </PARSE_LIST_ITEM>
                <PARSE_LIST_ITEM>
                   <ITEM>CLAUSE_TYPE</ITEM>
                   <VALUE>DROP_COLUMN</VALUE>
                </PARSE_LIST_ITEM>
             </PARSE_LIST>
             <SQL_LIST>
                <SQL_LIST_ITEM>
                   <TEXT>ALTER TABLE "SCOTT"."TAB1" DROP ("JOB")</TEXT>
                </SQL_LIST_ITEM>
             </SQL_LIST>
          </ALTER_LIST_ITEM>
          <ALTER_LIST_ITEM>
             <PARSE_LIST>
                <PARSE_LIST_ITEM>
                   <ITEM>NAME</ITEM>
                   <VALUE>ENAME</VALUE>
                </PARSE_LIST_ITEM>
                <PARSE_LIST_ITEM>
                   <ITEM>CLAUSE_TYPE</ITEM>
                   <VALUE>MODIFY_COLUMN</VALUE>
                </PARSE_LIST_ITEM>
                <PARSE_LIST_ITEM>
                   <ITEM>COLUMN_ATTRIBUTE</ITEM>
                   <VALUE> SIZE_INCREASE</VALUE>
                </PARSE_LIST_ITEM>
             </PARSE_LIST>
             <SQL_LIST>
                <SQL_LIST_ITEM>
                   <TEXT>ALTER TABLE "SCOTT"."TAB1" MODIFY 
                        ("ENAME" VARCHAR2(20))
                   </TEXT>
                </SQL_LIST_ITEM>
             </SQL_LIST>
          </ALTER_LIST_ITEM>
          <ALTER_LIST_ITEM>
             <PARSE_LIST>
                <PARSE_LIST_ITEM>
                   <ITEM>CLAUSE_TYPE</ITEM>
                   <VALUE>ADD_CONSTRAINT</VALUE>
                </PARSE_LIST_ITEM>
             </PARSE_LIST>
             <SQL_LIST>
                <SQL_LIST_ITEM>
                   <TEXT>ALTER TABLE "SCOTT"."TAB1" ADD  PRIMARY KEY
                         ("EMPNO") ENABLE
                   </TEXT>
                </SQL_LIST_ITEM>
             </SQL_LIST>
          </ALTER_LIST_ITEM>
          <ALTER_LIST_ITEM>
             <PARSE_LIST>
                <PARSE_LIST_ITEM>
                   <ITEM>NAME</ITEM>
                   <VALUE>TAB1</VALUE>
                </PARSE_LIST_ITEM>
                <PARSE_LIST_ITEM>
                   <ITEM>CLAUSE_TYPE</ITEM>
                   <VALUE>RENAME_TABLE</VALUE>
                </PARSE_LIST_ITEM>
             </PARSE_LIST>
             <SQL_LIST>
                <SQL_LIST_ITEM>
                   <TEXT>ALTER TABLE "SCOTT"."TAB1" RENAME TO "TAB2"</TEXT>
                </SQL_LIST_ITEM>
             </SQL_LIST>
          </ALTER_LIST_ITEM>
       </ALTER_LIST>
    </ALTER_XML>
     
     
    1 row selected.
     
    
  10. Compare the result using the DBMS_METADATA_DIFF browsing API:

    SQL> SELECT dbms_metadata_diff.compare_alter_xml('TABLE','TAB1','TAB2') FROM dual;
    
  11. The ALTER_XML document contains an ALTER_LIST of the alter operations. Each ALTER_LIST_ITEM has a PARSE_LIST containing the parse items as name-value pairs and a SQL_LIST containing the SQL for that particular alter. You can parse this document and, using the information in the PARSE_LIST, decide which of the SQL statements to execute. (Note, for example, that in this case one of the alters is a DROP_COLUMN, and you might choose not to execute that.) A minimal parsing sketch appears after the end of this example.

  12. Create one last function that uses the DBMS_METADATA.CONVERT API and the ALTER DDL transform to convert the ALTER_XML document into SQL DDL:

    SQL> CREATE OR REPLACE FUNCTION get_table_alterddl(name1 IN VARCHAR2,
      2                                           name2 IN VARCHAR2) RETURN CLOB IS
      3   alterxml CLOB;
      4   openw_handle NUMBER;
      5   transform_handle NUMBER;
      6   alterddl CLOB;
      7  BEGIN
      8   --
      9   -- Use the function just defined to get the ALTER_XML document
     10   --
     11   alterxml := get_table_alterxml(name1,name2);
     12   --
     13   -- Specify the object type in the OPENW call
     14   --
     15   openw_handle := DBMS_METADATA.OPENW('TABLE');
     16   --
     17   -- Use ALTERDDL transform to convert the ALTER_XML document to SQL DDL
     18   -- 
     19   transform_handle := DBMS_METADATA.ADD_TRANSFORM(openw_handle,'ALTERDDL');
     20   --
     21   -- Use the SQLTERMINATOR transform parameter to append a terminator
     22   -- to each SQL statement
     23   --
     24   DBMS_METADATA.SET_TRANSFORM_PARAM(transform_handle,'SQLTERMINATOR',true);
     25   --
     26   -- Create a temporary lob
     27   --
     28   DBMS_LOB.CREATETEMPORARY(alterddl, TRUE );
     29   --
     30   -- Call CONVERT to do the transform
     31   --
     32   DBMS_METADATA.CONVERT(openw_handle,alterxml,alterddl);
     33   --
     34   -- Close context and return the result
     35   --
     36   DBMS_METADATA.CLOSE(openw_handle);
     37   RETURN alterddl;
     38  END;
     39  /
     
    Function created.
     
    
  13. Use the function to fetch the SQL ALTER statements:

    SQL> SELECT get_table_alterddl('TAB1','TAB2') FROM dual;
    ALTER TABLE "SCOTT"."TAB1" ADD ("MGR" NUMBER(4,0))
    /
      ALTER TABLE "SCOTT"."TAB1" DROP ("JOB")
    /
      ALTER TABLE "SCOTT"."TAB1" MODIFY ("ENAME" VARCHAR2(20))
    /
      ALTER TABLE "SCOTT"."TAB1" ADD  PRIMARY KEY ("EMPNO") ENABLE
    /
      ALTER TABLE "SCOTT"."TAB1" RENAME TO "TAB2"
    /
      
    1 row selected.
     
    
  14. Compare the results using the DBMS_METADATA_DIFF browsing API:

    SQL> SELECT dbms_metadata_diff.compare_alter('TABLE','TAB1','TAB2') FROM dual;
    ALTER TABLE "SCOTT"."TAB1" ADD ("MGR" NUMBER(4,0))
      ALTER TABLE "SCOTT"."TAB1" DROP ("JOB")
      ALTER TABLE "SCOTT"."TAB1" MODIFY ("ENAME" VARCHAR2(20))
      ALTER TABLE "SCOTT"."TAB1" ADD  PRIMARY KEY ("EMPNO") USING INDEX 
      PCTFREE 10 INITRANS 2 STORAGE ( INITIAL 16384 NEXT 16384 MINEXTENTS 1
      MAXEXTENTS 505 PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL
      DEFAULT)  ENABLE ALTER TABLE "SCOTT"."TAB1" RENAME TO "TAB2"
     
    1 row selected.
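
As mentioned in step 11, you can parse the ALTER_XML document and choose which statements to run. The following minimal sketch (assuming the get_table_alterxml function from step 8 exists, and that each ALTER_LIST_ITEM contains a single SQL statement) uses XMLTABLE to list each statement with its CLAUSE_TYPE, skipping DROP_COLUMN operations:

    SELECT x.clause_type, x.sql_text
    FROM   XMLTABLE(
             XMLNAMESPACES(DEFAULT 'http://xmlns.oracle.com/ku'),
             '/ALTER_XML/ALTER_LIST/ALTER_LIST_ITEM'
             PASSING XMLTYPE(get_table_alterxml('TAB1','TAB2'))
             COLUMNS
               -- CLAUSE_TYPE parse item for this alter operation
               clause_type VARCHAR2(30)
                 PATH 'PARSE_LIST/PARSE_LIST_ITEM[ITEM="CLAUSE_TYPE"]/VALUE',
               -- the generated ALTER statement text
               sql_text    VARCHAR2(4000)
                 PATH 'SQL_LIST/SQL_LIST_ITEM/TEXT'
           ) x
    WHERE  x.clause_type <> 'DROP_COLUMN';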
    

Performance Tips for the Programmatic Interface of the DBMS_METADATA API

This section describes how to enhance performance when using the programmatic interface of the DBMS_METADATA API.

  1. Fetch all objects of one type before fetching the next. For example, if you are retrieving the definitions of all objects in your schema, first fetch all tables, then all indexes, then all triggers, and so on. This is much faster than nesting OPEN contexts; that is, fetching one table, then all of its indexes, grants, and triggers, then the next table and all of its indexes, grants, and triggers, and so on. "Example Usage of the DBMS_METADATA API" uses this second, less efficient approach, but only because its purpose is to demonstrate most of the programmatic calls, which are best shown by that method.

  2. Use the SET_COUNT procedure to retrieve more than one object at a time. This minimizes server round trips and eliminates many redundant function calls; a minimal sketch follows this list.

  3. When writing a PL/SQL package that calls the DBMS_METADATA API, declare LOB variables and objects that contain LOBs (such as SYS.KU$_DDLS) at package scope rather than within individual functions. This eliminates the creation and deletion of LOB duration structures upon function entrance and exit, which are very expensive operations.
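
The following minimal sketch illustrates the second tip. It assumes it is run in a schema that owns at least one table; SET_COUNT asks each FETCH_DDL call to return up to ten objects, so the loop makes far fewer round trips than fetching objects one at a time.

    DECLARE
     h    NUMBER;          -- handle returned by OPEN
     th   NUMBER;          -- handle returned by ADD_TRANSFORM
     ddls sys.ku$_ddls;    -- nested table of fetched objects
    BEGIN
     h  := DBMS_METADATA.OPEN('TABLE');
     th := DBMS_METADATA.ADD_TRANSFORM(h,'DDL');

     -- Return up to 10 objects per FETCH_DDL call instead of one at a time.
     DBMS_METADATA.SET_COUNT(h,10);

     LOOP
       ddls := DBMS_METADATA.FETCH_DDL(h);
       EXIT WHEN ddls IS NULL;
       FOR i IN ddls.FIRST..ddls.LAST LOOP
         DBMS_OUTPUT.PUT_LINE(DBMS_LOB.SUBSTR(ddls(i).ddlText, 80, 1));
       END LOOP;
     END LOOP;

     DBMS_METADATA.CLOSE(h);
    END;
    /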

Example Usage of the DBMS_METADATA API

This section provides an example of how the DBMS_METADATA API could be used. A script, mddemo.sql, automatically runs the demo for you; the actions it performs are described in "What Does the DBMS_METADATA Example Do?".

To execute the example, do the following:

  1. Start SQL*Plus as user system. You will be prompted for a password.

    sqlplus system
    
  2. Install the demo, which is located in the file mddemo.sql in rdbms/demo:

    SQL> @mddemo
    

    For an explanation of what happens during this step, see "What Does the DBMS_METADATA Example Do?".

  3. Connect as user mddemo. You will be prompted for a password, which is also mddemo.

    SQL> CONNECT mddemo
    Enter password:
    
  4. Set the following parameters so that query output will be complete and readable:

    SQL> SET PAGESIZE 0
    SQL> SET LONG 1000000
    
  5. Execute the GET_PAYROLL_TABLES procedure, as follows:

    SQL> CALL payroll_demo.get_payroll_tables();
    
  6. Execute the following SQL query:

    SQL> SELECT ddl FROM DDL ORDER BY SEQNO;
    

    The output generated is the result of the execution of the GET_PAYROLL_TABLES procedure. It shows all the DDL that was performed in Step 2 when the demo was installed. See "Output Generated from the GET_PAYROLL_TABLES Procedure" for a listing of the actual output.

What Does the DBMS_METADATA Example Do?

When the mddemo script is run, the following steps take place. You can adapt these steps to your own situation.

  1. Drops users as follows, if they exist. This will ensure that you are starting out with fresh data. If the users do not exist, then a message to that effect is displayed, no harm is done, and the demo continues to execute.

    CONNECT system
    Enter password: password
    SQL> DROP USER mddemo CASCADE;
    SQL> DROP USER mddemo_clerk CASCADE;
    SQL> DROP USER mddemo_mgr CASCADE;
    
  2. Creates user mddemo, identified by mddemo:

    SQL> CREATE USER mddemo IDENTIFIED BY mddemo;
    SQL> GRANT resource, connect, create session,
      1     create table,
      2     create procedure, 
      3     create sequence,
      4     create trigger,
      5     create view,
      6     create synonym,
      7     alter session
      8  TO mddemo;
    
  3. Creates user mddemo_clerk, identified by clerk:

    CREATE USER mddemo_clerk IDENTIFIED BY clerk;
    
  4. Creates user mddemo_mgr, identified by mgr:

    CREATE USER mddemo_mgr IDENTIFIED BY mgr;
    
  5. Connects to SQL*Plus as mddemo (the password is also mddemo):

    CONNECT mddemo
    Enter password:
    
  6. Creates some payroll-type tables:

    SQL> CREATE TABLE payroll_emps
      2  ( lastname VARCHAR2(60) NOT NULL,
      3  firstname VARCHAR2(20) NOT NULL,
      4  mi VARCHAR2(2),
      5  suffix VARCHAR2(10),
      6  dob DATE NOT NULL,
      7  badge_no NUMBER(6) PRIMARY KEY,
      8  exempt VARCHAR(1) NOT NULL,
      9  salary NUMBER (9,2),
      10 hourly_rate NUMBER (7,2) )
      11 /
    
    SQL> CREATE TABLE payroll_timecards 
      2  (badge_no NUMBER(6) REFERENCES payroll_emps (badge_no),
      3  week NUMBER(2),
      4  job_id NUMBER(5),
      5  hours_worked NUMBER(4,2) )
      6 /
    
  7. Creates a dummy table, audit_trail. This table is used to show that tables that do not start with payroll are not retrieved by the GET_PAYROLL_TABLES procedure.

    SQL> CREATE TABLE audit_trail 
      2  (action_time DATE,
      3  lastname VARCHAR2(60),
      4  action LONG )
      5  /
    
  8. Creates some grants on the tables just created:

    SQL> GRANT UPDATE (salary,hourly_rate) ON payroll_emps TO mddemo_clerk;
    SQL> GRANT ALL ON payroll_emps TO mddemo_mgr WITH GRANT OPTION;
    
    SQL> GRANT INSERT,UPDATE ON payroll_timecards TO mddemo_clerk;
    SQL> GRANT ALL ON payroll_timecards TO mddemo_mgr WITH GRANT OPTION;
    
  9. Creates some indexes on the tables just created:

    SQL> CREATE INDEX i_payroll_emps_name ON payroll_emps(lastname);
    SQL> CREATE INDEX i_payroll_emps_dob ON payroll_emps(dob);
    SQL> CREATE INDEX i_payroll_timecards_badge ON payroll_timecards(badge_no);
    
  10. Creates some triggers on the tables just created:

    SQL> CREATE OR REPLACE PROCEDURE check_sal( salary in number) AS BEGIN
      2  RETURN;
      3  END;
      4  /
    

    Note that the security is kept fairly loose to keep the example simple.

    SQL> CREATE OR REPLACE TRIGGER salary_trigger BEFORE INSERT OR UPDATE OF salary
    ON payroll_emps
    FOR EACH ROW WHEN (new.salary > 150000)
    CALL check_sal(:new.salary)
    /
    
    SQL> CREATE OR REPLACE TRIGGER hourly_trigger BEFORE UPDATE OF hourly_rate ON payroll_emps
    FOR EACH ROW
    BEGIN :new.hourly_rate:=:old.hourly_rate;END;
    /
    
  11. Sets up a table to hold the generated DDL:

    CREATE TABLE ddl (ddl CLOB, seqno NUMBER);
    
  12. Creates the PAYROLL_DEMO package, which provides examples of how DBMS_METADATA procedures can be used.

    SQL> CREATE OR REPLACE PACKAGE payroll_demo AS PROCEDURE get_payroll_tables;
    END;
    /
    

    Note:

    To see the entire script for this example, including the contents of the PAYROLL_DEMO package, see the file mddemo.sql located in your $ORACLE_HOME/rdbms/demo directory.

Output Generated from the GET_PAYROLL_TABLES Procedure

After you execute the mddemo.payroll_demo.get_payroll_tables procedure, you can execute the following query:

SQL> SELECT ddl FROM ddl ORDER BY seqno;

The results, which reflect all the DDL executed by the script as described in the previous section, are as follows.

CREATE TABLE "MDDEMO"."PAYROLL_EMPS"
   (    "LASTNAME" VARCHAR2(60) NOT NULL ENABLE,
        "FIRSTNAME" VARCHAR2(20) NOT NULL ENABLE,
        "MI" VARCHAR2(2),
        "SUFFIX" VARCHAR2(10),
        "DOB" DATE NOT NULL ENABLE,
        "BADGE_NO" NUMBER(6,0),
        "EXEMPT" VARCHAR2(1) NOT NULL ENABLE,
        "SALARY" NUMBER(9,2),
        "HOURLY_RATE" NUMBER(7,2),
 PRIMARY KEY ("BADGE_NO") ENABLE
   ) ;

  GRANT UPDATE ("SALARY") ON "MDDEMO"."PAYROLL_EMPS" TO "MDDEMO_CLERK";
  GRANT UPDATE ("HOURLY_RATE") ON "MDDEMO"."PAYROLL_EMPS" TO "MDDEMO_CLERK";
  GRANT ALTER ON "MDDEMO"."PAYROLL_EMPS" TO "MDDEMO_MGR" WITH GRANT OPTION;
  GRANT DELETE ON "MDDEMO"."PAYROLL_EMPS" TO "MDDEMO_MGR" WITH GRANT OPTION;
  GRANT INDEX ON "MDDEMO"."PAYROLL_EMPS" TO "MDDEMO_MGR" WITH GRANT OPTION;
  GRANT INSERT ON "MDDEMO"."PAYROLL_EMPS" TO "MDDEMO_MGR" WITH GRANT OPTION;
  GRANT SELECT ON "MDDEMO"."PAYROLL_EMPS" TO "MDDEMO_MGR" WITH GRANT OPTION;
  GRANT UPDATE ON "MDDEMO"."PAYROLL_EMPS" TO "MDDEMO_MGR" WITH GRANT OPTION;
  GRANT REFERENCES ON "MDDEMO"."PAYROLL_EMPS" TO "MDDEMO_MGR" WITH GRANT OPTION;
  GRANT ON COMMIT REFRESH ON "MDDEMO"."PAYROLL_EMPS" TO "MDDEMO_MGR" WITH GRANT OPTION;
  GRANT QUERY REWRITE ON "MDDEMO"."PAYROLL_EMPS" TO "MDDEMO_MGR" WITH GRANT OPTION;

  CREATE INDEX "MDDEMO"."I_PAYROLL_EMPS_DOB" ON "MDDEMO"."PAYROLL_EMPS" ("DOB")
  PCTFREE 10 INITRANS 2 MAXTRANS 255
  STORAGE(INITIAL 10240 NEXT 10240 MINEXTENTS 1 MAXEXTENTS 121 PCTINCREASE 50
  FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "SYSTEM" ;


  CREATE INDEX "MDDEMO"."I_PAYROLL_EMPS_NAME" ON "MDDEMO"."PAYROLL_EMPS" ("LASTNAME")
  PCTFREE 10 INITRANS 2 MAXTRANS 255
  STORAGE(INITIAL 10240 NEXT 10240 MINEXTENTS 1 MAXEXTENTS 121 PCTINCREASE 50
  FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "SYSTEM" ;

  CREATE OR REPLACE TRIGGER hourly_trigger before update of hourly_rate on payroll_emps
for each row
begin :new.hourly_rate:=:old.hourly_rate;end;
/
ALTER TRIGGER "MDDEMO"."HOURLY_TRIGGER" ENABLE;

  CREATE OR REPLACE TRIGGER salary_trigger before insert or update of salary on payroll_emps
for each row  
WHEN (new.salary > 150000)  CALL check_sal(:new.salary)
/
ALTER TRIGGER "MDDEMO"."SALARY_TRIGGER" ENABLE;


CREATE TABLE "MDDEMO"."PAYROLL_TIMECARDS"
   (    "BADGE_NO" NUMBER(6,0),
        "WEEK" NUMBER(2,0),
        "JOB_ID" NUMBER(5,0),
        "HOURS_WORKED" NUMBER(4,2),
 FOREIGN KEY ("BADGE_NO")
  REFERENCES "MDDEMO"."PAYROLL_EMPS" ("BADGE_NO") ENABLE
   ) ;

  GRANT INSERT ON "MDDEMO"."PAYROLL_TIMECARDS" TO "MDDEMO_CLERK";
  GRANT UPDATE ON "MDDEMO"."PAYROLL_TIMECARDS" TO "MDDEMO_CLERK";
  GRANT ALTER ON "MDDEMO"."PAYROLL_TIMECARDS" TO "MDDEMO_MGR" WITH GRANT OPTION;
  GRANT DELETE ON "MDDEMO"."PAYROLL_TIMECARDS" TO "MDDEMO_MGR" WITH GRANT OPTION;
  GRANT INDEX ON "MDDEMO"."PAYROLL_TIMECARDS" TO "MDDEMO_MGR" WITH GRANT OPTION;
  GRANT INSERT ON "MDDEMO"."PAYROLL_TIMECARDS" TO "MDDEMO_MGR" WITH GRANT OPTION;
  GRANT SELECT ON "MDDEMO"."PAYROLL_TIMECARDS" TO "MDDEMO_MGR" WITH GRANT OPTION;
  GRANT UPDATE ON "MDDEMO"."PAYROLL_TIMECARDS" TO "MDDEMO_MGR" WITH GRANT OPTION;
  GRANT REFERENCES ON "MDDEMO"."PAYROLL_TIMECARDS" TO "MDDEMO_MGR" WITH GRANT OPTION;
  GRANT ON COMMIT REFRESH ON "MDDEMO"."PAYROLL_TIMECARDS" TO "MDDEMO_MGR" WITH GRANT OPTION;
  GRANT QUERY REWRITE ON "MDDEMO"."PAYROLL_TIMECARDS" TO "MDDEMO_MGR" WITH GRANT OPTION;

  CREATE INDEX "MDDEMO"."I_PAYROLL_TIMECARDS_BADGE" ON "MDDEMO"."PAYROLL_TIMECARDS" ("BADGE_NO")
  PCTFREE 10 INITRANS 2 MAXTRANS 255
  STORAGE(INITIAL 10240 NEXT 10240 MINEXTENTS 1 MAXEXTENTS 121 PCTINCREASE 50
  FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "SYSTEM" ;

Summary of DBMS_METADATA Procedures

This section provides brief descriptions of the procedures provided by the DBMS_METADATA API. For detailed descriptions of these procedures, see Oracle Database PL/SQL Packages and Types Reference.

Table 20-1 provides a brief description of the procedures provided by the DBMS_METADATA programmatic interface for retrieving multiple objects.

Table 20-1 DBMS_METADATA Procedures Used for Retrieving Multiple Objects

PL/SQL Procedure Name    Description
DBMS_METADATA.OPEN()

Specifies the type of object to be retrieved, the version of its metadata, and the object model.

DBMS_METADATA.SET_FILTER()

Specifies restrictions on the objects to be retrieved, for example, the object name or schema.

DBMS_METADATA.SET_COUNT()

Specifies the maximum number of objects to be retrieved in a single FETCH_xxx call.

DBMS_METADATA.GET_QUERY()

Returns the text of the queries that are used by FETCH_xxx. You can use this as a debugging aid.

DBMS_METADATA.SET_PARSE_ITEM()

Enables output parsing by specifying an object attribute to be parsed and returned.

DBMS_METADATA.ADD_TRANSFORM()

Specifies a transform that FETCH_xxx applies to the XML representation of the retrieved objects.

DBMS_METADATA.SET_TRANSFORM_PARAM()

Specifies parameters to the XSLT stylesheet identified by transform_handle.

DBMS_METADATA.SET_REMAP_PARAM()

Specifies parameters to the XSLT stylesheet identified by transform_handle.

DBMS_METADATA.FETCH_xxx()

Returns metadata for objects meeting the criteria established by OPEN, SET_FILTER, SET_COUNT, ADD_TRANSFORM, and so on.

DBMS_METADATA.CLOSE()

Invalidates the handle returned by OPEN and cleans up the associated state.


Table 20-2 lists the procedures provided by the DBMS_METADATA browsing interface and provides a brief description of each one. These functions return metadata for one or more dependent or granted objects. These procedures do not support heterogeneous object types.

Table 20-2 DBMS_METADATA Procedures Used for the Browsing Interface

PL/SQL Procedure Name    Description
DBMS_METADATA.GET_xxx()

Provides a way to return metadata for a single object. Each GET_xxx call consists of an OPEN procedure, one or two SET_FILTER calls, optionally an ADD_TRANSFORM procedure, a FETCH_xxx call, and a CLOSE procedure.

The object_type parameter has the same semantics as in the OPEN procedure. schema and name are used for filtering.

If a transform is specified, then session-level transform flags are inherited.

DBMS_METADATA.GET_DEPENDENT_xxx()

Returns the metadata for one or more dependent objects, specified as XML or DDL.

DBMS_METADATA.GET_GRANTED_xxx()

Returns the metadata for one or more granted objects, specified as XML or DDL.
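
The following minimal sketch illustrates the browsing calls summarized in Table 20-2. It assumes the HR sample schema exists and that the session has the SELECT_CATALOG_ROLE role; each call is a single function invocation, with no OPEN, FETCH, or CLOSE handles, and raises an error if no matching objects are found.

    SET LONG 1000000
    SET PAGESIZE 0

    -- DDL for a single named object
    SELECT DBMS_METADATA.GET_DDL('TABLE','EMPLOYEES','HR') FROM dual;

    -- DDL for dependent objects (here, the indexes on the table)
    SELECT DBMS_METADATA.GET_DEPENDENT_DDL('INDEX','EMPLOYEES','HR') FROM dual;

    -- DDL for system privileges granted to the HR user
    SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT','HR') FROM dual;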


Table 20-3 provides a brief description of the DBMS_METADATA procedures and functions used for XML submission.

Table 20-3 DBMS_METADATA Procedures and Functions for Submitting XML Data

PL/SQL Name    Description
DBMS_METADATA.OPENW()

Opens a write context.

DBMS_METADATA.ADD_TRANSFORM()

Specifies a transform for the XML documents.

DBMS_METADATA.SET_TRANSFORM_PARAM() and 
DBMS_METADATA.SET_REMAP_PARAM()

SET_TRANSFORM_PARAM specifies a parameter to a transform.

SET_REMAP_PARAM specifies a remapping for a transform.

DBMS_METADATA.SET_PARSE_ITEM()

Specifies an object attribute to be parsed.

DBMS_METADATA.CONVERT()

Converts an XML document to DDL.

DBMS_METADATA.PUT()

Submits an XML document to the database.

DBMS_METADATA.CLOSE()

Closes the context opened with OPENW.


Summary of DBMS_METADATA_DIFF Procedures

This section provides brief descriptions of the procedures and functions provided by the DBMS_METADATA_DIFF API. For detailed descriptions of these procedures, see Oracle Database PL/SQL Packages and Types Reference.

Table 20-4 DBMS_METADATA_DIFF Procedures and Functions

PL/SQL Procedure Name    Description

OPENC function

Specifies the type of objects to be compared.

ADD_DOCUMENT procedure

Specifies an SXML document to be compared.

FETCH_CLOB functions and procedures

Returns a CLOB showing the differences between the two documents specified by ADD_DOCUMENT.

CLOSE procedure

Invalidates the handle returned by OPENC and cleans up associated state.



Preface

This document describes how to use Oracle Database utilities for data transfer, data maintenance, and database administration.

Audience

The utilities described in this book are intended for database administrators (DBAs), application programmers, security administrators, system operators, and other Oracle users who perform the following tasks:

To use this manual, you need a working knowledge of SQL and of Oracle fundamentals. You can find such information in Oracle Database Concepts. In addition, to use SQL*Loader, you must know how to use the file management facilities of your operating system.

Documentation Accessibility

For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.

Access to Oracle Support

Oracle customers have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.

Related Documentation

For more information, see these Oracle resources:

The Oracle Database documentation set, especially:

Some of the examples in this book use the sample schemas of the seed database, which is installed by default when you install Oracle Database. Refer to Oracle Database Sample Schemas for information about how these schemas were created and how you can use them yourself.

Oracle error message documentation is only available in HTML. If you only have access to the Oracle Database Documentation CD, you can browse the error messages by range. Once you find the specific range, use your browser's "find in page" feature to locate the specific message. When connected to the Internet, you can search for a specific error message using the error message search feature of the Oracle online documentation.

To download free release notes, installation documentation, white papers, and other collateral, visit the Oracle Technology Network (OTN). To use OTN, you must have a username and password. If you do not already have these, you can register online for free at

http://www.oracle.com/technology

Syntax Diagrams

Syntax descriptions are provided in this book for various SQL, PL/SQL, or other command-line constructs in graphic form or Backus Naur Form (BNF). See Oracle Database SQL Language Reference for information about how to interpret these descriptions.

Conventions

The following text conventions are used in this document:

Convention    Meaning
boldface      Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.
italic        Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.
monospace     Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.


Index

A  B  C  D  E  F  G  H  I  J  K  L  M  N  O  P  Q  R  S  T  U  V  W  X 

A

access privileges
Export and Import, 21.2.3
ADD_FILE parameter
Data Pump Export utility
interactive-command mode, 2.5.1
ADR
See automatic diagnostic repository
ADR base
in ADRCI utility, 16.2
ADR home
in ADRCI utility, 16.2
ADRCI
troubleshooting, 16.10
ADRCI utility, 16
ADR base, 16.2
ADR home, 16.2
batch mode, 16.3.3
commands, 16.9
getting help, 16.3
homepath, 16.2
interactive mode, 16.3.1
starting, 16.3
Advanced Queuing
exporting advanced queue tables, 21.14.10
importing advanced queue tables, 22.16.10
aliases
directory
exporting, 21.14.5
importing, 22.16.5
analyzer statistics, 22.24
analyzing redo log files, 19
ANYDATA type
effect on table-mode Import, 22.6
using SQL strings to load, 10.12.6
APPEND parameter
SQL*Loader utility, 9.14.2
append to table
SQL*Loader, 9.13.2.2.1
archived LOBs
restrictions on export, 2.4.44
archiving
disabling
effect on direct path loads, 12.5.4.1
arrays
committing after insert, 22.7.2
atomic null, 11.1.5.2
ATTACH parameter
Data Pump Export utility, 2.4.3
Data Pump Import utility, 3.4.3
attaching to an existing job
Data Pump Export utility, 2.4.3
attributes
null, 11.1.5.1
attribute-value constructors
overriding, 11.1.6
automatic diagnostic repository, 16.2

B

backslash escape character, 9.3.3.2
backups
restoring dropped snapshots
Import, 22.18.2.1
bad files
specifying for SQL*Loader, 9.8
BAD parameter
SQL*Loader command line, 8.2.1
BADFILE parameter
SQL*Loader utility, 9.8
BEGINDATA parameter
SQL*Loader control file, 9.6
BFILEs
in original Export, 21.14.6
in original Import, 22.16.5
loading with SQL*Loader, 11.4, 11.5
big-endian data
external tables, 14.2.8
bind arrays
determining size of for SQL*Loader, 9.16.4
minimizing SQL*Loader memory requirements, 9.16.5
minimum requirements, 9.16.1
size with multiple SQL*Loader INTO TABLE statements, 9.16.6
specifying maximum size, 8.2.2
specifying number of rows, 8.2.23
SQL*Loader performance implications, 9.16.2
BINDSIZE parameter
SQL*Loader command line, 8.2.2, 9.16.3
blanks
loading fields consisting of blanks, 10.9
SQL*Loader BLANKS parameter for field comparison, 10.5.1
trailing, 10.4.5.4
trimming, 10.10
external tables, 14.3.2
whitespace, 10.10
BLANKS parameter
SQL*Loader utility, 10.5.1
BLOBs
loading with SQL*Loader, 11.4
bound fillers, 10.3.1
buffer cache size
and Data Pump operations involving Streams, 5.3.1
BUFFER parameter
Export utility, 21.5.1
Import utility, 22.7.1
buffers
calculating for export, 21.5.1
specifying with SQL*Loader BINDSIZE parameter, 9.16.4
byte order, 10.8
big-endian, 10.8
little-endian, 10.8
specifying in SQL*Loader control file, 10.8.1
byte order marks, 10.8.2
precedence
for first primary datafile, 10.8.2
for LOBFILEs and SDFs, 10.8.2
suppressing checks for, 10.8.2.1
BYTEORDER parameter
SQL*Loader utility, 10.8.1
BYTEORDERMARK parameter
SQL*Loader utility, 10.8.2.1

C

cached sequence numbers
Export, 21.14.1
catalog.sql script
preparing database for Export and Import, 21.2.1, 22.2.1
catexp.sql script
preparing database for Export and Import, 21.2.1, 22.2.1
catldr.sql script
preparing for direct path loads, 12.4.1
changing a database ID, 18.3.1
changing a database name, 18.3.3
CHAR datatype
delimited form and SQL*Loader, 10.4.5
character fields
delimiters and SQL*Loader, 10.4.2, 10.4.5
determining length for SQL*Loader, 10.4.7
SQL*Loader datatypes, 10.4.2
character sets
conversion
during Export and Import, 21.12.1, 22.14.1
eight-bit to seven-bit conversions
Export/Import, 21.12.3, 22.14.3
identifying for external tables, 14.2.4
multibyte
Export/Import, 21.12.4
SQL*Loader, 9.10.1
single-byte
Export/Import, 21.12.3, 22.14.3
SQL*Loader control file, 9.10.5.3
SQL*Loader conversion between, 9.10
Unicode, 9.10.2
character strings
external tables
specifying bytes or characters, 14.2.10
SQL*Loader, 10.5.2
character-length semantics, 9.10.5.4
CHARACTERSET parameter
SQL*Loader utility, 9.10.5.2
check constraints
overriding disabling of, 12.8.1.2
CLOBs
loading with SQL*Loader, 11.4
CLUSTER parameter
Data Pump Export utility, 2.4.4
Data Pump Import utility, 3.4.4
collection types supported by SQL*Loader, 7.10.2
collections, 7.10
loading, 11.6
column array rows
specifying number of, 12.5.5
column objects
loading, 11.1
with user-defined constructors, 11.1.6
COLUMNARRAYROWS parameter
SQL*Loader command line, 8.2.3
columns
exporting LONG datatypes, 21.14.2
loading REF columns, 11.3
naming
SQL*Loader, 10.3
objects
loading nested column objects, 11.1.3
stream record format, 11.1.1
variable record format, 11.1.2
reordering before Import, 22.3.1
setting to a constant value with SQL*Loader, 10.13.2
setting to a unique sequence number with SQL*Loader, 10.13.6
setting to an expression value with SQL*Loader, 10.13.3
setting to null with SQL*Loader, 10.13.2.1
setting to the current date with SQL*Loader, 10.13.5
setting to the datafile record number with SQL*Loader, 10.13.4
specifying
SQL*Loader, 10.3
specifying as PIECED
SQL*Loader, 12.4.7.1
using SQL*Loader, 10.13.3
comments
in Export and Import parameter files, 21.3.3, 22.5.2
with external tables, 14.1, 15.1.1
COMMIT parameter
Import utility, 22.7.2
COMPILE parameter
Import utility, 22.7.3
completion messages
Export, 21.7.4
Import, 21.7.4
COMPRESS parameter
Export utility, 21.5.2
COMPRESSION parameter
Data Pump Export utility, 2.4.5
CONCATENATE parameter
SQL*Loader utility, 9.12
concurrent conventional path loads, 12.8.4
configuration
of LogMiner utility, 19.2.1
CONSISTENT parameter
Export utility, 21.5.3
nested tables and, 21.5.3
partitioned table and, 21.5.3
consolidating
extents, 21.5.2
CONSTANT parameter
SQL*Loader, 10.13.2
constraints
automatic integrity and SQL*Loader, 12.8.2.2
direct path load, 12.8
disabling referential constraints, 22.3.2
enabling
after a parallel direct path load, 12.9.7
enforced on a direct load, 12.8.1.1
failed
Import, 22.10.1.1
load method, 12.3.8
CONSTRAINTS parameter
Export utility, 21.5.4
Import utility, 22.7.4
constructors
attribute-value, 11.1.6
overriding, 11.1.6
user-defined, 11.1.6
loading column objects with, 11.1.6
CONTENT parameter
Data Pump Export utility, 2.4.6
Data Pump Import utility, 3.4.3
CONTINUE_CLIENT parameter
Data Pump Export utility
interactive-command mode, 2.5.2
Data Pump Import utility
interactive-command mode, 3.5.1
CONTINUEIF parameter
SQL*Loader utility, 9.12
control files
character sets, 9.10.5.3
data definition language syntax, 9.1
specifying data, 9.6
specifying SQL*Loader discard file, 9.9
CONTROL parameter
SQL*Loader command line, 8.2.4
conventional path Export
compared to direct path, 21.9
conventional path loads
behavior when discontinued, 9.11.1
compared to direct path loads, 12.3.7
concurrent, 12.9.1
of a single partition, 12.2
SQL*Loader bind array, 9.16.2
when to use, 12.2.2
conversion of character sets
during Export/Import, 21.12.1, 22.14.1
effect of character set sorting on, 21.12.1.1, 22.14.1.1
conversion of data
during direct path loads, 12.3.1
conversion of input characters, 9.10.5
CREATE REPORT command, ADRCI utility, 16.9.1
CREATE SESSION privilege
Export, 21.2.3, 22.2.2
Import, 21.2.3, 22.2.2
creating
incident package, 16.8.2
tables
manually, before import, 22.3.1

D

data
conversion
direct path load, 12.3.1
delimiter marks in data and SQL*Loader, 10.4.5.2
distinguishing different input formats for SQL*Loader, 9.15
distinguishing different input row object subtypes, 9.15, 9.15.3
exporting, 21.5.24
generating unique values with SQL*Loader, 10.13.6
including in control files, 9.6
loading data contained in the SQL*Loader control file, 10.13.1
loading in sections
SQL*Loader, 12.4.7.1
loading into more than one table
SQL*Loader, 9.15
maximum length of delimited data for SQL*Loader, 10.4.5.3
moving between operating systems using SQL*Loader, 10.7
recovery
SQL*Loader direct path load, 12.4.6
saving in a direct path load, 12.4.5
saving rows
SQL*Loader, 12.5.3
unsorted
SQL*Loader, 12.5.2.2
values optimized for SQL*Loader performance, 10.13.1
data fields
specifying the SQL*Loader datatype, 10.3.2
data files
specifying buffering for SQL*Loader, 9.7
specifying for SQL*Loader, 9.5
DATA parameter
SQL*Loader command line, 8.2.5
Data Pump Export utility
ATTACH parameter, 2.4.3
CLUSTER parameter, 2.4.4
command-line mode, 2.4, 3.4
COMPRESSION parameter, 2.4.5
CONTENT parameter, 2.4.6
controlling resource consumption, 5.2.1
DATA_OPTIONS parameter, 2.4.7
dump file set, 2.1
DUMPFILE parameter, 2.4.9
encryption of SecureFiles, 2.4.10
ENCRYPTION parameter, 2.4.10
ENCRYPTION_ALGORITHM parameter, 2.4.11
ENCRYPTION_MODE parameter, 2.4.12
ENCRYPTION_PASSWORD parameter, 2.4.13
ESTIMATE parameter, 2.4.14
ESTIMATE_ONLY parameter, 2.4.15
EXCLUDE parameter, 2.4.16
excluding objects, 2.4.16
export modes, 2.2.2
FILESIZE command
interactive-command mode, 2.5.4
FILESIZE parameter, 2.4.17
filtering data that is exported
using EXCLUDE parameter, 2.4.16
using INCLUDE parameter, 2.4.22
FLASHBACK_SCN parameter, 2.4.18
FLASHBACK_TIME parameter, 2.4.19
FULL parameter, 2.4.20
HELP parameter
interactive-command mode, 2.5.5
INCLUDE parameter, 2.4.22
interactive-command mode, 2.5
ADD_FILE parameter, 2.5.1
CONTINUE_CLIENT parameter, 2.5.2
EXIT_CLIENT parameter, 2.5.3
FILESIZE, 2.5.4
HELP parameter, 2.5.5
KILL_JOB parameter, 2.5.6
PARALLEL parameter, 2.5.7
START_JOB parameter, 2.5.8
STATUS parameter, 2.5.9, 3.5.7
STOP_JOB parameter, 2.5.10, 3.5.8
interfaces, 2.2.1
invoking
as SYSDBA, 2.2, 3.2
job names
specifying, 2.4.23
JOB_NAME parameter, 2.4.23
LOGFILE parameter, 2.4.25
NETWORK_LINK parameter, 2.4.27
NOLOGFILE parameter, 2.4.28
PARALLEL parameter
command-line mode, 2.4.29
interactive-command mode, 2.5.7
PARFILE parameter, 2.4.30
QUERY parameter, 2.4.31
REMAP_DATA parameter, 2.4.32
REUSE_DUMPFILES parameter, 2.4.33
SAMPLE parameter, 2.4.34
SCHEMAS parameter, 2.4.35
SecureFiles LOB considerations, 1.8
SERVICE_NAME parameter, 2.4.36
SOURCE_EDITION parameter, 2.4.37
specifying a job name, 2.4.23
syntax diagrams, 2.7
TABLES parameter, 2.4.39
TABLESPACES parameter, 2.4.40
transparent data encryption, 2.4.13
TRANSPORT_FULL_CHECK parameter, 2.4.41
TRANSPORT_TABLESPACES parameter, 2.4.42
TRANSPORTABLE parameter, 2.4.43
transportable tablespace mode
and time zone file versions, 2.2.2.5
VERSION parameter, 2.4.44
versioning, 1.7
Data Pump Import utility
ATTACH parameter, 3.4.3
attaching to an existing job, 3.4.3
changing name of source datafile, 3.4.29
CLUSTER parameter, 3.4.4
command-line mode
NOLOGFILE parameter, 3.4.23
STATUS parameter, 3.4.39
CONTENT parameter, 3.4.5
controlling resource consumption, 5.2.1
DATA_OPTIONS parameter, 3.4.6
DIRECTORY parameter, 3.4.7
DUMPFILE parameter, 3.4.8
ENCRYPTION_PASSWORD parameter, 3.4.9
ESTIMATE parameter, 3.4.10
estimating size of job, 3.4.10
EXCLUDE parameter, 3.4.11
filtering data that is imported
using EXCLUDE parameter, 3.4.11
using INCLUDE parameter, 3.4.16
FLASHBACK_SCN parameter, 3.4.12
FLASHBACK_TIME parameter, 3.4.13
full import mode, 3.2.2.1
FULL parameter, 3.4.14
HELP parameter
command-line mode, 3.4.15
interactive-command mode, 3.5.3
INCLUDE parameter, 3.4.16
interactive-command mode, 3.5
CONTINUE_CLIENT parameter, 3.5.1
EXIT_CLIENT parameter, 3.5.2
HELP parameter, 3.5.3
KILL_JOB parameter, 3.5.4
START_JOB parameter, 3.5.6
STOP_JOB parameter, 3.5.8
interfaces, 3.2.1
JOB_NAME parameter, 3.4.17
LOGFILE parameter, 3.4.19
PARALLEL parameter
command-line mode, 3.4.24
interactive-command mode, 3.5.5
PARFILE parameter, 3.4.25
PARTITION_OPTIONS parameter, 3.4.26
QUERY parameter, 3.4.27
REMAP_DATA parameter, 3.4.28
REMAP_DATAFILE parameter, 3.4.29
REMAP_SCHEMA parameter, 3.4.30
REMAP_TABLE parameter, 3.4.31
REMAP_TABLESPACE parameter, 3.4.32
REUSE_DATAFILES parameter, 3.4.33
schema mode, 3.2.2.2
SCHEMAS parameter, 3.4.34
SERVICE_NAME parameter, 3.4.35
SKIP_UNUSABLE_INDEXES parameter, 3.4.36
SOURCE_EDITION parameter, 3.4.37
specifying a job name, 3.4.17
specifying dump file set to import, 3.4.8
SQLFILE parameter, 3.4.38
STREAMS_CONFIGURATION parameter, 3.4.40
syntax diagrams, 3.7
table mode, 3.2.2.3
TABLE_EXISTS_ACTION parameter, 3.4.41
TABLES parameter, 3.4.42
tablespace mode, 3.2.2.4
TABLESPACES parameter, 3.4.43
TARGET_EDITION parameter, 3.4.44
TRANSFORM parameter, 3.4.45
transparent data encryption, 3.4.9
TRANSPORT_DATAFILES parameter, 3.4.46
TRANSPORT_FULL_CHECK parameter, 3.4.47
TRANSPORT_TABLESPACES parameter, 3.4.48
TRANSPORTABLE parameter, 3.4.49
transportable tablespace mode, 3.2.2.5
and time zone file versions, 3.2.2.5
VERSION parameter, 3.4.50
versioning, 1.7
Data Pump legacy mode, 4
DATA_OPTIONS parameter
Data Pump Export utility, 2.4.7
Data Pump Import utility, 3.4.6
database ID (DBID)
changing, 18.3.1
database identifier
changing, 18.3.1
database migration
partitioning of, 21.17, 22.25
database name (DBNAME)
changing, 18.3.3
database objects
exporting LONG columns, 21.14.2
databases
changing the database ID, 18.3.1
changing the name, 18.3.3
exporting entire, 21.5.11
full import, 22.7.12
privileges for exporting and importing, 21.2.3, 22.2.2
reusing existing datafiles
Import, 22.7.7
datafiles
preventing overwrite during import, 22.7.7
reusing during import, 22.7.7
specifying, 8.2.5
specifying format for SQL*Loader, 9.7
DATAFILES parameter
Import utility, 22.7.6
DATAPUMP_EXP_FULL_DATABASE role, 1.3
DATAPUMP_IMP_FULL_DATABASE role, 1.3
datatypes
BFILEs
in original Export, 21.14.6
in original Import, 22.16.5
loading with SQL*Loader, 11.4
BLOBs
loading with SQL*Loader, 11.4
CLOBs
loading with SQL*Loader, 11.4
converting SQL*Loader, 10.4.3
describing for external table fields, 14.3.6
determining character field lengths for SQL*Loader, 10.4.7
determining DATE length, 10.4.7.3
identifying for external tables, 14.3.4
native
conflicting length specifications in SQL*Loader, 10.4.2.9
NCLOBs
loading with SQL*Loader, 11.4
nonscalar, 11.1.5
specifying in SQL*Loader, 10.3.2
supported by the LogMiner utility, 19.13.1
types used by SQL*Loader, 10.4
unsupported by LogMiner utility, 19.13.2
date cache feature
DATE_CACHE parameter, 8.2.6
external tables, 14.7
SQL*Loader, 12.5.6
DATE datatype
delimited form and SQL*Loader, 10.4.5
determining length, 10.4.7.3
mask
SQL*Loader, 10.4.7.3
DATE_CACHE parameter
SQL*Loader utility, 8.2.6
DBID (database identifier)
changing, 18.3.1
DBMS_DATAPUMP PL/SQL package, 6
DBMS_LOGMNR PL/SQL procedure
LogMiner utility and, 19.2.2
DBMS_LOGMNR_D PL/SQL procedure
LogMiner utility and, 19.2.2
DBMS_LOGMNR_D.ADD_LOGFILES PL/SQL procedure
LogMiner utility and, 19.2.2
DBMS_LOGMNR_D.BUILD PL/SQL procedure
LogMiner utility and, 19.2.2
DBMS_LOGMNR_D.END_LOGMNR PL/SQL procedure
LogMiner utility and, 19.2.2
DBMS_LOGMNR.ADD_LOGFILE PL/SQL procedure
ADDFILE option, 19.3.2
NEW option, 19.3.2
DBMS_LOGMNR.COLUMN_PRESENT function, 19.5.2
DBMS_LOGMNR.MINE_VALUE function, 19.5.2
null values and, 19.5.2.1
DBMS_LOGMNR.START_LOGMNR PL/SQL procedure, 19.4
calling multiple times, 19.8
COMMITTED_DATA_ONLY option, 19.6.1
CONTINUOUS_MINE option, 19.3.2
ENDTIME parameter, 19.6.3, 19.6.4
LogMiner utility and, 19.2.2
options for, 19.4
PRINT_PRETTY_SQL option, 19.6.6
SKIP_CORRUPTION option, 19.6.2
STARTTIME parameter, 19.6.3, 19.6.4
DBMS_METADATA PL/SQL package, 20.3.1
DBNAME
changing, 18.3.3
DBNEWID utility, 18
changing a database ID, 18.3.1
changing a database name, 18.3.3
effect on global database names, 18.2.1
restrictions, 18.4.2
syntax, 18.4
troubleshooting a database ID change, 18.3.4
DBVERIFY utility
output, 17.1.3
restrictions, 17
syntax, 17.1.1
validating a segment, 17.2
validating disk blocks, 17.1
default schema
as determined by SQL*Loader, 9.13.1.1
DEFAULTIF parameter
SQL*Loader, 10.5
DELETE ANY TABLE privilege
SQL*Loader, 9.13.2.2.4
DELETE CASCADE
effect on loading nonempty tables, 9.13.2.2.2
SQL*Loader, 9.13.2.2.4
DELETE privilege
SQL*Loader, 9.13.2.2.2
delimited data
maximum length for SQL*Loader, 10.4.5.3
delimited fields
field length, 10.4.7.2
delimited LOBs, 11.4.2.2.3
delimiters
in external tables, 14.2.3
loading trailing blanks, 10.4.5.4
marks in data and SQL*Loader, 10.4.5.2
specifying for external tables, 14.3.1
specifying for SQL*Loader, 9.13.5, 10.4.5
SQL*Loader enclosure, 10.10.2.2
SQL*Loader field specifications, 10.10.2.2
termination, 10.10.2.2
DESTROY parameter
Import utility, 22.7.7
dictionary
requirements for LogMiner utility, 19.2.1.2
dictionary version mismatch, 19.9.4
DIRECT parameter
Export utility, 21.5.5
direct path Export, 21.9, 21.10
compared to conventional path, 21.9
effect of EXEMPT ACCESS POLICY privilege, 21.10.1
performance issues, 21.10.2
restrictions, 21.10.3
security considerations, 21.10.1
direct path load
advantages, 12.3.4
behavior when discontinued, 9.11.2
choosing sort order
SQL*Loader, 12.5.2.4
compared to conventional path load, 12.3.7
concurrent, 12.9.3
conditions for use, 12.3.5
data saves, 12.4.5, 12.5.3
DIRECT command-line parameter
SQL*Loader, 12.4.2
dropping indexes, 12.7
effect of disabling archiving, 12.5.4
effect of PRIMARY KEY constraints, 12.9.8
effect of UNIQUE KEY constraints, 12.9.8
field defaults, 12.3.9
improper sorting
SQL*Loader, 12.5.2.2
indexes, 12.4.3
instance recovery, 12.4.6
intersegment concurrency, 12.9.3
intrasegment concurrency, 12.9.3
location of data conversion, 12.3.1
media recovery, 12.4.6.1
optimizing on multiple-CPU systems, 12.6
partitioned load
SQL*Loader, 12.8.4
performance, 12.4.3, 12.5
preallocating storage, 12.5.1
presorting data, 12.5.2
recovery, 12.4.6
ROWS command-line parameter, 12.4.5.1
setting up, 12.4.1
specifying, 12.4.2
specifying number of rows to be read, 8.2.23
SQL*Loader data loading method, 7.9.2
table insert triggers, 12.8.2
temporary segment storage requirements, 12.4.3.2
triggers, 12.8
using, 12.3.7, 12.4
version requirements, 12.3.5
directory aliases
exporting, 21.14.5
importing, 22.16.5
directory objects
using with Data Pump
effect of Oracle ASM, 1.6.2.2
DIRECTORY parameter
Data Pump Export utility, 2.4.8
Data Pump Import utility, 3.4.7
disabled unique indexes
loading tables with, 1.2
discard files
SQL*Loader, 9.9
specifying a maximum, 9.9.2
DISCARD parameter
SQL*Loader command-line, 8.2.8
discarded SQL*Loader records, 7.7
causes, 9.9.4
discard file, 9.9
limiting, 9.9.6
DISCARDMAX parameter
SQL*Loader command-line, 8.2.9
discontinued loads, 9.11
continuing, 9.11.5
conventional path behavior, 9.11.1
direct path behavior, 9.11.2
dropped snapshots
Import, 22.18.2.1
dump files
maximum size, 21.5.8
DUMPFILE parameter
Data Pump Export utility, 2.4.9
Data Pump Import utility, 3.4.8

E

EBCDIC character set
Import, 21.12.3, 22.14.3
ECHO command, ADRCI utility, 16.9.2
eight-bit character set support, 21.12.3, 22.14.3
enclosed fields
whitespace, 10.10.6
enclosure delimiters, 10.4.5
SQL*Loader, 10.10.2.2
encrypted columns
in external tables, 15.1.6
ENCRYPTION parameter
Data Pump Export utility, 2.4.10
ENCRYPTION_ALGORITHM parameter
Data Pump Export utility, 2.4.11
ENCRYPTION_MODE parameter
Data Pump Export utility, 2.4.12
ENCRYPTION_PASSWORD parameter
Data Pump Export utility, 2.4.13
Data Pump Import utility, 3.4.9
errors
caused by tab characters in SQL*Loader data, 10.2.1
LONG data, 22.10.1.2
object creation, 22.10.2.1
Import parameter IGNORE, 22.7.15
resource errors on import, 22.10.2.3
writing to export log file, 21.5.15
ERRORS parameter
SQL*Loader command line, 8.2.10
escape character
quoted strings and, 9.3.3.2
usage in Data Pump Export, 2.4
usage in Data Pump Import, 3.4
usage in Export, 21.5.26.1
usage in Import, 22.7.30
ESTIMATE parameter
Data Pump Export utility, 2.4.14
Data Pump Import utility, 3.4.10
ESTIMATE_ONLY parameter
Data Pump Export utility, 2.4.15
estimating size of job
Data Pump Export utility, 2.4.14
EVALUATE CHECK_CONSTRAINTS clause, 12.8.1.2
EXCLUDE parameter
Data Pump Export utility, 2.4.16
Data Pump Import utility, 3.4.11
exit codes
Export and Import, 21.8, 22.9
SQL*Loader, 1.9, 8.3
EXIT command, ADRCI utility, 16.9.3
EXIT_CLIENT parameter
Data Pump Export utility
interactive-command mode, 2.5.3
Data Pump Import utility
interactive-command mode, 3.5.2
EXP_FULL_DATABASE role
assigning in Export, 21.2.1, 22.2.1
expdat.dmp
Export output file, 21.5.7
Export
BUFFER parameter, 21.5.1
character set conversion, 21.12.1, 22.14.1
COMPRESS parameter, 21.5.2
CONSISTENT parameter, 21.5.3
CONSTRAINTS parameter, 21.5.4
conventional path, 21.9
creating
necessary privileges, 21.2.1, 22.2.1
necessary views, 21.2.1, 22.2.1
database optimizer statistics, 21.5.25
DIRECT parameter, 21.5.5
direct path, 21.9
displaying online help, 21.5.13
example sessions, 21.6
full database mode, 21.6.1
partition-level, 21.6.4
table mode, 21.6.3
user mode, 21.5.17, 21.6.2
exit codes, 21.8, 22.9
exporting an entire database, 21.5.11
exporting indexes, 21.5.14
exporting sequence numbers, 21.14.1
exporting synonyms, 21.14.11
exporting to another operating system, 21.5.20, 22.7.20
FEEDBACK parameter, 21.5.6
FILE parameter, 21.5.7
FILESIZE parameter, 21.5.8
FLASHBACK_SCN parameter, 21.5.9
FLASHBACK_TIME parameter, 21.5.10
full database mode
example, 21.6.1
FULL parameter, 21.5.11
GRANTS parameter, 21.5.12
HELP parameter, 21.5.13
INDEXES parameter, 21.5.14
invoking, 21.3, 22.5
log files
specifying, 21.5.15
LOG parameter, 21.5.15
logging error messages, 21.5.15
LONG columns, 21.14.2
OBJECT_CONSISTENT parameter, 21.5.16
online help, 21.3.5
OWNER parameter, 21.5.17
parameter file, 21.5.18
maximum size, 21.3.3, 22.5.2
parameter syntax, 21.5
PARFILE parameter, 21.5.18
partitioning a database migration, 21.17, 22.25
QUERY parameter, 21.5.19
RECORDLENGTH parameter, 21.5.20
redirecting output to a log file, 21.7.1
remote operation, 21.11.2, 22.13
restrictions based on privileges, 21.2.3
RESUMABLE parameter, 21.5.21
RESUMABLE_NAME parameter, 21.5.22
RESUMABLE_TIMEOUT parameter, 21.5.23
ROWS parameter, 21.5.24
sequence numbers, 21.14.1
STATISTICS parameter, 21.5.25
storage requirements, 21.2.2
table mode
example session, 21.6.3
table name restrictions, 21.5.26.1
TABLES parameter, 21.5.26
TABLESPACES parameter, 21.5.27
TRANSPORT_TABLESPACE parameter, 21.5.28
TRIGGERS parameter, 21.5.29
TTS_FULL_CHECK parameter, 21.5.30
user access privileges, 21.2.3, 22.2.2
user mode
example session, 21.5.17, 21.6.2
specifying, 21.5.17
USERID parameter, 21.5.31
VOLSIZE parameter, 21.5.32
export dump file
importing the entire file, 22.7.12
export file
listing contents before importing, 22.7.25
specifying, 21.5.7
exporting
archived LOBs, 2.4.44
EXPRESSION parameter
SQL*Loader, 10.13.3, 10.13.3.1
extents
consolidating, 21.5.2
EXTERNAL parameter
SQL*Loader, 10.4.2.5
EXTERNAL SQL*Loader datatypes
numeric
determining length, 10.4.7
external tables
access parameters, 13.1.2, 14.1, 15.1
and encrypted columns, 15.1.6
big-endian data, 14.2.8
cacheing data during reads, 14.2.23
column_transforms clause, 14.1
datatypes, 14.3.6
date cache feature, 14.7
delimiters, 14.2.3
describing datatype of a field, 14.3.6
field_definitions clause, 14.1, 14.3
fixed-length records, 14.2.1
identifying character sets, 14.2.4
identifying datatypes, 14.3.4
improving performance when using
date cache feature, 14.7
IO_OPTIONS clause, 14.2.23
little-endian data, 14.2.8
opaque_format_spec, 13.1.2, 14.1, 15.1
preprocessing data, 14.2.5
record_format_info clause, 14.1, 14.2
reserved words, 13.3, 14.8, 15.6
restrictions, 13.3, 14.8
setting a field to a default value, 14.3.7
setting a field to null, 14.3.7
skipping records when loading data, 14.2.15
specifying delimiters, 14.3.1
specifying load conditions, 14.2.11
trimming blanks, 14.3.2
use of SQL strings, 13.3, 14.8
using comments, 14.1, 15.1.1
variable-length records, 14.2.2
EXTERNAL_TABLE parameter
SQL*Loader, 8.2.11

F

fatal errors
See nonrecoverable error messages
FEEDBACK parameter
Export utility, 21.5.6
Import utility, 22.7.8
field conditions
specifying for SQL*Loader, 10.5
field length
SQL*Loader specifications, 10.10.2
field location
SQL*Loader, 10.2
fields
character data length and SQL*Loader, 10.4.7
comparing to literals with SQL*Loader, 10.5.2
delimited
determining length, 10.4.7.2
SQL*Loader, 10.4.5
loading all blanks, 10.9
predetermined size
length, 10.4.7.1
SQL*Loader, 10.10.2.1
relative positioning and SQL*Loader, 10.10.3
specifying default delimiters for SQL*Loader, 9.13.5
specifying for SQL*Loader, 10.3
SQL*Loader delimited
specifications, 10.10.2.2
FIELDS clause
SQL*Loader, 9.13.5
terminated by whitespace, 10.10.4.1
file names
quotation marks and, 9.3.3.1
specifying multiple SQL*Loader, 9.5.2
SQL*Loader, 9.3
SQL*Loader bad file, 9.8
FILE parameter
Export utility, 21.5.7
Import utility, 22.7.9
SQL*Loader utility, 12.9.6.1
FILESIZE parameter
Data Pump Export utility, 2.4.17
Export utility, 21.5.8
Import utility, 22.7.10
FILLER field
using as argument to init_spec, 10.3.1
filtering data
using Data Pump Export utility, 2.1
using Data Pump Import utility, 3.1
filtering metadata that is imported
Data Pump Import utility, 3.4.11
finalizing
in ADRCI utility, 16.2
fine-grained access support
Export and Import, 22.17
fixed-format records, 7.4.1
fixed-length records
external tables, 14.2.1
FLASHBACK_SCN parameter
Data Pump Export utility, 2.4.18
Data Pump Import utility, 3.4.12
Export utility, 21.5.9
FLASHBACK_TIME parameter
Data Pump Export utility, 2.4.19
Data Pump Import utility, 3.4.13
Export utility, 21.5.10
FLOAT EXTERNAL data values
SQL*Loader, 10.4.2.5
foreign function libraries
exporting, 21.14.3
importing, 22.16.6, 22.16.9
formats
SQL*Loader input records and, 9.15.2
formatting errors
SQL*Loader, 9.8
FROMUSER parameter
Import utility, 22.7.11
full database mode
Import, 22.7.12
specifying with FULL, 21.5.11
full export mode
Data Pump Export utility, 2.2.2.1
FULL parameter
Data Pump Export utility, 2.4.20
Data Pump Import utility, 3.4.14
Export utility, 21.5.11
Import utility, 22.7.12

G

globalization
SQL*Loader, 9.10
grants
exporting, 21.5.12
importing, 22.7.13
GRANTS parameter
Export utility, 21.5.12
Import utility, 22.7.13

H

HELP parameter
Data Pump Export utility
command-line mode, 2.4.21
interactive-command mode, 2.5.5
Data Pump Import utility
command-line mode, 3.4.15
interactive-command mode, 3.5.3
Export utility, 21.5.13
Import utility, 22.7.14
hexadecimal strings
SQL*Loader, 10.5.2
homepath
in ADRCI utility, 16.2
HOST command, ADRCI utility, 16.9.4

I

IGNORE parameter
Import utility, 22.7.15
IMP_FULL_DATABASE role
assigning in Import, 21.2.1, 22.2.1
Import
BUFFER parameter, 22.7.1
character set conversion, 21.12.1, 21.12.3, 22.14.1, 22.14.3
COMMIT parameter, 22.7.2
committing after array insert, 22.7.2
COMPILE parameter, 22.7.3
CONSTRAINTS parameter, 22.7.4
creating
necessary privileges, 21.2.1, 22.2.1
necessary views, 21.2.1, 22.2.1
creating an index-creation SQL script, 22.7.17
database optimizer statistics, 22.7.27
DATAFILES parameter, 22.7.6
DESTROY parameter, 22.7.7
disabling referential constraints, 22.3.2
displaying online help, 22.7.14
dropping a tablespace, 22.22
errors importing database objects, 22.10.2
example sessions, 22.8
all tables from one user to another, 22.8.3
selected tables for specific user, 22.8.1
tables exported by another user, 22.8.2
using partition-level Import, 22.8.4
exit codes, 21.8, 22.9
export file
importing the entire file, 22.7.12
listing contents before import, 22.7.25
FEEDBACK parameter, 22.7.8
FILE parameter, 22.7.9
FILESIZE parameter, 22.7.10
FROMUSER parameter, 22.7.11
FULL parameter, 22.7.12
grants
specifying for import, 22.7.13
GRANTS parameter, 22.7.13
HELP parameter, 22.7.14
IGNORE parameter, 22.7.15
importing grants, 22.7.13
importing objects into other schemas, 22.2.2.3
importing rows, 22.7.24
importing tables, 22.7.30
INDEXES parameter, 22.7.16
INDEXFILE parameter, 22.7.17
INSERT errors, 22.10.1.2
invalid data, 22.10.1.2
invoking, 21.3, 22.5
LOG parameter, 22.7.18
LONG columns, 22.16.11
manually creating tables before import, 22.3.1
manually ordering tables, 22.3.3
NLS_LANG environment variable, 21.12.3, 22.14.3
object creation errors, 22.7.15
online help, 21.3.5
parameter file, 22.7.19
maximum size, 21.3.3, 22.5.2
parameter syntax, 22.7
PARFILE parameter, 22.7.19
partition-level, 22.11
pattern matching of table names, 22.7.30
read-only tablespaces, 22.21
RECORDLENGTH parameter, 22.7.20
records
specifying length, 22.7.20
redirecting output to a log file, 21.7.1
refresh error, 22.18.1
remote operation, 21.11.2, 22.13
reorganizing tablespace during, 22.23
resource errors, 22.10.2.3
restrictions
importing into own schema, 22.2.2.1
RESUMABLE parameter, 22.7.21
RESUMABLE_NAME parameter, 22.7.22
RESUMABLE_TIMEOUT parameter, 22.7.23
reusing existing datafiles, 22.7.7
rows
specifying for import, 22.7.24
ROWS parameter, 22.7.24
schema objects, 22.2.2.3
sequences, 22.10.2.2
SHOW parameter, 22.7.25
single-byte character sets, 21.12.3, 22.14.3
SKIP_UNUSABLE_INDEXES parameter, 22.7.26
snapshot master table, 22.18.1
snapshots, 22.18
restoring dropped, 22.18.2.1
specifying by user, 22.7.11
specifying index creation commands, 22.7.17
specifying the export file, 22.7.9
STATISTICS parameter, 22.7.27
storage parameters
overriding, 22.20.3
stored functions, 22.16.7
stored procedures, 22.16.7
STREAMS_CONFIGURATION parameter, 22.7.28
STREAMS_INSTANTIATION parameter, 22.7.29
system objects, 22.2.2.4
table name restrictions, 2.4.39, 3.4.42, 22.7.30.1
table objects
import order, 22.1.1
table-level, 22.11
TABLES parameter, 22.7.30
TABLESPACES parameter, 22.7.31
TOID_NOVALIDATE parameter, 22.7.32
TOUSER parameter, 22.7.33
TRANSPORT_TABLESPACE parameter, 22.7.34
TTS_OWNERS parameter, 22.7.35
tuning considerations, 22.26
user access privileges, 21.2.3, 22.2.2
USERID parameter, 22.7.36
VOLSIZE parameter, 22.7.37
incident
fault diagnosability infrastructure, 16.2
packaging, 16.8
incident package
fault diagnosability infrastructure, 16.2
INCLUDE parameter
Data Pump Export utility, 2.4.22
Data Pump Import utility, 3.4.16
index options
SORTED INDEXES with SQL*Loader, 9.14.1
SQL*Loader SINGLEROW parameter, 9.14.2
Index Unusable state
indexes left in Index Unusable state, 9.11.3, 12.4.4
indexes
creating manually, 22.7.17
direct path load
left in direct load state, 12.4.4
dropping
SQL*Loader, 12.7
estimating storage requirements, 12.4.3.1
exporting, 21.5.14
importing, 22.7.16
index-creation commands
Import, 22.7.17
left in unusable state, 9.11.3, 12.5.2.2
multiple-column
SQL*Loader, 12.5.2.3
presorting data
SQL*Loader, 12.5.2
skipping maintenance, 8.2.26, 12.7
skipping unusable, 8.2.27, 12.7
SQL*Loader, 9.14
state after discontinued load, 9.11.3
unique, 22.7.16
INDEXES parameter
Export utility, 21.5.14
Import utility, 22.7.16
INDEXFILE parameter
Import utility, 22.7.17
INFILE parameter
SQL*Loader utility, 9.5
insert errors
Import, 22.10.1.2
specifying, 8.2.10
INSERT into table
SQL*Loader, 9.13.2.1.1
instance affinity
Export and Import, 21.13
instance recovery, 12.4.6.2
integrity constraints
disabled during direct path load, 12.8.1.2
enabled during direct path load, 12.8.1.1
failed on Import, 22.10.1.1
load method, 12.3.8
interactive method
Data Pump Export utility, 2.2.1
internal LOBs
loading, 11.4.1
interrupted loads, 9.11
INTO TABLE statement
effect on bind array size, 9.16.6
multiple statements with SQL*Loader, 9.15
SQL*Loader, 9.13
column names, 10.3
discards, 9.9.4
invalid data
Import, 22.10.1.2
invoking
Export, 21.3, 22.5
at the command line, 21.3.2, 22.5.1
direct path, 21.10
interactively, 21.3.4, 22.5.3
with a parameter file, 21.3.3, 22.5.2
Import, 21.3, 22.5
as SYSDBA, 21.3.1, 22.5.4
at the command line, 21.3.2, 22.5.1
interactively, 21.3.4, 22.5.3
with a parameter file, 21.3.3, 22.5.2
IPS command, ADRCI utility, 16.9.5

J

JOB_NAME parameter
Data Pump Export utility, 2.4.23
Data Pump Import utility, 3.4.17

K

key values
generating with SQL*Loader, 10.13.6
KILL_JOB parameter
Data Pump Export utility
interactive-command mode, 2.5.6
Data Pump Import utility, 3.5.4

L

leading whitespace
definition, 10.10.1
trimming and SQL*Loader, 10.10.4
legacy mode in Data Pump, 4
length indicator
determining size, 9.16.4.1
length-value pair specified LOBs, 11.4.2.2.4
libraries
foreign function
exporting, 21.14.3
importing, 22.16.6, 22.16.9
little-endian data
external tables, 14.2.8
LOAD parameter
SQL*Loader command line, 8.2.13
loading
collections, 11.6
column objects, 11.1
in variable record format, 11.1.2
with a derived subtype, 11.1.4
with user-defined constructors, 11.1.6
datafiles containing tabs
SQL*Loader, 10.2.1
external table data
skipping records, 14.2.15
specifying conditions, 14.2.8, 14.2.20
LOBs, 11.4
nested column objects, 11.1.3
object tables, 11.2
object tables with a subtype, 11.2.1
REF columns, 11.3
subpartitioned tables, 12.3.2
tables, 12.3.2
LOB data
in delimited fields, 11.4.1.2
in length-value pair fields, 11.4.1.3
in predetermined size fields, 11.4.1.1
loading with SQL*Loader, 11.4
no compression during export, 21.5.2
size of read buffer, 8.2.19
types supported by SQL*Loader, 7.10.3, 11.4
LOB data types, 7.5
LOBFILEs, 7.5, 11.4, 11.4.2
log files
after a discontinued load, 9.11.4
Export, 21.5.15, 21.7.1
Import, 21.7.1, 22.7.18
specifying for SQL*Loader, 8.2.14
SQL*Loader, 7.8
LOG parameter
Export utility, 21.5.15
Import utility, 22.7.18
SQL*Loader command line, 8.2.14
LOGFILE parameter
Data Pump Export utility, 2.4.25
Data Pump Import utility, 3.4.19
logical records
consolidating multiple physical records using SQL*Loader, 9.12
LogMiner utility
accessing redo data of interest, 19.5
adjusting redo log file list, 19.8
analyzing output, 19.5.1
configuration, 19.2.1
considerations for reapplying DDL statements, 19.7
current log file list
stored information about, 19.10.1
DBMS_LOGMNR PL/SQL procedure and, 19.2.2
DBMS_LOGMNR_D PL/SQL procedure and, 19.2.2
DBMS_LOGMNR_D.ADD_LOGFILES PL/SQL procedure and, 19.2.2
DBMS_LOGMNR_D.BUILD PL/SQL procedure and, 19.2.2
DBMS_LOGMNR_D.END_LOGMNR PL/SQL procedure and, 19.2.2
DBMS_LOGMNR.START_LOGMNR PL/SQL procedure and, 19.2.2
DDL tracking
time or SCN ranges, 19.9.6
determining redo log files being analyzed, 19.3.2
dictionary
purpose of, 19.2.1
dictionary extracted to flat file
stored information about, 19.10
dictionary options, 19.3.1
flat file and, 19.3.1
online catalog and, 19.3.1
redo log files and, 19.3.1
ending a session, 19.11.6
executing reconstructed SQL, 19.6.5
extracting data values from redo logs, 19.5.2
filtering data by SCN, 19.6.4
filtering data by time, 19.6.3
formatting returned data, 19.6.6
graphical user interface, 19
levels of supplemental logging, 19.9
LogMiner dictionary defined, 19.2.1
mining a subset of data in redo log files, 19.8
mining database definition for, 19.2.1
operations overview, 19.2.2
parameters
stored information about, 19.10
redo log files
on a remote database, 19.8
stored information about, 19.10
requirements for dictionary, 19.2.1.2
requirements for redo log files, 19.2.1.2
requirements for source and mining databases, 19.2.1.2
restrictions with XMLType data, 19.5.3.1
sample configuration, 19.2.1.1
showing committed transactions only, 19.6.1
skipping corruptions, 19.6.2
source database definition for, 19.2.1
specifying redo log files to mine, 19.3.2
automatically, 19.3.2
manually, 19.3.2
specifying redo logs for analysis, 19.11.3
starting, 19.4, 19.11.4
starting multiple times within a session, 19.8
steps for extracting dictionary to a flat file, 19.3.1.3
steps for extracting dictionary to redo log files, 19.3.1.2
steps for using dictionary in online catalog, 19.3.1.1
steps in a typical session, 19.11
supplemental log groups, 19.9
conditional, 19.9
unconditional, 19.9
supplemental logging, 19.9
database level, 19.9.1
database-level identification keys, 19.9.1.2
disabling database-level, 19.9.2
interactions with DDL tracking, 19.9.5
log groups, 19.9
minimal, 19.9.1.1
stored information about, 19.10
table-level identification keys, 19.9.3.1
table-level log groups, 19.9.3.2
user-defined log groups, 19.9.3.3
support for transparent data encryption, 19.5
supported database versions, 19.13.3
supported datatypes, 19.13.1
supported redo log file versions, 19.13.3
suppressing delimiters in SQL_REDO and SQL_UNDO, 19.6.5
table-level supplemental logging, 19.9.3
tracking DDL statements, 19.9.4
requirements, 19.9.4
unsupported datatypes, 19.13.2
using the online catalog, 19.3.1.1
using to analyze redo log files, 19
V$DATABASE view, 19.10
V$LOGMNR_CONTENTS view, 19.2.2, 19.5.1, 19.6
V$LOGMNR_DICTIONARY view, 19.10
V$LOGMNR_LOGS view, 19.10
querying, 19.10.1
V$LOGMNR_PARAMETERS view, 19.9.6, 19.10
views, 19.10
LogMiner Viewer, 19
LONG data
exporting, 21.14.2
importing, 22.16.11

M

master tables
Oracle Data Pump API, 1.4.2
snapshots
original Import, 22.18.1
materialized views, 22.18
media recovery
direct path load, 12.4.6.1
Metadata API
enhancing performance, 20.7
retrieving collections, 20.5
using to retrieve object metadata, 20.3
missing data columns
SQL*Loader, 9.13.6
multibyte character sets
blanks with SQL*Loader, 10.5.1
SQL*Loader, 9.10.1
multiple-column indexes
SQL*Loader, 12.5.2.3
multiple-CPU systems
optimizing direct path loads, 12.6
multiple-table load
generating unique sequence numbers using SQL*Loader, 10.13.7
SQL*Loader control file specification, 9.15
multithreading
on multiple-CPU systems, 12.6
MULTITHREADING parameter
SQL*Loader command line, 8.2.15

N

named pipes
external table loads, 7.9.3
native datatypes
conflicting length specifications
SQL*Loader, 10.4.2.9
NCLOBs
loading with SQL*Loader, 11.4
nested column objects
loading, 11.1.3
nested tables
exporting, 21.14.9
consistency and, 21.5.3
importing, 22.16.3
NETWORK_LINK parameter
Data Pump Export utility, 2.4.27
Data Pump Import utility, 3.4.22
networks
Export and Import, 21.11, 22.13
NLS_LANG environment variable, 21.12.2, 22.14.2
with Export and Import, 21.12.3, 22.14.3
NO_INDEX_ERRORS parameter
SQL*Loader command line, 8.2.16
NOLOGFILE parameter
Data Pump Export utility, 2.4.28
Data Pump Import utility, 3.4.23
nonrecoverable error messages
Export, 21.7.3
Import, 21.7.3
nonscalar datatypes, 11.1.5
NOT NULL constraint
load method, 12.3.8
null data
missing columns at end of record during load, 9.13.6
unspecified columns and SQL*Loader, 10.3
NULL values
objects, 11.1.5
NULLIF clause
SQL*Loader, 10.5, 10.9
NULLIF...BLANKS clause
SQL*Loader, 10.5.1
nulls
atomic, 11.1.5.2
attribute, 11.1.5.1
NUMBER datatype
SQL*Loader, 10.4.3, 10.4.4
numeric EXTERNAL datatypes
delimited form and SQL*Loader, 10.4.5
determining length, 10.4.7

O

object identifiers, 11.2
importing, 22.16.1
object names
SQL*Loader, 9.3
object tables
loading, 11.2
with a subtype
loading, 11.2.1
object type definitions
exporting, 21.14.8
object types supported by SQL*Loader, 7.10.1
OBJECT_CONSISTENT parameter
Export utility, 21.5.16
objects, 7.10
creation errors, 22.10.2.1
ignoring existing objects during import, 22.7.15
import creation errors, 22.7.15
loading nested column objects, 11.1.3
NULL values, 11.1.5
stream record format, 11.1.1
variable record format, 11.1.2
offline locally managed tablespaces
exporting, 21.14.4
OID
See object identifiers
online help
Export and Import, 21.3.5
opaque_format_spec, 13.1.2, 14.1, 15.1
operating systems
moving data to different systems using SQL*Loader, 10.7
OPTIMAL storage parameter
used with Export/Import, 22.20.1
optimizer statistics, 22.24
optimizing
direct path loads, 12.5
SQL*Loader input file processing, 9.7
OPTIONALLY ENCLOSED BY clause
SQL*Loader, 10.10.2.2
OPTIONS parameter
for parallel loads, 9.13.3
SQL*Loader utility, 9.2.1
Oracle Advanced Queuing
See Advanced Queuing
Oracle Automatic Storage Management (ASM)
Data Pump and, 1.6.2.2
Oracle Data Pump
direct path loads
restrictions, 1.2.2
master table, 1.4.2
tuning performance, 5.2
Oracle Data Pump API, 6
client interface, 6.1
job states, 6.1.1
monitoring job progress, 1.5.1
ORACLE_DATAPUMP access driver
effect of SQL ENCRYPT clause on, 15.1.6
reserved words, 15, 15.7
ORACLE_LOADER access driver
reserved words, 14, 14.9
OWNER parameter
Export utility, 21.5.17

P

packages
creating, 16.8.2
padding of literal strings
SQL*Loader, 10.5.2
parallel loads, 12.9
restrictions on direct path, 12.9.4
when using PREPROCESSOR clause, 14.2.5.1
PARALLEL parameter
Data Pump Export utility
command-line interface, 2.4.29
interactive-command mode, 2.5.7
Data Pump Import utility
command-line mode, 3.4.24
interactive-command mode, 3.5.5
SQL*Loader command line, 8.2.17
parameter files
Export, 21.5.18
Export and Import
comments in, 21.3.3, 22.5.2
maximum size, 21.3.3, 22.5.2
Import, 22.7.19
SQL*Loader, 8.2.18
PARFILE parameter
Data Pump Export utility, 2.4.30
Data Pump Import utility, 3.4.25
Export command line, 21.5.18
Import command line, 22.7.19
SQL*Loader command line, 8.2.18
PARTITION_OPTIONS parameter
Data Pump Import utility, 3.4.26
partitioned loads
concurrent conventional path loads, 12.8.4
SQL*Loader, 12.8.4
partitioned object support in SQL*Loader, 7.11
partitioned tables
export consistency and, 21.5.3
exporting, 21.4.1
importing, 22.8.1, 22.11.1
loading, 12.3.2
partitioning a database migration, 21.17, 22.25
advantages of, 21.17, 22.25
disadvantages of, 21.17, 22.25
procedure during export, 21.17.3, 22.25.3
partition-level Export, 21.4.1
example session, 21.6.4
partition-level Import, 22.11
specifying, 21.5.26
pattern matching
table names during import, 22.7.30
performance
improving when using integrity constraints, 12.8.4
optimizing for direct path loads, 12.5
optimizing reading of SQL*Loader data files, 9.7
tuning original Import, 22.26
performance tuning
Oracle Data Pump, 5.2
PIECED parameter
SQL*Loader, 12.4.7.1
POSITION parameter
using with data containing tabs, 10.2.1
with multiple SQL*Loader INTO TABLE clauses, 9.15.2.1, 10.2, 10.2.2
predetermined size fields
SQL*Loader, 10.10.2.1
predetermined size LOBs, 11.4.2.2.2
preprocessing data for external tables, 14.2.5
effects of parallel processing, 14.2.5.1
prerequisites
SQL*Loader, 12.1
PRESERVE parameter, 9.12.2
preserving
whitespace, 10.11
PRIMARY KEY constraints
effect on direct path load, 12.9.8
primary key OIDs
example, 11.2
primary key REF columns, 11.3.3
privileges
EXEMPT ACCESS POLICY
effect on direct path export, 21.10.1
required for Export and Import, 21.2.1, 22.2.1
required for SQL*Loader, 12.1
problem
fault diagnosability infrastructure, 16.2
problem key
fault diagnosability infrastructure, 16.2
PURGE command, ADRCI utility, 16.9.6

Q

QUERY parameter
Data Pump Export utility, 2.4.31
Data Pump Import utility, 3.4.27
Export utility, 21.5.19
restrictions, 21.5.19.1
QUIT command, ADRCI utility, 16.9.7
quotation marks
escape characters and, 9.3.3.2
file names and, 9.3.3.1
SQL strings and, 9.3.2
table names and, 2.4.39, 3.4.42, 21.5.26.1, 22.7.30.1
usage in Data Pump Export, 2.4
usage in Data Pump Import, 3.4
use with database object names, 9.3.1

R

read-consistent export, 21.5.3
read-only tablespaces
Import, 22.21
READSIZE parameter
SQL*Loader command line, 8.2.19
effect on LOBs, 8.2.19
maximum size, 8.2.19
RECNUM parameter
use with SQL*Loader SKIP parameter, 10.13.4
RECORDLENGTH parameter
Export utility, 21.5.20
Import utility, 22.7.20
records
consolidating into a single logical record
SQL*Loader, 9.12
discarded by SQL*Loader, 7.7, 9.9
DISCARDMAX command-line parameter, 8.2.9
distinguishing different formats for SQL*Loader, 9.15.2
extracting multiple logical records using SQL*Loader, 9.15
fixed format, 7.4.1
missing data columns during load, 9.13.6
rejected by SQL*Loader, 7.7, 7.7.1.1, 7.7.1.2, 9.8
setting column to record number with SQL*Loader, 10.13.4
specifying how to load, 8.2.13
specifying length for export, 21.5.20
specifying length for import, 22.7.20
stream record format, 7.4.3
recovery
direct path load
SQL*Loader, 12.4.6
replacing rows, 9.13.2.2
redo log file
LogMiner utility
versions supported, 19.13.3
redo log files
analyzing, 19
requirements for LogMiner utility, 19.2.1.2
specifying for the LogMiner utility, 19.3.2
redo logs
direct path load, 12.4.6.1
instance and media recovery
SQL*Loader, 12.4.6.1
minimizing use during direct path loads, 12.5.4
saving space
direct path load, 12.5.4.2
REF columns, 11.3
loading, 11.3
primary key, 11.3.3
system-generated, 11.3.2
REF data
importing, 22.16.4
referential integrity constraints
disabling for import, 22.3.2
SQL*Loader, 12.8
refresh error
snapshots
Import, 22.18.1
reject files
specifying for SQL*Loader, 9.8
rejected records
SQL*Loader, 7.7, 9.8
relative field positioning
where a field starts and SQL*Loader, 10.10.3
with multiple SQL*Loader INTO TABLE clauses, 9.15.1.1
REMAP_DATA parameter
Data Pump Export utility, 2.4.32
Data Pump Import utility, 3.4.28
REMAP_DATAFILE parameter
Data Pump Import utility, 3.4.29
REMAP_SCHEMA parameter
Data Pump Import utility, 3.4.30
REMAP_TABLE parameter
Data Pump Import utility, 3.4.31
REMAP_TABLESPACE parameter
Data Pump Import utility, 3.4.32
remote operation
Export/Import, 21.11.2, 22.13
REPLACE table
replacing a table using SQL*Loader, 9.13.2.2.2
reserved words
external tables, 13.3, 14.8, 15.6
ORACLE_DATAPUMP access driver, 15, 15.7
ORACLE_LOADER access driver, 14, 14.9
SQL*Loader, 7.3
resource consumption
controlling in Data Pump Export utility, 5.2.1
controlling in Data Pump Import utility, 5.2.1
resource errors
Import, 22.10.2.3
RESOURCE role, 22.2.2.1
restrictions
importing into another user’s schema, 22.2.2.3
table names in Export parameter file, 21.5.26.1
table names in Import parameter file, 2.4.39, 3.4.42, 22.7.30.1
RESUMABLE parameter
Export utility, 21.5.21
Import utility, 22.7.21
SQL*Loader utility, 8.2.20
resumable space allocation
enabling and disabling, 8.2.20, 21.5.21, 22.7.21
RESUMABLE_NAME parameter
Export utility, 21.5.22
Import utility, 22.7.22
SQL*Loader utility, 8.2.21
RESUMABLE_TIMEOUT parameter
Export utility, 21.5.23
Import utility, 22.7.23
SQL*Loader utility, 8.2.22
retrieving object metadata
using Metadata API, 20.3
REUSE_DATAFILES parameter
Data Pump Import utility, 3.4.33
REUSE_DUMPFILES parameter
Data Pump Export utility, 2.4.33
roles
DATAPUMP_EXP_FULL_DATABASE, 1.3
DATAPUMP_IMP_FULL_DATABASE, 1.3
EXP_FULL_DATABASE, 21.2.1
IMP_FULL_DATABASE, 22.2.1
RESOURCE, 22.2.2.1
rollback segments
effects of CONSISTENT Export parameter, 21.5.3
row errors
Import, 22.10.1.1
ROWID columns
loading with SQL*Loader, 12.1.1
rows
choosing which to load using SQL*Loader, 9.13.4
exporting, 21.5.24
specifying for import, 22.7.24
specifying number to insert before save
SQL*Loader, 12.4.5.1
updates to existing rows with SQL*Loader, 9.13.2.2.3
ROWS parameter
Export utility, 21.5.24
Import utility, 22.7.24
performance issues
SQL*Loader, 12.5.3
SQL*Loader command line, 8.2.23
using to specify when data saves occur, 12.4.5.1
RUN command, ADRCI utility, 16.9.8

S

SAMPLE parameter
Data Pump Export utility, 2.4.34
schema mode export
Data Pump Export utility, 2.2.2.2
schemas
specifying for Export, 21.5.26
SCHEMAS parameter
Data Pump Export utility, 2.4.35
Data Pump Import utility, 3.4.34
scientific notation for FLOAT EXTERNAL, 10.4.2.5
script files
running before Export and Import, 21.2.1, 22.2.1
SDFs
See secondary datafiles
secondary datafiles, 7.5, 11.6.2
SecureFiles
encryption during Data Pump export, 2.4.10
SecureFiles LOBs
export considerations, 1.8
security considerations
direct path export, 21.10.1
segments
temporary
FILE parameter in SQL*Loader, 12.9.6.1
SELECT command, ADRCI utility, 16.9.9
functions, 16.9.9
sequence numbers
cached, 21.14.1
exporting, 21.14.1
for multiple tables and SQL*Loader, 10.13.7
generated by SQL*Loader SEQUENCE clause, 10.13.6
generated, not read and SQL*Loader, 10.3
SERVICE_NAME parameter
Data Pump Export utility, 2.4.36
Data Pump Import utility, 3.4.35
SET BASE command, ADRCI utility, 16.9.10
SET BROWSER command, ADRCI utility, 16.9.11
SET CONTROL command, ADRCI utility, 16.9.12
SET ECHO command, ADRCI utility, 16.9.13
SET EDITOR command, ADRCI utility, 16.9.14
SET HOMEPATH command, ADRCI utility, 16.9.15
SET TERMOUT command, ADRCI utility, 16.9.16
short records with missing data
SQL*Loader, 9.13.6
SHOW ALERT command, ADRCI utility, 16.9.17
SHOW BASE command, ADRCI utility, 16.9.18
SHOW CONTROL command, ADRCI utility, 16.9.19
SHOW HM_RUN command, ADRCI utility, 16.9.20
SHOW HOMEPATH command, ADRCI utility, 16.9.21
SHOW HOMES command, ADRCI utility, 16.9.22
SHOW INCDIR command, ADRCI utility, 16.9.23
SHOW INCIDENT command, ADRCI utility, 16.9.24
SHOW parameter
Import utility, 22.7.25
SHOW PROBLEM command, ADRCI utility, 16.9.25
SHOW REPORT command, ADRCI utility, 16.9.26
SHOW TRACEFILE command, ADRCI utility, 16.9.27
SILENT parameter
SQL*Loader command line, 8.2.24
single-byte character sets
Export and Import, 21.12.3, 22.14.3
SINGLEROW parameter, 9.14.2, 12.7
single-table loads
continuing, 9.11.5
SKIP parameter
effect on SQL*Loader RECNUM specification, 10.13.4
SQL*Loader command line, 8.2.25
SKIP_INDEX_MAINTENANCE parameter
SQL*Loader command line, 8.2.26, 12.7
SKIP_UNUSABLE_INDEXES parameter
Data Pump Import utility, 3.4.36
Import utility, 22.7.26
SQL*Loader command line, 8.2.27, 12.7
skipping index maintenance, 8.2.26, 12.7
skipping unusable indexes, 8.2.27, 12.7
snapshot log
Import, 22.18.1
snapshots, 22.18.2
importing, 22.18
master table
Import, 22.18.1
restoring dropped
Import, 22.18.2.1
SORTED INDEXES clause
direct path loads, 9.14.1
SQL*Loader, 12.5.2.1
sorting
multiple-column indexes
SQL*Loader, 12.5.2.3
optimum sort order
SQL*Loader, 12.5.2.4
presorting in direct path load, 12.5.2
SORTED INDEXES clause
SQL*Loader, 12.5.2.1
SOURCE_EDITION parameter
Data Pump Export utility, 2.4.37
Data Pump Import utility, 3.4.37
SPOOL command, ADRCI utility, 16.9.28
SQL operators
applying to fields, 10.12
SQL strings
applying SQL operators to fields, 10.12
quotation marks and, 9.3.2
SQL*Loader
appending rows to tables, 9.13.2.2.1
BAD command-line parameter, 8.2.1
bad file, 8.2.1
BADFILE parameter, 9.8
bind arrays and performance, 9.16.2
BINDSIZE command-line parameter, 8.2.2, 9.16.3
choosing which rows to load, 9.13.4
COLUMNARRAYROWS command-line parameter, 8.2.3
command-line parameters, 8.1
continuing single-table loads, 9.11.5
CONTROL command-line parameter, 8.2.4
conventional path loads, 7.9.1, 12.2
DATA command-line parameter, 8.2.5
data conversion, 7.6
data definition language
syntax diagrams, A
datatype specifications, 7.6
DATE_CACHE command-line parameter, 8.2.6
determining default schema, 9.13.1.1
DIRECT command-line parameter, 12.4.2
direct path method, 7.9.2
using date cache feature to improve performance, 12.5.6
DISCARD command-line parameter, 8.2.8
discarded records, 7.7
DISCARDFILE parameter, 9.9.1
DISCARDMAX command-line parameter, 8.2.9
DISCARDMAX parameter, 9.9.6
DISCARDS parameter, 9.9.6
errors caused by tabs, 10.2.1
ERRORS command-line parameter, 8.2.10
exclusive access, 12.8.3
external table loads, 7.9.3
EXTERNAL_TABLE parameter, 8.2.11
FILE command-line parameter, 8.2.12
file names, 9.3
globalization technology, 9.10
index options, 9.14
inserting rows into tables, 9.13.2.1.1
INTO TABLE statement, 9.13
LOAD command-line parameter, 8.2.13
load methods, 12.1
loading column objects, 11.1
loading data across different platforms, 10.7
loading data contained in the control file, 10.13.1
loading object tables, 11.2
LOG command-line parameter, 8.2.14
log files, 7.8
methods of loading data, 7.9
multiple INTO TABLE statements, 9.15
MULTITHREADING command-line parameter, 8.2.15
object names, 9.3
parallel data loading, 12.9, 12.9.3, 12.10
PARFILE command-line parameter, 8.2.18
portable datatypes, 10.4.2
READSIZE command-line parameter, 8.2.19
maximum size, 8.2.19
rejected records, 7.7
replacing rows in tables, 9.13.2.2.2
required privileges, 12.1
RESUMABLE parameter, 8.2.20
RESUMABLE_NAME parameter, 8.2.21
RESUMABLE_TIMEOUT parameter, 8.2.22
ROWS command-line parameter, 8.2.23
SILENT command-line parameter, 8.2.24
SINGLEROW parameter, 9.14.2
SKIP_INDEX_MAINTENANCE command-line parameter, 8.2.26
SKIP_UNUSABLE_INDEXES command-line parameter, 8.2.27
SORTED INDEXES during direct path loads, 9.14.1
specifying columns, 10.3
specifying data files, 9.5
specifying field conditions, 10.5
specifying fields, 10.3
specifying more than one data file, 9.5.2
STREAMSIZE command-line parameter, 8.2.28
suppressing messages, 8.2.24
USERID command-line parameter, 8.2.29
SQL*Loader control files
guidelines when creating, 7.3
SQL*Loader datatypes
nonportable, 10.4.1
SQLFILE parameter
Data Pump Import utility, 3.4.38
START_JOB parameter
Data Pump Export utility
interactive-command mode, 2.5.8
Data Pump Import utility
interactive-command mode, 3.5.6
starting
LogMiner utility, 19.4
statistics
analyzer, 22.24
database optimizer
specifying for Export, 21.5.25
optimizer, 22.24
specifying for Import, 22.7.27
STATISTICS parameter
Export utility, 21.5.25
Import utility, 22.7.27
STATUS parameter
Data Pump Export utility, 2.4.38
interactive-command mode, 2.5.9
Data Pump Import utility, 3.4.39
interactive-command mode, 3.5.7
STOP_JOB parameter
Data Pump Export utility
interactive-command mode, 2.5.10
Data Pump Import utility
interactive-command mode, 3.5.8
STORAGE parameter, 12.9.6.1.2
storage parameters
estimating export requirements, 21.2.2
OPTIMAL parameter, 22.20.1
overriding
Import, 22.20.3
preallocating
direct path load, 12.5.1
temporary for a direct path load, 12.4.3.2
using with Export/Import, 22.20
stored functions
importing, 22.16.7
effect of COMPILE parameter, 22.16.7
stored package, 22.16.7
stored packages
importing, 22.16.7
stored procedures
direct path load, 12.8.2.6
importing, 22.16.7
effect of COMPILE parameter, 22.16.7
stream buffer
specifying size for direct path, 12.5.5
stream record format, 7.4.3
loading column objects in, 11.1.1
Streams environment in Data Pump
setting buffer cache size, 5.3.1
STREAMS_CONFIGURATION parameter
Data Pump Import utility, 3.4.40
Import utility, 22.7.28
STREAMS_INSTANTIATION parameter
Import utility, 22.7.29
STREAMSIZE parameter
SQL*Loader command line, 8.2.28
string comparisons
SQL*Loader, 10.5.2
subpartitioned tables
loading, 12.3.2
subtypes
loading multiple, 9.15.3
supplemental logging
LogMiner utility, 19.9
database-level identification keys, 19.9.1.2
log groups, 19.9
table-level, 19.9.3
table-level identification keys, 19.9.3.1
table-level log groups, 19.9.3.2
See also LogMiner utility
synonyms
exporting, 21.14.11
syntax diagrams
Data Pump Export, 2.7
Data Pump Import, 3.7
SQL*Loader, A
SYSDATE parameter
SQL*Loader, 10.13.5
system objects
importing, 22.2.2.4
system triggers
effect on import, 22.4
testing, 22.4
system-generated OID REF columns, 11.3.2

T

table names
preserving case sensitivity, 21.5.26.1
TABLE_EXISTS_ACTION parameter
Data Pump Import utility, 3.4.41
table-level Export, 21.4.1
table-level Import, 22.11
table-mode Export
Data Pump Export utility, 2.2.2.3
specifying, 21.5.26
table-mode Import
examples, 22.8.1
tables
Advanced Queuing
exporting, 21.14.10
importing, 22.16.10
appending rows with SQL*Loader, 9.13.2.2.1
defining before Import, 22.3.1
definitions
creating before Import, 22.3.1
exclusive access during direct path loads
SQL*Loader, 12.8.3
external, 13
importing, 22.7.30
insert triggers
direct path load in SQL*Loader, 12.8.2
inserting rows using SQL*Loader, 9.13.2.1.1
loading data into more than one table using SQL*Loader, 9.15
loading object tables, 11.2
maintaining consistency during Export, 21.5.3
manually ordering for Import, 22.3.3
master table
Import, 22.18.1
name restrictions
Export, 21.5.26.1
Import, 2.4.39, 3.4.42, 22.7.30, 22.7.30.1
nested
exporting, 21.14.9
importing, 22.16.3
objects
order of import, 22.1.1
partitioned, 21.4.1
replacing rows using SQL*Loader, 9.13.2.2.2
specifying for export, 21.5.26
specifying table-mode Export, 21.5.26
SQL*Loader method for individual tables, 9.13.2
truncating
SQL*Loader, 9.13.2.2.4
updating existing rows using SQL*Loader, 9.13.2.2.3
See also external tables
TABLES parameter
Data Pump Export utility, 2.4.39
Data Pump Import utility, 3.4.42
Export utility, 21.5.26
Import utility, 22.7.30
tablespace mode Export
Data Pump Export utility, 2.2.2.4
tablespaces
dropping during import, 22.22
exporting a set of, 21.15, 22.19
metadata
transporting, 22.7.34
read-only
Import, 22.21
reorganizing
Import, 22.23
TABLESPACES parameter
Data Pump Export utility, 2.4.40
Data Pump Import utility, 3.4.43
Export utility, 21.5.27
Import utility, 22.7.31
tabs
loading datafiles containing tabs, 10.2.1
trimming, 10.10
whitespace, 10.10
TARGET_EDITION parameter
Data Pump Import utility, 3.4.44
temporary segments, 12.9.6.1
FILE parameter
SQL*Loader, 12.9.6.1
temporary storage in a direct path load, 12.4.3.2
TERMINATED BY clause
with OPTIONALLY ENCLOSED BY, 10.10.2.2
terminated fields
specified with a delimiter, 10.10.2.2
time zone file versions
Data Pump Export, 2.2.2.5
Data Pump Import, 3.2.2.5
TOID_NOVALIDATE parameter
Import utility, 22.7.32
TOUSER parameter
Import utility, 22.7.33
trace files
viewing with ADRCI, 16.6
trailing blanks
loading with delimiters, 10.4.5.4
TRAILING NULLCOLS parameter
SQL*Loader utility, 9.1, 9.13.6.1
trailing whitespace
trimming, 10.10.5
TRANSFORM parameter
Data Pump Import utility, 3.4.45
transparent data encryption
as handled by Data Pump Export, 2.4.13
as handled by Data Pump Import, 3.4.9
LogMiner support, 19.5
TRANSPORT_DATAFILES parameter
Data Pump Import utility, 3.4.46
TRANSPORT_FULL_CHECK parameter
Data Pump Export utility, 2.4.41
Data Pump Import utility, 3.4.47
TRANSPORT_TABLESPACE parameter
Export utility, 21.5.28
Import utility, 22.7.34
TRANSPORT_TABLESPACES parameter
Data Pump Export utility, 2.4.42
Data Pump Import utility, 3.4.48
transportable option
used during table-mode export, 2.4.39
TRANSPORTABLE parameter
Data Pump Export utility, 2.4.43
Data Pump Import utility, 3.4.49
transportable tablespaces, 21.15, 22.19
transportable-tablespace mode Export
Data Pump Export utility, 2.2.2.5
triggers
database insert, 12.8.2
logon
effect in SQL*Loader, 9.13.1.1
permanently disabled, 12.8.3
replacing with integrity constraints, 12.8.2.1
system
testing, 22.4
update
SQL*Loader, 12.8.2.4
TRIGGERS parameter
Export utility, 21.5.29
trimming
summary, 10.10
trailing whitespace
SQL*Loader, 10.10.5
TTS_FULL_CHECK parameter
Export utility, 21.5.30
TTS_OWNERS parameter
Import utility, 22.7.35

U

UNIQUE KEY constraints
effect on direct path load, 12.9.8
unique values
generating with SQL*Loader, 10.13.6
unloading entire database
Data Pump Export utility, 2.2.2.1
UNRECOVERABLE clause
SQL*Loader, 12.5.4.2
unsorted data
direct path load
SQL*Loader, 12.5.2.2
user mode export
specifying, 21.5.17
USER_SEGMENTS view
Export and, 21.2.2
user-defined constructors, 11.1.6
loading column objects with, 11.1.6
USERID parameter
Export utility, 21.5.31
Import utility, 22.7.36
SQL*Loader command line, 8.2.29

V

V$DATABASE view, 19.10
V$LOGMNR_CONTENTS view, 19.5.1
formatting information returned to, 19.6
impact of querying, 19.5.1
information within, 19.5
limiting information returned to, 19.6
LogMiner utility, 19.2.2
requirements for querying, 19.4, 19.5.1
V$LOGMNR_DICTIONARY view, 19.10
V$LOGMNR_LOGS view, 19.3.2, 19.10
querying, 19.10.1
V$LOGMNR_PARAMETERS view, 19.9.6, 19.10
V$SESSION_LONGOPS view
monitoring Data Pump jobs with, 1.5.1
VARCHAR2 datatype
SQL*Loader, 10.4.3
VARCHARC datatype
SQL*Loader, 10.4.2.7
variable records, 7.4.2
format, 11.1.2
variable-length records
external tables, 14.2.2
VARRAWC datatype, 10.4.2.8
VARRAY columns
memory issues when loading, 11.8.1
VERSION parameter
Data Pump Export utility, 2.4.44
Data Pump Import utility, 3.4.50
viewing
trace files with ADRCI, 16.6
VOLSIZE parameter
Export utility, 21.5.32
Import utility, 22.7.37

W

warning messages
Export, 21.7.2
Import, 21.7.2
WHEN clause
SQL*Loader, 9.13.4, 10.5
SQL*Loader discards resulting from, 9.9.4
whitespace
included in a field, 10.10.4
leading, 10.10.1
preserving, 10.11
terminating a field, 10.10.4.1
trimming, 10.10

X

XML columns
loading with SQL*Loader, 11.4
treatment by SQL*Loader, 11.4
XML type tables
identifying in SQL*Loader, 9.4
XMLTYPE clause
in SQL*Loader control file, 9.4

3 Data Pump Import

This chapter describes the Oracle Data Pump Import utility (impdp). The following topics are discussed:

What Is Data Pump Import?

Data Pump Import (hereinafter referred to as Import for ease of reading) is a utility for loading an export dump file set into a target system. The dump file set is made up of one or more disk files that contain table data, database object metadata, and control information. The files are written in a proprietary, binary format. During an import operation, the Data Pump Import utility uses these files to locate each database object in the dump file set.

Import can also be used to load a target database directly from a source database with no intervening dump files. This is known as a network import.
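For example, a schema-mode network import might be run as follows. This is an illustrative sketch only: the database link source_db_link, the directory object dpump_dir1 (used here for the log file), and the hr schema are assumed names that must already exist in your environment.

> impdp hr DIRECTORY=dpump_dir1 NETWORK_LINK=source_db_link SCHEMAS=hr LOGFILE=hr_net_imp.log

Because no dump files are involved, no DUMPFILE parameter is specified; Import prompts for the hr password and moves the data directly over the database link.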

Data Pump Import enables you to specify whether a job should move a subset of the data and metadata from the dump file set or the source database (in the case of a network import), as determined by the import mode. This is done using data filters and metadata filters, which are implemented through Import parameters. See "Filtering During Import Operations".
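As an illustration (the dump file expfull.dmp and the directory object dpump_dir1 are example names only), a metadata filter can keep whole object types out of the import:

> impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp EXCLUDE=GRANT EXCLUDE=INDEX

Here all object grants and index definitions in the dump file set are skipped; the corresponding INCLUDE parameter works the other way, importing only the named object types.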

To see some examples of the various ways in which you can use Import, refer to "Examples of Using Data Pump Import".

Invoking Data Pump Import

The Data Pump Import utility is invoked using the impdp command. The characteristics of the import operation are determined by the import parameters you specify. These parameters can be specified either on the command line or in a parameter file.
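For instance, the same table-mode import could be expressed directly on the command line or through a parameter file (hr.dmp, the directory object dpump_dir1, and the parameter file name emp_imp.par are illustrative names; Import prompts for the password):

> impdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp TABLES=employees LOGFILE=emp_imp.log

> impdp hr PARFILE=emp_imp.par

where emp_imp.par contains:

DIRECTORY=dpump_dir1
DUMPFILE=hr.dmp
TABLES=employees
LOGFILE=emp_imp.log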


Note:

Do not invoke Import as SYSDBA, except at the request of Oracle technical support. SYSDBA is used internally and has specialized functions; its behavior is not the same as for general users.


Note:

Be aware that if you are performing a Data Pump Import into a table or tablespace created with the NOLOGGING clause enabled, then a redo log file may still be generated. The redo that is generated in such a case is generally for maintenance of the master table or related to underlying recursive space transactions, data dictionary changes, and index maintenance for indices on the table that require logging.

The following sections contain more information about invoking Import:

Data Pump Import Interfaces

You can interact with Data Pump Import by using a command line, a parameter file, or an interactive-command mode.

  • Command-Line Interface: Enables you to specify the Import parameters directly on the command line. For a complete description of the parameters available in the command-line interface, see "Parameters Available in Import's Command-Line Mode".

  • Parameter File Interface: Enables you to specify command-line parameters in a parameter file. The only exception is the PARFILE parameter because parameter files cannot be nested. The use of parameter files is recommended if you are using parameters whose values require quotation marks. See "Use of Quotation Marks On the Data Pump Command Line".

  • Interactive-Command Interface: Stops logging to the terminal and displays the Import prompt, from which you can enter various commands, some of which are specific to interactive-command mode. This mode is enabled by pressing Ctrl+C during an import operation started with the command-line interface or the parameter file interface. Interactive-command mode is also enabled when you attach to an executing or stopped job.

    For a complete description of the commands available in interactive-command mode, see "Commands Available in Import's Interactive-Command Mode".

Data Pump Import Modes

The import mode determines what is imported. The specified mode applies to the source of the operation, either a dump file set or another database if the NETWORK_LINK parameter is specified.

When the source of the import operation is a dump file set, specifying a mode is optional. If no mode is specified, then Import attempts to load the entire dump file set in the mode in which the export operation was run.

The mode is specified on the command line, using the appropriate parameter. The available modes are described in the following sections:


Note:

When you import a dump file that was created by a full-mode export, the import operation attempts to copy the password for the SYS account from the source database. This sometimes fails (for example, if the password is in a shared password file). If it does fail, then after the import completes, you must set the password for the SYS account at the target database to a password of your choice.
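
For example, the following is a minimal sketch of resetting the SYS password on the target database after such an import (the value new_password is only a placeholder):

SQL> ALTER USER sys IDENTIFIED BY new_password;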

Full Import Mode

A full import is specified using the FULL parameter. In full import mode, the entire content of the source (dump file set or another database) is loaded into the target database. This is the default for file-based imports. You must have the DATAPUMP_IMP_FULL_DATABASE role if the source is another database.

Cross-schema references are not imported for non-privileged users. For example, a trigger defined on a table within the importing user's schema, but residing in another user's schema, is not imported.

The DATAPUMP_IMP_FULL_DATABASE role is required on the target database and the DATAPUMP_EXP_FULL_DATABASE role is required on the source database if the NETWORK_LINK parameter is used for a full import.


See Also:

"FULL"

Schema Mode

A schema import is specified using the SCHEMAS parameter. In a schema import, only objects owned by the specified schemas are loaded. The source can be a full, table, tablespace, or schema-mode export dump file set or another database. If you have the DATAPUMP_IMP_FULL_DATABASE role, then a list of schemas can be specified and the schemas themselves (including system privilege grants) are created in the database in addition to the objects contained within those schemas.

Cross-schema references are not imported for non-privileged users unless the other schema is remapped to the current schema. For example, a trigger defined on a table within the importing user's schema, but residing in another user's schema, is not imported.


See Also:

"SCHEMAS"

Table Mode

A table-mode import is specified using the TABLES parameter. In table mode, only the specified set of tables, partitions, and their dependent objects are loaded. The source can be a full, schema, tablespace, or table-mode export dump file set or another database. You must have the DATAPUMP_IMP_FULL_DATABASE role to specify tables that are not in your own schema.

You can use the transportable option during a table-mode import by specifying the TRANSPORTABLE=ALWAYS parameter with the TABLES parameter. Note that this also requires use of the NETWORK_LINK parameter.

Tablespace Mode

A tablespace-mode import is specified using the TABLESPACES parameter. In tablespace mode, all objects contained within the specified set of tablespaces are loaded, along with the dependent objects. The source can be a full, schema, tablespace, or table-mode export dump file set or another database. For unprivileged users, objects not remapped to the current schema will not be processed.


See Also:

"TABLESPACES"

Transportable Tablespace Mode

A transportable tablespace import is specified using the TRANSPORT_TABLESPACES parameter. In transportable tablespace mode, the metadata from another database is loaded using a database link (specified with the NETWORK_LINK parameter). There are no dump files involved. The actual data files, specified by the TRANSPORT_DATAFILES parameter, must be made available from the source system for use in the target database, typically by copying them over to the target system.

Encrypted columns are not supported in transportable tablespace mode.

This mode requires the DATAPUMP_IMP_FULL_DATABASE role.


Note:

You cannot export transportable tablespaces and then import them into a database at a lower release level. The target database must be at the same or higher release level as the source database.

Considerations for Time Zone File Versions in Transportable Tablespace Mode

Jobs performed in transportable tablespace mode have the following requirements concerning time zone file versions:

  • If the source is Oracle Database 11g release 2 (11.2.0.2) or later and there are tables in the transportable set that use TIMESTAMP WITH TIMEZONE (TSTZ) columns, then the time zone file version on the target database must exactly match the time zone file version on the source database.

  • If the source is earlier than Oracle Database 11g release 2 (11.2.0.2), then the time zone file version must be the same on the source and target database for all transportable jobs regardless of whether the transportable set uses TSTZ columns.

If these requirements are not met, then the import job aborts before anything is imported. This is because if the import job were allowed to import the objects, there might be inconsistent results when tables with TSTZ columns were read.

To identify the time zone file version of a database, you can execute the following SQL statement:

SQL> SELECT VERSION FROM V$TIMEZONE_FILE;

See Also:


Network Considerations

You can specify a connect identifier in the connect string when you invoke the Data Pump Import utility. The connect identifier can specify a database instance that is different from the current instance identified by the current Oracle System ID (SID). The connect identifier can be an Oracle*Net connect descriptor or a net service name (usually defined in the tnsnames.ora file) that maps to a connect descriptor. Use of a connect identifier requires that you have Oracle Net Listener running (to start the default listener, enter lsnrctl start). The following is an example of this type of connection, in which inst1 is the connect identifier:

impdp hr@inst1 DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp TABLES=employees

Import then prompts you for a password:

Password: password
 

The local Import client connects to the database instance identified by the connect identifier inst1 (a net service name), and imports the data from the dump file hr.dmp to inst1.

Specifying a connect identifier when you invoke the Import utility is different from performing an import operation using the NETWORK_LINK parameter. When you start an import operation and specify a connect identifier, the local Import client connects to the database instance identified by the connect identifier and imports the data from the dump file named on the command line to that database instance.

In contrast, when you perform an import using the NETWORK_LINK parameter, the import is performed using a database link, and there is no dump file involved. (A database link is a connection between two physical database servers that allows a client to access them as one logical database.)


See Also:


Filtering During Import Operations

Data Pump Import provides data and metadata filtering capability to help you limit the type of information that is imported.

Data Filters

Data specific filtering is implemented through the QUERY and SAMPLE parameters, which specify restrictions on the table rows that are to be imported. Data filtering can also occur indirectly because of metadata filtering, which can include or exclude table objects along with any associated row data.

Each data filter can only be specified once per table and once per job. If different filters using the same name are applied to both a particular table and to the whole job, then the filter parameter supplied for the specific table takes precedence.

Metadata Filters

Data Pump Import provides much greater metadata filtering capability than was provided by the original Import utility. Metadata filtering is implemented through the EXCLUDE and INCLUDE parameters. The EXCLUDE and INCLUDE parameters are mutually exclusive.

Metadata filters identify a set of objects to be included or excluded from a Data Pump operation. For example, you could request a full import, but without Package Specifications or Package Bodies.

To use filters correctly and to get the results you expect, remember that dependent objects of an identified object are processed along with the identified object. For example, if a filter specifies that a package is to be included in an operation, then grants upon that package will also be included. Likewise, if a table is excluded by a filter, then indexes, constraints, grants, and triggers upon the table will also be excluded by the filter.

If multiple filters are specified for an object type, then an implicit AND operation is applied to them. That is, objects participating in the job must pass all of the filters applied to their object types.

The same filter name can be specified multiple times within a job.

To see a list of valid object types, query the following views: DATABASE_EXPORT_OBJECTS for full mode, SCHEMA_EXPORT_OBJECTS for schema mode, and TABLE_EXPORT_OBJECTS for table and tablespace mode. The values listed in the OBJECT_PATH column are the valid object types. Note that full object path names are determined by the export mode, not by the import mode.
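
For example, the following is a minimal sketch of listing the valid object types for a full-mode import (the LIKE filter is only an illustrative assumption):

SQL> SELECT OBJECT_PATH, COMMENTS FROM DATABASE_EXPORT_OBJECTS WHERE OBJECT_PATH LIKE '%TABLE%';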


See Also:


Parameters Available in Import's Command-Line Mode

This section describes the parameters available in the command-line mode of Data Pump Import. Be sure to read the following sections before using the Import parameters:

Many of the descriptions include an example of how to use the parameter. For background information on setting up the necessary environment to run the examples, see:

Specifying Import Parameters

For parameters that can have multiple values specified, the values can be separated by commas or by spaces. For example, you could specify TABLES=employees,jobs or TABLES=employees jobs.

For every parameter you enter, you must enter an equal sign (=) and a value. Data Pump has no other way of knowing that the previous parameter specification is complete and a new parameter specification is beginning. For example, in the following command line, even though NOLOGFILE is a valid parameter, it would be interpreted as another dump file name for the DUMPFILE parameter:

impdp DIRECTORY=dpumpdir DUMPFILE=test.dmp NOLOGFILE TABLES=employees

This would result in two dump files being created, test.dmp and nologfile.dmp.

To avoid this, specify either NOLOGFILE=YES or NOLOGFILE=NO.

Use of Quotation Marks On the Data Pump Command Line

Some operating systems treat quotation marks as special characters and will therefore not pass them to an application unless they are preceded by an escape character, such as the backslash (\). This is true both on the command line and within parameter files. Some operating systems may require an additional set of single or double quotation marks on the command line around the entire parameter value containing the special characters.

The following examples are provided to illustrate these concepts. Be aware that they may not apply to your particular operating system and that this documentation cannot anticipate the operating environments unique to each user.

Suppose you specify the TABLES parameter in a parameter file, as follows:

TABLES = \"MixedCaseTableName\"

If you were to specify that on the command line, then some operating systems would require that it be surrounded by single quotation marks, as follows:

TABLES - '\"MixedCaseTableName\"'

To avoid having to supply additional quotation marks on the command line, Oracle recommends the use of parameter files. Also, note that if you use a parameter file and the parameter value being specified does not have quotation marks as the first character in the string (for example, TABLES=scott."EmP"), then the use of escape characters may not be necessary on some systems.


See Also:


Using the Import Parameter Examples

If you try running the examples that are provided for each parameter, then be aware of the following:

If necessary, ask your DBA for help in creating these directory objects and assigning the necessary privileges and roles.
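
For example, the following is a minimal sketch of creating a directory object such as the dpump_dir1 object used throughout these examples and granting the hr user access to it (the file system path is only an assumption; substitute a path that is valid on your server):

SQL> CREATE DIRECTORY dpump_dir1 AS '/u01/app/oracle/dpump_dir1';
SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO hr;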

Syntax diagrams of these parameters are provided in "Syntax Diagrams for Data Pump Import".

Unless specifically noted, these parameters can also be specified in a parameter file.

ABORT_STEP

Default: Null

Purpose

Used to stop the job after it is initialized. This allows the master table to be queried before any data is imported.

Syntax and Description

ABORT_STEP=[n | -1]

The possible values correspond to a process order number in the master table. The result of using each number is as follows:

  • n -- If the value is zero or greater, then the import operation is started and the job is aborted at the object that is stored in the master table with the corresponding process order number.

  • -1 and the job is an import using a NETWORK_LINK -- Abort the job after setting it up but before importing any objects.

  • -1 and the job is an import that does not use NETWORK_LINK -- Abort the job after loading the master table and applying filters.

Restrictions

  • None

Example

> impdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 LOGFILE=schemas.log
DUMPFILE=expdat.dmp ABORT_STEP=-1 

ACCESS_METHOD

Default: AUTOMATIC

Purpose

Instructs Import to use a particular method to load data.

Syntax and Description

ACCESS_METHOD=[AUTOMATIC | DIRECT_PATH | EXTERNAL_TABLE | CONVENTIONAL]

The ACCESS_METHOD parameter is provided so that you can try an alternative method if the default method does not work for some reason. Oracle recommends that you use the default option (AUTOMATIC) whenever possible because it allows Data Pump to automatically select the most efficient method.

Restrictions

  • If the NETWORK_LINK parameter is also specified, then the ACCESS_METHOD parameter is ignored.

Example

> impdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 LOGFILE=schemas.log
DUMPFILE=expdat.dmp ACCESS_METHOD=CONVENTIONAL 

ATTACH

Default: current job in user's schema, if there is only one running job.

Purpose

Attaches the client session to an existing import job and automatically places you in interactive-command mode.

Syntax and Description

ATTACH [=[schema_name.]job_name]

Specify a schema_name if the schema to which you are attaching is not your own. You must have the DATAPUMP_IMP_FULL_DATABASE role to do this.

A job_name does not have to be specified if only one running job is associated with your schema and the job is active. If the job you are attaching to is stopped, then you must supply the job name. To see a list of Data Pump job names, you can query the DBA_DATAPUMP_JOBS view or the USER_DATAPUMP_JOBS view.
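
For example, the following is a minimal sketch of listing the Data Pump jobs in your own schema (JOB_NAME and STATE are columns of the USER_DATAPUMP_JOBS view):

SQL> SELECT JOB_NAME, STATE FROM USER_DATAPUMP_JOBS;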

When you are attached to the job, Import displays a description of the job and then displays the Import prompt.

Restrictions

  • When you specify the ATTACH parameter, the only other Data Pump parameter you can specify on the command line is ENCRYPTION_PASSWORD.

  • If the job you are attaching to was initially started using an encryption password, then when you attach to the job you must again enter the ENCRYPTION_PASSWORD parameter on the command line to re-specify that password. The only exception to this is if the job was initially started with the ENCRYPTION=ENCRYPTED_COLUMNS_ONLY parameter. In that case, the encryption password is not needed when attaching to the job.

  • You cannot attach to a job in another schema unless it is already running.

  • If the dump file set or master table for the job have been deleted, then the attach operation fails.

  • Altering the master table in any way can lead to unpredictable results.

Example

The following is an example of using the ATTACH parameter.

> impdp hr ATTACH=import_job

This example assumes that a job named import_job exists in the hr schema.

CLUSTER

Default: YES

Purpose

Determines whether Data Pump can use Oracle Real Application Clusters (Oracle RAC) resources and start workers on other Oracle RAC instances.

Syntax and Description

CLUSTER=[YES | NO]

To force Data Pump Import to use only the instance where the job is started and to replicate pre-Oracle Database 11g release 2 (11.2) behavior, specify CLUSTER=NO.

To specify a specific, existing service and constrain worker processes to run only on instances defined for that service, use the SERVICE_NAME parameter with the CLUSTER=YES parameter.

Use of the CLUSTER parameter may affect performance because there is some additional overhead in distributing the import job across Oracle RAC instances. For small jobs, it may be better to specify CLUSTER=NO to constrain the job to run on the instance where it is started. Jobs whose performance benefits the most from using the CLUSTER parameter are those involving large amounts of data.

Example

> impdp hr DIRECTORY=dpump_dir1 SCHEMAS=hr CLUSTER=NO PARALLEL=3 NETWORK_LINK=dbs1

This example performs a schema-mode import of the hr schema. Because CLUSTER=NO is used, the job uses only the instance where it is started. Up to 3 parallel processes can be used. The NETWORK_LINK value of dbs1 would be replaced with the name of the source database from which you were importing data. (Note that there is no dump file generated because this is a network import.)

The NETWORK_LINK parameter is simply being used as part of the example. It is not required when using the CLUSTER parameter.
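
By contrast, the following is a minimal sketch of combining CLUSTER=YES with the SERVICE_NAME parameter (the service name my_service and the expfull.dmp dump file are assumptions for illustration only):

> impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SCHEMAS=hr CLUSTER=YES SERVICE_NAME=my_service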

CONTENT

Default: ALL

Purpose

Enables you to filter what is loaded during the import operation.

Syntax and Description

CONTENT=[ALL | DATA_ONLY | METADATA_ONLY]
  • ALL loads any data and metadata contained in the source. This is the default.

  • DATA_ONLY loads only table row data into existing tables; no database objects are created.

  • METADATA_ONLY loads only database object definitions; no table row data is loaded. Be aware that if you specify CONTENT=METADATA_ONLY, then any index or table statistics imported from the dump file are locked after the import operation is complete.

Restrictions

  • The CONTENT=METADATA_ONLY parameter and value cannot be used in conjunction with the TRANSPORT_TABLESPACES (transportable-tablespace mode) parameter or the QUERY parameter.

  • The CONTENT=ALL and CONTENT=DATA_ONLY parameter and values cannot be used in conjunction with the SQLFILE parameter.

Example

The following is an example of using the CONTENT parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See "FULL".

> impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp CONTENT=METADATA_ONLY

This command will execute a full import that will load only the metadata in the expfull.dmp dump file. It executes a full import because that is the default for file-based imports in which no import mode is specified.

DATA_OPTIONS

Default: There is no default. If this parameter is not used, then the special data handling options it provides simply do not take effect.

Purpose

The DATA_OPTIONS parameter designates how certain types of data should be handled during import operations.

Syntax and Description

DATA_OPTIONS = [DISABLE_APPEND_HINT | SKIP_CONSTRAINT_ERRORS]
  • DISABLE_APPEND_HINT - Specifies that you do not want the import operation to use the APPEND hint while loading the data object. Disabling the APPEND hint can be useful if there is a small set of data objects to load that already exist in the database and some other application may be concurrently accessing one or more of the data objects.

    If DISABLE_APPEND_HINT is not set, then the default behavior is to use the APPEND hint for loading data objects.

  • SKIP_CONSTRAINT_ERRORS - affects how non-deferred constraint violations are handled while a data object (table, partition, or subpartition) is being loaded. It has no effect on the load if deferred constraint violations are encountered. Deferred constraint violations always cause the entire load to be rolled back.

    The SKIP_CONSTRAINT_ERRORS option specifies that you want the import operation to proceed even if non-deferred constraint violations are encountered. It logs any rows that cause non-deferred constraint violations, but does not stop the load for the data object experiencing the violation.

    If SKIP_CONSTRAINT_ERRORS is not set, then the default behavior is to roll back the entire load of the data object on which non-deferred constraint violations are encountered.

Restrictions

  • If DISABLE_APPEND_HINT is used, then it can take longer for data objects to load.

  • If SKIP_CONSTRAINT_ERRORS is used and if a data object has unique indexes or constraints defined on it at the time of the load, then the APPEND hint will not be used for loading that data object. Therefore, loading such data objects will take longer when the SKIP_CONSTRAINT_ERRORS option is used.

  • Even if SKIP_CONSTRAINT_ERRORS is specified, it is not used unless a data object is being loaded using the external table access method.

Example

This example shows a data-only table mode import with SKIP_CONSTRAINT_ERRORS enabled:

> impdp hr TABLES=employees CONTENT=DATA_ONLY 
DUMPFILE=dpump_dir1:table.dmp DATA_OPTIONS=skip_constraint_errors

If any non-deferred constraint violations are encountered during this import operation, then they will be logged and the import will continue on to completion.

DIRECTORY

Default: DATA_PUMP_DIR

Purpose

Specifies the default location in which the import job can find the dump file set and where it should create log and SQL files.

Syntax and Description

DIRECTORY=directory_object

The directory_object is the name of a database directory object (not the file path of an actual directory). Upon installation, privileged users have access to a default directory object named DATA_PUMP_DIR. Users with access to the default DATA_PUMP_DIR directory object do not need to use the DIRECTORY parameter at all.

A directory object specified on the DUMPFILE, LOGFILE, or SQLFILE parameter overrides any directory object that you specify for the DIRECTORY parameter. You must have Read access to the directory used for the dump file set and Write access to the directory used to create the log and SQL files.

Example

The following is an example of using the DIRECTORY parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See "FULL".

> impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp 
LOGFILE=dpump_dir2:expfull.log

This command results in the import job looking for the expfull.dmp dump file in the directory pointed to by the dpump_dir1 directory object. The dpump_dir2 directory object specified on the LOGFILE parameter overrides the DIRECTORY parameter so that the log file is written to dpump_dir2.


See Also:


DUMPFILE

Default: expdat.dmp

Purpose

Specifies the names and optionally, the directory objects of the dump file set that was created by Export.

Syntax and Description

DUMPFILE=[directory_object:]file_name [, ...]

The directory_object is optional if one has already been established by the DIRECTORY parameter. If you do supply a value here, then it must be a directory object that already exists and that you have access to. A database directory object that is specified as part of the DUMPFILE parameter overrides a value specified by the DIRECTORY parameter.

The file_name is the name of a file in the dump file set. The file names can also be templates that contain the substitution variable, %U. If %U is used, then Import examines each file that matches the template (until no match is found) to locate all files that are part of the dump file set. The %U expands to a 2-digit incrementing integer starting with 01.

Sufficient information is contained within the files for Import to locate the entire set, provided the file specifications in the DUMPFILE parameter encompass the entire set. The files are not required to have the same names, locations, or order that they had at export time.

Example

The following is an example of using the Import DUMPFILE parameter. You can create the dump files used in this example by running the example provided for the Export DUMPFILE parameter. See "DUMPFILE".

> impdp hr DIRECTORY=dpump_dir1 DUMPFILE=dpump_dir2:exp1.dmp, exp2%U.dmp

Because a directory object (dpump_dir2) is specified for the exp1.dmp dump file, the import job will look there for the file. It will also look in dpump_dir1 for dump files of the form exp2nn.dmp. The log file will be written to dpump_dir1.

ENCRYPTION_PASSWORD

Default: There is no default; the value is user-supplied.

Purpose

Specifies a password for accessing encrypted column data in the dump file set. This prevents unauthorized access to an encrypted dump file set.

Syntax and Description

ENCRYPTION_PASSWORD = password

This parameter is required on an import operation if an encryption password was specified on the export operation. The password that is specified must be the same one that was specified on the export operation.

Restrictions

  • This parameter is valid only in the Enterprise Edition of Oracle Database 11g.

  • Data Pump encryption features require that the Oracle Advanced Security option be enabled. See Oracle Database Advanced Security Administrator's Guide for information about licensing requirements for the Oracle Advanced Security option.

  • The ENCRYPTION_PASSWORD parameter is not valid if the dump file set was created using the transparent mode of encryption.

  • The ENCRYPTION_PASSWORD parameter is not valid for network import jobs.

  • Encryption attributes for all columns must match between the exported table definition and the target table. For example, suppose you have a table, EMP, and one of its columns is named EMPNO. Both of the following situations would result in an error because the encryption attribute for the EMP column in the source table would not match the encryption attribute for the EMP column in the target table:

    • The EMP table is exported with the EMPNO column being encrypted, but before importing the table you remove the encryption attribute from the EMPNO column.

    • The EMP table is exported without the EMPNO column being encrypted, but before importing the table you enable encryption on the EMPNO column.

Example

In the following example, the encryption password, 123456, must be specified because it was specified when the dpcd2be1.dmp dump file was created (see "ENCRYPTION_PASSWORD").

> impdp hr TABLES=employee_s_encrypt DIRECTORY=dpump_dir
  DUMPFILE=dpcd2be1.dmp ENCRYPTION_PASSWORD=123456

During the import operation, any columns in the employee_s_encrypt table that were encrypted during the export operation are decrypted before being imported.

ESTIMATE

Default: BLOCKS

Purpose

Instructs the source system in a network import operation to estimate how much data will be generated.

Syntax and Description

ESTIMATE=[BLOCKS | STATISTICS]

The valid choices for the ESTIMATE parameter are as follows:

  • BLOCKS - The estimate is calculated by multiplying the number of database blocks used by the source objects times the appropriate block sizes.

  • STATISTICS - The estimate is calculated using statistics for each table. For this method to be as accurate as possible, all tables should have been analyzed recently. (Table analysis can be done with either the SQL ANALYZE statement or the DBMS_STATS PL/SQL package.)

The estimate that is generated can be used to determine a percentage complete throughout the execution of the import job.

Restrictions

  • The Import ESTIMATE parameter is valid only if the NETWORK_LINK parameter is also specified.

  • When the import source is a dump file set, the amount of data to be loaded is already known, so the percentage complete is automatically calculated.

  • The estimate may be inaccurate if either the QUERY or REMAP_DATA parameter is used.

Example

In the following example, source_database_link would be replaced with the name of a valid link to the source database.

> impdp hr TABLES=job_history NETWORK_LINK=source_database_link
  DIRECTORY=dpump_dir1 ESTIMATE=STATISTICS 

The job_history table in the hr schema is imported from the source database. A log file is created by default and written to the directory pointed to by the dpump_dir1 directory object. When the job begins, an estimate for the job is calculated based on table statistics.

EXCLUDE

Default: There is no default

Purpose

Enables you to filter the metadata that is imported by specifying objects and object types to exclude from the import job.

Syntax and Description

EXCLUDE=object_type[:name_clause] [, ...]

The object_type specifies the type of object to be excluded. To see a list of valid values for object_type, query the following views: DATABASE_EXPORT_OBJECTS for full mode, SCHEMA_EXPORT_OBJECTS for schema mode, and TABLE_EXPORT_OBJECTS for table and tablespace mode. The values listed in the OBJECT_PATH column are the valid object types.

For the given mode of import, all object types contained within the source (and their dependents) are included, except those specified in an EXCLUDE statement. If an object is excluded, then all of its dependent objects are also excluded. For example, excluding a table will also exclude all indexes and triggers on the table.

The name_clause is optional. It allows fine-grained selection of specific objects within an object type. It is a SQL expression used as a filter on the object names of the type. It consists of a SQL operator and the values against which the object names of the specified type are to be compared. The name_clause applies only to object types whose instances have names (for example, it is applicable to TABLE and VIEW, but not to GRANT). It must be separated from the object type with a colon and enclosed in double quotation marks, because single quotation marks are required to delimit the name strings. For example, you could set EXCLUDE=INDEX:"LIKE 'DEPT%'" to exclude all indexes whose names start with dept.

The name that you supply for the name_clause must exactly match, including upper and lower casing, an existing object in the database. For example, if the name_clause you supply is for a table named EMPLOYEES, then there must be an existing table named EMPLOYEES using all upper case. If the name_clause were supplied as Employees or employees or any other variation, then the table would not be found.

More than one EXCLUDE statement can be specified.

Depending on your operating system, the use of quotation marks when you specify a value for this parameter may also require that you use escape characters. Oracle recommends that you place this parameter in a parameter file, which can reduce the number of escape characters that might otherwise be needed on the command line.

As explained in the following sections, you should be aware of the effects of specifying certain objects for exclusion, in particular, CONSTRAINT, GRANT, and USER.

Excluding Constraints

The following constraints cannot be excluded:

  • NOT NULL constraints.

  • Constraints needed for the table to be created and loaded successfully (for example, primary key constraints for index-organized tables or REF SCOPE and WITH ROWID constraints for tables with REF columns).

This means that the following EXCLUDE statements will be interpreted as follows:

  • EXCLUDE=CONSTRAINT will exclude all nonreferential constraints, except for NOT NULL constraints and any constraints needed for successful table creation and loading.

  • EXCLUDE=REF_CONSTRAINT will exclude referential integrity (foreign key) constraints.

Excluding Grants and Users

Specifying EXCLUDE=GRANT excludes object grants on all object types and system privilege grants.

Specifying EXCLUDE=USER excludes only the definitions of users, not the objects contained within users' schemas.

To exclude a specific user and all objects of that user, specify a command such as the following, where hr is the schema name of the user you want to exclude.

impdp FULL=YES DUMPFILE=expfull.dmp EXCLUDE=SCHEMA:"='HR'"

Note that in this situation, an import mode of FULL is specified. If no mode were specified, then the default mode, SCHEMAS, would be used. This would cause an error because the command would indicate that the schema should be both imported and excluded at the same time.

If you try to exclude a user by using a statement such as EXCLUDE=USER:"= 'HR'", then only CREATE USER hr DDL statements will be excluded, and you may not get the results you expect.

Restrictions

  • The EXCLUDE and INCLUDE parameters are mutually exclusive.

Example

Assume the following is in a parameter file, exclude.par, being used by a DBA or some other user with the DATAPUMP_IMP_FULL_DATABASE role. (If you want to try the example, then you must create this file.)

EXCLUDE=FUNCTION
EXCLUDE=PROCEDURE
EXCLUDE=PACKAGE
EXCLUDE=INDEX:"LIKE 'EMP%' "

You could then issue the following command. You can create the expfull.dmp dump file used in this command by running the example provided for the Export FULL parameter. See "FULL".

> impdp system DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp PARFILE=exclude.par

All data from the expfull.dmp dump file will be loaded except for functions, procedures, packages, and indexes whose names start with emp.


See Also:

"Filtering During Import Operations" for more information about the effects of using the EXCLUDE parameter

FLASHBACK_SCN

Default: There is no default

Purpose

Specifies the system change number (SCN) that Import will use to enable the Flashback utility.

Syntax and Description

FLASHBACK_SCN=scn_number

The import operation is performed with data that is consistent up to the specified scn_number.


Note:

If you are on a logical standby system, then the FLASHBACK_SCN parameter is ignored because SCNs are selected by logical standby. See Oracle Data Guard Concepts and Administration for information about logical standby databases.

Restrictions

  • The FLASHBACK_SCN parameter is valid only when the NETWORK_LINK parameter is also specified.

  • The FLASHBACK_SCN parameter pertains only to the Flashback Query capability of Oracle Database. It is not applicable to Flashback Database, Flashback Drop, or Flashback Data Archive.

  • FLASHBACK_SCN and FLASHBACK_TIME are mutually exclusive.

Example

The following is an example of using the FLASHBACK_SCN parameter.

> impdp hr DIRECTORY=dpump_dir1 FLASHBACK_SCN=123456 
NETWORK_LINK=source_database_link

The source_database_link in this example would be replaced with the name of a source database from which you were importing data.
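
The following is a minimal sketch of one way to obtain a current SCN to supply for this parameter, assuming you can query the V$DATABASE view on the source database:

SQL> SELECT CURRENT_SCN FROM V$DATABASE;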

FLASHBACK_TIME

Default: There is no default

Purpose

Specifies a time from which Import derives the system change number (SCN) that it will use to enable the Flashback utility.

Syntax and Description

FLASHBACK_TIME="TO_TIMESTAMP()"

The SCN that most closely matches the specified time is found, and this SCN is used to enable the Flashback utility. The import operation is performed with data that is consistent up to this SCN. Because the TO_TIMESTAMP value is enclosed in quotation marks, it would be best to put this parameter in a parameter file. See "Use of Quotation Marks On the Data Pump Command Line".


Note:

If you are on a logical standby system, then the FLASHBACK_TIME parameter is ignored because SCNs are selected by logical standby. See Oracle Data Guard Concepts and Administration for information about logical standby databases.

Restrictions

  • This parameter is valid only when the NETWORK_LINK parameter is also specified.

  • The FLASHBACK_TIME parameter pertains only to the flashback query capability of Oracle Database. It is not applicable to Flashback Database, Flashback Drop, or Flashback Data Archive.

  • FLASHBACK_TIME and FLASHBACK_SCN are mutually exclusive.

Example

You can specify the time in any format that the DBMS_FLASHBACK.ENABLE_AT_TIME procedure accepts. For example, suppose you have a parameter file, flashback_imp.par, that contains the following:

FLASHBACK_TIME="TO_TIMESTAMP('25-08-2008 14:35:00', 'DD-MM-YYYY HH24:MI:SS')"

You could then issue the following command:

> impdp hr DIRECTORY=dpump_dir1 PARFILE=flashback_imp.par NETWORK_LINK=source_database_link

The import operation will be performed with data that is consistent with the SCN that most closely matches the specified time.


See Also:

Oracle Database Advanced Application Developer's Guide for information about using flashback

FULL

Default: YES

Purpose

Specifies that you want to perform a full database import.

Syntax and Description

FULL=YES

A value of FULL=YES indicates that all data and metadata from the source (either a dump file set or another database) is imported.

Filtering can restrict what is imported using this import mode (see "Filtering During Import Operations").

If the NETWORK_LINK parameter is used and the USERID that is executing the import job has the DATAPUMP_IMP_FULL_DATABASE role on the target database, then that user must also have the DATAPUMP_EXP_FULL_DATABASE role on the source database.

If you are an unprivileged user importing from a file, then only schemas that map to your own schema are imported.

FULL is the default mode when you are performing a file-based import.

Example

The following is an example of using the FULL parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See "FULL".

> impdp hr DUMPFILE=dpump_dir1:expfull.dmp FULL=YES 
LOGFILE=dpump_dir2:full_imp.log

This example imports everything from the expfull.dmp dump file. In this example, a DIRECTORY parameter is not provided. Therefore, a directory object must be provided on both the DUMPFILE parameter and the LOGFILE parameter. The directory objects can be different, as shown in this example.

HELP

Default: NO

Purpose

Displays online help for the Import utility.

Syntax and Description

HELP=YES

If HELP=YES is specified, then Import displays a summary of all Import command-line parameters and interactive commands.

Example

> impdp HELP = YES

This example will display a brief description of all Import parameters and commands.

INCLUDE

Default: There is no default

Purpose

Enables you to filter the metadata that is imported by specifying objects and object types for the current import mode.

Syntax and Description

INCLUDE = object_type[:name_clause] [, ...]

The object_type specifies the type of object to be included. To see a list of valid values for object_type, query the following views: DATABASE_EXPORT_OBJECTS for full mode, SCHEMA_EXPORT_OBJECTS for schema mode, and TABLE_EXPORT_OBJECTS for table and tablespace mode. The values listed in the OBJECT_PATH column are the valid object types.

Only object types in the source (and their dependents) that are explicitly specified in the INCLUDE statement are imported.

The name_clause is optional. It allows fine-grained selection of specific objects within an object type. It is a SQL expression used as a filter on the object names of the type. It consists of a SQL operator and the values against which the object names of the specified type are to be compared. The name_clause applies only to object types whose instances have names (for example, it is applicable to TABLE, but not to GRANT). It must be separated from the object type with a colon and enclosed in double quotation marks, because single quotation marks are required to delimit the name strings.

The name that you supply for the name_clause must exactly match, including upper and lower casing, an existing object in the database. For example, if the name_clause you supply is for a table named EMPLOYEES, then there must be an existing table named EMPLOYEES using all upper case. If the name_clause were supplied as Employees or employees or any other variation, then the table would not be found.

More than one INCLUDE statement can be specified.

Depending on your operating system, the use of quotation marks when you specify a value for this parameter may also require that you use escape characters. Oracle recommends that you place this parameter in a parameter file, which can reduce the number of escape characters that might otherwise be needed on the command line. See "Use of Quotation Marks On the Data Pump Command Line".

To see a list of valid paths for use with the INCLUDE parameter, you can query the following views: DATABASE_EXPORT_OBJECTS for Full mode, SCHEMA_EXPORT_OBJECTS for schema mode, and TABLE_EXPORT_OBJECTS for table and tablespace mode.

Restrictions

  • The INCLUDE and EXCLUDE parameters are mutually exclusive.

Example

Assume the following is in a parameter file, imp_include.par, being used by a DBA or some other user with the DATAPUMP_IMP_FULL_DATABASE role:

INCLUDE=FUNCTION
INCLUDE=PROCEDURE
INCLUDE=PACKAGE
INCLUDE=INDEX:"LIKE 'EMP%' "

You can then issue the following command:

> impdp system SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp 
PARFILE=imp_include.par

You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See "FULL".

The Import operation will load only functions, procedures, and packages from the hr schema and indexes whose names start with EMP. Although this is a privileged-mode import (the user must have the DATAPUMP_IMP_FULL_DATABASE role), the schema definition is not imported, because the USER object type was not specified in an INCLUDE statement.

JOB_NAME

Default: system-generated name of the form SYS_<IMPORT or SQLFILE>_<mode>_NN

Purpose

The job name is used to identify the import job in subsequent actions, such as when the ATTACH parameter is used to attach to a job, or to identify the job via the DBA_DATAPUMP_JOBS or USER_DATAPUMP_JOBS views.

Syntax and Description

JOB_NAME=jobname_string

The jobname_string specifies a name of up to 30 bytes for this import job. The bytes must represent printable characters and spaces. If spaces are included, then the name must be enclosed in single quotation marks (for example, 'Thursday Import'). The job name is implicitly qualified by the schema of the user performing the import operation. The job name is used as the name of the master table, which controls the import job.

The default job name is system-generated in the form SYS_IMPORT_mode_NN or SYS_SQLFILE_mode_NN, where NN expands to a 2-digit incrementing integer starting at 01. An example of a default name is 'SYS_IMPORT_TABLESPACE_02'.

Example

The following is an example of using the JOB_NAME parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See "FULL".

> impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp JOB_NAME=impjob01

KEEP_MASTER

Default: NO

Purpose

Indicates whether the master table should be deleted or retained at the end of a Data Pump job that completes successfully. The master table is automatically retained for jobs that do not complete successfully.

Syntax and Description

KEEP_MASTER=[YES | NO]

Restrictions

  • None

Example

> impdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 LOGFILE=schemas.log
DUMPFILE=expdat.dmp KEEP_MASTER=YES

LOGFILE

Default: import.log

Purpose

Specifies the name, and optionally, a directory object, for the log file of the import job.

Syntax and Description

LOGFILE=[directory_object:]file_name

If you specify a directory_object, then it must be one that was previously established by the DBA and that you have access to. This overrides the directory object specified with the DIRECTORY parameter. The default behavior is to create import.log in the directory referenced by the directory object specified in the DIRECTORY parameter.

If the file_name you specify already exists, then it will be overwritten.

All messages regarding work in progress, work completed, and errors encountered are written to the log file. (For a real-time status of the job, use the STATUS command in interactive mode.)

A log file is always created unless the NOLOGFILE parameter is specified. As with the dump file set, the log file is relative to the server and not the client.


Note:

Data Pump Import writes the log file using the database character set. If your client NLS_LANG environment setting specifies a character set that differs from the database character set, then table names may be displayed differently in the log file than they appear on the client output screen.

Restrictions

  • To perform a Data Pump Import using Oracle Automatic Storage Management (Oracle ASM), you must specify a LOGFILE parameter that includes a directory object that does not include the Oracle ASM + notation. That is, the log file must be written to a disk file, and not written into the Oracle ASM storage. Alternatively, you can specify NOLOGFILE=YES. However, this prevents the writing of the log file.

Example

The following is an example of using the LOGFILE parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See "FULL".

> impdp hr SCHEMAS=HR DIRECTORY=dpump_dir2 LOGFILE=imp.log
 DUMPFILE=dpump_dir1:expfull.dmp

Because no directory object is specified on the LOGFILE parameter, the log file is written to the directory object specified on the DIRECTORY parameter.
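
As a further illustration of the Oracle ASM restriction noted above, the following sketch assumes that dpump_asm_dir is a directory object that points into Oracle ASM storage (where the dump file resides) and that dpump_disk_dir points to a regular disk location for the log file:

> impdp hr SCHEMAS=hr DIRECTORY=dpump_asm_dir DUMPFILE=expfull.dmp LOGFILE=dpump_disk_dir:imp.log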


See Also:


MASTER_ONLY

Default: NO

Purpose

Indicates whether to import just the master table and then stop the job so that the contents of the master table can be examined.

Syntax and Description

MASTER_ONLY=[YES | NO]

Restrictions

  • If the NETWORK_LINK parameter is also specified, then MASTER_ONLY=YES is not supported.

Example

> impdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 LOGFILE=schemas.log
DUMPFILE=expdat.dmp MASTER_ONLY=YES

METRICS

Default: NO

Purpose

Indicates whether additional information about the job should be reported to the Data Pump log file.

Syntax and Description

METRICS=[YES | NO]

When METRICS=YES is used, the number of objects and the elapsed time are recorded in the Data Pump log file.

Restrictions

  • None

Example

> impdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 LOGFILE=schemas.log
DUMPFILE=expdat.dmp METRICS=YES

NETWORK_LINK

Default: There is no default

Purpose

Enables an import from a (source) database identified by a valid database link. The data from the source database instance is written directly back to the connected database instance.

Syntax and Description

NETWORK_LINK=source_database_link

The NETWORK_LINK parameter initiates an import via a database link. This means that the system to which the impdp client is connected contacts the source database referenced by the source_database_link, retrieves data from it, and writes the data directly to the database on the connected instance. There are no dump files involved.

The source_database_link provided must be the name of a database link to an available database. If the database on that instance does not already have a database link, then you or your DBA must create one using the SQL CREATE DATABASE LINK statement.
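
For example, the following is a minimal sketch of creating such a database link (the link name, credentials, and connect string are assumptions for illustration only):

SQL> CREATE DATABASE LINK source_database_link CONNECT TO hr IDENTIFIED BY hr_password USING 'source_service';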

When you perform a network import using the transportable method, you must copy the source data files to the target database before you start the import.

If the source database is read-only, then the connected user must have a locally managed tablespace assigned as the default temporary tablespace on the source database. Otherwise, the job will fail.

This parameter is required when any of the following parameters are specified: FLASHBACK_SCN, FLASHBACK_TIME, ESTIMATE, TRANSPORT_TABLESPACES, or TRANSPORTABLE.


Caution:

If an import operation is performed over an unencrypted network link, then all data is imported as clear text even if it is encrypted in the database. See Oracle Database Advanced Security Administrator's Guide for more information about network security.


See Also:


Restrictions

  • The Import NETWORK_LINK parameter is not supported for tables containing SecureFiles that have ContentType set or that are currently stored outside of the SecureFiles segment through Oracle Database File System Links.

  • Network imports do not support the use of evolved types.

  • Network imports do not support LONG columns.

  • When operating across a network link, Data Pump requires that the source and target databases differ by no more than one version. For example, if one database is Oracle Database 11g, then the other database must be either 11g or 10g. Note that Data Pump checks only the major version number (for example, 10g and 11g), not specific release numbers (for example, 10.1, 10.2, 11.1, or 11.2).

  • If the USERID that is executing the import job has the DATAPUMP_IMP_FULL_DATABASE role on the target database, then that user must also have the DATAPUMP_EXP_FULL_DATABASE role on the source database.

  • The only types of database links supported by Data Pump Import are: public, fixed user, and connected user. Current-user database links are not supported.

  • Network mode import does not use parallel query (PQ) slaves. See "Using PARALLEL During a Network Mode Import".

Example

In the following example, the source_database_link would be replaced with the name of a valid database link.

> impdp hr TABLES=employees DIRECTORY=dpump_dir1
NETWORK_LINK=source_database_link EXCLUDE=CONSTRAINT

This example results in an import of the employees table (excluding constraints) from the source database. The log file is written to dpump_dir1, specified on the DIRECTORY parameter.

NOLOGFILE

Default: NO

Purpose

Specifies whether to suppress the default behavior of creating a log file.

Syntax and Description

NOLOGFILE=[YES | NO]

If you specify NOLOGFILE=YES to suppress creation of a log file, then progress and error information is still written to the standard output device of any attached clients, including the client that started the original export operation. If there are no clients attached to a running job and you specify NOLOGFILE=YES, then you run the risk of losing important progress and error information.

Example

The following is an example of using the NOLOGFILE parameter.

> impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp NOLOGFILE=YES

This command results in a full mode import (the default for file-based imports) of the expfull.dmp dump file. No log file is written because NOLOGFILE is set to YES.

PARALLEL

Default: 1

Purpose

Specifies the maximum number of processes of active execution operating on behalf of the import job.

Syntax and Description

PARALLEL=integer

The value you specify for integer specifies the maximum number of processes of active execution operating on behalf of the import job. This execution set consists of a combination of worker processes and parallel I/O server processes. The master control process, idle workers, and worker processes acting as parallel execution coordinators in parallel I/O operations do not count toward this total. This parameter enables you to make trade-offs between resource consumption and elapsed time.

If the source of the import is a dump file set consisting of files, then multiple processes can read from the same file, but performance may be limited by I/O contention.

To increase or decrease the value of PARALLEL during job execution, use interactive-command mode.

Parallelism is used for loading user data and package bodies, and for building indexes.

Using PARALLEL During a Network Mode Import

During a network mode import, the PARALLEL parameter defines the maximum number of worker processes that can be assigned to the job. To understand the effect of the PARALLEL parameter during a network import mode, it is important to understand the concept of "table_data objects" as defined by Data Pump. When Data Pump moves data, it considers the following items to be individual "table_data objects":

  • a complete table (one that is not partitioned or subpartitioned)

  • partitions, if the table is partitioned but not subpartitioned

  • subpartitions, if the table is subpartitioned

For example:

  • A nonpartitioned table, scott.non_part_table, has 1 table_data object:

    scott.non_part_table

  • A partitioned table, scott.part_table (having partition p1 and partition p2), has 2 table_data objects:

    scott.part_table:p1

    scott.part_table:p2

  • A subpartitioned table, scott.sub_part_table (having partition p1 and p2, and subpartitions p1s1, p1s2, p2s1, and p2s2) has 4 table_data objects:

    scott.sub_part_table:p1s1

    scott.sub_part_table:p1s2

    scott.sub_part_table:p2s1

    scott.sub_part_table:p2s2

During a network mode import, each table_data object is assigned its own worker process, up to the value specified for the PARALLEL parameter. No parallel query (PQ) slaves are assigned because network mode import does not use parallel query (PQ) slaves. Multiple table_data objects can be unloaded at the same time, but each table_data object is unloaded using a single process.

Using PARALLEL During An Import In An Oracle RAC Environment

In an Oracle Real Application Clusters (Oracle RAC) environment, if an import operation has PARALLEL=1, then all Data Pump processes reside on the instance where the job is started. Therefore, the directory object can point to local storage for that instance.

If the import operation has PARALLEL set to a value greater than 1, then Data Pump processes can reside on instances other than the one where the job was started. Therefore, the directory object must point to shared storage that is accessible by all instances of the Oracle RAC.

Restrictions

  • This parameter is valid only in the Enterprise Edition of Oracle Database 11g.

  • To import a table or table partition in parallel (using PQ slaves), you must have the DATAPUMP_IMP_FULL_DATABASE role.

Example

The following is an example of using the PARALLEL parameter.

> impdp hr DIRECTORY=dpump_dir1 LOGFILE=parallel_import.log 
JOB_NAME=imp_par3 DUMPFILE=par_exp%U.dmp PARALLEL=3

This command imports the dump file set that is created when you run the example for the Export PARALLEL parameter. (See "PARALLEL".) The names of the dump files are par_exp01.dmp, par_exp02.dmp, and par_exp03.dmp.

PARFILE

Default: There is no default

Purpose

Specifies the name of an import parameter file.

Syntax and Description

PARFILE=[directory_path]file_name

Unlike dump files, log files, and SQL files, which are created and written by the server, the parameter file is opened and read by the impdp client. Therefore, a directory object name is neither required nor appropriate. The default location is the user's current directory. The use of parameter files is highly recommended if you are using parameters whose values require the use of quotation marks.

Restrictions

  • The PARFILE parameter cannot be specified within a parameter file.

Example

The content of an example parameter file, hr_imp.par, might be as follows:

TABLES= countries, locations, regions
DUMPFILE=dpump_dir2:exp1.dmp,exp2%U.dmp
DIRECTORY=dpump_dir1
PARALLEL=3 

You could then issue the following command to execute the parameter file:

> impdp hr PARFILE=hr_imp.par

The tables named countries, locations, and regions will be imported from the dump file set that is created when you run the example for the Export DUMPFILE parameter. (See "DUMPFILE".) The import job looks for the exp1.dmp file in the location pointed to by dpump_dir2. It looks for any dump files of the form exp2nn.dmp in the location pointed to by dpump_dir1. The log file for the job will also be written to dpump_dir1.

PARTITION_OPTIONS

Default: The default is departition when partition names are specified on the TABLES parameter and TRANSPORTABLE=ALWAYS is set (whether on the import operation or during the export). Otherwise, the default is none.

Purpose

Specifies how table partitions should be created during an import operation.

Syntax and Description

PARTITION_OPTIONS=[NONE | DEPARTITION | MERGE]

A value of none creates tables as they existed on the system from which the export operation was performed. You cannot use the none option or the merge option if the export was performed with the transportable method, along with a partition or subpartition filter. In such a case, you must use the departition option.

A value of departition promotes each partition or subpartition to a new individual table. The default name of the new table will be the concatenation of the table and partition name or the table and subpartition name, as appropriate.

A value of merge combines all partitions and subpartitions into one table.

Restrictions

  • If the export operation that created the dump file was performed with the transportable method and if a partition or subpartition was specified, then the import operation must use the departition option.

  • If the export operation that created the dump file was performed with the transportable method, then the import operation cannot use PARTITION_OPTIONS=MERGE.

  • If there are any grants on objects being departitioned, then an error message is generated and the objects are not loaded.

Example

The following example assumes that the sh.sales table has been exported into a dump file named sales.dmp. It uses the merge option to merge all the partitions in sh.sales into one non-partitioned table in scott schema.

> impdp system TABLES=sh.sales PARTITION_OPTIONS=MERGE 
DIRECTORY=dpump_dir1 DUMPFILE=sales.dmp REMAP_SCHEMA=sh:scott

See Also:

"TRANSPORTABLE" for an example of performing an import operation using PARTITION_OPTIONS=DEPARTITION

QUERY

Default: There is no default

Purpose

Allows you to specify a query clause that filters the data that gets imported.

Syntax and Description

QUERY=[[schema_name.]table_name:]query_clause

The query_clause is typically a SQL WHERE clause for fine-grained row selection, but could be any SQL clause. For example, an ORDER BY clause could be used to speed up a migration from a heap-organized table to an index-organized table. If a schema and table name are not supplied, then the query is applied to (and must be valid for) all tables in the source dump file set or database. A table-specific query overrides a query applied to all tables.
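As a hedged illustration of the ORDER BY case, a parameter file entry might look like the following (the employees table and employee_id column are illustrative and not part of the original example):

QUERY=employees:"ORDER BY employee_id"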

When the query is to be applied to a specific table, a colon (:) must separate the table name from the query clause. More than one table-specific query can be specified, but only one query can be specified per table.

If the NETWORK_LINK parameter is specified along with the QUERY parameter, then any objects specified in the query_clause that are on the remote (source) node must be explicitly qualified with the NETWORK_LINK value. Otherwise, Data Pump assumes that the object is on the local (target) node; if it is not, then an error is returned and the import of the table from the remote (source) system fails.

For example, if you specify NETWORK_LINK=dblink1, then the query_clause of the QUERY parameter must specify that link, as shown in the following example:

QUERY=(hr.employees:"WHERE last_name IN(SELECT last_name 
FROM hr.employees@dblink1)")

Depending on your operating system, the use of quotation marks when you specify a value for this parameter may also require that you use escape characters. Oracle recommends that you place this parameter in a parameter file, which can reduce the number of escape characters that might otherwise be needed on the command line. See "Use of Quotation Marks On the Data Pump Command Line".

When the QUERY parameter is used, the external tables method (rather than the direct path method) is used for data access.

To specify a schema other than your own in a table-specific query, you must be granted access to that specific table.

Restrictions

  • The QUERY parameter cannot be used with the following parameters:

    • CONTENT=METADATA_ONLY

    • SQLFILE

    • TRANSPORT_DATAFILES

  • When the QUERY parameter is specified for a table, Data Pump uses external tables to load the target table. External tables uses a SQL INSERT statement with a SELECT clause. The value of the QUERY parameter is included in the WHERE clause of the SELECT portion of the INSERT statement. If the QUERY parameter includes references to another table with columns whose names match the table being loaded, and if those columns are used in the query, then you will need to use a table alias to distinguish between columns in the table being loaded and columns in the SELECT statement with the same name. The table alias used by Data Pump for the table being loaded is KU$.

    For example, suppose you are importing a subset of the sh.sales table based on the credit limit for a customer in the sh.customers table. In the following example, KU$ is used to qualify the cust_id field in the QUERY parameter for loading sh.sales. As a result, Data Pump imports only rows for customers whose credit limit is greater than $10,000.

    QUERY='sales:"WHERE EXISTS (SELECT cust_id FROM customers c
    WHERE cust_credit_limit > 10000 AND ku$.cust_id = c.cust_id)"'
    

    If KU$ is not used for a table alias, then all rows are loaded:

    QUERY='sales:"WHERE EXISTS (SELECT cust_id FROM customers c
    WHERE cust_credit_limit > 10000 AND cust_id = c.cust_id)"'
    
  • The maximum length allowed for a QUERY string is 4000 bytes including quotation marks, which means that the actual maximum length allowed is 3998 bytes.

Example

The following is an example of using the QUERY parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See "FULL". Because the QUERY value uses quotation marks, Oracle recommends that you use a parameter file.

Suppose you have a parameter file, query_imp.par, that contains the following:

QUERY=departments:"WHERE department_id < 120"

You can then enter the following command:

> impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp 
  PARFILE=query_imp.par NOLOGFILE=YES

All tables in expfull.dmp are imported, but for the departments table, only data that meets the criteria specified in the QUERY parameter is imported.

REMAP_DATA

Default: There is no default

Purpose

The REMAP_DATA parameter allows you to remap data as it is being inserted into a new database. A common use is to regenerate primary keys to avoid conflict when importing a table into a preexisting table on the target database.

You can specify a remap function that takes as a source the value of the designated column from either the dump file or a remote database. The remap function then returns a remapped value that will replace the original value in the target database.

The same function can be applied to multiple columns being dumped. This is useful when you want to guarantee consistency in remapping both the child and parent column in a referential constraint.

Syntax and Description

REMAP_DATA=[schema.]tablename.column_name:[schema.]pkg.function

The description of each syntax element, in the order in which they appear in the syntax, is as follows:

schema -- the schema containing the table to be remapped. By default, this is the schema of the user doing the import.

tablename -- the table whose column will be remapped.

column_name -- the column whose data is to be remapped. The maximum number of columns that can be remapped for a single table is 10.

schema -- the schema containing the PL/SQL package you created that contains the remapping function. As a default, this is the schema of the user doing the import.

pkg -- the name of the PL/SQL package you created that contains the remapping function.

function -- the name of the function within the PL/SQL package that will be called to remap the column in each row of the specified table.

Restrictions

  • The datatypes of the source argument and the returned value should both match the datatype of the designated column in the table.

  • Remapping functions should not perform commits or rollbacks except in autonomous transactions.

  • The maximum number of columns you can remap on a single table is 10. For example, you could remap 9 columns on table a and 8 columns on table b, but you cannot remap more than 10 columns on any one table.

Example

The following example assumes a package named remap has been created that contains a function named plusx that changes the values for first_name in the employees table.

> impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expschema.dmp
TABLES=hr.employees REMAP_DATA=hr.employees.first_name:hr.remap.plusx
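
The documentation does not show the remap package itself. The following is a hypothetical sketch of what such a package might look like, assuming that first_name is a VARCHAR2(20) column; it is not part of the original example:

CREATE OR REPLACE PACKAGE remap AS
  FUNCTION plusx (p_value VARCHAR2) RETURN VARCHAR2;
END remap;
/
CREATE OR REPLACE PACKAGE BODY remap AS
  FUNCTION plusx (p_value VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    -- Append a marker to each value; SUBSTR keeps the result within the column width.
    RETURN SUBSTR(p_value || '_X', 1, 20);
  END plusx;
END remap;
/

The source argument and the return value both use the datatype of the remapped column, as required by the restrictions above.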

REMAP_DATAFILE

Default: There is no default

Purpose

Changes the name of the source data file to the target data file name in all SQL statements where the source data file is referenced: CREATE TABLESPACE, CREATE LIBRARY, and CREATE DIRECTORY.

Syntax and Description

REMAP_DATAFILE=source_datafile:target_datafile

Remapping data files is useful when you move databases between platforms that have different file naming conventions. The source_datafile and target_datafile names should be exactly as you want them to appear in the SQL statements where they are referenced. Oracle recommends that you enclose data file names in quotation marks to eliminate ambiguity on platforms for which a colon is a valid file specification character.

Depending on your operating system, the use of quotation marks when you specify a value for this parameter may also require that you use escape characters. Oracle recommends that you place this parameter in a parameter file, which can reduce the number of escape characters that might otherwise be needed on the command line.

You must have the DATAPUMP_IMP_FULL_DATABASE role to specify this parameter.

Example

Suppose you had a parameter file, payroll.par, with the following content:

DIRECTORY=dpump_dir1
FULL=YES
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'DB1$:[HRDATA.PAYROLL]tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp hr PARFILE=payroll.par

This example remaps a VMS file specification (DR1$:[HRDATA.PAYROLL]tbs6.dbf) to a UNIX file specification, (/db1/hrdata/payroll/tbs6.dbf) for all SQL DDL statements during the import. The dump file, db_full.dmp, is located by the directory object, dpump_dir1.

REMAP_SCHEMA

Default: There is no default

Purpose

Loads all objects from the source schema into a target schema.

Syntax and Description

REMAP_SCHEMA=source_schema:target_schema

Multiple REMAP_SCHEMA lines can be specified, but the source schema must be different for each one. However, different source schemas can map to the same target schema. The mapping may not be 100 percent complete, because there are certain schema references that Import is not capable of finding. For example, Import will not find schema references embedded within the body of definitions of types, views, procedures, and packages.

If the schema you are remapping to does not already exist, then the import operation creates it, provided that the dump file set contains the necessary CREATE USER metadata for the source schema, and provided that you are importing with enough privileges. For example, the following Export commands create dump file sets with the necessary metadata to create a schema, because the user SYSTEM has the necessary privileges:

> expdp system SCHEMAS=hr
Password: password

> expdp system FULL=YES
Password: password

If your dump file set does not contain the metadata necessary to create a schema, or if you do not have privileges, then the target schema must be created before the import operation is performed. This is because the unprivileged dump files do not contain the necessary information for the import to create the schema automatically.

If the import operation does create the schema, then after the import is complete, you must assign it a valid password to connect to it. The SQL statement to do this, which requires privileges, is:

SQL> ALTER USER schema_name IDENTIFIED BY new_password 

Restrictions

  • Unprivileged users can perform schema remaps only if their schema is the target schema of the remap. (Privileged users can perform unrestricted schema remaps.)

  • For example, SCOTT can remap BLAKE's objects to SCOTT, but SCOTT cannot remap SCOTT's objects to BLAKE.

Example

Suppose that, as user SYSTEM, you execute the following Export and Import commands to remap the hr schema into the scott schema:

> expdp system SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp

> impdp system DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp REMAP_SCHEMA=hr:scott

In this example, if user scott already exists before the import, then the Import REMAP_SCHEMA command will add objects from the hr schema into the existing scott schema. You can connect to the scott schema after the import by using the existing password (without resetting it).

If user scott does not exist before you execute the import operation, then Import automatically creates it with an unusable password. This is possible because the dump file, hr.dmp, was created by SYSTEM, which has the privileges necessary to create a dump file that contains the metadata needed to create a schema. However, you cannot connect to scott on completion of the import, unless you reset the password for scott on the target database after the import completes.

REMAP_TABLE

Default: There is no default

Purpose

Allows you to rename tables during an import operation.

Syntax and Description

You can use either of the following syntaxes (see the Usage Notes below):

REMAP_TABLE=[schema.]old_tablename[.partition]:new_tablename

OR

REMAP_TABLE=[schema.]old_tablename[:partition]:new_tablename

You can use the REMAP_TABLE parameter to rename entire tables or to rename table partitions if the table is being departitioned. (See "PARTITION_OPTIONS".)

You can also use it to override the automatic naming of table partitions that were exported.

Usage Notes

Be aware that with the first syntax, if you specify REMAP_TABLE=A.B:C, then Import assumes that A is a schema name, B is the old table name, and C is the new table name. To use the first syntax to rename a partition that is being promoted to a nonpartitioned table, you must specify a schema name.

To use the second syntax to rename a partition being promoted to a nonpartitioned table, you only need to qualify it with the old table name. No schema name is required.
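
For example, a hedged sketch of the second syntax that renames a partition being promoted to a nonpartitioned table during a departitioned import (the sales table and partition name are illustrative):

REMAP_TABLE=sales:sales_q1_2008:sales_q1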

Restrictions

  • Only objects created by the Import will be remapped. In particular, preexisting tables will not be remapped.

  • The REMAP_TABLE parameter will not work if the table being remapped has named constraints in the same schema and the constraints need to be created when the table is created.

Example

The following is an example of using the REMAP_TABLE parameter to rename the employees table to a new name of emps:

> impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expschema.dmp
TABLES=hr.employees REMAP_TABLE=hr.employees:emps 

REMAP_TABLESPACE

Default: There is no default

Purpose

Remaps all objects selected for import with persistent data in the source tablespace to be created in the target tablespace.

Syntax and Description

REMAP_TABLESPACE=source_tablespace:target_tablespace

Multiple REMAP_TABLESPACE parameters can be specified, but no two can have the same source tablespace. The target schema must have sufficient quota in the target tablespace.

Note that use of the REMAP_TABLESPACE parameter is the only way to remap a tablespace in Data Pump Import. This is a simpler and cleaner method than the one provided in the original Import utility. That method was subject to many restrictions (including the number of tablespace subclauses) which sometimes resulted in the failure of some DDL commands.

By contrast, the Data Pump Import method of using the REMAP_TABLESPACE parameter works for all objects, including the user, and it works regardless of how many tablespace subclauses are in the DDL statement.

Restrictions

  • Data Pump Import can only remap tablespaces for transportable imports in databases where the compatibility level is set to 10.1 or later.

  • Only objects created by the Import will be remapped. In particular, the tablespaces for preexisting tables will not be remapped if TABLE_EXISTS_ACTION is set to SKIP, TRUNCATE, or APPEND.

Example

The following is an example of using the REMAP_TABLESPACE parameter.

> impdp hr REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpump_dir1
  DUMPFILE=employees.dmp 

REUSE_DATAFILES

Default: NO

Purpose

Specifies whether the import job should reuse existing data files for tablespace creation.

Syntax and Description

REUSE_DATAFILES=[YES | NO]

If the default (REUSE_DATAFILES=NO) is used and the data files specified in CREATE TABLESPACE statements already exist, then an error message from the failing CREATE TABLESPACE statement is issued, but the import job continues.

If this parameter is specified as YES, then the existing data files are reinitialized.


Caution:

Specifying REUSE_DATAFILES=YES may result in a loss of data.

Example

The following is an example of using the REUSE_DATAFILES parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See "FULL".

> impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp LOGFILE=reuse.log
REUSE_DATAFILES=YES

This example reinitializes data files referenced by CREATE TABLESPACE statements in the expfull.dmp file.

SCHEMAS

Default: There is no default

Purpose

Specifies that a schema-mode import is to be performed.

Syntax and Description

SCHEMAS=schema_name [,...]

If you have the DATAPUMP_IMP_FULL_DATABASE role, then you can use this parameter to perform a schema-mode import by specifying a list of schemas to import. First, the user definitions are imported (if they do not already exist), including system and role grants, password history, and so on. Then all objects contained within the schemas are imported. Unprivileged users can specify only their own schemas or schemas remapped to their own schemas. In that case, no information about the schema definition is imported, only the objects contained within it.

The use of filtering can restrict what is imported using this import mode. See "Filtering During Import Operations".

Schema mode is the default mode when you are performing a network-based import.

Example

The following is an example of using the SCHEMAS parameter. You can create the expdat.dmp file used in this example by running the example provided for the Export SCHEMAS parameter. See "SCHEMAS".

> impdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 LOGFILE=schemas.log
DUMPFILE=expdat.dmp

The hr schema is imported from the expdat.dmp file. The log file, schemas.log, is written to dpump_dir1.

SERVICE_NAME

Default: There is no default

Purpose

Used to specify a service name to be used in conjunction with the CLUSTER parameter.

Syntax and Description

SERVICE_NAME=name

The SERVICE_NAME parameter can be used with the CLUSTER=YES parameter to specify an existing service associated with a resource group that defines a set of Oracle Real Application Clusters (Oracle RAC) instances belonging to that resource group, typically a subset of all the Oracle RAC instances.

The service name is only used to determine the resource group and instances defined for that resource group. The instance where the job is started is always used, regardless of whether it is part of the resource group.

The SERVICE_NAME parameter is ignored if CLUSTER=NO is also specified.

Suppose you have an Oracle RAC configuration containing instances A, B, C, and D. Also suppose that a service named my_service exists with a resource group consisting of instances A, B, and C only. In such a scenario, the following would be true:

  • If you start a Data Pump job on instance A and specify CLUSTER=YES (or accept the default, which is YES) and you do not specify the SERVICE_NAME parameter, then Data Pump creates workers on all instances: A, B, C, and D, depending on the degree of parallelism specified.

  • If you start a Data Pump job on instance A and specify CLUSTER=YES and SERVICE_NAME=my_service, then workers can be started on instances A, B, and C only.

  • If you start a Data Pump job on instance D and specify CLUSTER=YES and SERVICE_NAME=my_service, then workers can be started on instances A, B, C, and D. Even though instance D is not in my_service it is included because it is the instance on which the job was started.

  • If you start a Data Pump job on instance A and specify CLUSTER=NO, then any SERVICE_NAME parameter you specify is ignored and all processes will start on instance A.
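
As a hedged illustration only, a service such as my_service might be created with srvctl, assuming an Oracle RAC database named mydb and the instance names used in the scenario above:

srvctl add service -d mydb -s my_service -r "A,B,C"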


See Also:

"CLUSTER"

Example

> impdp system DIRECTORY=dpump_dir1 SCHEMAS=hr
  SERVICE_NAME=sales NETWORK_LINK=dbs1

This example starts a schema-mode network import of the hr schema. Even though CLUSTER=YES is not specified on the command line, it is the default behavior, so the job will use all instances in the resource group associated with the service name sales. The NETWORK_LINK value of dbs1 would be replaced with the name of the source database from which you were importing data. (Note that there is no dump file generated because this is a network import.)

The NETWORK_LINK parameter is simply being used as part of the example. It is not required when using the SERVICE_NAME parameter.

SKIP_UNUSABLE_INDEXES

Default: the value of the Oracle Database configuration parameter, SKIP_UNUSABLE_INDEXES.

Purpose

Specifies whether Import skips loading tables that have indexes that were set to the Index Unusable state (by either the system or the user).

Syntax and Description

SKIP_UNUSABLE_INDEXES=[YES | NO]

If SKIP_UNUSABLE_INDEXES is set to YES, and a table or partition with an index in the Unusable state is encountered, then the load of that table or partition proceeds anyway, as if the unusable index did not exist.

If SKIP_UNUSABLE_INDEXES is set to NO, and a table or partition with an index in the Unusable state is encountered, then that table or partition is not loaded. Other tables, with indexes not previously set Unusable, continue to be updated as rows are inserted.

If the SKIP_UNUSABLE_INDEXES parameter is not specified, then the setting of the Oracle Database configuration parameter, SKIP_UNUSABLE_INDEXES (whose default value is y), will be used to determine how to handle unusable indexes.

If indexes used to enforce constraints are marked unusable, then the data is not imported into that table.
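
Before importing, you can check for such indexes with a query against the standard data dictionary view (a minimal sketch):

SELECT index_name, table_name, status
  FROM user_indexes
 WHERE status = 'UNUSABLE';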


Note:

This parameter is useful only when importing data into an existing table. It has no practical effect when a table is created as part of an import because in that case, the table and indexes are newly created and will not be marked unusable.

Example

The following is an example of using the SKIP_UNUSABLE_INDEXES parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See "FULL".

> impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp LOGFILE=skip.log
SKIP_UNUSABLE_INDEXES=YES

SOURCE_EDITION

Default: the default database edition on the remote node from which objects will be fetched

Purpose

Specifies the database edition on the remote node from which objects will be fetched.

Syntax and Description

SOURCE_EDITION=edition_name

If SOURCE_EDITION=edition_name is specified, then the objects from that edition are imported. Data Pump selects all inherited objects that have not changed and all actual objects that have changed.

If this parameter is not specified, then the default edition is used. If the specified edition does not exist or is not usable, then an error message is returned.


Restrictions

  • The SOURCE_EDITION parameter is valid on an import operation only when the NETWORK_LINK parameter is also specified. See "NETWORK_LINK".

  • This parameter is only useful if there are two or more versions of the same versionable objects in the database.

  • The job version must be set to 11.2 or higher. See "VERSION".

Example

The following is an example of using the import SOURCE_EDITION parameter:

> impdp hr DIRECTORY=dpump_dir1 SOURCE_EDITION=exp_edition
NETWORK_LINK=source_database_link EXCLUDE=USER

This example assumes the existence of an edition named exp_edition on the system from which objects are being imported. Because no import mode is specified, the default of schema mode will be used. The source_database_link would be replaced with the name of the source database from which you were importing data. The EXCLUDE=USER parameter excludes only the definitions of users, not the objects contained within users' schemas. (Note that there is no dump file generated because this is a network import.)
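
If the edition did not already exist on the source system, a hedged sketch of creating it there follows; it assumes the CREATE ANY EDITION privilege and uses the default base edition as the parent:

CREATE EDITION exp_edition AS CHILD OF ora$base;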

SQLFILE

Default: There is no default

Purpose

Specifies a file into which all of the SQL DDL that Import would have executed, based on other parameters, is written.

Syntax and Description

SQLFILE=[directory_object:]file_name

The file_name specifies where the import job will write the DDL that would be executed during the job. The SQL is not actually executed, and the target system remains unchanged. The file is written to the directory object specified in the DIRECTORY parameter, unless another directory_object is explicitly specified here. Any existing file that has a name matching the one specified with this parameter is overwritten.

Note that passwords are not included in the SQL file. For example, if a CONNECT statement is part of the DDL written to the file, then it is replaced by a comment with only the schema name shown. In the following example, the dashes (--) indicate that a comment follows, and the hr schema name is shown, but not the password.

-- CONNECT hr

Therefore, before you can execute the SQL file, you must edit it by removing the dashes indicating a comment and adding the password for the hr schema.
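
After editing, the line might look like the following (the password shown is only a placeholder):

CONNECT hr/your_password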

For Streams and other Oracle database options, anonymous PL/SQL blocks may appear within the SQLFILE output. They should not be executed directly.

Restrictions

  • If SQLFILE is specified, then the CONTENT parameter is ignored if it is set to either ALL or DATA_ONLY.

  • To perform a Data Pump Import to a SQL file using Oracle Automatic Storage Management (Oracle ASM), the SQLFILE parameter that you specify must include a directory object that does not use the Oracle ASM + notation. That is, the SQL file must be written to a disk file, not into the Oracle ASM storage.

  • The SQLFILE parameter cannot be used in conjunction with the QUERY parameter.

Example

The following is an example of using the SQLFILE parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See "FULL".

> impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp
SQLFILE=dpump_dir2:expfull.sql

A SQL file named expfull.sql is written to dpump_dir2.

STATUS

Default: 0

Purpose

Specifies the frequency at which the job status will be displayed.

Syntax and Description

STATUS[=integer]

If you supply a value for integer, it specifies how frequently, in seconds, job status should be displayed in logging mode. If no value is entered or if the default value of 0 is used, then no additional information is displayed beyond information about the completion of each object type, table, or partition.

This status information is written only to your standard output device, not to the log file (if one is in effect).

Example

The following is an example of using the STATUS parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See "FULL".

> impdp hr NOLOGFILE=YES STATUS=120 DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp

In this example, the status is shown every two minutes (120 seconds).

STREAMS_CONFIGURATION

Default: YES

Purpose

Specifies whether to import any Streams metadata that may be present in the export dump file.

Syntax and Description

STREAMS_CONFIGURATION=[YES | NO]

Example

The following is an example of using the STREAMS_CONFIGURATION parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See "FULL".

> impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp STREAMS_CONFIGURATION=NO

TABLE_EXISTS_ACTION

Default: SKIP (Note that if CONTENT=DATA_ONLY is specified, then the default is APPEND, not SKIP.)

Purpose

Tells Import what to do if the table it is trying to create already exists.

Syntax and Description

TABLE_EXISTS_ACTION=[SKIP | APPEND | TRUNCATE | REPLACE]

The possible values have the following effects:

  • SKIP leaves the table as is and moves on to the next object. This is not a valid option if the CONTENT parameter is set to DATA_ONLY.

  • APPEND loads rows from the source and leaves existing rows unchanged.

  • TRUNCATE deletes existing rows and then loads rows from the source.

  • REPLACE drops the existing table and then creates and loads it from the source. This is not a valid option if the CONTENT parameter is set to DATA_ONLY.

The following considerations apply when you are using these options:

  • When you use TRUNCATE or REPLACE, ensure that rows in the affected tables are not targets of any referential constraints.

  • When you use SKIP, APPEND, or TRUNCATE, existing table-dependent objects in the source, such as indexes, grants, triggers, and constraints, are not modified. For REPLACE, the dependent objects are dropped and re-created from the source, if they were not explicitly or implicitly excluded (using EXCLUDE) and they exist in the source dump file or system.

  • When you use APPEND or TRUNCATE, checks are made to ensure that rows from the source are compatible with the existing table before performing any action.

    If the existing table has active constraints and triggers, then it is loaded using the external tables access method. If any row violates an active constraint, then the load fails and no data is loaded. You can override this behavior by specifying DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS on the Import command line.

    If you have data that must be loaded but may cause constraint violations, then consider disabling the constraints, loading the data, and then deleting the problem rows before reenabling the constraints, as sketched after this list.

  • When you use APPEND, the data is always loaded into new space; existing space, even if available, is not reused. For this reason, you may want to compress your data after the load.
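
The following is a hedged sketch of the disable/load/reenable workflow mentioned above; the table, constraint, and column names are hypothetical:

ALTER TABLE hr.employees DISABLE CONSTRAINT emp_email_uk;

-- Run the import (for example, with TABLE_EXISTS_ACTION=APPEND), then remove
-- rows that would violate the constraint before reenabling it:
DELETE FROM hr.employees e1
 WHERE ROWID > (SELECT MIN(ROWID)
                  FROM hr.employees e2
                 WHERE e2.email = e1.email);

ALTER TABLE hr.employees ENABLE CONSTRAINT emp_email_uk;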


Note:

When Data Pump detects that the source table and target table do not match (the two tables do not have the same number of columns or the target table has a column name that is not present in the source table), it compares column names between the two tables. If the tables have at least one column in common, then the data for the common columns is imported into the table (assuming the datatypes are compatible). The following restrictions apply:
  • This behavior is not supported for network imports.

  • The following types of columns cannot be dropped: column objects, object attributes, nested table columns, and ref columns based on a primary key.


Restrictions

  • TRUNCATE cannot be used on clustered tables.

Example

The following is an example of using the TABLE_EXISTS_ACTION parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See "FULL".

> impdp hr TABLES=employees DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp
TABLE_EXISTS_ACTION=REPLACE

TABLES

Default: There is no default

Purpose

Specifies that you want to perform a table-mode import.

Syntax and Description

TABLES=[schema_name.]table_name[:partition_name]

In a table-mode import, you can filter the data that is imported from the source by specifying a comma-delimited list of tables and partitions or subpartitions.

If you do not supply a schema_name, then it defaults to that of the current user. To specify a schema other than your own, you must either have the DATAPUMP_IMP_FULL_DATABASE role or remap the schema to the current user.

The use of filtering can restrict what is imported using this import mode. See "Filtering During Import Operations".

If a partition_name is specified, then it must be the name of a partition or subpartition in the associated table.

Use of the wildcard character, %, to specify table names and partition names is supported.

The following restrictions apply to table names:

  • By default, table names in a database are stored as uppercase. If you have a table name in mixed-case or lowercase, and you want to preserve case-sensitivity for the table name, then you must enclose the name in quotation marks. The name must exactly match the table name stored in the database.

    Some operating systems require that quotation marks on the command line be preceded by an escape character. The following are examples of how case-sensitivity can be preserved in the different Import modes.

    • In command-line mode:

      TABLES='\"Emp\"'
      
    • In parameter file mode:

      TABLES='"Emp"'
      
  • Table names specified on the command line cannot include a pound sign (#), unless the table name is enclosed in quotation marks. Similarly, in the parameter file, if a table name includes a pound sign (#), then the Import utility interprets the rest of the line as a comment, unless the table name is enclosed in quotation marks.

    For example, if the parameter file contains the following line, then Import interprets everything on the line after emp# as a comment and does not import the tables dept and mydata:

    TABLES=(emp#, dept, mydata)
    

    However, if the parameter file contains the following line, then the Import utility imports all three tables because emp# is enclosed in quotation marks:

    TABLES=('"emp#"', dept, mydata)
    

    Note:

    Some operating systems require single quotation marks rather than double quotation marks, or the reverse; see your Oracle operating system-specific documentation. Different operating systems also have other restrictions on table naming.

    For example, the UNIX C shell attaches a special meaning to a dollar sign ($) or pound sign (#) (or certain other special characters). You must use escape characters to get such characters in the name past the shell and into Import.


Restrictions

  • The use of synonyms as values for the TABLES parameter is not supported. For example, if the regions table in the hr schema had a synonym of regn, then it would not be valid to use TABLES=regn. An error would be returned.

  • You can only specify partitions from one table if PARTITION_OPTIONS=DEPARTITION is also specified on the import.

  • If you specify TRANSPORTABLE=ALWAYS, then all partitions specified on the TABLES parameter must be in the same table.

  • The length of the table name list specified for the TABLES parameter is limited to a maximum of 4 MB, unless you are using the NETWORK_LINK parameter to an Oracle Database release 10.2.0.3 or earlier or to a read-only database. In such cases, the limit is 4 KB.

Example

The following example shows a simple use of the TABLES parameter to import only the employees and jobs tables from the expfull.dmp file. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See "FULL".

> impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp TABLES=employees,jobs

The following example shows the use of the TABLES parameter to import partitions:

> impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp 
TABLES=sh.sales:sales_Q1_2008,sh.sales:sales_Q2_2008

This example imports the partitions sales_Q1_2008 and sales_Q2_2008 for the table sales in the schema sh.

TABLESPACES

Default: There is no default

Purpose

Specifies that you want to perform a tablespace-mode import.

Syntax and Description

TABLESPACES=tablespace_name [, ...]

Use TABLESPACES to specify a list of tablespace names whose tables and dependent objects are to be imported from the source (full, schema, tablespace, or table-mode export dump file set or another database).

During the following import situations, Data Pump automatically creates the tablespaces into which the data will be imported:

  • The import is being done in FULL or TRANSPORT_TABLESPACES mode

  • The import is being done in table mode with TRANSPORTABLE=ALWAYS

In all other cases, the tablespaces for the selected objects must already exist on the import database. You could also use the Import REMAP_TABLESPACE parameter to map the tablespace name to an existing tablespace on the import database.

The use of filtering can restrict what is imported using this import mode. See "Filtering During Import Operations".

Restrictions

  • The length of the list of tablespace names specified for the TABLESPACES parameter is limited to a maximum of 4 MB, unless you are using the NETWORK_LINK parameter to a 10.2.0.3 or earlier database or to a read-only database. In such cases, the limit is 4 KB.

Example

The following is an example of using the TABLESPACES parameter. It assumes that the tablespaces already exist. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See "FULL".

> impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp TABLESPACES=tbs_1,tbs_2,tbs_3,tbs_4

This example imports all tables that have data in tablespaces tbs_1, tbs_2, tbs_3, and tbs_4.

TARGET_EDITION

Default: the default database edition on the system

Purpose

Specifies the database edition into which objects should be imported.

Syntax and Description

TARGET_EDITION=name

If TARGET_EDITION=name is specified, then Data Pump Import creates all of the objects found in the dump file. Objects that are not editionable are created in all editions. For example, tables are not editionable, so if there is a table in the dump file, then it will be created, and all editions will see it. Objects in the dump file that are editionable, such as procedures, are created only in the specified target edition.

If this parameter is not specified, then the default edition on the target database is used, even if an edition was specified in the export job. If the specified edition does not exist or is not usable, then an error message is returned.


Restrictions

  • This parameter is only useful if there are two or more versions of the same versionable objects in the database.

  • The job version must be 11.2 or higher. See "VERSION".

Example

The following is an example of using the TARGET_EDITION parameter:

> impdp hr DIRECTORY=dpump_dir1 DUMPFILE=exp_dat.dmp TARGET_EDITION=exp_edition

This example assumes the existence of an edition named exp_edition on the system to which objects are being imported. Because no import mode is specified, the default of schema mode will be used.

TRANSFORM

Default: There is no default

Purpose

Enables you to alter object creation DDL for objects being imported.

Syntax and Description

TRANSFORM = transform_name:value[:object_type]

The transform_name specifies the name of the transform. The possible options are as follows:

  • SEGMENT_ATTRIBUTES - If the value is specified as y, then segment attributes (physical attributes, storage attributes, tablespaces, and logging) are included, with appropriate DDL. The default is y.

  • STORAGE - If the value is specified as y, then the storage clauses are included, with appropriate DDL. The default is y. This parameter is ignored if SEGMENT_ATTRIBUTES=n.

  • OID - If the value is specified as n, then the assignment of the exported OID during the creation of object tables and types is inhibited. Instead, a new OID is assigned. This can be useful for cloning schemas, but does not affect referenced objects. The default value is y.

  • PCTSPACE - The value supplied for this transform must be a number greater than zero. It represents the percentage multiplier used to alter extent allocations and the size of data files.

    Note that you can use the PCTSPACE transform with the Data Pump Export SAMPLE parameter so that the size of storage allocations matches the sampled data subset. (See "SAMPLE".)

  • SEGMENT_CREATION - If set to y (the default), then this transform causes the SQL SEGMENT CREATION clause to be added to the CREATE TABLE statement. That is, the CREATE TABLE statement will explicitly say either SEGMENT CREATION DEFERRED or SEGMENT CREATION IMMEDIATE. If the value is n, then the SEGMENT CREATION clause is omitted from the CREATE TABLE statement. Set this parameter to n to use the default segment creation attributes for the table(s) being loaded. (This functionality is available starting with Oracle Database 11g release 2 (11.2.0.2).)

The type of value specified depends on the transform used. Boolean values (y/n) are required for the SEGMENT_ATTRIBUTES, STORAGE, and OID transforms. Integer values are required for the PCTSPACE transform.

The object_type is optional. If supplied, it designates the object type to which the transform will be applied. If no object type is specified, then the transform applies to all valid object types. The valid object types for each transform are shown in Table 3-1.

Table 3-1 Valid Object Types For the Data Pump Import TRANSFORM Parameter


Object Type         SEGMENT_ATTRIBUTES  STORAGE  OID  PCTSPACE  SEGMENT_CREATION

CLUSTER             X                   X        --   X         --
CONSTRAINT          X                   X        --   X         --
INC_TYPE            --                  --       X    --        --
INDEX               X                   X        --   X         --
ROLLBACK_SEGMENT    X                   X        --   X         --
TABLE               X                   X        X    X         X
TABLESPACE          X                   --       --   X         --
TYPE                --                  --       X    --        --


Example

For the following example, assume that you have exported the employees table in the hr schema. The SQL CREATE TABLE statement that results when you then import the table is similar to the following:

CREATE TABLE "HR"."EMPLOYEES" 
   ( "EMPLOYEE_ID" NUMBER(6,0), 
     "FIRST_NAME" VARCHAR2(20), 
     "LAST_NAME" VARCHAR2(25) CONSTRAINT "EMP_LAST_NAME_NN" NOT NULL ENABLE, 
     "EMAIL" VARCHAR2(25) CONSTRAINT "EMP_EMAIL_NN" NOT NULL ENABLE, 
     "PHONE_NUMBER" VARCHAR2(20), 
     "HIRE_DATE" DATE CONSTRAINT "EMP_HIRE_DATE_NN" NOT NULL ENABLE, 
     "JOB_ID" VARCHAR2(10) CONSTRAINT "EMP_JOB_NN" NOT NULL ENABLE, 
     "SALARY" NUMBER(8,2), 
     "COMMISSION_PCT" NUMBER(2,2), 
     "MANAGER_ID" NUMBER(6,0), 
     "DEPARTMENT_ID" NUMBER(4,0)
   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
  STORAGE(INITIAL 10240 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 121
  PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
  TABLESPACE "SYSTEM" ;

If you do not want to retain the STORAGE clause or TABLESPACE clause, then you can remove them from the CREATE STATEMENT by using the Import TRANSFORM parameter. Specify the value of SEGMENT_ATTRIBUTES as n. This results in the exclusion of segment attributes (both storage and tablespace) from the table.

> impdp hr TABLES=hr.employees DIRECTORY=dpump_dir1 DUMPFILE=hr_emp.dmp
  TRANSFORM=SEGMENT_ATTRIBUTES:n:table

The resulting CREATE TABLE statement for the employees table would then look similar to the following. It does not contain a STORAGE or TABLESPACE clause; the attributes for the default tablespace for the HR schema will be used instead.

CREATE TABLE "HR"."EMPLOYEES" 
   ( "EMPLOYEE_ID" NUMBER(6,0), 
     "FIRST_NAME" VARCHAR2(20), 
     "LAST_NAME" VARCHAR2(25) CONSTRAINT "EMP_LAST_NAME_NN" NOT NULL ENABLE, 
     "EMAIL" VARCHAR2(25) CONSTRAINT "EMP_EMAIL_NN" NOT NULL ENABLE, 
     "PHONE_NUMBER" VARCHAR2(20), 
     "HIRE_DATE" DATE CONSTRAINT "EMP_HIRE_DATE_NN" NOT NULL ENABLE, 
     "JOB_ID" VARCHAR2(10) CONSTRAINT "EMP_JOB_NN" NOT NULL ENABLE, 
     "SALARY" NUMBER(8,2), 
     "COMMISSION_PCT" NUMBER(2,2), 
     "MANAGER_ID" NUMBER(6,0), 
     "DEPARTMENT_ID" NUMBER(4,0)
   );

As shown in the previous example, the SEGMENT_ATTRIBUTES transform applies to both storage and tablespace attributes. To omit only the STORAGE clause and retain the TABLESPACE clause, you can use the STORAGE transform, as follows:

> impdp hr TABLES=hr.employees DIRECTORY=dpump_dir1 DUMPFILE=hr_emp.dmp
  TRANSFORM=STORAGE:n:table

The SEGMENT_ATTRIBUTES and STORAGE transforms can be applied to all applicable table and index objects by not specifying the object type on the TRANSFORM parameter, as shown in the following command:

> impdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp SCHEMAS=hr TRANSFORM=SEGMENT_ATTRIBUTES:n

TRANSPORT_DATAFILES

Default: There is no default

Purpose

Specifies a list of data files to be imported into the target database by a transportable-tablespace mode import, or by a table-mode import if TRANSPORTABLE=ALWAYS was set during the export. The data files must already exist on the target database system.

Syntax and Description

TRANSPORT_DATAFILES=datafile_name

The datafile_name must include an absolute directory path specification (not a directory object name) that is valid on the system where the target database resides.

At some point before the import operation, you must copy the data files from the source system to the target system. You can do this using any copy method supported by your operating system. If desired, you can rename the files when you copy them to the target system (see Example 2).

If you already have a dump file set generated by a transportable-tablespace mode export, then you can perform a transportable-mode import of that dump file, by specifying the dump file (which contains the metadata) and the TRANSPORT_DATAFILES parameter. The presence of the TRANSPORT_DATAFILES parameter tells import that it is a transportable-mode import and where to get the actual data.

Depending on your operating system, the use of quotation marks when you specify a value for this parameter may also require that you use escape characters. Oracle recommends that you place this parameter in a parameter file, which can reduce the number of escape characters that might otherwise be needed on the command line.

Restrictions

  • The TRANSPORT_DATAFILES parameter cannot be used in conjunction with the QUERY parameter.

Example 1

The following is an example of using the TRANSPORT_DATAFILES parameter. Assume you have a parameter file, trans_datafiles.par, with the following content:

DIRECTORY=dpump_dir1
DUMPFILE=tts.dmp
TRANSPORT_DATAFILES='/user01/data/tbs1.dbf'

You can then issue the following command:

> impdp hr PARFILE=trans_datafiles.par

Example 2

This example illustrates the renaming of data files as part of a transportable tablespace export and import operation. Assume that you have a data file named employees.dat on your source system.

  1. Using a method supported by your operating system, manually copy the data file named employees.dat from your source system to the system where your target database resides. As part of the copy operation, rename it to workers.dat.

  2. Perform a transportable tablespace export of tablespace tbs_1.

    > expdp hr DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=tbs_1
    

    The metadata only (no data) for tbs_1 is exported to a dump file named tts.dmp. The actual data was copied over to the target database in step 1.

  3. Perform a transportable tablespace import, specifying an absolute directory path for the data file named workers.dat:

    > impdp hr DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp
    TRANSPORT_DATAFILES='/user01/data/workers.dat'
    

    The metadata contained in tts.dmp is imported and Data Pump then assigns the information in the workers.dat file to the correct place in the database.

TRANSPORT_FULL_CHECK

Default: NO

Purpose

Specifies whether to verify that the specified transportable tablespace set is being referenced by objects in other tablespaces.

Syntax and Description

TRANSPORT_FULL_CHECK=[YES | NO]

If TRANSPORT_FULL_CHECK=YES, then Import verifies that there are no dependencies between those objects inside the transportable set and those outside the transportable set. The check addresses two-way dependencies. For example, if a table is inside the transportable set but its index is not, then a failure is returned and the import operation is terminated. Similarly, a failure is also returned if an index is in the transportable set but the table is not.

If TRANSPORT_FULL_CHECK=NO, then Import verifies only that there are no objects within the transportable set that are dependent on objects outside the transportable set. This check addresses a one-way dependency. For example, a table is not dependent on an index, but an index is dependent on a table, because an index without a table has no meaning. Therefore, if the transportable set contains a table, but not its index, then this check succeeds. However, if the transportable set contains an index, but not the table, then the import operation is terminated.

In addition to this check, Import always verifies that all storage segments of all tables (and their indexes) defined within the tablespace set specified by TRANSPORT_TABLESPACES are actually contained within the tablespace set.

Restrictions

  • This parameter is valid for transportable mode (or table mode when TRANSPORTABLE=ALWAYS was specified on the export) only when the NETWORK_LINK parameter is specified.

Example

In the following example, source_database_link would be replaced with the name of a valid database link. The example also assumes that a data file named tbs6.dbf already exists.

Assume you have a parameter file, full_check.par, with the following content:

DIRECTORY=dpump_dir1
TRANSPORT_TABLESPACES=tbs_6
NETWORK_LINK=source_database_link
TRANSPORT_FULL_CHECK=YES
TRANSPORT_DATAFILES='/wkdir/data/tbs6.dbf'

You can then issue the following command:

> impdp hr PARFILE=full_check.par

TRANSPORT_TABLESPACES

Default: There is no default.

Purpose

Specifies that you want to perform an import in transportable-tablespace mode over a database link (as specified with the NETWORK_LINK parameter.)

Syntax and Description

TRANSPORT_TABLESPACES=tablespace_name [, ...]

Use the TRANSPORT_TABLESPACES parameter to specify a list of tablespace names for which object metadata will be imported from the source database into the target database.

Because this is a transportable-mode import, the tablespaces into which the data is imported are automatically created by Data Pump. You do not need to pre-create them. However, the data files should be copied to the target database before starting the import.

When you specify TRANSPORT_TABLESPACES on the import command line, you must also use the NETWORK_LINK parameter to specify a database link. A database link is a connection between two physical database servers that allows a client to access them as one logical database. Therefore, the NETWORK_LINK parameter is required because the object metadata is exported from the source (the database being pointed to by NETWORK_LINK) and then imported directly into the target (database from which the impdp command is issued), using that database link. There are no dump files involved in this situation. You would also need to specify the TRANSPORT_DATAFILES parameter to let the import know where to find the actual data, which had been copied to the target in a separate operation using some other means.
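
If the database link does not already exist on the importing (target) database, a hedged sketch of creating one follows; the user name, password, and connect string are placeholders:

CREATE DATABASE LINK source_database_link
  CONNECT TO hr IDENTIFIED BY hr_password
  USING 'source_db_tns_alias';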


Note:

If you already have a dump file set generated by a transportable-tablespace mode export, then you can perform a transportable-mode import of that dump file, but in this case you do not specify TRANSPORT_TABLESPACES or NETWORK_LINK. Doing so would result in an error. Rather, you specify the dump file (which contains the metadata) and the TRANSPORT_DATAFILES parameter. The presence of the TRANSPORT_DATAFILES parameter tells import that it's a transportable-mode import and where to get the actual data.

Depending on your operating system, the use of quotation marks when you specify a value for this parameter may also require that you use escape characters. Oracle recommends that you place this parameter in a parameter file, which can reduce the number of escape characters that might otherwise be needed on the command line.

Restrictions

  • You cannot export transportable tablespaces and then import them into a database at a lower release level. The target database into which you are importing must be at the same or higher release level as the source database.

  • The TRANSPORT_TABLESPACES parameter is valid only when the NETWORK_LINK parameter is also specified.

  • Transportable mode does not support encrypted columns.

Example

In the following example, the source_database_link would be replaced with the name of a valid database link. The example also assumes that a data file named tbs6.dbf has already been copied from the source database to the local system. Suppose you have a parameter file, tablespaces.par, with the following content:

DIRECTORY=dpump_dir1
NETWORK_LINK=source_database_link
TRANSPORT_TABLESPACES=tbs_6
TRANSPORT_FULL_CHECK=NO
TRANSPORT_DATAFILES='/user01/data/tbs6.dbf'

You can then issue the following command:

> impdp hr PARFILE=tablespaces.par

TRANSPORTABLE

Default: NEVER

Purpose

Specifies whether the transportable option should be used during a table mode import (specified with the TABLES parameter) to import only metadata for specific tables, partitions, and subpartitions.

Syntax and Description

TRANSPORTABLE = [ALWAYS | NEVER]

The definitions of the allowed values are as follows:

ALWAYS - Instructs the import job to use the transportable option. If transportable is not possible, then the job will fail. The transportable option imports only metadata for the specified tables, partitions, or subpartitions specified by the TABLES parameter. You must copy the actual data files to the target database. See "Using Data File Copying to Move Data".

NEVER - Instructs the import job to use either the direct path or external table method to load data rather than the transportable option. This is the default.

If only a subset of a table's partitions are imported and the TRANSPORTABLE=ALWAYS parameter is used, then each partition becomes a non-partitioned table.

If only a subset of a table's partitions are imported and the TRANSPORTABLE parameter is not used or is set to NEVER (the default), then:

  • If PARTITION_OPTIONS=DEPARTITION is used, then each partition is created as a non-partitioned table.

  • If PARTITION_OPTIONS is not used, then the complete table is created. That is, all the metadata for the complete table is present so that the table definition looks the same on the target system as it did on the source. But only the data for the specified partitions is inserted into the table.

Restrictions

  • The Import TRANSPORTABLE parameter is valid only if the NETWORK_LINK parameter is also specified.

  • The TRANSPORTABLE parameter is only valid in table mode imports (the tables do not have to be partitioned or subpartitioned).

  • The user performing a transportable import requires the DATAPUMP_EXP_FULL_DATABASE role on the source database and the DATAPUMP_IMP_FULL_DATABASE role on the target database.

  • To make full use of the TRANSPORTABLE parameter, the COMPATIBLE initialization parameter must be set to at least 11.0.0.

Example

The following example shows the use of the TRANSPORTABLE parameter during a network link import.

> impdp system TABLES=hr.sales TRANSPORTABLE=ALWAYS
  DIRECTORY=dpump_dir1 NETWORK_LINK=dbs1 PARTITION_OPTIONS=DEPARTITION
  TRANSPORT_DATAFILES=datafile_name 

VERSION

Default: COMPATIBLE

Purpose

Specifies the version of database objects to be imported (that is, only database objects and attributes that are compatible with the specified release will be imported). Note that this does not mean that Data Pump Import can be used with releases of Oracle Database earlier than 10.1. Data Pump Import only works with Oracle Database 10g release 1 (10.1) or later. The VERSION parameter simply allows you to identify the version of the objects being imported.

Syntax and Description

VERSION=[COMPATIBLE | LATEST | version_string]

This parameter can be used to load a target system whose Oracle database is at an earlier compatibility release than that of the source system. Database objects or attributes on the source system that are incompatible with the specified release will not be moved to the target. For example, tables containing new datatypes that are not supported in the specified release will not be imported. Legal values for this parameter are as follows:

  • COMPATIBLE - This is the default value. The version of the metadata corresponds to the database compatibility level. Database compatibility must be set to 9.2.0 or higher.

  • LATEST - The version of the metadata corresponds to the database release.

  • version_string - A specific database release (for example, 11.2.0). In Oracle Database 11g, this value must be 9.2.0 or higher.

Example

The following is an example of using the VERSION parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See "FULL".

> impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp TABLES=employees
VERSION=LATEST
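
A specific version_string can also be supplied, which is most useful when importing over a network link into a target whose compatibility setting is lower than that of the source. The following is only a sketch; it assumes a database link named source_database_link and a target compatibility of 10.2:

> impdp hr TABLES=employees DIRECTORY=dpump_dir1 NETWORK_LINK=source_database_link
VERSION=10.2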

Commands Available in Import's Interactive-Command Mode

In interactive-command mode, the current job continues running, but logging to the terminal is suspended and the Import prompt (Import>) is displayed.

To start interactive-command mode, do one of the following:

  • From an attached client, press Ctrl+C while the job is running in logging mode.

  • From a terminal other than the one on which the job is running, use the ATTACH parameter to attach to the job.
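
For example, assuming a stopped or running job named import_hr_schema (the job name here is hypothetical), you could attach to it from another terminal and reach the Import prompt with a command such as the following:

> impdp hr ATTACH=import_hr_schema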

Table 3-2 lists the activities you can perform for the current job from the Data Pump Import prompt in interactive-command mode.

Table 3-2 Supported Activities in Data Pump Import's Interactive-Command Mode

Activity: Exit interactive-command mode.
Command Used: CONTINUE_CLIENT

Activity: Stop the import client session, but leave the current job running.
Command Used: EXIT_CLIENT

Activity: Display a summary of available commands.
Command Used: HELP

Activity: Detach all currently attached client sessions and terminate the current job.
Command Used: KILL_JOB

Activity: Increase or decrease the number of active worker processes for the current job. This command is valid only in Oracle Database Enterprise Edition.
Command Used: PARALLEL

Activity: Restart a stopped job to which you are attached.
Command Used: START_JOB

Activity: Display detailed status for the current job.
Command Used: STATUS

Activity: Stop the current job.
Command Used: STOP_JOB

The following are descriptions of the commands available in the interactive-command mode of Data Pump Import.

CONTINUE_CLIENT

Purpose

Changes the mode from interactive-command mode to logging mode.

Syntax and Description

CONTINUE_CLIENT

In logging mode, the job status is continually output to the terminal. If the job is currently stopped, then CONTINUE_CLIENT will also cause the client to attempt to start the job.

Example

Import> CONTINUE_CLIENT

EXIT_CLIENT

Purpose

Stops the import client session, exits Import, and discontinues logging to the terminal, but leaves the current job running.

Syntax and Description

EXIT_CLIENT

Because EXIT_CLIENT leaves the job running, you can attach to the job at a later time if it is still executing or in a stopped state. To see the status of the job, you can monitor the log file for the job or you can query the USER_DATAPUMP_JOBS view or the V$SESSION_LONGOPS view.
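
For example, a minimal query of the USER_DATAPUMP_JOBS view, issued from SQL*Plus as the user who started the job, might look like the following sketch:

SELECT job_name, operation, job_mode, state FROM user_datapump_jobs;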

Example

Import> EXIT_CLIENT

HELP

Purpose

Provides information about Data Pump Import commands available in interactive-command mode.

Syntax and Description

HELP

Displays information about the commands available in interactive-command mode.

Example

Import> HELP

KILL_JOB

Purpose

Detaches all currently attached client sessions and then terminates the current job. It exits Import and returns to the terminal prompt.

Syntax and Description

KILL_JOB

A job that is terminated using KILL_JOB cannot be restarted. All attached clients, including the one issuing the KILL_JOB command, receive a warning that the job is being terminated by the current user and are then detached. After all clients are detached, the job's process structure is immediately run down and the master table and dump files are deleted. Log files are not deleted.

Example

Import> KILL_JOB

PARALLEL

Purpose

Enables you to increase or decrease the number of active worker processes and/or PQ slaves for the current job.

Syntax and Description

PARALLEL=integer

PARALLEL is available as both a command-line parameter and an interactive-mode parameter. You set it to the desired number of parallel processes. An increase takes effect immediately if there are enough resources and if there is enough work requiring parallelization. A decrease does not take effect until an existing process finishes its current task. If the integer value is decreased, then workers are idled but not deleted until the job exits.


See Also:

"PARALLEL" for more information about parallelism

Restrictions

  • This parameter is valid only in the Enterprise Edition of Oracle Database 11g.

Example

Import> PARALLEL=10

START_JOB

Purpose

Starts the current job to which you are attached.

Syntax and Description

START_JOB[=SKIP_CURRENT=YES]

The START_JOB command restarts the job to which you are currently attached (the job cannot be currently executing). The job is restarted with no data loss or corruption after an unexpected failure or after you issue a STOP_JOB command, provided the dump file set and master table remain undisturbed.

The SKIP_CURRENT option allows you to restart a job that previously failed to restart because execution of some DDL statement failed. The failing statement is skipped and the job is restarted from the next work item.

Neither SQLFILE jobs nor imports done in transportable-tablespace mode are restartable.

Example

Import> START_JOB
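
If the job previously stopped because a DDL statement failed, the SKIP_CURRENT option shown in the syntax above can be added so that the failing statement is skipped when the job restarts:

Import> START_JOB=SKIP_CURRENT=YES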

STATUS

Purpose

Displays cumulative status of the job, a description of the current operation, and an estimated completion percentage. It also allows you to reset the display interval for logging mode status.

Syntax and Description

STATUS[=integer]

You have the option of specifying how frequently, in seconds, this status should be displayed in logging mode. If no value is entered or if the default value of 0 is used, then the periodic status display is turned off and status is displayed only once.

This status information is written only to your standard output device, not to the log file (even if one is in effect).

Example

The following example will display the current job status and change the logging mode display interval to two minutes (120 seconds).

Import> STATUS=120
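
Issuing STATUS with no value displays the job status once and turns off any periodic status display:

Import> STATUS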

STOP_JOB

Purpose

Stops the current job either immediately or after an orderly shutdown, and exits Import.

Syntax and Description

STOP_JOB[=IMMEDIATE]

If the master table and dump file set are not disturbed when or after the STOP_JOB command is issued, then the job can be attached to and restarted at a later time with the START_JOB command.

To perform an orderly shutdown, use STOP_JOB (without any associated value). A warning requiring confirmation will be issued. An orderly shutdown stops the job after worker processes have finished their current tasks.

To perform an immediate shutdown, specify STOP_JOB=IMMEDIATE. A warning requiring confirmation will be issued. All attached clients, including the one issuing the STOP_JOB command, receive a warning that the job is being stopped by the current user and they will be detached. After all clients are detached, the process structure of the job is immediately run down. That is, the master process will not wait for the worker processes to finish their current tasks. There is no risk of corruption or data loss when you specify STOP_JOB=IMMEDIATE. However, some tasks that were incomplete at the time of shutdown may have to be redone at restart time.

Example

Import> STOP_JOB=IMMEDIATE
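
To request an orderly shutdown instead, issue STOP_JOB without a value; the job stops after the worker processes have finished their current tasks:

Import> STOP_JOB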

Examples of Using Data Pump Import

This section provides examples of the following ways in which you might use Data Pump Import:

  • Performing a Data-Only Table-Mode Import

  • Performing a Schema-Mode Import

  • Performing a Network-Mode Import

For information that will help you to successfully use these examples, see "Using the Import Parameter Examples".

Performing a Data-Only Table-Mode Import

Example 3-1 shows how to perform a data-only table-mode import of the table named employees. It uses the dump file created in Example 2-1.

Example 3-1 Performing a Data-Only Table-Mode Import

> impdp hr TABLES=employees CONTENT=DATA_ONLY DUMPFILE=dpump_dir1:table.dmp
NOLOGFILE=YES

The CONTENT=DATA_ONLY parameter filters out any database object definitions (metadata). Only table row data is loaded.

Performing a Schema-Mode Import

Example 3-2 shows a schema-mode import of the dump file set created in Example 2-4.

Example 3-2 Performing a Schema-Mode Import

> impdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=expschema.dmp
 EXCLUDE=CONSTRAINT,REF_CONSTRAINT,INDEX TABLE_EXISTS_ACTION=REPLACE

The EXCLUDE parameter filters the metadata that is imported. For the given mode of import, all the objects contained within the source, and all their dependent objects, are included except those specified in an EXCLUDE statement. If an object is excluded, then all of its dependent objects are also excluded. The TABLE_EXISTS_ACTION=REPLACE parameter tells Import to drop the table if it already exists and then re-create and load it using the dump file contents.

Performing a Network-Mode Import

Example 3-3 performs a network-mode import where the source is the database specified by the NETWORK_LINK parameter.

Example 3-3 Network-Mode Import of Schemas

> impdp hr TABLES=employees REMAP_SCHEMA=hr:scott DIRECTORY=dpump_dir1
NETWORK_LINK=dblink

This example imports the employees table from the hr schema into the scott schema. The dblink references a source database that is different from the target database.

To remap the schema, user hr must have the DATAPUMP_IMP_FULL_DATABASE role on the local database and the DATAPUMP_EXP_FULL_DATABASE role on the source database.

REMAP_SCHEMA loads all the objects from the source schema into the target schema.


See Also:

"NETWORK_LINK" for more information about database links

Syntax Diagrams for Data Pump Import

This section provides syntax diagrams for Data Pump Import. These diagrams use standard SQL syntax notation. For more information about SQL syntax notation, see Oracle Database SQL Language Reference.

ImpInit

(syntax diagram: impinit.gif)

ImpStart

(syntax diagram: impstart.gif)

ImpModes

(syntax diagram: impmodes.gif)

ImpOpts

(syntax diagram: impopts.gif)

ImpFilter

(syntax diagram: impfilter.gif)

ImpRacOpt

(syntax diagram: impracopt.gif)

ImpRemap

(syntax diagram: impremap.gif)

ImpFileOpts

(syntax diagram: impfileopts.gif)

ImpNetworkOpts

(syntax diagram: impnetopts.gif)

ImpDynOpts

(syntax diagram: impdynopts.gif)

ImpDiagnostics

(syntax diagram: impdiagnostics.gif)

4xaB 6tP_|˗O{Ç.̗OÇ>|Äg0_>70Ç>|С|'0{Ç.̗OÇ>|Äg0_>/_/|=|Ç˗`|/_~_>~{Ç̗OÇ>|Äg0_| ̷|>|C;/?_>_>||=|z衇_|80_|׏߾| O`>$XA .dСB}̗/_>}/|Ç>|x0_>}>|Ç˗| /|Ç>|P_|+o| O`>||=|Ç>L/_>~+_| O`>|B}̗` 70?>|ÇÇ>|0|̗| ԗ߿|{Ç*ԗ/| ˷O@}O`>||=|Çz衄/?$XA .dC1bĈ#FX0_>}#F1bĈ/bĈ#F|"F1bĈ ˧/bĈ#F|E1bĈ#ԗ/_Ĉ#F1b|E1bĈ#"/#F1bă1bĈ#F,/#F1bD1bĈ#FxP_|#F1bĈ1bĈ#F_|"F1bĈ˗/bĈ#F`|"F1bĈ˗_Ĉ#F1A}E1bĈ#̗O,h „ 2l!D1bĈ#FxP_|#F1bĈ1bĈ#F_|"F1bĈ˗/bĈ#F`|"F1bĈ˗_Ĉ#F1A}E1bĈ#̗O_Ĉ#F1"B1bĈ#F~#F1bĈ1bĈ#FX0_>}#F1bĈ/bĈ#F|"F1bĈ ˧/bĈ#F|E1bĈ#ԗ`>$XA .dC1bĈ#F_|"F1bĈ˗/bĈ#F`|"F1bĈ˗_Ĉ#F1A}E1bĈ#̗O_Ĉ#F1"B1bĈ#F~#F1bĈ1bĈ#FX0_>}#F1bĈ/bĈ#F|"F1bĈ ˧/bĈ#F|E1bĈ#ԗ/_Ĉ#F1b|E1bĈ#"/@~ H*\ȰÇ˗/bĈ#F`|"F1bĈ˗_Ĉ#F1A}E1bĈ#̗O_Ĉ#F1"B1bĈ#F~#F1bĈ1bĈ#FX0_>}#F1bĈ/bĈ#F|"F1bĈ ˧/bĈ#F|E1bĈ#Ǐ|Ǐ_Ĉ#F!?~˗?~"F1bĈ _Ĉ#F1@8p>$X A;xp_|~#F1bĈ'p@$X ~ ? 4/__>$XA .dC/bĈ#F?̷? 48_|˗;h0_~#F1bĈ'p ,h|˧_|ۗ/_>}ӗ/>/_| 7П?O`|O`|˗|;x`A <0… :|Ѡ|E1bĈ#O@~ HA̗_|˷|O}o߿~'P|'_}/_|_}_>~˧? 4X|'p "Lp!ÆB~#F1bĈ˗/_Ą̗_|ׯ|O ?~/?/_|O`|'0_/?˗_ą˷/bĈ#F|E1bĈ#̗/ G_|/_|/|'p`>/_ O <_|"D!B"D!B"/?"D!B"D!Bˇ!'П|'0_>/_˗_|˷o`|3/@'0_>!D`|"D!B"D!B"/?"D!B"D!BC!A} ? o/߾˷_|(`>o~߿/߾'? 4xP | H*\ȰÇ˗_Ĉ#F1"~ӗ/_>~/_|/_|'_|/_|˗/?OA~'0_E1bĈ#&/#F1bĈ#F1bĈ#F1bĈ /bĈ#F1bĈ#F1bĈ#F1bB1bĈ#F1bĈ#F1bĈ#F_|"F> 4xaB 6tbD)Vx#} H*\x_|2dp } H*\ȰÇ#JHŋ  <0…Ǐ!C ǐ!C 2dȐ!C 2dȐ!C 2d } 2dȐ?cȐ!Á1dȐ!C 2dȐ!C 2dȐ!C 24oC 2d_|2dp } 2dȐ!C 2dȐ!C 2dȐ!C ǐ!C /? 2oC 2dȐ!C 2dȐ!C 2dȐ!C1dȐ!CǏ!C ǐ!C 2dȐ!C 2dȐ!C 2d } 2dȐ?cȐ!Á1dC 1C 1C 1C 1C 1C } H*\x_|2dp } 2dȐ!C 2dȐ!C 2dȐ!C ǐ!C /? 2oC 2d!C 2dȐ!C 2dȐ!C1dȐ!CǏ!C ǐ!C 2dȐ!C 2dȐ!C 2d } 2dȐ?cȐ!Á1dȐ!C 2dȐ!C 2dȐ!C 24oC 2d_|2dp } 2dȐ!C 2dȐ!C 2dȐ!C ǐ!C /? 2oC 1C 1C 1C 1C 1C 1ĐA <0…Ǐ!C ǐ!C 2d8_|ܗ/>%ܗ/? W0_|ca> 2dȐA~2dȐ!ǐ!CcȐ!C 2? O`|CO}2dH0_|cȐ!C 2dȐ } 2dȐ?cȐ!Á1dȐ!C 7__> o_|ۗ|ۗ|˗@}/|O?~| /ϟ?}?}o_>~8`A&T?~6lذ!Bkذ!C~6lذaÆo`>C}ϟ|ϟ|O|߾}+o ?}o?/?~?~o߿|߾}kذaÆ װaÆ ˗_Æ װaÆ 6L؏|3_> O`>O`>'0| W0_| '0|@~'P> '0߿|5lذaÆ? 4xaB ˗C ䷏!C 2dP?AC| '0| '0߿~'0| W0| ̗ |o_>~__O@ DPBkذaÆǯaÆ kذaÆ o`|'P~!O }O`>O`>_>o? 3O}ӷO` |O`>_ 6l} 6l|5lؐ!} 6lذaC˷o`}_|+/_}˗| '0|/>}?}+`>ϟ|/_>~˗|/߿ϟ|'p "Lp!5lذaCװaC5lذaÆ &װaÆ 6lHП?6lذa5lذaCװaC5lذaÆ 6lذaÆ 6/_|6lذa5lذaCװaC5lذaÆ 6lذaÆ 6lذaÆ װaÆ ˗_Æ װaÆ 6lذaÆ 6lذaÆ 6~ 6do_Æ 6lذaÆ aÆ 6lذaÆkذaÆǯaÆ  <0… :|1b|"F1bĈ1bĈ/bĈE1bĈ#F̗O_Ĉ#F1C~"F|E߾#F1bĈ1bĈ#Fto_Ĉ#/#1bĈ#F1_>}#F1bĈ1bĂ1bD~"F1bĈ#˧/bĈ#F!}#FX_|"Fo_Ĉ#F1b|E1bĈ#:/bĈ ˗_Ĉ1bĈ#F/#F1bDEQD/_>~ HOB *TPB *T80_>} *TPB *TPB~*TPBOB OB *TPB *T80_>} *TPB *TPB~*TPBOB OB *TPB *T80_>} *TPB *TPB~*TPBOB OB *TPB *T80_>} *TPB *TPB~*TPBOB OB *TPB *T80_>8`A&TaC!:/bĈ ˗_Ĉ'p "Lp!ÆB/_|%J(QDO@ DPƒǐ!CO@ DPB >1_|#F1bĈO@ DPƒǐ!CcȐ!C 2dȐ!ÄcȐ!C 2dȐ!C1dȐ!CǏ!C ǐ!C 2dȐ!C ǐ!C 2dȐ!CcȐ!C ˗C ䷏!C 2dȐ!C ˧!C 2dȐ!C ǐ!C /? 2oC 2dȐ!C Ǐ|ǯC 2dȐ!C  ䷏!C 2/_>~ 2d8> 2dȐ!C 2,`>O@ DPB >߾#F,/_>~#F/bĈ#F!|  <0… :|q!}#FX_|"Fo_Ĉ#F1C~ ,h „ 2l!ąE1bA1"}#F1bĈ8P,h „ 2l!DE1bA1"}#F1bĈ'P?$XA .dC 1bĂ1bD~"F1bĈ!˗/߾#F1bĆE1bA1"}#F1bĈ˗/bĈ#F!}#FX_|"Fo_Ĉ#F1b|E1bĈ#:/bĈ ˗_Ĉ1bĈ#F/_Ĉ#F1C~"F|E߾#F1bĈE1bĈ#>/bĈ ˗_Ĉ1bĈ#F1bĈ#F߾#F,/_>~#F/bĈ"("("("("O@ DPƒǐ!CcȐ!C 2d_|1dH0_1dȐ!C 2dȐA~2dȐ!ǐ!CcȐ!C 2d0_cp`}cȐ!Â2dȐ!C cȐ!C ˗C ䷏!C 2dȐ!|O_}O | /? 
ԗO | Է/?'p_>~˗ϟ@}㗏@~o_>~ǐ!C 2\oC 2d_|2dp } 2d!b!ڧO?߾?~_O߿_߾}/߿_~?~/߿8`A&TaÂ9tСÁСÄ9tСCO_~ 7_@~O߿|O`>70߿|O`>'| СC2ϡC/?&ϡC:tp?o`~/_A ߿|/߿~_맏|/߿_(߿|/,h „ 2lX>:t8_|:t>:tСÁ _>|>~?}?_>߿~'>~O`>:tСC9tСÁСÄ9tСC/_}˷|O_|ӗ/?'0?~_>˗O>~ۗ?}O`>ӗ/?C:t>:t8|'p "Lh> *TPB *OB O> *TPB *4oB *Tp|)TP)TPB *TP| *T_|SPB *TPB)TPB ˗B *o,h „ 2l!Ĉ'Rh"Fex_|2fo_ƌ3f̘1cƌ3/cƋ1?~2f̘1cƌ3f̘ }3^/ۗ1cƌ3f̘1cƌ˘|e߾3f̘1cƌ3f4o_ƌ/ce̘1cƌ3f̘1A~2f/_>~3/cƌ3f̘1cƌ ۗ1E˘}3f̘1cƌ3fh߾/˗_ƌ˘1cƌ3f̘1cFex_|2f>$XA .dC%NXE˘|e߾3f̘1cƌ3f4o_ƌ/c(߿(?˗/(߿(?˗/(߿(?˗/(߿(?˗/(߿(?˗/(߿(?˗/(߿(?˗/(߿(?˗/(߿(?˗/(߿(߿'p "LpAcȐ!ÁϟA O@O@ O@ O@O@ O@ O@O@ O@ O@O@ O@ O@O@ O@ O@O@ O@ O@O@ O@ O@O@ O@ O@O@ O@ } H*\x_|2dp } 2dȐ!C 2d> 2dȐ!C 2DoC 2d_|2dp } 2dȐ!C 2d> 2dȐ!C 2DoC 2d_|2dp } 2dȐ!C 2d> 2dȐ!C 2DoC 2d_|2dp } 2dȐ!C 2d> 2dȐ!C 2DoC 2d_|2d菡@~2dȐ!C 2d!?} 2dȐ!C 2d> 2d?O@ D/?O@ DPB >_#F1bDE1bAQ |So_Ĉ#F1bĈ#F1bĈE1bAQ ? O@ DPB >QD-^h߾/˗O@~,h B}  <0… :|1ĉ+Z }3^0 <0@}o_>  <0… :|1ĉ+Z>10 <0@}O_>~ӧ? 4xaB 6tB"F1bĈ!1bĈ 'p@} <0… :|Q!?}#F1bĈ1bĈ˗/_?/bĈ#FQ!?}#F1bĈ1bĈǏ>}1bĈ#FTO_Ĉ#F1"D~"F1bąG>/bĈ#FȐ'P8`> $80A#/A#H A $H>$XA .dC/a>~K"D"D/D#` BDoD!B>}C~˗?}Է/OǏ |?}/_>|˗D!B(0A~O|ԧ/>'P|'_|#o_}ۗ/?'0?˗DA"D!G>'|_>/}|O?}/߿_} H*\Ȱ|_㗯`~O`>'0?`>~o߾|/9tH>:tСC#O|˗_>~ W0| ߾~G0?O` '0|ӷϟC:tP`|_>`'0|O`>/| '0>СA~:tСC ԧ ?}#/? /0O?~7P/߿_o_>~ H*\Ȱ|'0߿|70_?}O`>Ǐ_>_|g0?}ϡC9tСC:OA~O?}' O?ӗ?}'>~O`>ϡC |'0?}W0_>_>`>O߾ӷO`>o`> СC:tP>}KO_|O_>߿/?}'P?}Ǐ߿~O_|O`>~? 4xaB[p|'0?W0|˗O>~ _ӧ߿|˷O`>ϟ| '0|[P!} .\p… .80|0D"D!Bԧ ?} 7p_>~O`O_>}O '0| /˗?}/?#o_~/_~~!B(0A~ GP|˗@}@~`} o_|˗ϟ|/_>!"D!BQ>%>_߿|o߿| ߿|_>7P ?~ϟ>/@~(0?ϟ>/?} H*\/| G0_'0|?} G0˷o?_>?~2d8> 2dȐ!C G_O``>70|߾| O` '0| G_>!C ̗/_>}'0| O`>#/| /_O`>}!CcȐ!C 2d>}K0@/߿__> '0|O`_?}Ǐ_>'P80|맏| O,h@~"Dp`>`/| >~ G0|_>o_>~"Dx>"D!B"D!B}Ӈ__>/?~W0|_>3/?~_>' O`O`>!‚!D|'0|ۗ?O`O`>_>}O>"D }"D!B!B!?O@#O_ ϟ| 'p_>}˗/_'0|ۗO`> |˧Oӗ/? '0|˧O` 䧯` ,| G0|˗O>~ _ӧ߿|˷O`>ϟ| '0|+X` ? 4xaB 6t_>~ #OÃ.O྇>|P ?}><Ç>LoÇ>|!|SOA~OÅÇ>OÇ{ÇÇ>D`>'p@} <0… :|Q~#F1bĈ1bĂ8`A0'p>} <0… :|1ĉ+Z>10 <0@}O? 4xaB 6tbD)Vx#}1b`> 4xa>8`A&TaC!F8bE1ۇ# /_>~/bĈ 1bĂ1bD~"F1bĈ!1bć˗|Eq!}#FX_|"Fo_Ĉ#F1"D~"F|O@? 4xaB[p… ˗… o… .\p… .\p… 80$/_> H*O@? 4xaB[p… ˗… o… .\p… .\p… 8p?$(P_|8`A&Tx߾ .\|-\pA~.\p… .\p… .\|˗,h „ ۷p… / .~ .\x߾ .\p… .\ .\|ԗ/ .\x߾ .\|-\pA~.\p… .\paB~.\p[XP_|.\pA~.\pB/_>~ H0 <0… :|1ĉ+Z} H*\x_|2dp ?} H*\ȰÇ#JHq| ˗_ŊO@ DPƒǐ!CcȐ!C 2dȐ!C 2dȰ|ԗ/? 2d(> 2d|1dȐ@~2dȐ!C 2dȐ!C 2,/_>!C  ䷏!C 2/_>~ 2d8> 2dȐ!C 2dȐ!C ˗!A}cȐ!CcȐ!C ˗C ䷏!C 2dȐ!C 2dȐ!ÂcHP_|2dȐ@~2dȐ!ǐ!CcȐ!C 2dȐ!C 2dҗ/,O_| 4hРA oA 4hРA /? 4hР'p "Lp!ÆB(q"Ņ僨/_>+R䷯bŊ ǯbńUXbŊ+VP_|bEUXQ|U߾+VXbŊ˗|UH߾+*/WbŊ+VXB}Aԗ/)WbEWbB~*VXbŊ+'p ̗/Bˇ!B8`Aۇ!B"D|!D!BO@ DPB >Q|8`A˗!|Ca|8`Aۇ!B"D|!D!BCh0_|!?~ ˗Bo |"D!B"Dx0?ԗ/B!‚!D }"D!B˗B"D8> |!4O}̇`}C!B C!B!BCP_|"DX0?!B"D?C!Bۇ`'0?}˧| Ǐ@}/_~/'_| O /_?'_|O@~o_~/_>|8`A&ϟB ˧Р|)T0? OB *T8_|*TP?~'>O` '0? '0?~/_'_|/?߿|߾}/@~󗯟|O`>~ ̧PB SP| Ǐ|ǏBSP?~*TPBOB O!A /߿_߿| o7߿|O O߿|'0|/|O`>@~ '0,h „SP| 0'p  w;x~߿| /߿o@߿| HOBS8p| O@ w|_>'?_O`>_>~O`>'< ԗ/ o,h`><8߾ӧO`>ӗ/? ˗o@~ϟ|'0?W0?/_>}'0?~ϟ|/_>~ '0?'0|;x ww>O@ w<8߾囸0 8q"D8!}7_'N8qĄM\/_M\DM8|Mؐ߾/ĉ'N8qb|&.Ǐ_D7!}'N/_>~'6o|_|ۗ@~/?}| '0@~O`>ϟ|M8qb|'p "Lp!C5lX߾ 6l_|6lȐ߾| '0'0߿}_>_>/@~ _>߾ 6lذ|6lذaC5lX߾ 6l_|6lȐ߾O߿|O`>70߿|#o`>@~ /?aÆ 6lH0 6lؐ`> ䷯aÆ "/ 2䷯a| ϟ/|O?~_ ߿߿|7Po@? 4xaB 6ϟC:dϟCsСCϡC sx0?~˗?_>߿~'O`O ?>~70C:t0!?~sP`>sp }:tp|9t0!}/|ӧ߿|/>}˷/?GП| ܗOO|/_>~˗C:t0`>:0,h_><8߾ '0|O ? |}߿|ȯa>"D "D"D!Bb|'P`_?~߿|߿?_>~O@+X` <0…Ǐ!C ǐ!C 2dȐ!C 2L|O`>0@߿| O?~(0|(_ ,>$XA .~ 2d8> (,80A 4h| H*\Ȱa> _ '0?}/A}?'0߿|O ? 
kذ } 6l|5lؐ!} O_Â6<| 6lذaÁ-̗/_>}˧O|/_>~/_~˧O?/|˗_|6,o_Æ 6D/_>~ 6do_C/_>˗߿}>/o_|ۗ?}/?O_>/_>ۗ,h „ 2_Æ 6lH0 װaÆ ˗_Æ a'߿|۷o?׏߿~ȏ }۷}O` '0o߾ _ 6lP`> 6lذ!|6,o_Æ 6D/_>~ 6do_C /ӷ߿߿|_>~߿|߿_?~/߿߿|/,h „ 2_Æ 6lH0 װaÆ ˗_Æ 0_| `>/|_o@߿߿|/߿_'߿_>$XA .d(0 6lؐ`> ䷯aÆ "/ 2䷯!|O ?ӷ| 䧏?}7P>/| O?'0|_>kذaÆ װaÆ aÂ5lذaCװaC5D`>_|ӗ/_?ϟ|/_~O`>O_|O`>~ӗ/_?kذaÆ װaÆ aÂ5lذaC/?$XA ۧPB *TPB *TPa|*TPB P)TPB ˗B *oB *TPB *TPB SPB *ϟB ۧPB */_>~ *T} *TPB *TPB &OB *TP`> *oB *Tp|)TP)TPB *TPB ǏBSPB *ϟB ۧPB */_>~ *T} *TPBRH!RH!_|8`A;x 'QD ؐ>%J/_>~%>OD%J_D~h0?%JTϟĆI(Q?(!}%J(QĈA/?(QDIloD%/?'QD%J/_>'`>%J0? (Q'QC~$J(QD!|I4ϟD%*ObC~$J(|I>%J(QC~_|$OD(?$X!D!B"/_>~"D@~"D!B"D!BO@~ H |,X |8`A&T!|6,o_Æ 6D/_>~ 6do_Æ 6lذaÆ 'p $/_> ,H0?$XA .dH0 װaÆ ˗_Æ װaÆ 6lذB̷? / $,h „ 2$_ÆkذaÆǯaÆ kذaÆ 6l0>˗_ ? 4xaB aÂ5lذaCװaC5lذaÆ 6l|˗_ ? 4xaB ? 4xp }"D!B˗B"D8>"D!B"D!B/_|㇐ |!D`> 4xaB  <8>"D!B!B"oB"D!B"D!BC_|"D!B"D!B!B"D?C!Bۇ!B"D!B"DР|"/?"D!B"D!BC!B"D_|"D!!D!B"D!B"4/_>!B"D!B"DaA~"D!B/?"Dp }"D!B"DB!A˗? /_> H*\ȰÇ 1bĈ/bĈE1bĈ#ԗ/D~1bĈ#F,o_Ĉ#/#1bĈ#FhP_|/bĈ#F }#FX_|"Fo_Ĉ#F1A}A/#F1bĂE1bA1"}#F1bĈ_|"F1bĈ 1bĈ/bĈ8`A&TaC!F8bE1Է? 4xaB ˗C ? 4xaB 6tA}Aܗ/_#F1bĂ'p "LpAcȐ!Á1dȐ!C 2dP|/? 2dȐ!C &䷏!C 2/_>~ 2d8> 2dȐ!C *ԗ/Cǐ!C 2dȐ!Ä1dȐ!CǏ!C ǐ!C 2dȐ!CcH_|2dȐ!C 2d> 2d|1dȐ@~2dȐ!C 2dP_| ˗C 2dȐ!C ǐ!C /? 2oC 2dȐ!C ˗!A~ <0… :| }#FX_|"Fo_Ĉ#F1A}A/#F1bĂE1bA1"}#F1bĈ_|"F1bĈ 1bĈ/bĈE1bĈ#ԗ/~˗/>}E1bĈ# /bĈ ˗_Ĉ1bĈ#F4/_(W 8p,h „ 2l!DE1bA1"}#F1bĈװ| O@ O>}#|w|'p ̗/_>~ gРA 4hРAgР~  $O}O?~˧|7ПO_>}O`} /_|'0|O|o|˗ϟ@~߾| / /_?}/_?'_|ӗ/? <0[H_|-\Hp_}O`>@~7_>'0|o? p…-\p…o… ۷P`O  o߾G0??} ߾} /O .\O_| ˗…'0?~ '0? ߿O_?}/߿~_맏|ۗ@8`A/a„ &L|%L0aB%$0@_O|_O`O>? 4xaB˷|.,/>O`>'???߾|| o… [p… ˗… o|0@/߿|@>~>~ 70_?}o_>~(,h „ ˧~ "O`| '0?W0_|/>}/_>}ӗ/߾ӧ|/_>~ *TH> *TP@SPBS80@'|O߾7p__O`>| O?)TP‚SPB *TPB *oB *Tp|)TP)`|O_>} ˗/?ӧ |˧O_|/_~˗,h „ 2l!Ĉ'RTo_Ŋ˗_Ŋ XbŊ+VXbŊWbEWbB~*VXbŊ+VXbEUXQ|U߾+VXbŊ˗| UXB~*V_|*VLo_Ŋ+VXbE_>*VXQ!}+VT/_>~+&䷯a|a+VXb|߾| /ϟ|UX?~*V_|*VLo_|/_|+VXbE ?˷o?_/'p "Lp!Ã5lذaCװaC57P߾˗ϟ@~~aÆ 6lذaÆ(?/߿˧߿O@ DPB kذaÆǯaÆ kO}۷}/߿|} 6lذaÆ 6,o|_o_>~ 6lذA~6lذ!Bkذ!C~'P`__>}? 4xaB 6t|O߾ӷO`|#F߾#F,/_>~#F䷯a/|O~1bĈ#Fd/_|O_|ӗ/_/_|"F1"}#FX_|"Fo_|}o|Ǐ|#F1bĈ#F1bDE1bA1"} ˗Oӗ/__>˗_Ĉ#F1bĈ#F!}#FX O@ D } H*\ȰÇ#JHŋ ۗ1E˘}3f̘1cƌ3fh߾/˗_ƌ˘1cƌ3f̘1cFex_|2fo_ƌ3f̘1cƌ3/cƋ1?~2f1cƌ3f̘ }3^/ۗ1cƌ3f̘1cƌ˘|e߾3f̘1cƌ3f4o_ƌ/ce̘1cƌ3f̘1A~2f/_>~3/cƌ3f̘1cƌ ۗ1E˘?8`A&TaC!F8bE1/cƋ1?~2f̘1cƌ3f̘ }3^/ۗ1cƌ3f̘1cƌ˘|e@},h „ 2l!Ĉ'Rh"F8`A&T|1dȐ!A$XA .dC˗oD%J(",h „ /? 2dȐ!C 2dȐ!C Ǐ!C 2dȐ!C 2dȐBcȐ!C 2dȐ!C 2d_|2dȐ!C 2dȐ!C ./? 2dȐ!C 2dȐ!C Ǐ!C 2dȐ!C 2dȐBcȐ!C 2dȐ!C 2d_|2dȐ!C 2dȐ!C ./? 2dȐ!C 2dȐ!C Ǐ!C 2dȐ!C 2dȐBcȐ!C 2dȐ!C 2d |'p "Lp!ÆB(q"łWbŊ+VXbA~XbŊ+VX|UXbŊ+VX_|*VXbŊ+V,/_>~+VXbŊ+/+VXbŊ _Ŋ+VXbŊǯbŊ+VXbłWbŊ+VXbA~XbŊ+VX|UXbŊ+VX_|*VXbŊ+V,/_>~+VXbŊ+/+VXbŊ ˗_Ŋ+VXbŊǯbŊ+VTQEUT@ <0… :|1ĉ ˗_Ŋ+VXbŊǯbŊ+VXbłWbŊ+VXbAXbŊ+VX |UXbŊ+VX_|*VXbŊ+V,/_>~+VXbŊ+/+VXbŊ ˗_Ŋ+VXbŊǯbŊ+VXbłWbŊ+VXbAXbŊ+VX |UXbŊ+VX_|*VXbŊ+""|UXbŊ+VX_|*VXQEUTQE/,_|8`A&TaC!F8bAXbŊ+Vxp_|˗_Ŋ+VXbŊǯbŊ+VX|{/_>~+VXbŊ+/+VXbŊ˗/?WbŊ+VXbAXbŊ+VX|'p A <0… :|1ĉ ˗_Ŋ+VXbł8/_>~ H*\ȰÇ#JH|UXbŊ+V$`>'p  <0… :|1ĉ ˗_Ŋ+VXbE8P>$(_|8`A&TaC!F8bAXbŊ+V8|O@? 4xaB 6tbD)/+VXbŊ!|UXbŊ+VX_|*VXbŊ+/WbŊ+VXbAXbŊ+Vx_|_Ŋ+VXbŊǯbŊ+VXA~{/_>~+VXbŊ+/+VXbŊ!|UXbŊ+VX_|*VXbŊ+/WbŊ+VXbAXbŊ+Vx_|˗_Ŋ+VX** O@ DPB >QD_|(RH"E)R|/_>~)RH"E!˗C~H"E)RH|QH"E)R/_>Ǐ"E)RH"ŇG"E)RH"|=/?)RH"E˗E)RH"E_|(RH"E)R|/_>~)RH"E!˗C~H"E)RH|QH"E)R/_>Ǐ"E)RH"ŇG"E)R"/?$X_|8`A&TaC!F8bAXbŊ+Vx_|˗_Ŋ+VXbŊǯbŊ+VXA~{/_>~+VXbŊ+/+VXbŊ珡?˷ϟ?+VXbŊ+ /+VXbŊ!|O@ DPB >QDǯbŊ+VXA~c`>O@ DPB >QDǯbŊ+VXA~k08`A&TaC!F8@XbŊ+Vx_|O~ H*\ȰÇ#JHq|UXbŊ+V8? 4xaB 6tbD)/+VXbŊa|WbŊ+VX"AXbŊ+Vx_|˗O_Ŋ+VXbŊǯbŊ+VXA~{/_>+VXbŊ+/+VXbŊ>}+VXbŊ+/+VXbŊ?+VXbŊ+/+VXbŊbŊ+VXbŊǯbŊ+VXA~XbŊ+VX@X**: ~O@ DPB >QD˗E-Zh"E~ ? 
,h „ 2l!Ĉ'RD/_>~+VXbŊO|,hP}O@ DPB >QDG"E)RH?~(P>~׏?}˗O~ H*\ȰÇ#J_|(RH"E)/_|G1~˗O?)RH"E˗E)RH"˗o_?ӗ/~)RH"E)˗E)RH~"EǏ"E)RHDH"E)RO_|(R\o_|(RH"E)B/?)RH"EG"}H"E)R_|(RH"EǏ"ʼnG"E)RH|QH"EQ@O@ DPA}-\p… .\p… .,/_>~ .\p… .\pB[p… [p… .\p… .\X_|.\p… .\p…[p… p… .\p… .\|-\p… .\p… ˷p… ӧo… .\p… .\p!A[p… .\p… ܗo… ./… .\p… .\pBp… .\p… .'N8qĉ'˗ĉ'N8q8q"}7qĉ'N8Q|M8qĉ'NO_>}'N4/>}M8qĉ'N/_>~'N8qĉ/_>'N/߾}7qĉ'N81|M8qĉ'NO?8qb|ۧϟ?'N8qĉ#˗ĉ'N8q /˗O_?ĉ'N8DE|'p "Lp!ÆB(?}˷ĉ ˗?ϟ'N8qĉoĉ'N8q?~˗/߾~'O_|'p>}8`A&TaC!F_|&N8qĉ'/a}ǯD˗/>~ 8p H*\ȰÇ#J/_>~'N8qĉ?}˧o~&Ǐ|ӷ@O@$XA .dC%F/'N8qĉ?} > O|П ۧo ,h „ 2l!Ĉ#˗ĉ'N8qM$?~8}? 4o H*\ȰÇ#J/_>~'N8qĉ7!@,h}'p "Lp!ÆB(1|M8qĉ'NOD H <0… :|1Ĉ7qĉ'N8?}'p ܧo ,h „ 2l!Ĉ#˗ĉ'N8qMtП ~'N8qĉ7!@,h}'p "Lp!ÆB(1|M8qĉ'NOD H <0… :|1Ĉ7qĉ'N8?}'p ܧo ,h „ 2l!Ĉ#˗ĉ'N8qMtП ~'N8qĉ7!@,h}'p "Lp!ÆB(1|M8qĉ'NOD H <0… :|1Ĉ7qĉ'N8?}'p ܧo ,h „ 2l!Ĉ#˗ĉ'N8qbA? 4x>}8`A&TaC!F_|&N8qĉ ˷!?}'p ܧo ,h „ 2l!Ĉ#˗ĉ'N8qbA~㧐8`A 7? 4xaB 6tbDoĉ'N8 ? #HP`|#H?'p ܧo ,h „ 2l!Ĉ#/@},h „ 2l!Ĉ8P ?o/8P`>$X O@ DPB >Qb} H*\ȰÇ#F߿'p |'_|˗?~o_>}/?˧ϟ}O@ <0… :|1Ĉ8`A&TaC!F?8p|'P?|O?_~ۗ_|8>}8`A&TaC!F8bE1f`>'p?~/|/|ۗ_>߾}˧_?/@ܧo ,h „ 2l!Ĉ'Rh"ƌ˗/)̗O`>}O`/?O|ӗ/> П <0… :|1ĉ+Z1#F~[O|O` 70߿~/>>}˧o?'p }'p "Lp!ÆB(q"Ŋ/b̈߾)̗/_>~ۧO|˷O?}ϟ|/_>}˗o@O@$XA .dC%NXE56oA$XA O@ DPB >QD-^ĘQcC~6O@ 4O@$XA .dC%NXE56oA$XA O@ DPB >QD-^ĘQcC~6O@ 4O@$XA .dC%NXE56oA$XA O@ DPB >QD-^ĘQcC~6O@ 4O@$XA .dC%NXE56oA$XA O@ DPB >QD-^ĘQcC~6O@ 4O@$XA .dC%NXE56oA$XA O@ DPB >QD-^ĘQcC~6O@ 4O@$XA .dC%NXE5:oA$XAO@ DPB >QD-^ĘQC~6O@ 4O?$XA .dC%NXE5>oA$XA'p "Lp!ÆB(q"Ŋ/b̨"?} 'p ܧ? 4xaB 6tbD)VxcFh? 4x>} H*\ȰÇ#JHŋ3jOF HO@ DPB >QD-^ĘQ#D~6O@ 4O,h „ 2l!Ĉ'Rh"ƌ!ӷ @,h}8`A&TaC!F8bE1f_8`A  <0… :|1ĉ+Z1Fm4П $XA .dC%NXE5FocA$XO@ DPB >QD-^ĘQc}m$П $XA .dC%NXE5N̗Ɓ H <0… :|1ĉ+Z1ƉQ @,h~O@ DPB >QD-^ĘQ}П $XA .dC%NXE5n,O_|П /_|8`A&TaC!F8bE1fԸQ!?}ǯC$X_?~˗?$XA .dC%NXE5n|؏|ӷ?'p~ۧ/_| <0… :|1ĉ+Z1ƍO|ԧO>,h „ 2l!Ĉ'Rh"ƌ7r4_?$HП?$XA .dC%NXE5nG!E$YI)UdK1eΤYM9uOA%ZQI.eSQNZUYn+Ҁ;;PKPKN:AOEBPS/img/expracopt.gif .GIF87aUf}}}{{{wwwqqqUUUSSSMMMKKK???;;;777333---!!! vvvppphhhfff```\\\XXXTTTRRRPPPHHHDDD@@@222000...(((&&&$$$""" ,Uf@pH,Ȥrl:ШtJZجvzہaL.t9n|dX~>t,rH\qCpD>D5*  IZDBF #KC1BBEFCl*BMXr"yYCПÇ ʞ @> ChŃX\AG艜I@ 2`A5*}b)TF@(@tׯC:*Qʂ0V"#X%Cݻx#RiaY.77?#Kz #%Qf%dO q|8G䄙],9KSLhF{d%R%|^BKa9!a$12r#w♉=g.mRz qT9)"Z I ޗ$çY, oZ°plF+ACJ5@Dm085ۦ%>4)A + +.>@'>8)V,a 5$AlU覻bI @2sH{,Q*[nblt*HD(õ E<3͙* BQvyNnGX@XAp'|6"!ɳ0Rr޴#+;u as2{+^‚wi=ԈpN(xs`t8)4By`3NU&07?D~h׀zF{[ |C&kyγQlA|(سSD Z2Ź8eH%9œL7jP}4` Ra:~!@J<0`uZ(WjYBEMZ֊jr `J׺ʵ')EY# lR6Uh@JyRWZVNIi2 P #[lp&ISq-EUBqеqmIH 7mxql&" y6!J4] V.f:l@Al!Fd*Xy3pA*q-Gu@ѪČ 8( h"QG/JcK6U-HcLB R& zڥ"2&Sr#E\T1 t?@!y7 Qz{l-ì@ޫ|([U"pIdqJI(~? hc"^s%Vtˢ"3!f HtHc-x4 LѺR hz8 .7MDh`>03\owt>q@l,iT-@k h,j(g< B@n\NB2mC4"б>ywQtCd.baZ_dsb!62"h1BP6Hf.`hhC(5w@ .X9x B xg#x~}Y{X{vl6o2i !`]W F ~MFk'Eo`N_k6dL ˄&TXX6Ņ9AW !2C?d^\=oQ3'rsVȆu>sIPZ~`GPZcHejO o ꣈S8W=u؊BŁGXm0cUx PXc-dFZ%8ˢWV\@_`(03|!XT iYeX!9Cч:2xYbl#ȅ)`18<<"9$Y&y(*al!:<ٓ>@B9DYg  z 1C`pLДV)R9x\;HA;PKJO> PKN:AOEBPS/img/terminat.gif :GIF87aT{wwwqqqYYYUUUKKK???;;;777333)))''' ppphhhfff```XXXPPPHHHFFFDDD@@@888222000&&&$$$""" ,T{pH,Ȥrl:ШtJZجvzF.zn|N~zwFw`aK[E.5B B5.#C#&}~# z B|.C.E5.}HᐵD B#DD+$Y5[AqQ ]+F|'/XE1VqцL1o SBnHG  FXؖ #ؓ ʕHBi)d:2!g1zXH@>!<Ah2a*t ~I6?c"bO 1A`řMcO' pLx`C WYɆ!``+^=UZue4cX2}R#DqabӗswVDDclzFT. 
$5 Vf⽣x_u~?ׇh7LoABg{o*yUho3.4nE ɂDeK24 - cH:P&$X2O~BYCb-^0VI=o06 jK `v'E#B>$< ~7N3Ge3M + dC(paV:B72F\ :h6\ǣOq_gC+ڰꨈ)  TD#(gZ*).Fh[( ?016 zDx81K;Dq <*IBᕠ<)WhC.'Li ff,"\kI(Q <_ b10A {P˗b><^_%fF.*0$2Ҵh -Ƅ 8r'4 HD!1ۥ"(^ULAm&7@Dg.YJ P#Opj҆6ȍ w4ѺUdy/eeكq<(NyJ.,9H- Z&XA c x@`V '%$a̞ Ӽδ| !X8(ueJ| ,Un<>5 쪳A($:7KHR2pl \3X=o*@lP/az*UАuuEuր &yL!@I@owu  Bc+v.Ge9t0.͌Z֠O}b5|VG6B 0tno@DH]ϐ1-^͋a|X EZe$HS*38_T#zzh_-!rGp|͗wjJ n"|1 U8R`A1Y8 g^E-qa#$;\ZWI~Y?ъPE22ٗ:_%=DOY!rw BIPa*LhKiw0!8,FS2F('kG0QȂ.ႉ0\$bf1Adw`Z@DOBP=c~["ׄESh*69h4r^X 6ﱂV aCtJR~8C 4eJuT(&K8u݃SmƉ|vH%}Tr{Xxi;PK PKN:AOEBPS/img/et_access_param.gifI GIF87ay%}}}wwwuuusssqqqkkkUUUMMMKKK;;;777333---))) vvvpppnnnhhhfffddd```XXXPPPJJJHHHDDD@@@>>>888666000***(((&&&""" ,y%.F7.7 11 FҜ7 ;)$1; Qÿn|f9H{FċHQ!rS`,(ZtIӐFH)q`nHEБM=?>sѩpa1v3إu]aԵ;\R`{ȫxi_՝Ž/b[i}@x gFxhM6T xaP]iն;LMݻPqA %W\{DSƜ}v9ˈO+vANp9ѩzIu(ya"'$\x3BxܰVzw5 4 2u5f mҡ&1WIj)@8?`Yg.4a+!W #G&S@V (y_t]a DIE!Wi0MjWHqyH `ţ unI2Kmm\oB Smw>i&nbX\c!VO}d dJG: I7찐C QˤaJfx~n`kNő JmnB 965;er+H_!ߪ>\OS pƯ qn([;›Fld.{!E{o>0b˰¼́> p'FC}HS M?-H-:%rj# =#V p].퉋fA-}bܶg7-]sk?xƃ*NWwmiH=:*H;zu{XR9 C?8ˠ7nkj7;Fk Gt qŹ61s@?g8i{* 'C*AOciN9!G%8P#H@Rv0am>@@>|_ @/4@qW $V\@l^>!僆x $ln ~cF};k'bUUd!I(O/gȠc&h$$IAelSHB8!/$"G:Q#x6 M&_HJ"g ""ULC@5RܥYl##pfj! W2ɩ=.cF 5Hrc"ӾLh ى40GЭi)l P|Z>yUbFM hK^N_'=S*|̛Dܲ`4iB-7MVT se oppiC9I[(2&>N@Ę(PL l_VD7.gBj1" YZD , ZOIƼ8E=Lh(,$`lІHlY$03*+%8 Zs"F}b&_j4!(Һzը!} zı6bYe$OAARkEe+yFHuKMhXֳ5eXhpJų+/URӊhv& Z؋lJmԪb#.ra ly =g|`O8(TBmU;>[| GNdq \.Gnq,0y7&:-xw )n< (ZxA hcD`v !ߎX!bywTYF}?XA؊0RYyxt 1TkE/*%sП¾@b2_=8(Ct^5!}^o[^δ {OWhfMߧfs>w]~@?b#"Vk9֪~î,yƂN~UDԦ˿ %'AQ@XABdD-8Xx C<tApG`&(*,؂9CP ,:<؃C`AG04p>L؄>h6@>p 30$ׅ^H 0HI'pI_xh `K1jr axdszhqfpQ1hhW];x]~@x8I(wŧ\80~hmb;PKeO]N I PKN:A OEBPS/img/et_oracle_datapump.gif&fGIF87aC;{{{wwwsssqqqkkkYYYUUUKKK???;;;777333---)))%%% pppnnnhhhfffbbb```\\\ZZZXXXPPPHHHDDD@@@>>>888444222000(((&&&$$$""" ,C;.77#'; Շ#E޽00E Td!CҊ@`Ç#JT%a>id8Ǐ Cǯ1  !a49 e02]e ϟ@YHwh 댺(ᥠxtj)6< ʵׯXW$8#ҩoRt!q˷/ hPܘ'#mٹKw*Bد˘{0+NєT M+ cWfM?XUvQ^.6S8n^vH(inx`ngpQ-.sӫ- ꉎ b ?Z?]M pY%CLY rJ7ti\#]@Ag zդf.0 ;t&=00a}My+&e4V`P  փ̖\v`>BZnb!X ^,pB9p)t9B^geRUX 6'H=rd;1Vס!% SRѐS"a驖a 8S0>#^J K5ԝ\d#´eIXl3ZB˩+Q{ Cp^ZQI R?5BgktaQfQZQ*#MK# 4 :K?KJky7 .˽YX]iUȿ/2̈ īT]G -}| fo1ʍ)5:BXKgc#` )u̟'eZPOtut`}k0JR07\!t!o_u7. Ee'jѻԇ9edAO$NS.;U;^A0ySW#Bߋ UxgPkE  Zd @/o觯'ԳW#J 6+iu$tˀ̰8p=Q_aw' ,0@&oQ}\ܓ--$?I_LD$¸I!l)bx@ 9Kg!A@ bUZCBqKA6C`bˢ.zV"BĈ:\T-h[Xa֊D9lbXGE m+P!H=<"  uPa!3M*$q'.'@=2p & 0zd=@9U*Yy WēטiV\e*%@E*5 C df)6vlDA(n5%_cZpRd!Nuj@B 'HI$9rC 8Tr?WsB"iPQ/df,FoZi~ sav‚TJ(l[m0Awʬ_qX- s #PX Br*V7!W.>\Ȳ?r!F69}"m\]犉uj炖PJO&J/C%17q[5bTv4af0HCAUuwJ{;|RC׷8.X*;r5r'xfnt"}<jLl]ťh.Tі e;P}/T4+A05h~@%=o)p>~7ŀb&V `CiW^ovqaXR6joL<9 +ȬdgHv4*OEOHȈp22w|s 3ji7w3AӟkG)0F<[#НglZ8vsR>2]Qt.tC ==5]Ñإh=Fzo3-icp%&@r]mjxԫ(PFD@NޚE5 uh+-;؅f !NO;'N{ݫASoy!@ޡ(QNk7nfsWN=+M /N#j`L=VI0 \<0M-;]W#zvU]+_6J[q]sǵl;@wZLN<^x  仡ƧtüͿǕGTx^? }0ȽwO|'ˆGҞI[ŧO5 xAo/ޤޮ~!t$3@* G+N/D{*?!ZAp]g€PN Z}~SDr| ޕ~ep !pLr-|\l!)P~`~YhSbY}VV=8p+ p pZ2[46a#` 0pnc r6хARth-U"C.qVx* js1 uщ:Wq63Xx/AE?YYU#j'\)Ya%b%7pgA,Maظȍs.J /0HN(X)2W@ '!)K _OOK`)K{p.{a%h78҈2S80J."HYQ1*=u4i8َLօDJ۔@w" Z SxQS aq?D1#?HÍQq9ⱃtjR$1dim2︔wiE bAIz ==4RU?qs`!rHNerj8(# RAb) BAͰR)*&IIHDf𑥠B^ωW)d=p09 )0M:03hzy%6IwX~x3X# )Ȅ|MR|{9`0I:s  &{Ƿ)ϗ{G}E~_@ 5+p- >'{ydi gD-) v!8 /`g- {p~ JC$ GuFqjl"I%f [@uD+d @?Z?:`eĥX|`d&c0. 
0BmyELtfmI@h.Gm3ʩ +h^-XV@FsA,s:[)$c*p`'xjg5lOᔘW^De )^Gk8!)la ~SG}8K !~88yH:٧TRmAJjd;y>#H"WMEY{u*vjTt#<+, d[R@09鈔K9/ aWv<{Q@pxp3rCm`b;d{50YqC9u1:K٪=E ̃'ƛ q f@y.b9T!踈"QZ@#s$ s`YgTb ;~IQ,;tO4SgA{zu;َsK9j9ʫdz`W 'PЛuc l'_ z)ҽ𽷆^РpðvY&ʽeuD b6 _Z?\I f ̈ $ZHFI;P0{vfl\)c'eg{.,=[ {k}qx{<1{ÕûGFePB:`j [7 TÒ`"Z7iNR_ Ɠ?j[il@)Yvq$sL.MZ?f fH 3 >M (es2TdZpǝ@]hV N*Bs 2r0l3 08VQ`X\V Bl ilʽ2u[}6THLr2r 4, 8rkBBmVMlL4NVe \7\JB\N&1|OᲸC0IBkK)PV|ČL.X±QVp7AY{S5@yl̖é};ㆊ2!9Vp͑PѲ*M F5E! V@+/`c 'L:*BUE6-mW1Ve@SWκݨQzڤ ݗ4,~-PX" 8B#ԅҘirR$QöФ\̍+ 䴹꫗vԯ!'kyM} .&~)KO<0EWE >lڍEҼzSiq@_?C9\)ܘ^̘m"nl|1Q4*\s'-SAAӍ-8,М7-spT+HBT 0zN3xBudFη 9qq'g ?#|Ez:^_>>a28q}c|\~LN`q:Aȹ ]FQtSD5 x 6}DlzJ*etEL.˭v A:DҸK4##ErY52=䳛VChn:!DB)V9P?~M?Zf19|lOnA/JGd r{ EQtQxYZ8Y}Nn/YeSSNLrɏ"8HXhx9YxPDIi99JZ9a Pr3p2rB p#8{;;H q1b-=]* J=i.hqQJh*+x "`@p\/؎ƅ 00!)p)|Q7L #ZƠ>AxPir?I(y1LCM3g%utJ];@% 0- NA#3+šnj+$`*¡{G@YU _lk-"<EÌM-vqV$j9sȊklp$ΠeJҬF乵lB) 2Yw]gP=ѫX] zXtڢ/hi7n10:[Vo *E8 ןZ5Xb-Hp0na~"'5o '!!I@c6ވc: #cV)iYd aF bRN) T>"Q^eXeNOef&c͘e g` tމjf4v hs͟]b.E{F:R>pB=c~ *3 j|b7w0e74S^ƄꭎߥI`gBB 4IYO N*Dʦri&!*Awq C "_v$}M"/ӗ1Nۀ+pC(]}\RDj5 س9d0JO{;.WIɝ"Z@e, $EA>~0;t?޶w40>L R   mȜ6kl:u)wyC0Н. %">&]";8{!Kt-mhO8C ,]o @<^QY)WCP V *y;9D-2)p{B08ZGjWa"" \vi1OaBzbu`: "C.rgE7Y[!E0]T!9c4919H@"9Nٸ. Sak0.%&"\#E,2ЂA$ K{zJcHlXߗ*y -Y"DiP sĬ<`FFtV/g+-3U\3eM*MTD5[)mR|7YN)3Rd9K&P|;AO%3P%$PlZ,P*t mC шJ4,D@ a LtšEڧ&-nIOҔt !. 4]iA** i$KӳōYcuN$R1qD>]n8SՓM1UmDv00ᡣ8ݫnt^wޢ."}d"zpD=G?{H(*NZpl{|g<_-7Y޴oވRsqn=k+o~|r=܊,gA_,W0~+G+2+''r*+2o*J H)K H {z᠀Фx8Xnm-]!?}%q1\#h/&| y(LT`r 4T9X=8*iL y`! /wAayrOzU*RVV"YQT heUdp<Eń%SW!AaX,6v "R'vP d4Hgхrw2ipaY]?ⰉZaԇ=e98 hp!>v|p'!/.m$G%α6ΐy1*<.8Ҙ 5FAEhC 64Qރ A>ͨE<ۘp& `x8YSX [hG 1'Y p1@s 3Ѐ s[ǑҊ!D/Ӑa!) u % 1 7)@#0 O7399,w(h3!DݨqYsC4S 1\l Z۠gA&!a{3aW[!?@ (іXap2jF>_w9Ze;=k?b)1CJ2X0>vy^@Y"5p ;u ycTP C=7!aYe/ Fr1P )Y}r v ƅOB^C ܹXF!åg04sb!s`c 'P^x;yoSZVO5o!z ɟ䙚iI9sZ@7w.j!sepeBטPT.Jg(-Yx:IXh2 BYH_xt~']Hf'jڀUN3h{ XRcWѣYZ'!D_ZAHI&jq1ߡc!l"5Re0\p`:&QaX>S`d n#WIҬgЪ >Hڄ˪  7}**: %x`>$ `$ $ ; Ht; pT;!I E H @%%;/ Df8)".o8z;+.f 1{g٫eX?<˴ F(/0@[1(!F`6@\k˶mo q+sKb8`B0i!6 Q "k&"%0 +Kk %9 #Mk`'{[Pצ$U&;PKߞ&&PKN:AOEBPS/img/fields_spec.gif GIF87a b}}}wwwqqqUUUKKK???===;;;777333)))!!! ~~~ppphhhfffddd```^^^\\\XXXRRRPPPNNNHHHDDD@@@>>><<<888222000(((&&&""" , b@pH,Ȥrl:ШtJZجvzX<Xzn|g~iv9g! 3, yZ369cWŃ R*h3Nh{Y3 N,ZGZ9_*Pjs k!P!bȖj4a͛8q!4T3lA*LQNjYD*Շ|j z4"x\˶7jZXaS1IUI#]!v 2OÎm(i̹ϠCMӟuۤ/8BqUDe0jȋm,&ʼУKNسkν7wPCHFAx‘ u/_7}~  }F0ރ8 D *=4Rm 0TJZXͥˉ`!XjD"fĜPCh[[aU V LN@cW * @xLN/`)[1 AL@ * foؐ@B+,>+Z+K+Zڹ/l[lU 0 GJlIL ,$l('N2'2j|s#Hd@(8?-`WPA5J7;IWVij5GBw]&z=6kجmjlI X%w~:y.DrC]uU\ JML E$ʉ{Yu\c%~+8{9QC.OI2~ AZ\AhzѯO;'/Q;5NBO[nBp?]0nIh&\7^vydž Sj^$mNp)9gҵwcL{EfI TMw`*!Gq2R?wM' &u1<(+JqMh8(F6 F"!j@ D5$A|wA ȡVh )lXXVD+d:J scq@}'XRʦ;h0,EEX辜 U,*mY8uq$~)e=g貙JQB5xA7G9f EЂF 3p@ P*#`"hi>i)@V:M'8,B $5a2O;@T %7HA#tsd<'@#vAU  (DG4Pҙ%7V=k骏'yQcGfzQ'e&ᡖ-;r)F!NS&S'%?e&m"&G F.bhO͆M9U%\UzM Dz3 iZU X `;b@B^1U,z^;T6; $J(׸2(kZ!ԡa>.=9׉a\xc7׸zv$XԃiVgLʯA:Z!y{Nj PC17 wba!xq%+bJ:_ e7z)H>Rw1R?np2#.`ؔzI _BP@(Gpx}{}=u]xg%vxx 6p@d,#“IU:uVv`(%+WC2s*#0a! r#Vdy!" ! 
)q7?gjO+T$G qeff= <GnGFxBfkfG"fDBg1"igdf-Ep'= #`X-pjI dJBNjQ| `$5PPXx5"}|8/JUU i:!TNvmmIpv-k8#0|rdFS 'd6p CeXbY!'Fwqq#"J4r##Ar7k;B̰CS;lj&um1%r0BOJ%Yt>7\@ ' %X':V kOV%(L Ȁr")ҔTI rWYZ *>[P+ rK)jwPXPv Zᷗyiy 恁).A;PK& PKN:AOEBPS/img/impdiagnostics.gif+WGIF87as{{{wwwqqqUUUOOOGGGAAA???===;;;777333)))'''%%% vvvppphhhfff```\\\XXXRRRPPPHHHFFFDDD@@@888222000...(((&&&"""  ,s2,,ħF(,-5?E63;5>  H*\ȰG9Hŋ3ۗ2V8ZFJp&)m%s>PBNHY ӤPJLT RʵkI~|J[#p#1ٿr2սwKv^"s$5&.2|F%X o#{$)EG%}8+gW+ D_>BPR-?>x)敖Ufˉ@>*QQWؐ%Y*@@`}5wg4@d 'Åb}xs o**"8 ES4F@ ~[U{<쳂H !)C~~ਠ J 9hM qUL^(+=$NB!o :B02(,I8pn#'v뉸+>uy~y!TZod_H$wȳt¨i B* R4P(0ߖJl:Y\ثbcZBL2t8x( 7 ݲeigfr.n'>,wy'8V,* fg6!%|驯[ir7G/WoSI]XVF”U" %(<&Jhd5<_ 3 B'+n%Zr+y'Mvf<4:*,ru!F-(តa6S6, AY)ֳB.H#@5ǦApeX' XG܄xQ8S$Q1 =5bč@W (8Tcq@Ƽx YqhW(Eb@ h"YFQH@[i #qGFq\Ek*IJ"% Y,1P ZICP7 E^Rg %29"ĆfAAsS 󘔠$Y r Ɣ.D3%rԩ 81aɲ<'Q#b&P3FflN\))m-E1 }G:ɗ{pf70RGP838LG7ͩPSEGMDPԲ$UlgT;Ԫf4ef)jծ<O azQ#SV1p]F[Eְ6qm׾z\MUWVx*l:ri++.W٩5W3;ΖĴe YˀVY֭}d9[jmncZV}Oeg(wbg8"XPoUۂvԼR;h " cQ<*o DݨwY;QWb xF,5 P2 O}T޸7L^6#*0!'~rOeQxV~+;˂Æ.tj[>L+_i{aQv8% `D61B$;bNji(b/&㙬Yy2̅Y2?V.P6@$aO@̜I1+-4J,]6]z0 ^97(` z,*"C' 2MiKU/{fW'oo]KҟM&_ abBcgP)9TAx}ޞM G\*cM6lOEŎmgyE6uu!qFr;D "Ck_4.um9nkbw1[e8&bSA*Bw"O3| xjrk0_Nf\7c_tFA7uSɅַnS"Cw9'nuDA`LNhO[ntڵi{Iv7yǹ3w ]ةpß>Wx?;_[mxnjMGt=m힟s}Lsŧ^ëw{Νdb˾h;/|/QxcD)|3_ӊ0|Rm,j}|mcŦO]o=>}KTח(%)%a$&s$7@jq%}_|}()!Z%(cE5nf39зǁ?T6<3am1W$C(}|7|VW|s!j$8" (:D)a.G` 7rM1ho}5hls+XL(qiP$[0Uq!12(3oe j&ƃ+/Aqg!a/v6X NvP {f1'=p"i>Sl(*H}RxtT8%6]HL֋.c$WqHy|Wb6!QJ6&J)(;Vw[{ 5h(Tk؍yF7Y% 9_}F[ lJGz,ْ.:*&G8`ӎᴐϧ}h{nh帓Tq%J8Րtd|!5M8Eٔf׆OuQɐ&?ȕgugtSd&( Ur"~)&]jq"D$$tacp#߄lη<ف0ZS$_&l^)MAh[B^r"#&_*:>4Q__) iiJ<~*Kғ)*Pi#5+("bi`E(<3p .tC<&ly Yޘ c7bbzf/&2O')4X+%Y.4q1A!NwG(鞌Wr.>/s2/o*&r2x|S?Ww 4M`(qe422ϢH2ZYt8zi9#&A?3xg/7jk5󉗲bR:cC+>@ۉueȄ 1 P* Ay\9L8P I,jy7XBJ0%I%*"Ob5@WOx{9JACv01u{%*T.dɒ2(єҢRVJZè|8r:SQO ZbUyᚰIE H B! ]Yǖ( !o~twеyMt-ڞdI82(wpvOE$ј -5p(K<{ W#(bo! ;JɰEyrt(gx," jC"Zx.kK,o1a{Fx0핀rMj.؀&*˸m+|ioXc<``dIB)+)ien[ŹZ3 {ՑF(rֻY:Iy^i[˶]KU㘹n.fH" cy{ħYh$Z:Tidz.x> vъsM+]6 p\ A5l%ڷK$S ˸ A.$y>'&\lWꂕƻ2'w=q|͕Ţ g+a1ùbka^#+8kkm)9 >\(=bkE9Ԉ@;%NW>6)_^9mL] @]gv`;wȲsNunny.M}oǁ.N]*a} cYNljq}QX`86i>۶.`x^>\~˕`rmga> 1X$%bҌPW|NtL @<3s~pJ<=֋5.O*r8ձ]>K@Rn IqXx 'NZDݮ}I GDAnSۯ(M%FTLDM/^07bBfRUE-2YBԈT:&K86H$5pE&ڡ7Ddht%?'Z.6 8Uy n*'yN$xU޾^<FL"GokK;g?7=pt-zSQel?_vz}<~6V_GU/YBK DO_QP$Z~&y?.^_Aty")TP#3HhC)h0XsYyٹy I`I) qAI:K\lK |j|(xpЍҹk |n LJHJ] &%@k@-l #7""X<9ɒ'}$T@_ y#|da*WBT)g!zǰI8HP'Z-&OeQRU[!EE]:ALbAPMXu 8$!B)NRԘ5;~ 95|Myfb_Mԙsh/-k{6A#r)ະ[]Qf#IhD;8=ԡ`;|vRm}|>VS`}9# v̕a6[9pu5b؂bg‰G˩R#b^O\p >8>( O'ҕs   *v >䄬HX|U$$`DVB@`)z0THo'!;0Sx`e%|A(i/zPh> iNJiz09Hw@h$=C) n`BI2ta``78 :l > mNKm= 쒦Ѱ`)Hɀև㾹_m!>иԪZJq i ZU,H`jvdIz yQ @zɪ)hɖwi È#)٫c'o\J4ӌGۭb3hĄ]E˪ }2pH_:P&u#C %p|9KJ^i:֠j'>%2ot 8dawgv s7des$_bP #Uyϑ ͅ;x‡{4nnUX5X| AxC:z&Z {;R{6>f~z'c(|)_ CHDhOSgeŸ9摌pvi~DRv0o'@b9y ')QZ̈+Y&p4 oCYFi K I 9UlykTxOc:lFiq\زM:"Fnxx"U0 ^q9|?bc@ kX"% zw#8; :0YqJR$w:Q @!K!(ic#Q] S8@,AŐjP a-&>iV LF1D#!ƥ&2%*/x\)Ki񬳞t |&L|_챠 As>Ôf !PDO"YC W QsyTg0SJǓԀ2NaS44)zzE#ju!QJ ]ѫDmHIS%$ىu;b֊^u@i8׼DlBꕯ*ߠ*rtjS+i +̖f];W͊V5;~%KU#um\Yˢ9ռ8}nyCay%n]-MjF`mYVӾeU չnZmb+v\n5+ Z^‚8gxV}-}A؇FK҄ _w0{`)@ʯoVU3 p~ yQPBD^ԈW4]qPUb8YSZ"E1X(=NLQRAkc2FP%l!%['Y,M#d(q9R_ܘ0kṽ1V 9v69,ֱ_0-Lcgg|ϊ"mOTwxlcAbPPzwjYT2 /0Ip7V_j"HEfG[Į2ڢ :mOȥ.=a#Z$,Xd) RHĈv6{E<`&#o}D Үm|~H{2$*NL H0hrtFTγ\# 3KݵXɣ.o 1*%-Y*&+Kqd >0_FDg@",>RD48 .a<+:6J378[d"=?+I &Vd%ExY7V!MYg)|b"~ qi`R%-W0/?g5#({NgUgR*w?t2xBzta2*G0p`-jK/(3H5,2sFcvnpKu%.a 5pUzQxMw.R"(ZL` (4(0XG"bgi)(BwMŃf1`a dh+S(H$O<.RO[P/~G4k#avA~pxn:hqst49! ! 
8yC;،F_t7F Ђ'a&X kGSG tgiHsVdwTG\ ib #HZ'wV`n MFx /FJSthyCHqz)dD6leTHQI%+q%ԍnuWqt OEc`_D ]u1g0eT]ᨖk \6gԍ vc1rkgXX8i L]cIGɌ訒lW//ᓚFwUP1uaHF8aoK/`?|(2 *x9 ,Y)WI + ~@'uQFZɛҩ 8]D^7O XIi}wA9P9\)Y][i=giJS(w ڃJ ʗ# z^zV *z)Q7)9j5T7* xr; nF8CJXɖт/IYψa%ڥI*i꥞Oy)qЦ&qZ٤:]n:N khJ_jZfZ~b68JcbhYaO7:ZyDs`J_ʥZXa'=r;2ڧXcbQu zhy` "h&bj׺ =ZnrWiԚ^GRi3uvdzW*u$K.NqjneQ0m1zMRncʡ w*8agnٛ+\x#!&!32,* 3]T Z(5/'$b<ʲK˴UN[ #j^6via_[?#𴏐gcJ$&[h:ۡ\dt$ >!{hƙp]{m%$' FS&op:Q$#am}V3f9J+DʖQ$ɓ!:zVF[ujzx2*Gs*Q .st!PL\1pT[dŤɵ%qQwK1a h!}J ;%naEҴ";iiʻg՞ zW(f ۉLDHx4biFz W((h׹F" ~e# Zy[x0|'I'g@}f!?pW(ӫZ8ۡ 89*&(H.lPs('HM|- ŢBr lz5:#4^q AN3I360?e8. ʡ3` 50Z,c7]\}1rH/ %СBrGqp3,'p \g+Vjq^w}DK. 0 2 +7QǟO|ʢy\};7uQơ4FrJ\l: `"Qĥ1.\ ͑V 4Si_Qm ^ d.ݳ0M"51pQT Ԥ>EA" Z<.;T$ as &Y} ը̯-.9R5# ֥BMqkH˦,э=) dla7^ٿ> Izf\+<ƶFǎ`L;= e|=iM1egnQ:-תZ`meI=}p1{Q]m"_ ܭ<ȍ=_M=\˭ =,t $&@=4~(䴽^r<^ (.]%ANTt].=*QdN u+Y.) DV!s¥#燎 8=4 P[葞#l r$^ 1H'·/Y@PK7[1^5MQE`ƞ-.NE2:8- .7Nn 0ZT1Gp h}*CW /;PK㙭++PKN:AOEBPS/img/parallel.gifGIF87a:wwwqqqYYYUUUKKK;;;333 ppphhhfff```XXXJJJHHHDDD@@@>>><<<888000&&&""",:pH,Ȥrl:ШtJZج{xxL.A۴znw|NB~C AxΏ&9dB (M7CLfNGHgeUT`ZVW'  Dy&0Q>><<<888666222000***&&&""",?[pH,Ȥrl:ШtJZجvzG.zmJ~?r%50%-}G %Fv[ J-#L wNYNHP վ-Q0#O |R 5 H`A%`#B4 `&PǏ CIdI ,˗0_b0?jśEU-$IC! :uKQA(~ad䛑mBpUOeşr.]Oꩨcs}'t <3?2L~>ϟ?+Eopa& __>QCr ӆ&qAr֎NQ!9"\)6E ؇hXC"P.Ҋȣ>"a h#|TSG@ɔpܑV٥Ba:bEF|WK)Ƭ([YGL$+q|@]NJ`# O.V "YI4H@) |M.Ш.$ar y6V$jBrNe#,@XkC,^&:gbaaJPL]PB rmakFFk5؀ ֊ @ C%ƈ ,,v?%?ጕK.>B~i{+pSw̖<`H?NLF ьu鱁ތh=yr=['YQ_k1H 5( Z-!,Z̷=VK BDj]Mr nLH""9Bn{{Db*>nTuo¸5)N:OG!Z;'͑Ln3PAH@xB6Wo$Gd BIH0&)N {B@Hث}rILTS.U s;ȕi~DPSX /{B2:/csGI&p8^+6jYl<CP@|͑Ha$1r/ PAVK?H|,Pl`P@D€]D\$H `l^@[Rr ,5P2PC@@4@x^$  4Qd($0<4 "PI\ , N1)'*SbJeƀ$: =Op(XUM'R륌JK\drOV!*!ܔu}"$J6*`e4/hId'ˠ mQ FJVX6 QeK!fCH- xlgK.7sHd AX`*e@ Ewe ^!a[$n^/^$cZ,(M9ڮLj3`=H1Ѐp˥^#`)oFz_*A+*$3Qyyj` \W~)X/φjKM:`WV'^ aL뺐$R)z2xL,4`pQe'k67DY9}DBZ pJ,JWi锬K= L`-ԎhH`ff_,/q0k`\, qiVpZ4ʩX 4s$0.J#b:j,mJ_BRC(N:mZ/7\d(]37zr]v1Z30CwUMNܠ"NWi`@ $ b "p@x ,6L-:J-bpD0A|D xCz8v@2xd0MD#^!vNJ\ı *gq0PͿ"DBJV ]*17_諟Xg!̑B;գĜ{%ܩDt!IgOd!yAkXS<0[ >@xk W0̬ "?\F*tX`a! ) snh@T/EYJb߂#T<q 1y*b26Lڵ*et˃jꭘM`RE>rT74.@%|Y#䊌C!43_Yj! $(}\P# >g&én͸@(,$jsM&s#Ww<6EbP>pm:02+ GGDX@ Yo4 .$a&G a(` !oBP9LVaiM=iG URHLH FK(7NZYVxEC7D$j'DX% iD @*-w H>d*wa“G4Ha`e)DNB9: %`AUxu0qN& )5vN(I)#L@ p&QCTJ##_B ŋ`DX+B$th( AEHk\6XdP+ \PC֊6.׆E$&E܃]#6 w( ;eNBVrl0<_V e~q,r/FؼqȽy Gä3ԄVxwmv|{BԘGȣKؖl*wtDpn:%c昛]_$[ȜT$^i(eu:EHAJ";e?:ǫ5\QQ{ +{JS}@]QgF"Q0@ Jd6?*(l,F~ h7$%{@|Ip  $`*h. b|cŰ5n@PanIt !: v:Hbpq27h*pԄthDc,E&Pi)X'PAȐ01} WA L$!H ##ɗ@RV($&Mz2,)R&Ad*WT|,(DcrYz{Ïm\XGsxjnF5Il6!9|INf2 2"9`NQY@WEYKguQAħHGFXA{{DH 6 2sg+1=! 
aB ΀=d!$PG3y]Hx2q٤G N1 {k/iO( EQkYX)Hd*lKB2uլl3c9d>g67י:(!$+Ȫ&f-@n窀UVExYV mn&W6j yBìa*xENxNc5f' 0UeaT慝lC#rj暸8G6) fظ#|)yR!-%I 0&"8Wn!rR$!IهRm&_aQ^dy,@5!dtTRHVp1&L B6dvWnh$Xu$ZJcDlf*@=ITRi)KU^.mWHc̡2 QbqQj?L2(`oMV"K+*[L4 @f.[ TKYjp F^8d:i @C"V:@/\MB E~l8׮Zp_C |qY }50x% ` 2:BR7|bzUN \4{;dLVbAk k1'#vU>ZQ̳kɣ#!Hb63hV@41?&\v.¾ֵE3ZعMN`X隱Q"P xp_&58&m?‚HLq5gqV]8R$HXă R QXDXW^(]d*udjc؆Ćpo8$vhcHgtq ;LK0 XP: f8MEX uh.xLppy61S` Dq9̦gMNjT'v7Y IVʨ@23XF ь$hָ?~q; AG#l'3e]&3 }vTb$bS\x("V 5b3L%ȄapO!b?(QD fɀ&%5Q'eIffؒ.fPp~Y-`8;84#)uewHJLٔpR'8eCS[Mg0-rR3PFGt=i7EC 324:r$tb9sQ1x>88}wE09tYX0":VpviW7QyT{Co:ϋr_IqϿե_$R&ف0 !Vք``}W2DJ,0 _ " W9tYhh8c%;ȡ &b!|fl CI He5\c%_cZ& 2H `tw#`Yviㄉ]dLPW`fĉic2'Fˣ%%Y bJ Š {c@[& "" ,ة*찹ڈ9JH9JY_ijHw ` al"svZ LFI t?"&0~.o @eZyd9iD^C j`uV.ˉ❷1׈݊4#^r.퐒.NyˇDp@۳kG;R `9;bW;Ko~c2~!ڃ$eЂ//vhg oK+Z93k YG>U)FXЦ|!*FUH8*g*T+q0D;QqoOc &M4l6 "yHY3Lq:iH#>h15 }ې*Ί.ʎHeEP4>V~*dS;GcZMJ9HJŎ!+AM+#TLD&O JZd*%<~r>/`&=wS ?7)07<ojK]ħg}K`S?|0o~?aO=53_D{\r9wXuPv(z8<}l"9pFÆ+A"-oŀ~}`fVD"9Y`<3h`98*сnGsă<⃔}wqu`4x?Є`4脡0@|"v W&>+jk;A!Sӆ)",r63g5lWXXz(u+R]UpC'b;dHOhCeeV-b%("-'d$e.t(^Th3$HM$舘0O3OXyxE#DD'~Oyr#LtgCB/Fv/0ajt&o n%9S'+xo!&iFoJfwH4 wj%(MW^󕐸"9MC."ct"#+yCS{jHChWWG$+F{4kyaphuk3)?"ݒTG *&1va%‘ h_e&H&%2X޶ӒEgh+֌qBmxH ;g)JV?o"GRuB%{h9uH C=nU";_B(.c;ĚF0!QKX2@|-9eq‘ZSWabt棖p%=h"PO}57P?hO rbMuyE@/A~Yɞ`(G)+*10ȝ6Ɩ 2(yrX21׏S(SҰ]\4BS(Csm1"" ÍT A8Z42>棎9$)P/by%L2"DȡPW VK$BbunE8ib$"'lRYuHu"<@meIeE$ :$ЅI"&eb^*Jy*8j&~3 >F6 **sթOPeCé"pyD8[ HE 7pī@| 5 ۪8I cǂv׭`0[ஏ*k T8: 9X 0Z*e_*rĩ sJ C;(*,Xa^8T -L ӰGDM@_S.`xcd  O E[T6%"l:u<{Xq|naZ{2."p58_ ' ͊Rk'|#j |b7n[r_' pP00w2a&;[V2a~zoLpzvp픻ۻQ|U g׸K!+5۶ [C}Yfӛ ` 6xe7;3gAV._D+@{s#ۋ [{ l~xۿcI{6رs*Q.;,ã [STó;$#Oa;+r[Gk!:0 2H4+Lw:]U`hIW, x¥‡ F#9<<ʭF1KÌT|h13?9+֏&-u(xxfSàd`!챡vZ9<ꐯ9bMh8Sf%XBSEs3zZ?&FӲAg !|# 8B ǩq(Ψ@%R$g !&<( *e.{:lk̽@L~\@ FL rt q&5j[p CJ{$ 9 P/S6jt^MұݳnN/C+m&+B+*52+{fPSh(|bk<99ҙb15I#O8EsSD()S|ЏJpX)$3%+P21(b"&P]MKpsM FMqƇOhR2+1 ʹM!wbl{K9Z*$s>6~ƍ ڃȕ2@ҳevɕ}‚kt = (d cx>:iml1Gl ®8{ݑlU黰\<=촊92_ꫢX}Tar]YQE ®iQ[ y_$]ˏc7 ٞn枵FP4.)޿MܮQ9 ^j̺⎉pO P ?rm"K^=(o !._llN,/{ 3%W A/bGy:%=hR q6՝ T IX c ܃9kL.W[9EE8sUU1&WN8 aZO h%|A|yKHKk0lrJHWa$ Lb훿 3ϐ0ϒR@cFR+][\p !R?u&:D#*$Y0Я hT0pY4,-%n|(b- CD8IJ0Yiy *:JZjz91J;jX[ykx,-Vx+a.!94!fdXK6Љլ[~:p'=D;7 x*٠v?<̛;=tj;9LxI$ܠld˛? E?~HO d!›  9` .`>aN]e2D#L 8s!(uqb*,hJ H ɸ)i8 V(@QP]ZneO YoE$)Y>f9]r%4V(NUYqމh[BX14DM+t痛9 3ک('yZa@$H<Jis% `:Qf~R 12-q&J*d1Qi&"V1mFY.T;K% 3A ʲ3TzcDw@KOVU9X"LR ^RŞt,&n;jm&>d sh0!s2'5+n&2eQ#cbvbEIҳGI9`6u8vѯ3!2]fO7j-Oe-'ݔ[ F yo9aCV/XNmYK6$is%;8{'wCPHz-۪K aCL;p{R3b"ΪF>8,;BSK tT8o#֚UIhE˟ X~LZ꬯Ͼ2`oos? Op9`AWv,p r^1 N 3441H&2jo0k _؉ PzQ"p'/;)RA  ob璆FO0iHF"` Q1K+i@. n $fM9q =prpAM!PDa. ` s%>:E37?Q͢I M~q?T; 0&\ f~鲄҇0ӍX 2k=Dk # gJ6U%  ?W@H`A¤C5Up5!2&@tU09Btur OHGuЩrxM'"ЬYӘ8Fv^θ#&S9_a 3tA"_quw^@(nTSAP!wIܛMZt("C_Η8v3 }B&Ϲ <,i"ЬأeMiCBqp aQ`L"_>N:] ē5E3y@p]8_8}O&)79RKSA1`2U'pMQ~6wJk0|M'OǃQg@zPCiQ.'8}6+ FxA N!cNJwA`2SP=\Xnp"' EKUC0;N{GӴ~:Pp{f‡2iBuie!JR&d z&?H>C[7u}R7oT88cW u@E<t!>bt5dp d/z@PauQBSS ^,U9p"&$^1 QXM6MTW؃gd%u]apbgEEpxQiUysȊ &WSQS1s ('V] 81@8|s8(ȉx,ᄡefxI[ xJ0J5W_d咾*sfkJ}$ g]bCK.DLb}CIW<x0iԕf! \8W~gATI(ә%&D T=А]ذ~FNVv􄄬4(LaO ٶvbd+yntyt.c >kuXpc'IYPDG1jsOYS Ǐy^ q0eLJ3Q5+ -Y{|z}$| 88sc!X ˖/t,1C (\XS8=.%rHprP;JbbEVj !46WA3 :WWCi40)q>]=h!}Se-/d?OJPTw4B$N) y2 {2^=uO[9+S'7U O(ޚ NgQI?fev 3j 3PR o2v`:`a $^Z4P^m NӪ1-z0i VH2n!h=d+;>ʰ- H+KWX'(CP2jMdLIo)+KF˲ L-R_ q+Wv0Bg&!6!Ϡh_JIkk˸1{arCKek\; ;fY ?fb[ik۸N+Tq ])7;f+`Ozƛ {z;; <S s˪+ ;0䵩YˢCս嫤Ĩk`0;k#xg/+BJBJ{)֫[dr>[ס$*"Bl 6Ↄ$;!/"P %;)q;*6 š/+𑗏!IElGIKMO\ў;#+l686dYŭc$*Gkmo q,sLulwg+`VgŊ <'`k |[̞\ *Y E|ܬi\ O>Ӥ6DŽ<Ȑ\Σ, Ix1@ E H{vW - ɬcq|єldǮ3YkI*TH-=']bS:1.) 
:L\,ԋ2ԧ,z8ն;\T=֣ x0~9[ӵA]1Z5i@}]w )E+mՆ L`agH- `΋jФRmd# Oԣ}Ο׮maګ-ulۢW(}%{.ܟ8ܴJL=̤݉ѭMݰ}y{]ɭMWܚM}.$ݻ-]P=0}ܳ}Gg iNGe޸_آ~k |ᴝm⦁)δ+>Z]t=. ;GN4^79fkkWgcr= S`alYN-LX6ڱ},&槍V I4jlNnT{; *Rb肾./Ԛ+ڧ$ 0'x禡# 1!.$gNߦeU 2T&J1lawiP(u70olu0w_T0Nnԣ #vm:lFQ,~pᯑl>Nή 6XAW5mf4'AK35"}qRgmZ ާ_Q.ŧ!8o>j^(9>}_7TPGDAzX]W=Ut'!,0f[˶۷pʝKבIcՀ݅WdC]0SÿWȪ+[2 3ONcqf 59ngyGӭ(Wp 3-I TMl>&j<`d biں?Ͻdpy&_Ͼi[a ,:eVvh BfBg0x-bENerɃ QA ,R;AC, 8أ=4'ST'(NYب4|&DiK%Y$xYV-Ҵ,ߙhl:0i#![oj1 N9"bd:iB%,Des F:r(k%0A! )+cDzCL-F-{0O! I'L"2D*8B9䯗+9,B dIaxT"m-ص+nu5fS0 V'U,%-C,Jٛ =۹Fl_-&fHCÓD,;a22HcF,v/d!܈%IЦ,CH#'kXg2r./]7[&B&Lխ1oC1j|-q`v0d뉥0hk;|8F/?k(߭ 8I <騧:1tynt9㋟]3'v"p,3g'Y#%;`8GԫLc a0oGS<3G})qc (N/TFT=_\V>`Z@;ST>cq @!0vK[X}@Z9ʧ<'`B/ !L@p(D\BPH*ZX̢.zHa;э1Ub6;0!+"d/Y~2Us 4@:eRD'{RΚ\&E6#x)FGJv~ %ҒhKgJӂU+[Xӭx-P%jQt+PftP hQIx^$vQWSԬzuS;Jί&̶tc I-!A5\Vi + " %V5ͬ1ꗝj2&Q -#Cfթj5{I,)v6e.Dú@eH ]WoK\S}ccNsx*SL.k|Nn\6%Ubl:޼I·- c̻Vt9@7T{Ǘ8N5j`fĂL!)M@wPL~ z%vU$#g\?_THP@L"9j3&T?Di2=19fp: .{`Lf2,KAt aqmLgY٠kYG^oe }fCR0\:;|R8q?]\ gؓMjٻx}8>ah*œfQ*ZŜ"3b cсul9× ;Mw,N+994Ƈj;ׯcF_ňaUh;tc>ݪYA8e+逇 Jx؛EHAc ,_C8dP3M(r޴"6-҈G>)CI= s𔖕Բx?R2Q5eO{N9g<֟ꃤW JfQ58+MAH4P%zY䉠64V{XZ\۵^`b;d[eKXdRTI !r;Hz@w|z ;PK<;PKN:A$OEBPS/img/et_record_spec_options.gifbGIF87a:}}}{{{wwwqqqkkkUUUSSSMMMKKKAAA???===;;;777333111---)))'''%%%### vvvrrrppphhhfffddd```\\\XXXRRRPPPHHHFFFDDD@@@>>>:::888222000...***(((&&&$$$"""  ,:A#QŸ#'Ї֯# 9\B> .`1!B[*\ȰÇ#JHŋ3jp CIɍ9b6ӡC`50al͟hjK%+ 5*EmVZAիX2 BӬ`ÊFdK(>Ǫ]VWT.Kl m˷YZÈ+^̸ǐ#K6k *h.<qeDK:/0[Zˋ\8%&.2Gaݺ-RO E ӈʢ}K<`ޥ'DP O:ᔊ֎(#:!ADhA0& WrA0r%_9r qh'12#$5 ogV &iAr'D|1Ih晐^&)k> '4$CZ駘jZ* mh"#AMx̓AU\f'ȗ|鱕L:J\ ďigTIK^+R- ڄuĈ<&WLPig[Ï lܠ쿑(+ 8-vPV͡+Qа apLP%cY40’* [mu2#B0"\A0Zu~~5=Z3,":eq҅X?#t^+\g`-dmh:Ԁ"12!f0Ѷ\;H xˋ {gxp8ֈ; ,oL/* Bx! r}sH)$u[脭r %D2-wͷ aT_2w'H%2Y:]à~fa­4iiPD-Pfk"椏UVAg8Jdxz}bn1޶ن\ ػIE5hӈj\"H^ SNX"k`8©|P:k= 8RF1&>>q;Q d_ 1~EDJq|QH;x|XA.f$ xO咘DX!"IQS*WGL| F5*o^"22/IJ`IL]*'d& jLC5 )lL g9M2L8:% m)N&$Xʧ>~S2!$<54 ϔ)#+!?6>^'F -$CYFM@`R )YM4+=gwn2DP9K^z-:DxxL6ݣ`PDؒ 4!6Oj9)ͥQb[VQDPԨVS ՅoU! @-bB`%"k5"<ށKC?"$tҀLvOݯrVi9W7Xi P̡j¶q\~j<})B5P k5+M%/)JT\|ʷ)~4P3$UDF ]B;S&e4mQHA"TL+k"F5K)cQ-y` n `(9C\Dj^>%80đdӯ0DOPBa!LI^IIFA؈IJ9 H>7~T4$%+y$el 2Cu^To%N֗+uNN4>Gm)!\Tx`+L JɺFU" F@"ME"ou)X5 h͑K`6d ZZ LK.%n^"c&ύ#RUflihGmG[:p%XWؾf*R+W-eo;c5%Btf}}}]sxFM$ MD4nD4qf4I8ϹfYHpG>_%M9'BDJ;}9 x9,b'],H$D%#W#^ˉR8vaeK{TWɓu8]ϛi[1`y. M>`O ?ž _b;n$X>;+rPUzeGЫ"t$$,ei- sǗP d! 
^ڋ Іox._zo[-(j!6iE.s@="TOn ISQum[H9wt(IJG᯸QMTu* ~W 8;7CpB {b<-1(~ax\}%y MbzZ闀%* W3A85(b3n<\* C؃Hx/r۔B&G KRU%qZXEZg,/2 h^$V87w12]3.W.S.k648.xiL$r E-h5($bآ/֥(I1]E\eKA3}{v*?7Ң1R1҉w0("#@QwK fuo[_D2S._G 1hUP@;ȁj=(:%3L&"?4S_0qAc%8`WV8I긎> H%.\|b8b?Ўa Xib Ҍ :Sy7<;5 'gX p29 {w(!2I`e]M9!%@&diQe 9 51Ш;Q~";;x/Lppmty"jչOa(iy៓c`dyOɠg3١/n I) ,z:/ڢ2+t98 5$7>YADj ;EiPJCZS锠'tI$9tKZ~נGb=\J֠k2mʤg* s 0ujՠ{:i=n]Z)2`DC(7_^ըq?L֜J}ꤖ]S$AL*~H20c: +O :XtP5%h*yT@i 5'9Ґ?萢)Ra}J1ʦ*sJ1-J%p09Oz"uJ0ϒF~Үc'DZT禆 >!@B>(C] 4jm*ww/+jґJyu_8@e AwIziQ3B;!%1]#8\)k$WbqL&QV!fd9F|eSq=Z[bDnƴ節jΠGP Lt4Cf+:~;Ķ͠GBvyu{ NK"05p# n  ;u1;~ }RjExRkZ<]| jjEp= +誻OkŸ + ) KpQS3 7&3"rMaXq(r%R绢{s q%R F {4;us^YU%@6,`+#Y[7ٻx{ֺ3 $9n"&07&o7a3ĤNOL alqRb0j,BE.;Y%9*np| Yŏ'l:ÖHLJg}4\jez$l@Y[c†ly A54zih+ >( |TȘʜ5J4ygH|ڋܫT  䫀0iп/ 7w9 ;k?r(.-x-z eNѿ!4?j(\8~Ri d2X {LגF-ؖ}OCє.f 57U1'#w> }@ݕKOUgֿ پh DsjVݛZ:*mnT ,}ќӼL ޕMl} M  ~ՐIކ{>* .zDnnl}N]}L.>>ZZ68I>BD"s GNl]H MW Y~[]~ _acn egj* q.Y< Y]0xzC|>w.`}άi%ڃ^D؇حSw)cq2ٽ $20ZN0 'ܧ@9~W!-.Ҷ%&t4%M.mDAA(CrmvDk~JjKHe(N(Ft Kk6> eP۞^J$zp) +KÔ0z _n1p) gwn)< 0S4 CL-"51`d͈!='c ^Ze|r4m}opI>?4B_wN#$~_%o [|QC(zT##r[(eA Ar'D$Ǻ 䬑?߰AUT ?u􊢿A0֬:oy/0U-7t6_Iv׷ufor+d_Jo UB ?~9)Ik9 58萂@H)9IYi鸵%)d@) )urA(JZt: ,nOBYe^~ב YыƘ%AQR2%&[fMq>]gIR9]{b9%kNBq9S@5xggiezK'Zס蠎NrA^J'ɩ, ^%j\h9J9NJ 5Gɕh VjcҺ)Gm^1f˭VTqAar#LGݩֹKKo-HJ{K2#w*n=@ALh3_ {q]1[ ըrBł! [9ṣ19_,3A*یtZ2VL+]^'4 (hM 7s @pBP~LvoLЀ-A?BnCv~]hfϪ 1?^T} ]kF2NQbWzҔ(y=]@_" INK${l_v^{_<͸Oooy}~_;"2z׏O. V(l+`7 9Zɷ A(0~/ .A߄$,Է p,L`G p4mpлAd 1@Li*qJTVĥqjO'+"̡CdӦgddq6zRl87PGqW"Q d'B]'pb"ò[t,.]D L l>aB2\%Z'Ir5g 0K!,C&5|I'^@Zk̤4ÄI2Js\$f)4Ѐ(Tba8)?2'~OFjSv4 0QQPyșd\@W+U$E-'*R)\(<g|MCRwy3q٥A6J1#B8'ȡ9 ;!Qtta%|Ĭju\ SF&wy*$jίc+tpŸur4d;Ŭ@Zг4lW:txkOyjDuzFzڥXƱe V>Zf2%pMU=UX ,V+E֭ rϯlSY`Z'C*#i㲊$'Z3([ BfQB-ˣidv%d{ۈrcsuE>P'|.Z@o ը)+vD [x3ޒ{U"u,hNbb|YSӨ%B.7l!FX*p`KLŇ}/dɄSw#P'k{k$SJz t;ڹlamykHbCX4s5LVNE>9۬_^1>y]\.&gٖ^)ZO[H^#$F/'zvc^4',8kHACtX(pE6a(ΑmɈ9xmPgnjvlk{⁆:UYaPtE%X&f1Na,϶|-ٯY7:݈.V5ujǯsfA&b{̺7 Tq ս L>+Bd> ͯь`Zj‹sLDITGhhѱ.uߍ 9+]v?xgb#?spT{ӷb}fNh[0(-f K A£Gc6xG/[;$ Ăbv\FNqk߽27g }:c%J`8.?]~`7"]SW}ܷq)HS2 K3ɴVק?(?C] iZNya5{)y<"%ss+7g]_Q߷Qw#WOQ:Q wL#׀AT.!57Vpbad!FV,J5pP78c;V%I@opnQ_'z7^GC4N͇| SFsq5vgȀ'X9Lm( 2,dHK_JIq(yZvV(Nze+8^k(D'w؇BR%hv~N)[vK+ZXfDLYHJ$Hf8EU}ϸ Ňx v{h! 8j8ᨃX(hi8wZhHpH}XXuXwi} И#t RK9˨iG)>(wu#C$Y ;ReVgr9AbIdfr^EfdTh$cI)2!yOl&oVѐb"n Ap iyKfN 'hŗ&)X `7t56q qzuPT% UD#Tdr#sәH2槛QS. 瘣D9Ib5..TGT9aiSX)W&&Gu11pq~{9P4z!{Ivv dthɓHgP1$$㡿&uq]_ҟqwhw"w3 %JI屢A}eL'Dc7*}Y'?=-:0jb\02V}GM/J*bpnAJE] 灅>6 H 5d.K#m15BLƛQxQyڡ b]ikd@|I{ٔz:z:7D3^ eB [BX3#Y1$wC y%v)g0ĔȖbyG.f]rzDDBc㦗:ˊ Q ' h J @Hױthōc:; 3ɦ&6`JK {W &*Uyi&ۘ#{k 4K-+!Po (C/cJbY]YN+x`/G[D0D"wK%aPs͊Em<۳h>A2CK%Ԇ!pћwl G'F RgR ᷢblሏ;v!%gmr"!ކZnjʶIXaZtH 8z1mU\qkQֺݺfO jv>n1p ( 4ŠV3{!*[gD 'Ok+wD_A1kthj)ҫ) [y/CqrRi d/lsօr{b;dO{w뢙w']/Es,5%(;ç€ԮYj)j']z&jaZMOçěZ ʥX|ު;X,ЛAU#ʐF\Z|?u" J6a<^¿^B%Vm1#V Ys|jh>H\;cS>b5!|ȧ3 xR7T֛օ%VЙTrT*IIjK*dlW̥ (3sU̖@* p|V_J'j_ &;ZU!鼚ߗD6 ()"z6`\NTH!F2˨{V< 8`BAPe9g伋L.g0A1+64y:qHȑyR&,R7ѹ0 _K_j` W8L|ECyPsH[6'0 m0G(l砄ۼշ.1H}Lpus( n%86e x7[Q.wX ˭k~ԃ=wtj s`>ˣ'{-lcSыDXPc陳- 4JmS9R,]ܓ0[|}ܕ=ܽ$7L}jMb ޽ ńs뭏Mh- ݘ;]v|#/c (- RN<6lנ\8%:ra1_*'&~"{ȖP.IQY /.)7P $M6)Ο =6)vt\ \V`߱FҎڭ<eu ՠ.ͽ4Bd0Ͷ.|_׻Z Ӂ|HXщw6KU߬vn1:Kmjש˛湑bV | $[H_4MƳ .'廌5TTNqozܠsE.XI3^rN!5oj^~v&*mE)Ur "s|)g,k#Ɯ0 \ADb?1Baraɨ;8J$xBUwJi D12:ϧ2&>j/ƢS"rBA0a.B_!35++-]R,,ʪϢLj{r~V8+2GA`@Bm;;LQH̶-!C?=L?20,4lgl[Fr Ǔ`r0I :銰FXg\w`-dmh_%KL"H:knDmi G1 սF,*Ō<㿨C[x?#.h75HkE0S̄EysoM>&q^wBrrE.[ ST9 gf9ʣ ⒗1_KARA>8^=\^JR?  ᕎ!טBL!*.@W|Ev88 YE aTߘ[JV4 J! \pFXt`D` QI*TlBPAVh3( 5X :?7ADo$|&IldR|E(,yIl餗! 
0򓨌%ZR|*K2.Ѥ_BbibJЅnHjR3 ҙ-.`ZF68 3Oeg .O2Nvv5X -OWF'ɋ^Ę)J wNCRD8't&, g$JY(*zOVmQ '$@e]v C"W1}7z&6 =efC,ه JÐB!igl3,|Drȥc\~ dh@RC[֨6%kI(DS$VKjڨ,­勒`V5cB1I]A$&'@yA@Dx&blqe!@qɄB%itNkHµ@VG#ZN* FVR% ﴹr,,Q  VP]b&aQ u+@'uBohca.U)K]B +a I7<rzXL|a 7ksAΊm!^NާF LT|JV\Rlq)`sBhX@LUuG2#WT⠬RKNɁN:#T (gP$be3fM Um)l) PJ]5YQ@OxTV,yPN*NfZ 3C@gY[X(& 0*NYѫ(+J<+U.BԺ;xWvı_I4715z4f609KjF$wBW!!>Kd xyJ41z0̼_<`0"4 w06}GQ(s5+Jݘ1jBתXY.P B,0ɓ`N = U% .N!K=+7[Gs1t'* 'ɥ0caeTܐRa0`W1 !'x= -I';Ƿrݔ`♣|;kɻBjNp#C>~@58H ZcpcƑyh%o#8&+50a٘ _Zw0mi6kMS1x(P r99/dY'AWn!77o i#W)0uh YYCYPfrCqƂߠ!2!"B]|W'a8P&aIC x Ĝ%9Z&{)w%$ݹzgh ܙ0;M{#)! 0ziMMי )$I&ـځv[ѠPx *& 9y*:%0Zz4/Y837ڣ=&Ң␝>F $"ᐝLJڨ7"R T Vj#XRyxQ "9m o2Pa{2 %G4P;e'x{'{(eiХR,72b")2;Q30}bҨaItz! J͈ f+AZԘr+Щ8V1,!-wv#@-7_"jr. 2vw.+nXmX[+Q:Z n3g9nx 0:GگK30n3 # V'PDCVb"+ )10wҕ(3ZTA ";$0Pv&DaXԐ4Q+mA#^#s8PZ5EuW9,{77В`)a&FM\ISrXD/xHFT?!r?~yY1חQl*:UJW+4_s1pt+ÞgqgH Ыһ}ۻy {[+⫾&ʾ7 {+B{**ӛGW %PU T҉!0\۾c%JI W@C<z!pvv3<9*b`E/K~RR>9W[S T@7g{ X!<  lU2\C|U }UY ň 'a o0p7Mo W ʘ͹] 1[7Z V@_߮P}L-k % mN( >WGc)Cb dx3m$ 8eRKC CeD^1m$t!=[3hyv@4^6~  7zy H6.>1\R5N <yZ[U"+L'|$:ө`QIMG#M&WPZ(UM[j\=mN 6J'g\Q_w: 4B5嬎6 Q (`~ y!㚦>|ڸ8;@P=GUç2s|_Ap}Rp{XPDžIq|-;Hb ,,πR0vB hJ4sx: j [|*ijY c-:]XMFGDqroʵρ%ώޟɿjªQ4 *ԬH]vsJ$4HGfl\ QX3Y+J&تښR)-iTDZ 1C -% H_|N -s!|,o` hdz?bJp="O 4̷3:pUw( i@ڐ掉wR U}bqV~wq Еۏ`R`RP ER`pyMpi{b*x֒e I#Oh_ﮂ5@hXHxrAx1pYiyD٠TՈChJh8bZrj;rb@ -=M]m}쬡{i|ajxl>kItJ=m-Z- /@8 5"EqȨBbX^)Rٽ$Km3аF;7&&D:۷=Ǣe?;u>aDY>1xxr&>˛?>}3ёAfl;s%JH`` .`n[`JRa~"'Na&n#JVb._+>b6ވR3*Vc>4;cF"CUdN>i~5i]eXBɥREe% Daafuf_f$mfvf]qJ'u (}g,g&w(Z1.J)lĎ@MAij. PFiaBZ݃9#mqa[Dq1 ,+Wg c)AP sLA,l>m}MU+")DKKH[ &K[>f2 pRي,_eQ 2pT抬*Ta^` H+X1AI3)iItFtJ/I@S,BL:pN_K \U0ʝMoQQUuj뱡t5\.]h StCU$3D*!i{^EǶ8㕆+OAvzȶ%B%斻2(ԶnƮ|'p'GsW [ܧ+>r\ ]f:OYf-M+ ? 4/ 4IpfJ<Ђd퇉 vp$@!B5#Ƅ.D o8J¡Y~"?,%t?20P#0(CxD%$flb*B@q|z 3!RbX!9 ޖsT, H8$i4\|Rp 9(& hHS@5Gi $ PP#$$8aF6E%KI$F]b8!O9eq G P ' !0eFVt)(D2Lp''4zyPb) rgKpܡ tlf$dbLϸM"I*ag )G# hhbL2iDL` GihLRAF+і\9&0)H7܈!T"4e;N#fy^4OuU`i!ZSpժ_[70Vb)T)1l:TQUZ%xd9(Dfno<(Rz$07\Q_ 0p O"Z*͜>wuנCMU~W -FZTģ6b>}By';̐:̡Ա. )B jKLg 9f{n":MqH`劾R! 
s+sUG!دB 6EUDC+>Cu8z"H0bI#Wbt?ȁ &+V*:ZH8a *8^ ˴UMj[ _"pwrf 6YDb* U.Qf8\ dB&E>a1My=H&sE/SI@.P[2x &8vS4I2Fұ&u]F>ѶHWNӃ[Rs$X,s'F&)YfIXB%yuÀ@;N,Шym+n B\*8e] .iTZ 8fFQ󶯛k''i7ph>\:UR4/*S)g)uqMpف{%a41Py@A$ zd *|iUo@d t|` a8"{sD҅%4j^ Mւ8dw @m\qdy&a,1 5373?ε<Y0Pn'HmR`]b\P]Z$h29v3v!6es6L/W]ĵ` _le Es|1C8c|\82}`J]FVzfjvW zC:r`e\Mqs׀eF`8\O'rn<<_~H~(& ~Vf{m؉jz֊GbQs yz"yg񑞱 IgJw7~.ywwsR39q,ْJqHvG@7>ّxF%QpqvjrDuOCot[) >G0?gYQ A)myfq)jl)w)vK{)#YɗtY"I 0wO#{N4x #`4ST#u`^~rto6T5InBYup0H9 Y/ 50$"1&SRR7ф* Ry(:0e㈇V J/ 8UyuUyU]uѣО'{9NL 7ks PZnaZb1_ɉKA4pDž* aCge#)2a꩟ mGMaMIM 5pr2_QÆI)4uH(~ʡz`.58 cD,}ՎH͕dc,)θ Ji@ c\qX)E ~ q,%f~QWJzy}:%*5 aa/y1P{]sqԡijg %s"۱J@$ ڔt!&)ٓug@TS'9ٕH> dG&DT2aO7tPk4ИTG:۵g;3'˶m o *suk;y 4 +eg) ;z[j{j;[g{ sٹޠq 5isDIet+*0̦u]dKuɫ,ӷ=+4k0Ix3crh |)|U·37ÄSY++G ӛ-h2Ҡݖy}  Ą]h x,ؿ5-!/sH/ӓ uы +31ыA> %jM*Dj/+J0.d/*@@K*EJ\ g,"v9L젦1fJk0MLHJL1UB#XđȊ[>L 3NJɕ =œE;1 `]JZSF?exDBE3<Ȭr\e|6<,cb,^55[J rL ʺ֪/ZA*ggS'LU϶ {U!-AUWYEV|ѕXƴKSEu H;ѵDF'Aڼe2}D]L>*DM F=5<}B@=LJPݱN TRX-V++cMP˘k]A{˷ qMiKWmw͗p}}| x=ײ؉؋؍؏ ّ-وm])ٟٛٝ ڜ1m Bڢڣo(g#}921`@%zjL}Kijȍͬ{ۗm {ۛۉݿ1;dMmz}n`- R\ &+ݟP`T}},w5|3 ٤BeKWS>ƗȬl6æ4|8(Fih[إ5qɬ G77!;=qy3 \i\+\,`]KQ{|G}XT)}9:U!Ejy p۽C< ~޵]ϐLEJٞC*_X$X| nڎ^S$V^ cm\R"W)6:(e %%r#| 6V_WZZV e^l"\kYv=yjH` 9p <_ >_ B*x?POQOA p I>XZHόƵre{Zmoq/sX]"Kʌc+7_wOR"K.Q ZʟUyt?2/tz ᥞr著ݙ sDkľ#{`ηPC$0 ' :oayRQT:sT?]8`.ٚ=2' hݲooG E7KR:'Sh DAQLLD:5[.''5[ L'#[#'ĀA.QQD 캫5э'.υ.@3 ^ '\ȰÇE&o TT@ A_B: 081~-cʜIw6YUTuA[H.kXڅ5ݢZ4frJJ+V\e  X;,cLҠ۷pʝK KDf˷oΰ~ meua e1sq\T 0LOB̹3<~jJAΈ<iϰc hV l!>zg{{UtϓKN=UE?S]Ϛië_|JϾ}wW9 ( DM6,6h=XS+ipه rh":$!Sޤx4^5ۊ5樣z2Ƅ@\-(HnFDF&S-QPViDRBD\vNmdCb(tyɛpvfCi &!*蠜'mnf^{2ԧ6Σz&$zv|EФ]v8KōRxChP%@&&)Yj\!<%pAlqTkc)/8̳FȮV+<^)l4*2Y.!D`m{j&25 o!5nIJ:joM۶ӭ{B Q s4-+f'/x\rB2[1arF]I O3j8I% [;d7˜)ks;7<_ñ (B~zH9P:`oAAxBXyw;9wpV&Nc>P!eb@w%>B>`"7PuF0 q+9I/h PqV|٘1K#+ O?[G 00!c. 8 [, 6+YDsF%B&2KG$qP2L89zp hLW'-@M1;* ˳bF-ϑidD&QQMT2 P^K$YH`Jz 2,Af,c7岚#gpMjh! @2r#Ȧ!n~tEm 0 S!4@T&1ddlI.Ō|dwᇸn,J#d': KdJY &QeXp a<˜ ^bP`x2,gg9/+aϜvij>A 6MY{1QfFc8g@S iA  FK$̅?'KʞauAAlhcg52=DiSW02qQBcڵ r@#pm=[3Y֥ mڢ@DKTق*{:Ҥ3{YzFQ*op] 9x礧0Zj϶Z`oPJ\>LY'h%|npA([%VK\'sQv 5ywoGY~пPGGK O~_d? O?~~V;PKmQ2 bbPKN:AOEBPS/img/char.gif'GIF87a?.wwwqqqmmmUUUSSSKKK;;;333!!! ppphhhfffddd```^^^XXXVVVPPPHHHDDD@@@>>><<<888666222000(((&&&""" ,?.pH,Ȥrl:ШtJZجv˕6`L.hc4pe"60~?^h,1qO1w.1ad qdlvc.p. V  Q L+QN#K1L  ! .MY+ JC+/`?NՐ 4Rp"12R1@B,FAh1=#϶kL!/- Yd *C3K3&1 ;jtB\H(ˤ  9BLEQ%h@2&8 2=v ٬[y"W$ fD,=".ӚPxcdȅ7.֮sǐdjnFB9f!-\+ V(3VM@D! apX^c1%  HHA$ X8QؕnߏB$we @twU8 UDTHSv!*la.d"c4g8g!Hl X0֒BARj\ȗAtY!PʣZHք'Y"[eWյlyr.Bal]IcZ"GTBޫ+Ll[*^c%\Đ9%` 5q8G1½t3Z'S܅rPc@OŬVDO8ًr32caM4d :=cQ3 Mn\1d$3#kD3M(lSF3 46KTb7[޽$PM @wRxLƸk!4#N~6u7kK`^NEY^CZ!)f_G>y["+5#]JPsP1TрY|k6%{.=!'~X1ͺo.;lq"ﰈE2 cLW Oc`Fp<*$p60tD>>888000&&&$$$""",~@pH,Ȥrl:ШtJZAazx`Nn|Ngxf:}F{hWTxEF%BCHgDCCDWʧB(%CUپۨ|BSߗ VBx&q8X>tly7K#;L(P(D,bρ .*bfA5RЀ`~EFvl%. cMr&+Q4G!PńBW/_e? A9G `hAK2Xd.-NU':B`0]oCT1$|8Lѐ3bІEЊ[vtz۸sa0N\"_žlẖ$=魭 N[{ ϽqH|D@˟O?>W% ȿyqf^T 8)D~ `. 
aR@IM2&}+"~#X6"P4"ؤ=}Gt,q3Np DUζA (Cu[` jyFp*Iqo[z^E#$^~YR}Skd`%`|':ONw;-U-o ڄ+XN <;T^=p+~99 8=kֈQgZr>hA'`0f:0ma9fX@nC9h>D݂خ ؝%NT8MX P1J؄fpdԃ8FRHhtXuc3ŋ@2ۘ!$וL׳\7>ҏ z$NMk;4sМ 9.rL0"o$u_5PUnSHB:) |U5H]3ubNx= 6BپnYKSY)cFMEbiIVY4.(#$BS,QvvS 4mB^)@*o(xK`D.K^PxHd 7cL[MA ?f[X6qBU<,I81Dž+ /;O 2o=K o@S&`+'KLLN;PKgPKN:AOEBPS/img/impinit.gifJGIF87an}}}wwwqqqUUUMMM;;;777555333)))%%% tttppphhhfff```\\\XXXRRRPPPJJJHHHDDD@@@>>><<<888222000...((("""  ,n)å>܂,, ܦ 'KЂ,@oŋ3j hH@Ǘ0cʜ#6hX(Ȥ!)-TѣH*mр"I.PDAg˘3k16trO-V̺yYd!RH#i&yx<Δ)+b 3B}~16 `Tbc)A @t%#_ިINViO0+6Wȗ<`t'$a h*(;jyOEF"u%،bf蠘fMiSBH.Ҋi`Rު}触 ˅魶j`Bx4# & ޼*ТfC%!"jRI_ԇ'T)隞ʜ+,0{W|ܲg(bna hc76L4ܰ `r,$l(,0,4l8l@/Q 3i .6ARf+H#  Dmoǫp\w`-dmhlp-m` rDlRz(ѸNJH.bo! į0 8;I<wBRc ­T>HyxᏧ. \Iʊk鍈kijǨ.:p>ʨJJsÁ H<%Px싥ǏM8<< $n?p5ʈP.iD+><!)e.(5y/|D Ф +ؠ Q@v~p.`Dt`a}9 51:noog|o<'^^x"P X@gt#PpAEQ1 6-O #̷8:sLGAY2YJD% ^"If38D{*ItL%gj?*w / YR "ls$b0a FT{HR@JJFDh3k4a ]SxDjHEV-j%NIQ30k\f:P.T@1 1"gC7 #0,9Jb<4 I'ҒT-#h"R"8=MQP񨄔-QbPK_hȖ|aֺPA!eKu,@d|ΈYYG6"P3,~m'b>!nk;Vŵygp;bW\ .{߆&Ad7IH\q~@qbLw yk*fq\.TISBD'D"que{:8g6Gw ,i >G=p$(@%@JKv“ZAd$DsMS>+4?gxB$OrFKd9[ȄiDe [AIuo/cތ/IT _j 7&&f>}wY{bG`zJUtS\mqkfB@?@2=.ؓ[+[sHiHײ|#`cGc+&Twp;͇Wr;eq BZq?Cd\$+ #=yqv~1?CP`o7a%Ay5' hdM)'818@a`~x"' 2,D 4@? br{фa@*mg S$|Lx#xpe1wЂD\S0L+& 0{E3eH+ ^h05& >mQ=b> aQS'[1+;a?`Ā9S!0VHq#0LC x-6BP{$L!NcV8C9hdIP [s' Zb[ !)%Dr.~B؏byހ'L H2Ow8 +;B8!/'.x4i Jx x [cevg !Iu3i a&enp+u"ax(F6{#x°2 <\V"17)D#^Z<y5rT&YsI˱zcU81yy$"!CKq>s\b_%MVy&$&Iwx s%,2ѝV lifH )/}, V[R!derx "U ;DE*M#D<-Eji Z kGR; l9qpj2}*}Z zvj訊@Z$Ф6@OJ Gr>j zP¤ЭS #?zvn7* !ZbfmZ?5]Wڦifq݇ > Z 4p`[07 zbo^; 4ː06$kIĪ1T 7 4&X%~dn(*Pr0oC@z.I O L I Yc g+\"_W-6#)l"3#@+nh( ģyp:u2sE]W8eksf+ oG629/ 3ve,ULb Qx Sy;dpt}fj6G&&^b+>s`aX>I+87K$:|M {3zrb hIW;AK{5[-=5e. C[|+&m)@,a7Vc%[g#Cu!sT$B1DC1b7؈H$ *X9S\n{ +١qN#[hh!_%L'\bjX,*Cq=a]grMQ_UhV#?(0q]/q%;vGёGbX_^hf_/ 9|3ʪ0?2^:57˼"p;P0BT]gyܻua}Ե0<|lE6'rH'a4VN)?j:]e`t㬐h5hts 6-7@bVA@ } 7;;ιhƏ̼.t|9;\+'c8%iEܒ+둣 ORxӎY2-EW:>D9Cq#hF١ = 9֩{ G+߂mڻs ιI 'Г1BkIq ނ*mDg⸕fQߦ3 NVbo 9ASۇy-NҜC5 H#dD4qXz lN0` ֬T]goZ 0 XZ[I |紀eW.GFن~D[ѱ3 q' 4 qN ;PKkOJPKN:AOEBPS/img/et_lobfile_attr.gifGIF87aNwwwqqqmmmYYYUUUKKK;;;777333)))''' ppphhhfff```XXXJJJHHHDDD@@@>>>888666444222000...(((&&&$$$"""  ,NpH,Ȥrl:ШtJZجvzN.zLM|tQp&zS:3 :#/} jsFKK// Mg&NRP&C 3Q:/PyQ!*D*\ȰÇ r" 3j\X": c!8RL* (L IftfK=#A*Ct;lM`FL2G LUH ^K^P0DS&Lս7c-V@P*f2T @. ,H(Ǜul7:TSeLkjb Z83λ :u-ӯWJ˸}05O ={'r/IvtϿ PrUg 6F TMȆv xL1&4N,0¸W"a!F9LHr$+5Ah̐I7&!kqW#Dg'3Vj.F&HDG٦Z!D^+ug6I'a*b9ZF1+,P,q6i_+/Pcx;֖0VO ç:[[_)Ƀpz"+$vD,Hl ⋻avS-OkMs Qh 8[Gxt7KU+DU6eʖ/Bg,-m\E]Vf iQf唀>QC$ Ɍyjr6h`[+``_b!yx |`H, l:2lա{-{غU-z("4^n pF3#pc3w2GP!'ObPq@ `+-@8@%TrIH/Ln)8$F:ZI(;)RXsJH|3(+{Q!u[|щPsrWXa$ rŜ ]v%K 0{t^=.&4ռ/j@rH0dU7f{ 4paa0ϓe=[Zc XOFS1Bmb- T_L =(tp(Luz3 99P ~cD5 _UAp@##pU sS9acZ2:P>ɓL; W!<-{]d3l2Sʟ4q<7OS/I#\9vNF$6A~ShCL>B:]l:(lWi4EH Uv:$~x ?X`b$V .4r"CU޺&vrZÉt`vԜج0HX.sYbv픯eXqn[ ֡0 JExY\8ր;6-M`Jq'N[ϸ7{1GLsЀ;u-&TJrJgN8yӀK;SΧa|sx3uOM3 Jfe]j^ꠗk]5b&#hS:^w we0',VGȲ"x;s C SMO7͖}QZ== ^9!Cu9H2y{43?,@T@YP3@l۳Wy?_vIo/{A߳c8h0g۳Fxh E {yǀqw`~`GhvI was,5k +{3vx 5Se=b.CG.F'4XtׂbE\,4mWd0ŃjZߒma@MrEX3CkORj"u42BYpip&G1idFd ^7aTkuo9;QyUyx&rA&QYqy9iζr9l*j'Ys0HkRpV4csЊLM`"XQp&F=2X޶I4hbcȟoËpsZ SpC |75ls7B\lT6c6@cS:8t9Ɯmk2Uw.By5n3AD+7j'K+gA YV<`<-JAp"K2p,[rh& ]$uN-~ʟJ;4Bל/,7BH<: E2(46SSKONp3PdTh H:j (8P W’ p2,$ V°*\75az8T1:6Tf4D5?&K+VTgDO/U&P$IPS Zaՠ)2iטz Z 511s־6 ]?ƨT(SxuT`0i{L+T C$u9;f0z >><<<222000&&&""",4:@pH,Ȥrl:ШtJZجv}:`L."Lݸ|Nۻ1@p wSb H&lpU(.FB& (TBGDqEyEFBx K].F cB  VC}-ŋt IvЎm $+*!0J cGȳg 6vKH X_2uFt>j]bC΄2) ,S ɝKB\h3ACb(c(RtgOpԀ}H $rZD"p5:E1|%cbHP:pIXNa"KF}4o:!:239)G5DMH=Ftp! 
ͣJ^ļ$;5cRT$F0>\AA qI,UAH;O64 upk@ bw0HU4 b (l6R-<4!j8.@v.VgPwY )dЙycygp *wAXw/5E4mjo1 tl( rk`鉰QSI{Eh?DW #2I} !>O7MwFS T>WW~?A鮁M^% =gĤzygڠY}J (H-20CZBùGcԛJqmAv* @6fezHX\`/- adV:p[aPU@2u,ZU E"P !ATJ ydIEJ`:CzL~EaNq %RNA2R2 #\ BPgp;;,Vȩ ~Qf̊[Ew9))V "x`T;t*F#ζhQr M+!_=`r  [UnTFע3aȌAڲ._`;3 ꁃ$ں!Т(f5m"zVL\( 7f1;_.Dd9I4Q} HG3d (Pjr"#I AvpB"WE3(U1 naX- :SUi|L|~&SIoȏsL!:Q`!` ׂ` AT;PKd PKN:AOEBPS/img/dbverify_seg.gif? GIF87a`}}}{{{wwwqqqiiiYYYUUUKKK???;;;777333)))!!! vvvppphhhfffddd```\\\XXXRRRHHHDDD@@@>>>888222000***(((&&&"""  ,`13  +'GG' ďG ʠ >+:3+3'+G(ذAD$0ѰF.>ҧȚy(BĉSNAʗ3L+H*:QDy3LN2i(†Ttsҫ"Wc4 9uccvQB6PDaCWBN~UP7e\AǬ~]pcgAZCq9  UOD+@q k61qc͹-!+BrI 05ř^abWN5YΛ\w^NPeJ̰. k TOBpU.-(^V_~|-!q"qޝ7O>CZu`WS$zxE] 'dRpU% avjYuD)bX d^:d*@E“t~;7` bf *J?M@J,2!Nxj'y=ZU=k o"J0ǚ(b)4PxV"(*+x&`p(} @ZW `lypsE/řa\ qFY .V; (T:l6V\څ)%dl ‚Y 5GePpGi$PiOAfkP5m0m6dcNx{%7}8[5$:v@{A IQ-#`rIzdvS7"L5l0_rHbxa;3Ps Z KP@gw5#3̒NHFE}, H-=zg;RMFrRѹaQX/E*2 + lO' b a((RX\p~b P 慂(b@&!14>zP@T,#x7$nbsa'jf)уir<-G \$zG+v17ʣg$2Ǎp{L"InlZF8IAd3$C&GIRÌa+iV\*_IZ񕱴.[K!r,bRr21,e&Й̌l~&. e,`%* ?n4wHL~As*ti;t8ٷLDs+N=Kv];'P#etSC:.4outE%]Ajyp!2rGװ).K8"qk{ܞo{BV])_Dd)6I8YY溋5N]H]Ĭ;;,7G.#QH}\쭷-G5;]̦[60-ҹ->W Hĕ͘WGbDLGDRN|V!|qHGiB'&|,tyK 5*T߼/˄$>2#UD^~Z$~OɥG+$6="#y G92\Oՠ3_@/Ш3"+JQ/OW~ H{7M =XE? &H %'1,uzpIIՁqt Tڴ'qOgqH|@(TJHL.Hw %"dJRP !zP q' |OG\TXy^DBnEAְ@@eGXDuR+(ZLvx6GG"T1VM>PpSUc$h'hG~S{WUE]sa#( tpY6wsB1pZ%и7BpA ?lsC(ez0Xo9|X_(C 7$^.V_TR"^ HY>w/3)MIA[98{\ER0F&WbbkB^r%]$uL(: K8t4WYv\f?,gB.1df/[){$hPh8wg#7-V 3Ph?:d.)^Fj ?4llih@$ ^PCCim3e"HBUtaoFXxWA up`s772pIH4JQ-:ݲwY3O$YX7]m00F2x?z E%ztMkM O2j dT:6z2إ-^ڌb:WLthSjmo)vltzA6 A G J%GOWjAwM/,4aF}Y |~BMa jtx ~&2d0N`9ZM-9^aGɤԺPJG Ual`IjcLF׭yQPXPBOI]Q&")9ǖ,p1KXV>ךlLSz8JiHeɐ30FbJE2b\4.,F3C$7wS8-O~3Z^x_cb(K'j#ie 6gYB9)QjAvG >`f9!0t2%,Fl(y'rw򯰶V%&òǸGIGRi1`nKF9gf}1JLIh`֎~!XJUE$Sb@F`Jsk0Y9Zejn;.D m'q%I%0 Ajq)ix?r+@Yu(33%)Kq(btI+Bqs Km5`K%!5(p_vV ljk:J1U!uw  9 g[l'cw<[;IK"p029~Y EAe[wC[ 4UĄĹ3(ŪJ1 y, Gyr$ -Ӫ@N!6\ӭ4숂Lj1 : HPӭZ}oL &GZJ/ǹ_pI Ǩg{GL}sLz;:-tʙě<QgPLD1.rl\ȉF 4pOePЩl7IiLjYW W xNrY[|E9ÿ?"1^ķ ̴a6=vce2YeHZ̤L0AԊ>Uciv6${{ѿN @IPVka9'@RH `ߪѯ/ <1<}k-Hb7\4e73@<1-ӭ&J-$eq<8$eER-ta;'|º ziѿQs: 93/@3 ,QYڪ ڥn&An Uኡi)n $  .5zd=DNmI0ۄ:QR8?aV~̈́0ﹿs4♀7EH@pp]< j.>nwŠς! 5Wx \|/|@ s. j( ޓz@Rv.L[` @ٞ鐠S^|Z5(=4W-x}繾,2YZTxpQXമou&D׿1΀X].Nū`N #;X]J4mӡ;@ 8ОB8H/5  "/P0JTݯu3;+NOU=E@>@BOPDe[ew((Ub W!|O\q4+ݝ{c$CAgiMόVծFI] x[OLTu`v2 ܖ{֋ϊ%n= {-ׄ)`*POɚ71<+@.IA3fq0l-p92 L`Tz ʏ!>_;RSMQ@.)]KA>4r#x hxX8SxxdAqXiy险9JZjZACYxh <L;\10[٨@ᬣmz~ .^|R(1 j>OONn?*{ΐ}+X …2# +Rhqƌ1"+У)([V"r̘TbI̜vDfΟ"}d诠*hԧ~rJuT[ W]iۉБlۺ}V,{gu ޽|{ᄎ- >ᒋ;8dǔNx朙+v qßC:ZpӬ v쬯&7ݼ{HRXq"C|~xنj ܻ{w+^B1w`ER:K8X?ۺ>{`39\%" ='#B~ şD!E 'Ͱ A'~taX:lp_'MxB!k&փ?{ >Ȏ=5#Iӗ?`'0%+:PɋEYQv<Rݕ#!Gp5i V1I2b4f8S8W_æ Xf4r('X =*HCl\I[p dBXr4y!0HxjuhIqX23m}l ͙ [N$꥕ T W&y+{5H$f ЬpA%Ơlp %f:Bui|1Db_&:*Om%+kDP樍oGKʎ `[$XN}zŊf$EB~][ X겑TW} vbTj+\V&Ś %1ꭀ D.Mr:@K!G`@xoy f[GJ\k }+'u36+L =(\!4C(B3ұ5֥xfZ5fu#+[+9/(@jүC8G(ɜ7:π9'*u %P"@@`@ \$s$&(ҵ+:]"wq,/6r&bԠqX P ^8&E"NdC@)3JD萁X+0 j@Ǚxc& xWCD`%J:,$t0%||fH9AJ9=m<3!K^_# TreLTXĕv-_ID,^ 3|0bEl'KHrg8(XͿ(B2!e*Dtst4=pj 2 ε'X9%&ղ"/z3!t l A#ЌaB 2,( -4LN(&ZHrb 8,3 uS؀L)c~VFDG89`QFUz3`DRGI T0Ԫ-d bW'U `k_cUX8XARG!ȄjQAbudl,VGUkd *RHC:DzmBfqTPYqak!z0K*FʪUf,Z]Dw)IfN,U>>000,,,&&&""" ,%@pH,!jl:Шt:e$v^xzn1`xcC@W~q` ` FI_ I d IœHb^Eh""Ch iqBDjHFE" CB CD,00`M AKb{# @F!<82㕄C,8HcvC NR 4l``XI DH𧨜 2 ytԻY  W5DʒE1'QܙCdWt\4rF A)q*vP^"Nimے|F@CGov (D =-nGt EcHX4lihQs 'zOk8Ӟ@$E8E@'tcCvIpD!|`Y`GU|(Z-VFt?OM$!U`bMu84vXTfDdsE) \#-ؘ;'eT6puv]GS\S?2D?Zu΀9au#eGx|$ )iZJBD WD $bD)HDJ4\JY^y詥^AE + 1!afQeX拙icD;8jj*گM$\뮻Ca-}kta y,%3! 
@֔r`+@QO6Zaѷ} |>!dښʲ­٬2F#nXZ}bƴ@*Xמ%|F2eQٱJj]g˝C-1~o`בbZ٤J׃r''d##y7N7p aLЮvl!ΨD$%7eo'zt L Wu_$8FJ@'pѧ;Yė< ۼx@Y^"RI:-AXad֞a-͞[:uaOs?pò +f~}*)+Yӱ;9fL5) #ͥA%FVg HvflZӹ>y.%v9UX#f:tX Q#V\:$*|=5TuPНy&+mlR` YPکګ3LpX bГ^6'a+o,+1ܯ{lbv2z"a!FNlyPʌ3,u3i9'38tcśئwVmcgQΐyNy'W1HAd$0LDLF $]]LFAgW0@.[e%Ʀ@BIN@2>dRR$zfn`ڹU8W"s3/2HӏZqC}AR\/ob ~mNN](b& LyhNϨ<z0S=>qV I c蝱>/J,y4eoveMX̊c`JX %!JVOc"lDo; D@8p=hx`}H$e{b'.A t3^n&r;L"f:j,d:=c`LUfo|0X'Sk60EEGXZE m5Z ^_F3.Z*VN1s01+.C1U,,:3Uɑ:p\sPsYc gNGO|d?^ 5 ,0wo(2>^9P0f\h "A[(0NS>>nF\3445v刣Qh3Lku8pv5ahc8%jkX {WxQ`)7֊U!$,菅f y 9MX'MӐ悇b‘wYn`$ NƎ$&.I@0i12YvqVB9 k9 2;4YzNP9,rZ(T9e*9/5&ZN\!/ B2T攔gHML C]t]ˠ]',E]5EH 4v`k=A7;QXamF,fr tד`&A@ #@mh>Gt}ř+JFv;d;™!JYdփd!bJCfH(o yYefv e&m&{&1u~th@D?s? QS1f'#D eyf *xmj*:!LjAٲi!i gTiٗfo-llYrLƩgƙApSʁl2>d4$l1pr-/p,wk` ɣ-!;aCG(@[I +u6EPڏډKiJpCy=t=4h!zdC_Gr`Шs3Qx'*x:yz{z#;xxꠐW.H| z1pka@eUڲE H@t(*9KVڴd7D[DyhVa+˵L+Q9t`8WIB;ӕ>3#v-~3KXR[@*UbQ+ٸ YW|!3\[Bk 2h1 {2 8ۓ[K28KG9{[9K7(k{-9ë BX.+< eѫ[9a,ܻ/3~<( L DA)AK240hҾ[;+(a`-V`cmx{-qKql(f?cВ;17%ʱH+OȈ41!}4t\](,B$;CR ķ8khQ'J|bΠm묪  x`~t=7"Bmk.㜾n..ތȶ~븞' w[u,~̫+eSXvQ`Wk= 7T.+̐ԙgh5Zn ?+7 vۜ]v5P=iN>D9dn;).ο.DS\LG.|-/ie7Uۅ+߻h^_cT^Ɏ9j 7p\>IQm]ƙk >n>dz.p>> Hj_:I=HŠ,SN%&S T x^V D0ߵ%,CҁbpV`B IVFtAԩTZZƼu߶r|xMxjTXE A` N6jdk'mC6*.v&OM6v: ͛k= b3{U ;ٴ7K6ntv[#]\.z %bYjLGMa8"%ߐ T+vABJO òG e]!۩^%.$L`o['a,VK__ " T^tDhACox@ @3wy #&(\v~t$3bO0 ) p5Ta|&jfE6=PRlL !$r!ĸN:(ЉmmÝϥzIeF$jN)-R"ZȢ|Ȭ@*$%>fjj뜿lʞc&'B@lf+!o!Vx_3L{n2 ;۶ h뛸j/<λ1%7[{&OXW(n|q!+ |rC K,2׶<Ȼp1:/Kr :T+󺳾l,Vt  TH{-/JWNqa[ϊʪH<PC\4lәM,!=x/x?yON9n'#YpTz`n6jCn{N{ߎ{{ 7H\Bk/@9ѯsJzGx=}̦S)bzv?NML2H=o?om4pAM oYrbTok \91Pu &9JJ gaPj4Azv񙣁+!,-%pꋡf5JaH/LGA"7# HC>BmO<Ġ4FukcŝEBx'hQ|D2ż@ntڛܷ&'_IMg^וzC.Dd$?JL3r2yG@+e"Lf: 7"Hj``f됳l5p`!/)&h)K|EEPJkt AtȪNbLLdv×G# OmX'.Kc BXYle<*!쩚h4R :`aD' NgS[miX1_ RH>p4 BLL QyO'UTP*+Wɀ z1CD+T@Q3ҟ.u;R`10'@J1SS'ګ5@3B}Я*hx41բ#Ш5s+d7Bƪv Z|GIl)Q@ҩ:뜂Ub غRĥiL+? j-QwŨFjV\凋Ux5naj^"U.cQŶe|ksd='|aVWnҊ/6ˬ;噆Fa1ZX P~ib1㊓"cYc_I8d!ۈ3xq7BG{&O\a x5r)"N %97cF}3$.c3!Cc,f#Ɏif':9?> .fr4Q85DÈpi;>Ks"c;kwܵT<\ЁRU00IDLHz#lܿ"ցuE? wL$CΏS?0wnvuf4 eE"GdOƔIA;q |YT:/}n' g 2 *!j4FBx.[X1L& ƖFcKٷj&WdX(!S 6Z+T8VNlhF~]Hf!n8,pRsX!h8&\)}}h䇾<PkHUlP/Ȃȃ艸d$iBԊrZO!uH'&3V8MҸ4-=@ddHȍ1:Ȏc`CClj?юȎU\AbW&CI8I3 +0Y3chis.vgHT5OӲ#>X=1|V٘g3D`gaR+)o\wA)X#-œ`H=t(VMD/ Q=^ȅ!,)=8OXvG!E#nRT &+H6)4 ,$U1@ ͔1KIo:EP|Y UMYӒM~vgO){xE NR 9m  u2"ہq EyG(RBŘ#(Ns1(hqsUqXewxh9``C㨋}-0ҵ !0 2Wٗ(r\|舝z QϹ"YqGB-P$3 Z)@cU) w \ "t}Z1zʼn_4-6TL1dY~|rpt9s~3D֑D:V.1(t9E`|Ͻ}r׸FϹ\ӻ-Y܃םL/ ӝݽvЕӌ#Mi ߍX]jW ^ yf> ^rFޅlm-,Mi}Ա\| &2$}~Tͽ0KMLN=HOq JE~VG_a.c.Y cdz<Ҥ >h?7:wi{aڰx8ު=S{oX,VY>=Kv60X.(BMФÃN.;/@x'HaX/_T}$ڧ#V鉹ɖ@ riv阧unkF(HQfeE1 \)S(N4b{ J91*^0yEA~p1O/#>$BH np>Jpx(*[ 9h{"+5"XQj[Q|7HMvQs7p +Ub'oPI .~ER(jz.4"Ǘ2j*z$վM:!z Gu.A0*?^pޔ(AB/ ?|oqF 8Dk3M>/%ۗeFB n {5+'Wƀ d' + Tj۽!Ae JZ`ĭFpBa[ #,xBA j&:ojQ:1$:(3H+1=3$1(H1+H1:3(:+ 3:=1:=1ٵ: 3(z/qz N$=섐)B mK`z#DbH$Vl &aK1o?+(!ӄ @QÈNmx7A ̲UE ZN@hӪ]˶۷lAB+c:{tE//Y$ Bx#BX)Escc0`DMuڰQ|\!+6V4Z\ͩJ ⡚%\r)Q۾M,ږQ[BlexË?LË^zs}lkp'5gq*AJl؄2` X Le ((YXdT8E+"Sp-tmx|wgNbt STIS]@?6Q˂tOo.褗n騧ꬷ{ x5_cH? 
CٗyI<**̧a=Ep& ;:LhLlk)>]ݷ3 $0J#jK*03_ZKh $ˆa(sb57"*DIu Ä( z0j2k`G2Р-8<!v  IPy 0@eHJZd22KzLV&Q4PĕP;PKI|22PKN:AOEBPS/img/sut81007.gifaGIF87a>R?**?***UU?UUU????**?*******?*******U*U?*U*U*U**?*****?*****?*****?***UU?UUUU*U*?U*U*U*UUUU?UUUUUUUU?UUUUU?UUUUU?UUUUU?UUU?**?***UU?UUU?????**?***UU?UUU?????**?***UU?UUU?ժժ?ժժժ???**?***UU?UUU?????***UUU,>R H*\ȰÇ#JHŋ3jȱǏ CIɓ(S\ɲ˗0cʜI͛8sɳϟ@ JѣH*]ʴӧPJ2_| w0|0@ o`>}?}/_|(@ ?˗O ?~/_O@ DPB >Q#| g0|70_70}70|#O`?~ `|H"E)R0A~O`GP| g0|'߾| ̷|G0| 짏`O_䷏"E)RH|>/| g0|߾~_>/|G0|>)RH"E ˗/?_>7p|(߿߿|_߾~_ _߿|8`A&TaC!F8a> O| 70/?}/|'0_| 707_ <0… :|1ç̂}_>`o~'0~o`3o~ `>)RH"E 3ϟ| O_>} Co |߿__o__ |(0_| o'p "Lp!ÆB(q"Ŋ/b/cƌ3f̘1cƌ#˘1cƌ3f̘1cƌ3f̘1cƌ3f̘1cƌ3f̘1cƌ3f̘1cƌ)'P|?/ۗ/>˧o |70_? ̗/_> O`/,h B˧_+/_>'P࿁O߿O_߿/ o@' ̗o|8`A&Th0_|/_?G_~߿~3o~ O`> W0_>.\0B#O? `o`| ̧o`>+o`>G0~/| ˷p… +o?7_G߾| g0~O }ȯ }-\pa˗ϟ|㗏 ?~70|o~ '0>~ O_|7>o?}} o,h „ +_>;`>o`>#O`~ӷp… + ̷o?~g0| _>~O`>_}70|'_[p… 'P|_>`>/? _߿|8`A&Th0_|˗@}`| 0@ _>߿|O߿|߿@߿|70߾? 4xaB +O`~7G_> g0@`> 2d(0_AO'P} 70/?}'0|/_| o` o| ? 4xaB +O`|O~엯~/| ׏`>ǐ!C +O?ӷ_|Ko`>/߿}'0|O| g0@}/| 2dȐ!A ߿O@~ _?}o o`~ /_| <0ƒ /_~ӗO>˗` _/߿~_O_}߿o@@o|? 4xaB 6tbD)VxcD~3f̘1cƌ3f,h „ 2l!Ĉ'Rh"ƌ7r#Ȑ"G,i$ʔ*Wl%̘2gҬi&Μ%䧓?tOG:';/> Ϡ>3O΅%ԧs˧OBWP/_> ˷> 0 H*\8p|>$XA o|[p… ˗/_>~-\p… O@ DPB <0B0 HPྂ ,X` O@~ <0… `>8`A&T |O,h „ 0 <0… O,h „ .GП|cȐ!C ˗/?2dp!?'p @~د` ,X`'P> <0… /> 2d_| cȐ!Å /_| 2d0|WPC 2dXPB~2dȐaBcȐ!Å/C 2L/_~1dȐ!C 2dȐ!C 2d!? 2d8`A&TaC!F8bE1fԸcGA9dI'QTeK/aƔ9fM7qԹgO?:hQG&UiSOF:jUWfպ5b|\w5|94_> 廚a>soJ~[/_~ o!|1j}-̷/߾۷0߾|jO`> ̧П>'0ׄ۷Oa>~S߿}o>k/_C5aOak~˧>~O@㗏 A? 4xaB 6To`>So`>s0?sa?____s>s`>#`>#`>#`>#a?k_C}P_ />}7_|#@~Ǐ~˧O?03a> ˗o_|˷`W0_A}+`>篡~ k_C}P_5`A,80+(_ $`A $`A $`A W🾂 `,8P_W@} _+Xp~ ׯ` $_+Xp``A $`A $`A $~+X@~8| o_+Xp`} ̷`,80߾W| ̷`,80_,H0_,H0_,H0_,H0,8? 4xaB 6ta?sa?sa?'p "Lp!ÆB(q"Ŋ/b̨q#ǎ? )r$ɒ&OLr%˖._Œ)s&͚6o̩s'Ϟ> *t(ѢF;;PKbUPKN:AOEBPS/img/sut81003.gifVVGIF87a?**?***UU?UUU????**?*******?*******U*U?*U*U*U**?*****?*****?*****?***UU?UUUU*U*?U*U*U*UUUU?UUUUUUUU?UUUUU?UUUUU?UUUUU?UUU?**?***UU?UUU?????**?***UU?UUU?????**?***UU?UUU?ժժ?ժժժ???**?***UU?UUU?????***UUU, H*\ȰÇ#JHŋ3jȱǏ CIɓ(S\ɲ˗0cʜI͛8sɳϟ@ JѣH*]ʴӧPJJիX0 O@ DPB >QD-^ĘQF7ѣG=zѣG=zdOG=zѣG=z葡?}=zѣG=zѣGyѣG=zѣGѣG=zѣG=zdOG=zѣG=z葡?}=zѣG=zѣGyѣG=zѣGѣG=zѣG=zdOG=z`>O@#8,h „ 2l!Ĉ (_>`>? 4xA8`A&TaC!F8`>3/E)RH|G_Q8П>)RH"E'_|/|G"E)Rh0_A~O`G?(ROE)RH"E /@~_(RH"E +| ̷o?~Hq?})RH"E˗/'|;"E)RH`|_>~ ̗o`>Hq?})RH"E+|'߿8_>$XA .dC+|'߿(p>$XA O@ DPB >QD @~ӷ|QH"E)W0?}/}#oEH"E)R0_|˗|ӧ`>)RH"E O_|/>}˗/EH"E)RH"E)RH"E) "E)RH"E)RH"E)R(П>)RH"E)RH"E)RH@(RH"E)RH"E)RH"EH"E)RH"E)R"("ȟ>$XA .dC%NXE5nq?}=zd/_|˗O> ?7p`'P |O@3hРA 4hРA8`A&TaC=|aC Ç2̗|˗| ˗|'0_=4?~>DB>|Ç!ÇCÂ=|Ç O|/_|/|W0_/"/?=|Ç3/_>~>D/_~{XП>|a|/}G_>~ ̗/_>} ̗ |O@+/_| 4hРA ˗/_>~O@ DPB :GП|{!B׏ ? Ç2̗O`|/_?|70_>̗A}'p @}, P O~ <0… :tO?} P 'p 'p (PO@  <0… .̗O`>}(p߿o} 'P`|8p@}'p "Lp}O@ DPB :|,h „ ?$X?} H*\Ȱ||'0_/|'0_9?~ H*\ȰCO@} H"0@,XП>$XA .dpa|/@˗o |/_ O@  Wp_|4hРA /_| <0… :t?}C˗A~Ç>|ÇԗOÇ˗A>|Ç/"/_?=,OÇ>|Ç CÇÇ>|a?=|?~=4OÇ>|Ç>|Ç>|Ç>|П>|Ç>|Ç>|Ç>|?}>|Ç>|Ç>|Ç>|!C>|Ç>|C=C=C=C=C1?} H*\ȰÇ#JHŋ3jȱF=zѣG=z#C=zѣG=z#Cџ>=zѣG=zda>8|+X` ,X?8`A&TaC!FXП>'0B8qĉ'NL߿}o>۷oĈM8qĉ'ND/_C7qĉ'N0| ̧0|'Foĉ'N8qb~˧>~)짏|'N8qĉ ̧0| 7qbD&N8qĉ'| /|7qĉ'N0)} oĉ8qĉ'N@~Ǐ~˧O? />}~ Hp~ W | W | W | W | WP_| +/_}ԗ/߾ $`A ,XП>$XA .dC%P_5ׯ~ 90|90|90|9'Q?}%J(QDP_5ׯ~ 90|90|90|9'Q?}%J(QDP_5ׯ~ 90|90|90|9'Q?}%J(QDO@,H0_,H0_,H0_,H0_,H0_,H0_,H0_,H0_,X? ,O,h „ 2l!Ĉ'Rh"ƌ7rџ>=zѣG=z#C=zѣG=z#C=zѣG=z#C=zѣGyGyC <0… :|1ĉ+Z1ƍ;nϣG=zѣG=zП>=zѣG=z#C~70_ ? 
̗/ 8}7p 8p|'P|O@˗_ 'p_> $o` ,X߾| G0|'P?'0_| $XA .dC%>O>~'_} `~ԗ_>}{a̧P_~7`~M|O}#O |O_~+oC&N8qĉ'>o ?_+o!ܗ@/>|| W0_>/_>~ȯ߿|/_>~OO | /_O@~_|˗oB} |߿/o@~8`A Ӈ!}7p_| ̗`>!DX0B"D!B"D`>o?WP@'>䗏߿}_>_>~`|O`}O`}/?_~ '0?~ 70?~_~ϟ| w0||+O|OB!ƒ//? /O@ O@ DPB w`> @/| ̗o|o`>/| /O|O_|/߿|˗/@ '0߿|؏߾| /_O`>} 8P`'p}@(0@~? 4x?}̗o ~/>̗O`|˷O | ܗ/|O`~"D!B"D`>#|߿'P` 7p@} _|70| ̗o`> ̗o|O@}߿O࿁ O| /_O`70߿| /_}o} |߿'P`  ˗o?#/|_|o߾o_>~/?!B"D!B '>o>~ `>o>/߿}ۧ_O|'0_|>~/|'P_?}_>/_>/70?~ˇP }`~/@ ؏?C!!D@˗_|/߿'p|70߿~/|/߿| H*\ȰaC~`}/| ܗ/_?%/_}˗O/|˧ϟ|#O`ǯ |O_|ӗ/˷O`>O_|˷O@}'p_|70?o!| G0>~`˗~:,OÄ ̗O_>˧O`|#/|_|O@}'p "Lp!ÆB>!B|"D!B\ȯB}! ? O@/~/_?$/}ۧ_ۗ߿}'P_8`A&TaC!ۇ_Ĉ#.1bĈ#܇_Ĉ'p 8P |o_|0@o@ |O>}߿| H*\ȰÇ#JHŋ3jϟ> qȑ#G9rȑ#G Ǒ#G9O@ DPB >QD-BO@  <0… :|1ĉ+Z1ƍ;nϣG=zѣG=zП>=zѣG=z#C=zѣG=z#C'p "L?} .\p… .\p… .`>'p "Lp!ÆB( |O@ DPaA.\p… .\p… .\(|O@ DPB >QbA <0‚-\p… .\p… .\P ? <0… :|1Ă ? 4xaB[p… .\p… .\p@~ ? 4xaB 6tbD8P,h „ ӷp… .\p… .\pB8,h „ 2l!Ĉ 'p>$XA o… .\p… .\p…'p?$XA .dC%O@} H*,O… .\p… .\p… O@ H*\ȰÇ#J,08`A&TXП .\p… .\p…  08`A&TaC!FX`>'p "L?} .\p… .\p… .`>'p "Lp!ÆB( |O@ DPaA.\p… .\p… .\(|O@ DPB >QbA <0‚-\p… .\p… .\P ? <0… :|1Ă ? 4xaB[p… .\p… .\p@~ ? 4xaB 6tbD8P,h „ ӷp… .\p… .\pB8,h „ 2l!Ĉ 'p>$XA o… .\p… .\p…'p?$XA .dC%O@} H*,O… .\p… .\p… O@ H*\ȰÇ#J,08`A&TX .\p… .\p…  08`A&TaC!FX`>'p "L?} .\p… .\p… .`>'p "Lp!ÆB( |O@ DPaA.\p… .\p… .\(|O@ DPB >QbA <0‚-\p… .\p… .\P ? <0… :|1Ă ? 4xaB[p… .\p… .\p@~ ? 4xaB 6tbD8P,h „ ӷp… .\p… .\pB8,h „ 2l!Ĉ 'p>$XA o… .\p… .\p…'p?$XA .dC%O@} H*,O… .\p… .\p… O@ H*\ȰÇ#J,08`A&TXП .\p… .\p…  08`A&TaC!FX`>'p "L?} .\p… .\p… .`>'p "Lp!ÆB( |O@ DPaA.\p… .\p… .\(|O@ DPB >QbA <0‚-\p… .\p… .\P ? <0… :|1Ă ? 4xaB[p… .\p… .\p@~ ? 4xaB 6tbD'p | H*O… .\p… .\p… 8`} H*\ȰÇ#J0'p "Lp?} .\p… .\p… .08`A&TaC!F(|O@ DP!A.\p… .\p… .\`>O@ DPB >Q@? 4xaB[p… .\p… .\p?~ (? 4xaB 6tbD'p`>$XA o… .\p… .\p…80>$XA .dC%O} H*,O… .\p… .\p… O@$XA .dC%0@ H*4O… .\p… .\p… 'P>$XA .dC%̗/_}'N$Oĉ'N8qĉ˗/_'N8qĉ˗oĉ8qĉ'N8q"|8qĉ'N0_}'N,Oĉ'N8qĉ ˗oĉ'N8qB~&NhП'N8qĉ'̷oĉ'N8qB~'N4Oĉ'N8qĉ 8qĉ'N8qĉM8qĉ'N8qĉ'N8qĉ'oĉ'O~'P'߿|8`A&TaC!F8bE1fTOF1'0_> ܗ|/?5jԨQF5jOF1'0_/>˗|4jԨQF5jԨ?}5jĘ/|G0_ӷ|5jԨQF5jԈџ>5bO~O߾} H*\ȰÇ#JHŋ3.OFO?} O߿~O?5jԨQF5jOF1'P/_? ̗ϟF5jԨQF5bOF/@o|ӨQF5jԨ` Ө?}5jԨQF5jԨQ|̧QA4jԨQFO | /_?5jԨqa} ̷/>ۧQ#A4jԨQF?}O`>5jԨqa>/߾} H*OB *TPB *TPB G_>)TPB *TPB 70߿|70B *,OB *TPB *TPB 짏|/߿~˧PB *TPB *Tx0| /| *T?} *TPB ˗O`>~/_|'P o@~ H*T?_o… .\p… .\(0}o… .o… .\p|'0_/̗O`|[p… ˗O>~ӧ|p… .\p… ./_}˷O|-\p-\p… ./|˗_|/_|70_>$XA .dC%NXE˘1c ̗߿|˗o?(?? H*\ȰÇ#JHŋ1ӗ1cƌ/?~ O_|W`>O ,h „ 2l!Ĉ'Rh"Fe̘1|_|/߿| 'Pe̘1cƌ3f̘1#F2f̘_|/_/̗O>}˘1cƌ3f̘1cFe̘1|'P_>˷O |'߿}'? 4xaB 6tbD)Vx#F2f̘1cƌ3f̘1cƌ3fdO_ƌ3f̘1cƌ3f̘1cƌ ˘1cƌ3f̘1cƌ3f̘?}3f̘1cƌ3f̘1cƌ32/cƌ3f̘1cƌ3f̘1cFe̘1cƌ3f̘1cƌ3fП3f̘1cƌ3f̘1cƌӗ1cƌ3f̘1cƌ3f̘1#C2f̘1cƌ3f̘1cƌ3fdO_ƌ3f̘1cƌ3f̘1cƌ ˘1cƌ3f̘1cƌ3f̘!O@ DPB >QD-^ĘQF7ѣG=zѣG=n  <0… :|1ĉ+Z1ƍ;2 O@ DPB >QD-^ĘQF=~RH%MDRJ-]SL5m޼П H?8 @'p AO@ 8? 'p O@$П H?8 @'p AO@ 8? 'p O@$П H*\ȰÇ#J?'p@8P  O@ (?'p@8P  O@ (?'p@8P  O@ (?'p@8P  O@ (?'p "Lp!ÆB(q"Ŋ/b̨q#ǎ? )r$ɒ&OLr%˖._Œ)s&͚6o̩s'Ϟ> *t(ѢF"M"?$X,h „ 2l!Ĉ'Rh"ƌ7r? ? 4xaB 6tbD)VxcF9vOG=zѣG=z葡?}=zѣG=zѣGyѣG=zѣGѣG=zѣG=zdOG=zѣG=z葡?}=zѣG=zѣGyѣG=zѣGѣG=zѣG=zdOG=zѣG=z4/_>}$XA o… .\p… .\p…'p?$XA .dC%O@} H*,O… .\p… .\p… O@ H*\ȰÇ#J,08`A&TXП .\p… .\p…  08`A&TaC!FX`>'p "L?} .\p… .\p… .`>'p "Lp!ÆB( |O@ DPaA.\p… .\p… .\(|O@ DPB >QbA <0‚-\p… .\p… .\P ? <0… :|1Ă ? 4xaB[p… .\p… .\p@~ ? 4xaB 6tbD8P,h „ ӷp… .\p… .\pB8,h „ 2l!Ĉ 'p>$XA o… .\p… .\p…'p?$XA .dC%O@} H*,O… .\p… .\p… O@ H*\ȰÇ#J,08`A&TXП .\p… .\p…  08`A&TaC!FX`>'p "L?} .\p… .\p… .`>'p "Lp!ÆB( |O@ DPaA.\p… .\p… .\(|O@ DPB >QbA <0‚-\p… .\p… .\P ? <0… :|1Ă ? 4xaB[p… .\p… .\p@~ ? 
4xaB 6tbD8P,h „ ӷp… .\p… .\pB8,h „ 2l!Ĉ 'p>$XA o… .\p… .\p…'p?$XA .dC%O@} H*,O… .\p… .\p… O@ H*\ȰÇ#J,08`A&TXП .\p… .\p…  08`A&TaC!FX`>'p "L?} .\p… .\p… .`>'p "Lp!ÆB( |O@ DPaA.\p… .\p… .\(|O@ DPB >QbA <0‚-\p… .\p… .\P ? <0… :|1Ă ? 4xaB[p… .\p… .\p@~ ? 4xaB 6tbD8P,h ӷp… .\p… .\pB8,h „ 2l!Ĉ 'p>$XA o… .\p… .\p…'p?$XA .dC%O@} H*,O… .\p… .\p… O@ H*\ȰÇ#J,08`A&TXП .\p… .\p…  08`A&TaC!FX`>'p "L?} .\p… .\p… .`>'p "Lp!ÆB(Q`?$? 4xaB[p… .\p… .\p!|̷? 4xaB 6tbD8 ?$XA o… .\p… .\p…8 ,h „ 2l!ĈO H*$O… .\p… .\p… 'p} H*\ȰÇ#J0'p "L?} .\p… .\p… .`>'p "Lp!ÆB(`? <0‚-\p… .\p… .\P | <0… :|1Ă ? 4xaB[p… .\p… .\p~  <0… :|1D(? 4xaB[p… .\p… .\p@  <0… :|1ăoĉ8qĉ'N8q~7qĉ'N8a|M8?}'N8qĉ'N$/_}'N8qĉoĉ8qĉ'N8q"~M8qĉ'NT/ĉ 7qĉ'N8qĂM8qĉ'N\oĉ8qĉ'N8qb~'N8qĉ'N8џ'N8qĉ'N8qĉ'N8qDM8qĉ'N8qĉ'N8qĉ'oĉ'N8DM4DM4DM4DM4A <0… :|1ĉ+Z1ƍ;nϣG=zѣG=zП>=zѣG=z#C=zѣG=z#C=zѣG=z#C=zѣG=z#CQD-^(П3f/? /_/_ ̷O | ԗ?~˗/cƌ30@8_>'p "Lp!ÆBϟ>%J0_> ̧/߾/߾ ԧO`>} ̗o ~(QD%̗Oa$J(QD'QD'0_>ۗo`|/@ _>~O>~(,h „ 2la|˗|O|A"D!B$OD!Bh0_> |G_/|_~ӗ/A"D!̗ ̷_}ۗϟ| B"D!"D!B4/|'P@߿ <0… :|0_>'0|O| B"D!"D!B4/?~󗯟@~7_~/|'_/A"D!̗`} '0_'P>$XA .dC 1bĈ˗@~_|/_ _(P,h „ 2la|/|/?}"D!Bq?}!B"D!B"D!˗| ԧ/_>/_>~!B"D "D!B"D!B"D!B"D郈p_| O@~ _o@~  <0… :|1ĉ+Z1ƍ黨/_/_|_~/_|Ǒ#G9rȑ#G9b|/_|'0 ̗`|ȑ#G9rȑ#G П 'p}O |'߿|ۗ>} O@ DPB >QD-^ĘQF8`|8P`?~/_>(߿? 0@8`A&TaC!F8bE1fԸQ?}/߿| 'P/_|Ǒ#G9rȑ#G9"|˧_|O} ̗`|ȑ#G9rȑ#G Ӈ_| O@}/߿߿߾| H*\ȰÇ#JHŋ3j(џ>9rȑ#G9rȑ#Gqȑ#G9rȑ#G9:#G9rȑ#G9rѡ?}9rȑ#G9rȑ#Gȑ#G9rȑ#G9rtOG9rȑ#G9rȑC8rȑ#G9rȑ#GǑ#G9rȑ#G9rП>9rȑ#G9rȑ#Gqȑ#G9rȑ#G9:#G9rȑ#G9rѡ?}9rȑ#G9rȑ#GO,h „ 2l!Ĉ'Rh"ƌ7r>=zѣG=z#C=zѣG=z#CѣG=.O>~mϣG-'p?0ϟ> ѣG7}O`>ۗ|ۗ_A}yѣE}/_>'`׏_>~ۗ߿_>$XAO@ DPB >Q ߾O`>O`>~G"E)R}_>/_?/|) "E)RH| /| '0| w0E)RHQ`>_>O`>3/_?G"E)RHqa>_>O`>`>)RH|70| '0| '0_}(R$OE)RH"Ņ }/߿_߿|O8`A&TaC!O }/|O`>O`>EП#F1bĈ#/>_>'0| '0|#F1bĈ˷O?}߾| '0| G_|EП#F1bĈ#F1bĈ#F1bĈ#/bĈ#F1bĈ#F1bĈ#F1bĈ1bĈ#F1bĈ#F1bĈ#F1@"F1bĈ#F1bĈ#F1bĈ#(? 4xaB 6tbD)VxcF9vOG=zѣG=z葡?}=zѣG=zѣGyѣG=zѣGѣG+O@ DPB >QD-BO@  <0… :|1D$XA .dC%NXbD$XР?} H*\ȰÇ#JHŋ3jȱF=zѣG=z#Cџ>=zѣG=zda>$o`>? 4xaB)TPB *TPB *4O?~Ǐ_>OB *TPB *T_Oa*TPaA*TPB *TPB | /|˧PB *TPB *T/)} oB *,OB *TPB *TPBӧ_B~/!|PB *TPB *T/_} ˗oۧPB  <0… :|1ĉ+Z1ƍ;nϣG=z|O8,h „ 2l!Ĉ'RDO_Ŋ+VXbŊ+VXbŊ+VO_Ŋ+VXbŊ+VXbŊ+VO_Ŋ+VXbŊ+VXbŊ+VO_Ŋ+VXbŊ+VXbŊ+VO_Ŋ+VXbŊ+VXbŊ+VO_Ŋ+VXbŊ+VXbŊ+RWџ+VXbŊ+VXbŊ+VXџ+VXbŊ+VXbŊ+VXџ+VXbʼn70|0@8p|ׯ,h@a>̗/_>~C!Bo`}0@O ߿|߿O}_~8p@'p "Lp!ÆB(!?}`>o?} 'P_>&"̗|CoD'_} O ?}`>o?}8qĉ'NP_#O`>#ཱྀ 7Q|_|o`/׏`|/_~_|˗߾|_>}O |Mϟ|70|O`>o`>(p@}7pO,h „ 2l!Ĉ;`O`'_| o| ߾} 70߿|/|G0_> _>/|/˷o_>ϟ|&_}|`>o?WP@'8qĉ'N0A߿~(|'p ˗`| '0| '0|̗O`|ۗO`>'0|O`| ̗_>}!Dh0߿}> ߿| `> @/|8p'p "Lp!ÆB(a>#|߿'P`  ˗o`>_>o`> ̧O`} '0>~O>~ O|'p,hP|(߿|߿|߿|@?} o| 8p <0… :|1ć O`>߿|+O`?~o~ ԗo_>~O`>O`>`|'0߿~_| /?˧O?'0_'_7П> O`>߿|+O`?~o@&N8qĉ'B/߾o`?}WP}دą˷߿}'0@~/| 70|7P_}ϟ|/_>}/_>~˗O?$X|7/_}̷~#'_? 4xaB 6tbD)&+V|+Vx_?* bŊ+VXbEXbEWbŊ!WQ?}+VXbŊ+VXbŇ*VXbłUXbŊ+VXbŊ+VXbŊUXbŊ+VXbŊ+VXbŊUXbŊ+VXbŊ+VXbŊUXbŊ+VXbŊ+VXbŊUXbŊ+VXbŊ+VXbŊUXbŊ+VXbŊ+VXbŊUX*****ȟ>$XA .dC%NXE5nq?}=zѣG=zѣGyѣG=zѣGѣG=zѣG=zdOG=zѣG=z葡?}=zѣG=zѣGyѣG=zѣGѣG=zѣG=zdOG=zѣG=z@~$O,h „ 2l!Ĉ'Rh"ƌ7rȰ@~,? 4xaB 6tbD)VxcF9vdH#I:4yeJ+YtfL3iִygN;yhPC5ziRK6ujTSV 4 ;;PKp+P[VVVPKN:AOEBPS/img/expdiagnostics.gif;"GIF87a{{{wwwqqqkkkUUUOOOKKKGGG;;;777333)))'''%%% vvvtttppphhhfff```\\\XXXRRRPPPHHHDDD@@@888222000...&&&$$$"""  ,0ĦD+E1D%/2 HI!Ç#JHQ_y@!#㲏 l0;8``0$- "R_^JPT 6$ʴӧplJQXj,Ux[Êk@S,%˶[RB." x˷߿  @#"(p8pe<~ܺEUMӨS^ͺukYGOE7h$c,gO%ge,eF0#+DdG8 CX|(F6@={5ɏҏʼXD7"贈;@U*tO)M8(  !I|}'HvD^҈v]&*raDg c&`~=Wdy8 K( -%( O6/9G) ň;ܷ!8IVӄIJgZ!V]塈R0XH JҒD#Zbv¤g''!DV%Cm( 8niȣfRPr@Xpt@wɰF"# jR ʩdη/,j%\) ntah њ"zҵ- * *较ÃҳY!y (Y=)z!8pױ !bT! e+KڦҺԠ00ц\ئx֛8@8~JꂠA ؐAЂ4(у64Lϝ 4pja7XQ8ȢR(oýZ=3t x3^l>3 NHZ3Ts rLpU8zdU&܄)nΔ(>+:a 0؟a0]")1,@J k!ֲ0` P}NB]=kI$a$9Y󝇞u A. 
1h}r2ͨF7G8 1QN@IQz0/fLaQ 8ͩNQCRfNW˖vsD=+:9:+lDTaԧZ5,S]DVEX`%V1VvU_ Z4QQ5QM]\?V7i^vVDկ}^ *dEJAUaX*vFuc QYAU,-hYֺⴧljU Ķmf[m{Gp uy. R5moE[冂ːfW.<]7G:IF}u%QEj"<ع ,%-=o_m`r=n|3)_2~7HA-EBDL辰^.Yoc4^1Jx(vJVUq11bWMnǕX@~rqܔĐ)N5E&ru IIIէ(KʗY rز]K<]题˞pz!6o QTEl4%5OXhezMWx ֻ2dgUE H]>ݹ %s-DiLĝ̔ )qI}毡0s%뾆~Y ڊ0.*^;f%|HTq@M🀃[CXy3En{[ S=2D(F#YtD;eXWRh|Gt<ЭMpVPi9Mm TFh$ɁY{EgcqP,鼵wdbIτ~ThW 3s5g9dw)mos;V-*o{%[_76@.+?wE)қ C#'5_+}pQo#hzz^v '=Q&E*D,YvW_NϓQ)`JO}zkG?t?akO **zy|Gx t1!7zBgTyxwV-B4U,u,$%" ,2"A)A3 Wlx'3zv_-ҁ0""kC^څV.r"$yDrMX(y7/vH6D'7%gk&3E;$81=TR'Q`Mh\-!D0tb-! -n*|$*tGC"!#N}~7P%φ%/Qa&7,~&ƉwdF(B#fSѡ@)T2r"3f"amևbToY(~ W8dc+C+~0cPfc;b|h(ssJ/[tT!.8s5HB5:C62*F}X+.2Y焯'}w1.cf/%2h b5>}a1K6 [(H$p`"372/3FLZrfSfڨ%'nr G8K{`"'p$,eDYs3yqaP$i62/(* "q/$ÏГ`vq}}66(c"7mS+;?GSh%)OlVXw2P}aQqryx`X)0Ohf169?3!yHY)yȠ/TYBR'%sMP94nCoۑ^("Xn eдS0ى뢂 po *o;RAR',aʆ&9E&*c% Gu:[K(bRai!B8>jrRQ]Hg('A0hI+r- hH5~5kk$De0r>V egђS0a˼W%*覠ۥ5j)JA2 Krp'+j[%eɅF^[3 0욥g(4X*Pk0l&H[ЉA'9·<*˯3<%<7|ȅÃP}\uzVMڸ0p}U+SSo@,bUwkvkB,qkXÖҤŸz *[)s LEƤɕzy,W A׹USPAdgɺ@ʼeoAtw6l|[aʢ ɹ;|q| lgn {<2G;^P̹JӋVuA1 ߼˓D\: jMO&z HDʍ0R\M/W\J}@ϗ.&u вO&`ܴwH=_!MHM8IЖ Ҟ\^uфEGOWՈӃ[6#>B̕gM5>}CQ}ȇV5C|9ryZIӻIDG=&!cAkT(_eM1֐8 2:b"t=J]`-b)#:>"'m'\Nxח$7+N56҉c8IkR]]-s$'I#1w/ xƢ91?dRDS.*ML&Q.1"@܋H5B$1ڄm֨ԿtM@zRz>NC]- t]&>*Bl@•=e<  L$Dc]*ٮM^-;<~<z'Uk̤Fϙmjk]̺`E ܺ  iT4ොPfmMeq6EމC~G>lv! L 刀-`H< 2] 4ݱY.}H@'E cAIr[#"Y?+%(B&J2@R$F ) A>]{Na]՞·BJ>ܺ#T6 hcK0MC [-.ߠݽ Z~|F<S9d͍ fSTiDzW#|.~‎䥮))ظKn::LQ#Xf. Bҁl,ƄEx@#) H(>n-f(;C@ef8fַ '>"a^G~  X>y3qd39&&lZ-|J|}a+ud_`IF|Y?i?R幺1c""R_t3 FXY'J0`ouoOv_ᗦ/E=XLoTgRުr t>Z`)$cB˚d#R+"6,O/.k// I~b ^,=F/{{1$PR^̓iU##s0xx 谳) 3HY @xZ9ZZZꪊ8k 8j`9<|L]mȽȼ:-m~x` Jj>lZѷs/`BAV PP>D21H+D+.T|!Ⱥ,KYؑq@@F 5G$2:H&x 6رd˚=6X*!CwFAE7m(A y\Tců6$?Ǡڲ7͜;{ :ѤCVB[%:S;␊Ӷ]#pVA>Te|ಅyj/! 9w reqɻJoDhð'YrJ "~?^r7I)œ#MB{E ~Yl >'`Khz,a hOI$wBՅc]qQ"y!B ?bJ=0 2 6nB@DD! IyȑMe-)E~&a xYd`4*2iRJ1g^wV#e i#r*h-BbLY &*ƴ)j騄º婊$*9+~ѫZhF⩫{,}.۩+-",H6QZ[-Ԯ o~*xn+ŎHn.bn쨅Wd{M꯻ΫɾhAi wFa<'][0_15>hq"'+ Lr.*˸l6L2_E! oqɬtJ3]R=(7bwSie^ v`oR}Kώ}Yn wr6M+,5gs^[D 8:ڟ6w)W3n%>Ӆ#-:,6%aǩKl"R0Hy_"N,Do1."*䆃s49%AR""x6f.a49Rl@Y^E?/e;`SFqE-kSD2=X8*_ԘGFN7@;L[ H8 }*xRCta N "_(g]Hd(v" (ҙraqtc wD!iTHask# ɸu8DMx>~BDV BjpL&"-i0(,L.QGa'0QtG@<$yɉL?I0Q`DLшDg6@4-HYњ d*2˜B$~9NQ~̇edB$b JxdGW~x2~~rrŧ}W}td7=+!.` P)"-(w'7xo W^0A\7 4Vx<xB脝T)m0sY <]q5r W\2Xmh kxLcF0ACP|waHhȈCAAp`x3(-+dȊ芯(H PdhQaE][* Ĩ}bYfJ;PK p@";"PKN:AOEBPS/img/expopts.giftGIF87a}}}{{{wwwqqqkkkYYYUUUSSSMMMKKK???===;;;777555333///---)))%%% vvvppphhhfffddd```\\\XXXRRRPPPJJJHHHDDDBBB@@@888666222000...***(((&&&"""  ,<5řʸ, ?DB 1WK ,W H0 Ç#JH1_5F40*B0ȓ&4GPE̛b,j8RQR?JӧPN3UQj TKrKlJQbm'pʝK.+nq /È+V<"mW}LVCϋ. ZUdT~K.$:գNJ {uPkMnзPt r{啃9뇔C\큥Qq֌=nc8G o=% xp&kj5Հ<H 0 X澧 #H伄0!/;Irṻ;"Z/az//WG ⢂Y Q!{B\YM"l|:R%i8o6q9ϼ)c(LFϣ|Be39D)aNJ!@%KG*Ɉ4% % a2'7rZ$4< i_$L! LՐqb@+05W'V=5SS""JZF׺u.5بʵrHc2: jF;iU$ z"[#LKqFK2F+;&D K@%gfvIy'-UиB $$ɓYR#"N^jX0iS#6 d3=ض-ҝ&kѦځ7;5y>}Fq  -GA[;'az/|G@fd}߼*b")<#IL I~ FiB4ԠG(YBhtN BXI G&p5̸ƽ/HD[ (*ho;ZU?# DflNw4ۢ#p;P#sՀ w(J F\9Vt9EwBiB<&jJdD GWu^إNA@lƉ2Ф&og=H$Jõ^Sg{ 5 +r۟ԚRJ~CҪ0N/:.H1uJc>PUH^3zG;_`q//ptl{ Gy 󜧼9|ih-<_|Rv{qLe[Ϟ>B>qsLq\Vv9Tc`!|G7E 6"],B y v@ 44 bE*d4F9^d3ReWe.|<6]XmEQb)E<(vfgp(`brh7wTӁSjG[^∷G<~HơGu-7kQkXw]Zr胶S*1I;F';d&D;XImcFU |$w|X|8F1wXs8zHBWP'G6H]xuݸ 3ܨ| 3X?Hrrhڨȏ(H 27oبȐfc5/؎Isuд yb ٓ Ŏ}Sx f)8ؑ`VCHG e!-,Yv<ɕ ɒBQl kLm. [i>Iٕd#4Eɗ3w)t cny5 V0N Y 闇IbSwp] zSɔț2KSTCֆõ`a"1mɜT).$>]8y6jǞpi yr6i^ S+d4ɖɒv!]8^GydU&Z$: ݙyKGՈ`H* pW<ʣ9(3֊֒9YѢy0A } fh(k MO **~gykL~p)wD腎monz:eG )[t^[髺:,,Žڤ8 @8߱|2c?(|Nd2>Ǝ ^l^n%.Ү{vߎَξ6i,ݮ"7 \p M)^.*o D/Oڛ[-to _]. u#.) 
pq *X!9;_U#[J Zz L^د%LrRpsBSςlvYau 6'!*d+&b,hlmѣ@oaTna'17\~1H#fo"ZY7EkS ҳ}C$ B%Z."\Ng@<#N_~'{MW Bj:fZQX.֤S2[jCAT)f{U-Lɩ#8HHxP8p)9IYiySx#h3 z$HA;x:A*t(JXSӛ鼘l=}9lM]}ʓPJh4Xp>h>9(PCOG)PV` kĉ>,dN`,9B](_ #bLcx< `EJ 'xСLbJjh 5& Si +JRw bNё5E*+ݍ4զjoW~ xcs:7޺7m05φK .xϸOM;O{&OCo~ۿ?D]2 nLeG`>aNHa^hXX Hb2茈&N2#%,"vW:#!5F$7bbFIYyudR2^Ut9NeI^yeޖ^&ہА$rIgu!'% h9$'> i.R#GR6H jR ! 0ZiЪ"9f A@]SZ*ݘ꟱2[62+# ɾ t*ɥAi룶M~;hC%,ĭPձ)r]\KUpZmb$X [ĨImK1<2g?#OxVo%ȯd܋!\ !@񀖆Q"5M6O 4:^ߔNILPيøgtEػ֫", ,Ŝ Vq/BqL/ ;5943,l?R "Niu4W4A37gUeZ \P5L`ե9u,J6qݰKVƄC\H ,_9_BT2Vx%x\7Crك'(7>S -DD=PkԐ5+F< iep{[q o Bj3 59F'aЀ)\Z0eϻM& I0Hp.&xR xQDDФu8B VT CdJq %# 0[Z F8Bn%/LjҊTdU1HXB-r d8HyJ(4M߮ƻQ(S(`gbd_De,;υ -H-tq! LoVۻ`VJg"/"Y |o@E4e?5@7 Ap̛VـA a' g~@) `E^DIWdGPV\ W9G$Th$ va ɐWII分{GU#pRXW6X)49=C~H҄1_Gm5mXu<"X_  x#xLp~U"R;;);4HD-i%9ng| \8of)+Ȅ^~X#B7,`]Eh~Iz &$9(/bzyƖii8o5q0 UҗaҘgtHcx` ϛk)aF31~+8r ~{,Z(,3g>f>ڻrd14;{hWn_x;l˜C S#1!jx#E<Y K\3? A3糖W\:]T~sg.C 99*9ETu V @BQVVc@˕k |_>̎D9(TCK>ahX+ jɨF9)XDgȐDBB88_*BC` ͵~<܆f9aĦAAj5B$zS#v%C"Ha8]D`0᠌܌IЌpӆӤV)a5>PA0$`LPQL`@m vE} ߕB+m P-TlC W{~ɛ)a_!Mm+WqɝY2zY)ϗypudLл/7&a5lSЋ7 M8H?+'oXԂW BTnzNھ~[2`秐9l]Rҕk.8"g bz=Q2䈮tm.}{qFo|ˊ-NJ$^"7 ޾l~N <>"^>]YM/ߔZ.1 zNԪe[ {ޛN#!ϸ&Ox(O*p,Os󛎥^ ,4_&MI/ccbhuڽMq󄀚}W7_$]aNHb#ѷ4m,PCoyd܀obIz/{sh2RC 6Ɛ3I> t++{?DK,`3$s`T0uב5S::Ωϋ̩x5]W41OcB*`.mǿNR~ C pi (OX)fAQVa4h%)FK4ʍZ\9eOAI @ᡈ>C -ߞFeg*?!uҚN=ԤĐZ̥Ŝ^y*:J y9:LKkM+QkZiZ10S,[tW6^DS_N)LL k4C2 G ¹dAu>_"! ~ 0{ cCd6Y=˃I#VMC"s] Z## =\„7`.-7Mz1{r^9$`8%)S=; <,A Wp-tmx|w-PhPf9c[]؉/gw0;535Ui_BȗD5x9M̃O%K=zp!8V~<@GB4{3{:c˒nFHEA=!_b&W ^S{'Ÿb42^+l8]4ӬѫV`_g8\w?Ȃ> dWf`KҋSͺ"R 4,*R G(&f\ߝ#+!$}aԜ8EPa &_"pq.vf\kL(3*#r#%>vɎyAJp$"g:"rLOeHZtN%Nz[&$JѢIZ/LLo̥.Z8"$"߰.D@ bK_gRI`Ā#~Dь -X͇a4)1c#ytix'j ƀTBQhA04N~t6 JQ4.&voE H5yKa!Tjd!K]Eg|@FkS|~G(*L2vY w *#\`(1*M\˦/@*rclL5CKJPUs y+  Sy358 rgxxg B~@SYCP :HX̂$)Fή&%X~V \L#)i ؅XUY3"eH,Fyk uT=IQ[rr,: >C 45P*"Bxw#1*ݥ$<`]y:&QxAv҈lF36FjP0jB%)#^ LB>&E>E4M @N)r ".xL"!!@Jzv%bXg*@(A,狍0FF:u0ѻ]t,b5Tɭy 4WBOdh>(d~eQ(2gJHw)>D4:2D,dg,>G( ŏ1eB2+ƭ*n5;$H[ѤwmFƬ C@EAtC\m62BLA5*{F ͖}jU:rQ kmY㧸A:Zr[CH6ܬ9˂)pc{Ys?R{7zPq-F1Bxʙd1Ǜxc@ʹo8!w2'{i̎q(]B+QG9H|ֽV' 󼂡_G S\Ro?j5:hlEUQ3s-UexE c ߛK䷧l^% 0fZ` m c/[U' Ce|r^g*l G*2(ĩ5ΕVpm7S5}l2L %"@ BJ˔7Sn2DQ7w#GMz,y"Xс2xI'AĖ>X.H0DyLdEHH; )HQKMD}w Zx~;(J&W Z wxE#rL* < 5uC2 qPOE0Wre ',FN6͒\a(d{Axc "J",Lf}|hMG"BdFSD}x&͒|`Gq'!'F wDvuŅB"7!;|ѵhWAS3EitM6Ud`C3Bwp 4=&3 irzpV!P>G9U{lUYaU{ im2v[YAXUw(_vUp@y=^bhX]XRY%eu3FRc_8 _;Pz ?5hSD4NƎxgK9}Mɕ-o[b\e3\e\RU(Psl Gr hQ#`ߐsw2' IAV`ԙ^:@+Fy *Vfu_09 rkcCkfku 7!]-a`aEsv&F1FbC Ж&k$R_5`g13ReAcczYd0dI[d@dMPkaAe&sZ  bf`g 6Z !6_ڈ ct 69 |, ~ )ը 4@8yahNmhK3'ią>zi <1Y̶%jQ{5gC#4vXIk94FWwbOZ+4}dy19lSiϖlV I_n y]!G(3@jUڥoV.짡)#lpp+ G;Ռ|0?ژFh^.0v@TS<@1vM:4p4֎k:;x J7t /s'LtdxҀ/6I˦Pk@;$Y]i෶(Ɗ!NwO[5jC9t&sh'HR x;Ȁe8)Unn q?'u۸f ;79O~Y+7%(wkPDX +:T>x̻44+ý Iջɋ4J[K\S cp`D#h'$KkyI %]kTYn9 {LS%"*]}`x!۾$!~L M""شiۻD! {\h 7"Û* ,z+ "$$5k> [ s2 Oi =`G#+ܽ+܈iŪqZ5pm'PJ_!Tֺ; ^ .,Q{H6v ?8RE1r,;g{bÜځkn;KswƑvM;ed1aY&51'g#A"<#Ն IXyb0 E1-[|dX W=`\b(; ˈjXݒ270:'z_g$ ` w0&jt>ɘ1 3]|:np̜h ; y;#<4]i|1k K^; cْmbc0 Ypc`敐i̮z5Ɋ* ZG- Y9y cNM\ zQo: > ni(V-M!/ 1S,w8l Im}jB݋ ')őtڽ}= #nrj'bf92| f^v3π-'-]qAu ?n "@u6 m M4D׉TX#!>)˖]݊W\D| aAZ2:r82Ы;! 
P1 P b )MIN^O~1,.TǤ+y'x{KNMnT in~>2prʐ~ <(-'ݴNg$d9$Ƞ蛾߮HۮHnn?>H.nG~x.H 7~Vӿ2 7%|cLȅ_]jp; #h*X }1>OȾ-mn%֧}ٗ~?_?_Ob}ȧ|3|LP?79/LƐ2{ r "7/QLT Mk!^'k^࿹M!q u(FǸkc 2xs>w_H?S \9^nP EaMK%%1a?#<ȴn#PxI$Tzhƥ"Y"ȡС" ƌnڪ-EZ7(H GHpB@{}u!ޫz^Z*Cl[t1ax6҈G6$(ȃ&U 5"ƑAk ͅ٠.씨]JA;kڬQHdqᱰ ґ :EK^P$`CGGQ]P7|9?53qۿ_u"O.Sm2%6pE R`Y\bQI#H R$=0Y[( & Cx:b($QYP 9ɋ:W4ȹgG0z@ 5NIeV^y%SD̀(hvM؈ G<bW\#@j@ݚW%X!&TzEL4Q'yɏ:!2=)KCgL=Bnfh* V؍ !kabZK) b)Y:{ Tz$,SLt$C^)>4 ,2B0őTGMRª*Z2"+i6c /\TY#JA- \&gzP+rAKa0Ǭp,ȭ\&%8 53'vS7=WvQ3\ա[wc4H#4hK*XͶӳsM\@d۰i76a3jevMGw /C@yA6(kONc*]0C%S7e I:cС_Sz3>悌>;Ѽ#"Gl ,1{l;ø'];8}~O~g3{R#xP¿ߏxc9sl)p lJiqDW&R_j N/@\Lʐy+ Q8b  m!W+ qLlbuGp/tœAsઘ0D=62pll#՘B$*,tW@ FAhbS^-O%1W~$a=PTX"Ea"(iʰnb? % N+\JB5>ŜP T|dEtC7a$Ȋ~A%fw5LRY,!A(Vd-G\@ҩhfO*~EmrS(ԡk![D+=Y}UOޭHמç~> x$~ArQ< j $ y^y@ EOj#t#RY B ^{"w$a,ky\2T#.md. ֏HK/[mĢ`RbZ.xhALuoe0SF|_Xw Q&chm!VW&MaEF0_ ;j(iDē-A>_P=EyrlT8 Vkc5FoŵCt M^ov *'I2lT֢wb3O|I8}˭1G;}L;3`QT|nsfJPSc_T ~kG\{ #8_9h5YΫSUa>hS-ȨKRgn3㚩y"69rlYܜ%K46v]Ci2~/8ø=oYi쎯|!/K<\ߖ1Onl8/Aɓ"9tƷ~3|g.SzyZm/~,FObhمcdz3_gmHe{k[U9na9 91\qG O.kiW *u GPa r's~}>^*lP1?YG#pT0H٢!ivc2zÇPA mp Rkcp6Yi w9xz7x>VR[q)e`4Q 'AVP;\VE"(U ;qf~ #jrxxhB d"+dT/NX:(RyGhA7@LPx}m]@vD#4c<Ћ( ^ȌXH=B@yw8R DX[ 5(Z "\P_&KD`҃HgKZp% 6&"."1/d !. r\LB+ȅ9 YyQ Cut60E7 Ki6i8xϐ&9X}g!C-;'pKWhF T6C!o=4xG3#NS~JnZ{+Uh8.S=hס:R\ 5苄*!)R( K0L6JdJ0i(叚rRkJF(L &ʭmc剨qgQR [Ys& rׯ 3U0GڟJڪH 簓39^C RڙѰ%['{).7t¯0kǺ٢3f::.'v<K29ڠvQ -O 3S_ `W5+ ~ ,dJnzf pjt|v [h[Dzpr˵[z|+~;;{똌[SZ[yV+ iٗ$FG˺{k[,YIPQ˻SR(Iy0?ۻɫVBҩƹ9r7maQ kk9E sFC Ký?rKuh4g4gAr׾ ۺ[::\G:{[aFh]4a&fA  75 ?/;9o*:SLTQ!'KSzBXq&9Qi5 @%zq>i˜3sq 77!c@\>$5xl%J5VQ0pQ0U;:K;b7)Vkg3g C0c ńBM2JQ-AH F kGX(%Ve^]$Ed{Ad_9̨`]R-8(QWQy kx)-U72|Um[djh`%czr+i[%Z N y -g,DŽX*mpT3^&%-(':g"P#A1_1ǧT^^.-*k= =D4AZ Ʀe p&ǖLA=3"@aapXUZʲj2PjzlQz 8==ƪQ,L[& 34Hl<NJъ!@pl_΄F"03O`3f!>3A0#F2;fL%e, Ne/&mdc#.m-ڊ7:@f5';"Z԰G|t鸎&a8ҒV MbZ2# A L׾M MƂo y&!都|,#h,7!rF!jm3zj. 
R!9ws-2N7ˬD+d{v/0U4|͵q`­ z΃_Nany my9UlzWm^~N>dn͐hkYKvn|}r4ǏQ:M䌞SOٕcxn>[(3?sw lpt^) n3ˉu dXw.ښ.ظ [z np+~.7=/ܹ "$~(.w Xw ?0_,ol]ۗ,F8_[1tM:@Dhp<؜nMp: ˣH*]ʴӧP*-pTy4 *D#]|e(;koq,c"b'Žb%nuŌ%V˘3k̹ϙoTfR\L[ZϯYqXcMi[=kǍq… 3n[7%g;|[Z+BO-~; $S[U󬵾Ԟj~#'UCypA Eg(?Xt<4`6Յ38c,@z,"%}R%1_&5vR}!2"-# x$< !n(ʏ`sI$ "RC+axDp#}3j7g,f U^Š-uy)i}B D3$(xgI61#l髰x&3N6?x+&;fG@'fB6p4F眛 jILzBh-zyoj{-+P,&C 赛b[ k I #%G2f'_-z)ƫ[̏t\p 1rDC?d'W, #4,2Lϣ2JKw0=K3dXg\w54 4dXC f7-7=O /,yp߀.n x, .y,ܘ=6w3RՃfaH=/b[= Nt0)󕰃 C p~"zB歐}- עFc֢IV*,8bRpC!ǰHb- ,S9,¸ d!a$QeEpcj`!$@U?BkNlNP8Ʉr!}g+6LpsS5b?{d#ANO\5ZI<oA@" a d joc ]iE0+F"W2tOia\6|id$3[9gBi,.E\D<8SP cq-h >4pUP*w Kp\b" EuGc-ޮp)0S:PYdD'=: J%06$,⏌X(Is,m)5#5PI$6&R(FZK>`TEQ KhMZ+󉣉jh%b/-52(P+K gr=:eɣ֣ӃՎkR1Ԯl@$sT-n7JX0G9!t~mDB,ń͸*@on뱈FuLÍAi" \Eą`fe~ubPQT3wdSڕ+  E1eҾo9BkX*4J[իLEunʮQc[ia"PD*;m̘`6&nh6X5f]+la^X@x!E&G%eJ2&/D2vu6;vݛX E@2BL 5Ɓ1[rF0 ]O܂Ywݔ!lLĠ @PDqr5#XdGzDt`J2@ATBӘz/(@.\.ƪEi0!3aNmhkP_ NJ@p sQڂ ~8*eM2ʞ1Of- Kb.jaa I|-|GٍjMš@zZcf5"1석7-'}E, 10"k!H!9dPbk,tš慓Μ9H3vP3?Lr'Evg#7gM4V \eujG'UC0'ZAR@@/:ڠjF5U5\4(}6aE|\؅iZX|i2OT6"cGQ5R,~z{HlАjHfas|7&UlO&y08Tۅ8]ih8( d:և X;4Һ|ź{k?a;dE=cO6([SyO=ƒ 8 \p'oWe۹{[:Ԃ[ y[Y ,9 Z{YTCokq;Ǻ[ <)Y1zԋ/Lu%Ewi7tSƧ&UX3lꛪЫz+l (m8hbԧ}u/lA I\ L֋ӻ+AUt`/b"q|0l|gjeԺ K {tf6!RΈYLTz s,0lI4cl1V\̾2SD: LgkPLɰ2L^DpiZ6%nj.vk*[{ożC?!E&]a*~V>ݥT#8Nڼx;?.)ݩށ{M޸Rmθr[}岓u'V^X.79^g:;~3@3]h ![ю6sNn.K UZVIC:l |:߆Bꏋpޱ2 v~6E(;'jneyN邕0.+ /*dCn`nC=Z(ߛ0B >W <(r#䛾#rUΗ$@(j ӌ[j gpp)<1h#;uc) R5͊G!RUQK%2*1^]}P4?pJq/CR#zO | [ޡ[aF!GM5, Q3E2̭*AZـCNPl   Yy"E%% 1N..Lo+} */H0J$cEלC!$2+_T>k_I[y,<"ode/c ܉niփ|'ؐ* & 0ʢY#8HXhx8xh9pYiy94): JH:iɪjx +X 9pl,ZY |38\mDy]}@=جIhn ]N/O 1}/FҬ M,X1ƍ;zQD{ T IgrCv]x j]|F) mT^rP?nD:9 XcnǬYɇ rc K"[b@!cdy=8!Gܹ,?H=DJ0I5hgt:Hs"Ƞjxr"#0jRPUxp'4fQ}&hpAhnʪ\9|K?՚*r,I$ɆRj,2,&+$H1xHK1n&'vJdŇ .ц / pHCFtJ/tN? uROMuV#}8?|y~z=A#%@"f) r o?fۂNxx/x?y«t eo!+Ȱn{ *m58*Z@K ܋Nmܴ7[HJlZqi6h 4̭?̡NQzH&c>J2796~_<R%bmkxXي+0XsS9S5< jV;ؠ @|PŴ3*˝]cIN5 OX"ݰVǐD^L V](hbbS5"ψ4qll@^Ԡ' DcHP28Ur'lQ:wJA"#/y>Y׈ar % 1\u2aS<.?)/N*#5W5Bc3' Mۀ ; xs=|SB PMXf`q3An0JtE/ьjth|Q1yy͔(IKҘLeE;PKޥP ttPKN:AOEBPS/img/decimal.gif]GIF87ac.wwwqqqYYYUUUKKKCCC???;;;333111''' ppphhhfff```\\\XXXPPPHHHDDD@@@<<<888666444222000***(((&&&""" ,c.pH,Ȥrl:ШtJ}2lvzxL&3, (شAa-!j#uR6/ul6htuY_+t#EPk\/o I M+bĿOJL#c M+a#O+{I I*\lMd%VJA& 2^VXx8\d˗`  0Ag@pDd"e:C\LCE!#œJRB5랆"+^f)ݻVd 6hPD#CxqTB pu6Ј$6MPti 3`l0OJ!F(X0\0i?鸤Fd-H! 5g(ۅ\9#'_? *ŨFWHQBR7?\FDM ރ/͑"DMu d@NFD0 8LMT $ !!4.zHcm#Vl PRL8p\HS9#Xf62Ϗ) @$x6N _RerGnY1h?Rň8Ǡj衈'6Q"UqEL襈L4HN) =LlL Tasvf뮼:X92!0@*'Ta'FkE!8a0enR1ŷ[i5!:,˚%bt 0/! KLkP#|fLo,.a0nt{DýLq;Tu܄bj"Oq{2`dcD[:P81$ 6I\| H)‘{E?J#?"aF,D0Ny>q&Dt B'uݙqFed!J-JFptP!;y+﬛6dmuڇ'C{Ii5O97:ѵsH0ERq;6BA=۾]Rٟu?W&y[W.#_.DZ+2be$P6|GbPGHD" -RWۑdցM щ & 1Ppx0a#!t"@(7}EyF@70BoRJ㔸)iAX"PP^\Ǎir*ʁvzjpGWpA (w +L|VdPL&0KXF"$S2 Kk` 1Ω` P j̥.wT `*Ή&"f7 $񜶮Rzzа!)LJk\JvC+G.q+[(L"3JЂMBl %3xL. ͹`sDZxl(.Lo OWGCꉊynh)Zw ޔr3,-TG4W('\ŇD>M5/ie4@iݏFAA[ '@*4rr]!jQm$AY(~ +d!8y5(\EfPa8e"xeD-!5U{]>uR{4mRim'kn6ېZl|^j5~;U61 SRMaHBxY6m +N㼫.ڵU|{^!]EH.gw yYs͛mR@)6CVo=Fwpi7oZ؅3F*c{*/0ΤD#Ǐ5RCD(-uPCc);Ќ YlJ9;+hKbe}ifXJ;PKfPKN:AOEBPS/img/filler_fld.gif [GIF87a@}}}wwwqqqUUUMMMKKKGGGEEE;;;777333111!!! vvvpppjjjhhhfffddd```XXXRRRJJJHHHDDD@@@:::888222&&&""",@@pH,Ȥrl:ШtJZجvzxL.zn|N~v22|Z2utPWĬwD̙ȶOsvљ'ߌQ{qKor{'OymEkɗDd#0jCzX !@.&%H\\q,cbB%+]fxƋliW$pIL4k]pbJxD%`R">8ղ&@!V+DPH\IU0HCxk >Xfacq"B hl2f͑YO6\[Ѐ*!Cnɭ}Й= cf=vБ`ܡ ^ H`" AASiI9H_tg=_= nGT{I u*YW]wL(T@ P@Bs$DZC&D.$ƀ @N_R6Qb $ Hڑ$y$O:!/HYF\Y'08ċ1 sCbu]v s&t7.@\AEICrjRIJ,r!l]0PH 3T"V\U  a H:ٱȪ ^J+"7(D@0A1]MB@H͡蟀`(I$bV3ރH(Dt@ :BPZiTbE> .a`$\Zu G pIDd[|D. 
)BhķGK+㩜l͒/W_y祧!G|fGZ=۔!&byBHO{F$U7-jLeKN^,e7 e_" ZyG||i˳DWD[RZIb +8Z'XkP/(UĚ1/1hQ CLvg*4 %QT?aC[`ub)wDА Kp= J$D*4n)\ F-H8b(Qs\&@+t ʘ&D ,ðqQhH'L c"q'B2(hHǶNŔM%K)L! bbKÕZp1I8P #qi #"9,4@9 y̚r&OfMbØLpdxMŔS w4-. ̣&Ynf8IN'4 4Yg@l "8 }SlcD'_ͨO3Sjh')q]v(L& NMJgJWLxiLkSP|> ѐEYBYXVhL-4ըnh[ 4T$ѪL@ bBYZ B@R*KO!HA[_5dFqs=)])+vѯGa&B}a,({aq?m5V"`bJ+bvISB*KǢ El'$-Wa@;Z>zѝtP] ++ $ @+W" DQ` H@ *)~xFA yG; b3VkZ" ,p0gLc"PA}0@6L" :zc#;E L-HD\KȢjR/ 1G&4N:͜*L(3 9 MXRKkݺv^Jh0 m:*F;~F'׏fx˜JN{Z=?MR'$ԷWj`˒;@Vz@jw@zpʺF4B"Om";I/, ^l a{f+/Y K`]CҴ5Q)̽Xwk:J-C rDv0 >Yfx;׿ ;PK: PKN:AOEBPS/img/decision_tree.gifGIF87a?**?***UU?UUU????**?*******?*******U*U?*U*U*U**?*****?*****?*****?***UU?UUUU*U*?U*U*U*UUUU?UUUUUUUU?UUUUU?UUUUU?UUUUU?UUU?**?***UU?UUU?????**?***UU?UUU?????**?***UU?UUU?ժժ?ժժժ???**?***UU?UUU?????***UUU, H*\ȰÇ#JHŋ3jȱǏ CIɓ(S\ɲ˗0cʜ? 4xaB O@ DPB >QD-^ĘQF H*\Ȱ>$XA .dC%NXE5nGȑ#G9rȑ#GǑ#F~8rȑ#G9rȑ#ǃq䈑>9rȑ#G9r>~9b䧏#G9rȑ#G9r<Gȑ#G9rȑ#GǑ#F~8rȑ#G9rȑ#ǁ O~9R䧏#G9rȑ#G9r/_>~˗#NJqȑ#G9rȑ#G'p?$XA .\O,h „ 2l!Ĉ'Rh"ƌ7Nԗ/_|8rOG9rȑ#G9rX_|qx>9rȑ#G9rѠ|qx>9Ǐ#G9rȑ#nj#?}9r4/_>}9rQ|8rȑ#|qx>9/_~ȑ#G˧/>9r_|"?}9r/>ȑ#G|qȑ#G_>}8rOG˗A~ȑ#GG|'p "Lp!ÆBD/>1bĈ1bDП|"F1bĈ˗}E1bĈ#/_1bDE1C}1O#F1"})/_#F1bDS/_>#FH#Fl/_~/bĈ#F_|˷/bĈ#F_|ӗ_Ĉ#/bĈ˧/?}E1bĈ˧C~1bĈ#6O1bĈ1bD? ,/_> H*\ȰÇ@"D!BO_>~˗D!"D˗D"D!Bd/_~ ˧D!B!|A,/>!BOD!./? "D!BP_}˗D!B| "/_?!BOD!*/_? "D!Bh_| .ԗoD!B1|ATO_>~!BOD!&ܗO}'_O| B"D/> H!B"D!B"4/>",/>"D!B!D!B/ o@~@~C!B"D!B}/߿| gП|"D!B"D!!B!B",OB"DA~;O }ϟ|!$O_>~"D!B"D |!O`|G0|!B"D!B"/?"/>$XA o… .O_>~ _>O`O .\p…۷0@~ ?K/_>~ .\p… ˷o…o… o… ./_~ ?~ '0߿| ˗… .\p‚_>_> '0B}-\p… .D/_>~ .~ .\P ?} .\p|맯߾'0|-4/ .\p…ӷp }Ǐ߿|/߿8@~O@ DPB ˧ϟC˧ϟCСCa}O`>sH_|:tСC_?}맯>?}9tСCӗC ˗CСC|O`>sXП|:tСCp`}O`>s_>}:tСCСÃsB~:t谠?}9/@~_>ϡC:L/>/_?_>$X|'p "Lp!ÆСCСÄ9t!A~sСCsСC ˗C o`>sHp_>}:ta~sXp߾ _>sHP_}:DOCԗo| Gp_>2ܗOC:tHp_>}:t(_|:tСCsx>O|9,/_>:4OC/_?̗`| /_?:t_|:t萠?}9tСCa|ۗ߿}/߿| H`A3hРA ϠA 4hР}gРO_> 4h| 4hРA 4h A~ϠA4h0A˧ϟA 4hРA 4/_> 4߿|߿|gРA3hРA ϠA 4hР||/_|O>~O_|˗/3h_| 4hРA 4h?}ϠA| $/_>~ 4hРAdA|8`A?~ '0߿|/_~O | ܗOA 4hРA ܗO@}˧/_>~˗_}'_|˗O |O_|˗/>|/_~ 4hРA 4Hp_>} 4h0߾~'0|4hР@~3hРA gРA 4(_| W0|/?~O`>`> ˗A 4hРA ˗| '~'p_>/@~?~/|߿O_>~ H*\x_|2 70@ ߿|߿ ߿/߿>߿?8`A&T |1dp_ '0߿| ܗOC ǐ!ÅP |O/߿|_߿'P8`AgРA 4h A3/| gP`>__O߿|˗A 4hРA˷ϠA 4hРA /_? 4h?} 4hРAgРA/A 4hР@3hРA 4hP |O`_> O`>70@~>_>gРA 4h`~3hРA 4hРAҗo,h B~"DaA} ̇!B Է!B"$/>"D!}!/߿| />˷o}/_>ۗ>}/_}ӗ/>˧~C!BܗOB"̇!B/?"OB"/_~!Dp|!Da|˗B!AC!B"D?}!D!B˗BC|"Dh_|"D?}"DP @O| H*\(0CcPa?} ˧C2d> ˗C "O? ǐ?1dȐ|2DOC ˗|_|O } |/_|˗/O |/_|˗߾|c/_> &̗C`> *ǐ!AcȐ!~ ӗCO_|˗/ӗ/O_|˗O?}'_|/_>8 ~+X`+X` O ~'0?~+O`|ȏ}O`?} _>o?,8_| ,X`AW@~+X | ,X`+X`W` o_ׯ` 70@~+O`~'_/߿|o@~O_˷` 䧯` ,(П| 70@~|(?˗_ 0@߿@/ܗO,h~?}!DX0| /_>˧ϟ}_| ԧ/_>~o_| ԧOB!B˧BC`>0@ O`ϟ|ۗ/߿|7p |'p ϠAgРA ˗㷯߿|_G߿|O@#_> $/_~ 4h`>'P`| 8p>7>~?~O|(0߿}'0_8p?}'p /_}!‚O`> 70@o`>~o`!B!B!D}!,o`'П|/_|O@~o?}` ܷ!B'p  O@O?$XР| O߿߿| /߿'P`'08p |'p O}/> HO~+O`_/|o?}wA~;x@~?}O_|̗߾|} 0ӗ,h`;/| /|/|~ H70@W0|ӗ/>/_}O| />/  ˗>}?~_>+>}'p "<? 4xaB 6tX_|C{0Ç>/߾[/_|/ ˧ϟ@Wp_|ӗ/O` /_>}=T/>=0O?$X |~(߿? 480;h0A˗ wӗ w~˗_A~o_| 70@}/_>˗ӗ;8`˗|˗/>ӗ/@o_|ϠAgРA3hРA ˗A 70?~+O ~ O`>'> /?  ~ ԗoA 4h | 4hРgР4(_>~ /? ?} 짯`>'~'0@~O|g`~O@ OB"/> 70? O@}o| '0|'P ,hP_} ,OA /? 4hРA4hРAgР4(? ˧ϟA '`>(P>~o_| 7`>/߿O@˧ϠA ϠA 4(_| o`3H|'0| G߿| ,/_>~#ϠA4hРA3hРA/@}ӧ_>} /_> 4hP||gРA˗A '0A/|'0|G0A} /? 4(> 4h@3h_?70>}O | ܧO }/A̧߿߿8 A~ ,X`AW` >}| ' ,X|/ 䧯` ,/ }/~O`> 70@}˷O@~ ,X_| ,X`,X` /> ˗O`˗O?˷߿}/_}/_}˧_ӗ_`A,X`ǯ` _>~o`'P ,h@}!oӇ!B}'_|/_}o_|o`|˧߿}G0BC!!D!B!B"D!|/?˗O!D@}!D`~O`!B|߾|OB/?"D!B˗B!B"/>CP`>/_>˷|`O㧏 A $H |$H A/_o~#H A˧ A߿| O ?} $H A˗,h „ 2/ װaCא`>k0/_?/߿|䧯aÅװ~70|/_>}2/ ԧ_| 䧯aÆǯaÆ ./ װaÆp`>k0|0|G0A~6l/> 6l!|5o`>~' ܗO_Æ 6L/_> 6O_Æ  O?˗O |ӗ/@~`>}_|/_|˗/߿? 
7_|o_A,X` ˗_ ,X`˗_W`A˗o> ` ,Xp`|'p "LpaA}1d ?} 2dx_|'p_>/@~?~/|O`?}7_| GP_>Ӈ> ܗOC ?} ׏|/_>}cȐaB}1dȐ!Cǐ!Ã1dȐa}G0@~ }70@~O?(߿O_} Hӗ0a„/a„ 70_BK0!B~&L0?~K0a„  ܗO_„ &/a„ &$/_~'0?70?70|7|;O`|%L0@~&L?}%/_|˗/|/_|˗/_?%/ &/a„  / &L|%L0a%L0a„Gp> '_/߿|'0@߿_> O,h B%L0a󗐟~ȏ}O`?} _> B~K0a%L0aBK0a„ ˗_„ &O_„ &Lx_|˗o>ۗ_|˷/}'p_|/_}˗O0a„K0a„/!}+ |O?߿O#H A $8>$XA ˗B * *LOB *T(_|*TP@~SPB)TPa}߿|G߿|˧O?~ w_|*T> *T(P_} *T`|)TPB~*TPBSPBSPB)TPB~+O | o>|˧OB ӧPB ˗ϟB */> *T(> *T`|)TP~SP‚? 4xaBGp_|ۗ/>'_|/_}˗o˗B OB /? *D/_>~ *Tp ?} *TPA})TP!})TPA~*TP@~SP‚PBSPBӧPBOB OB *D/_> *T/_>~ *T> *TH_|*Tp|*T ?} *T`|)TP@})TPB)TPB ,h B0a„ ӗ0a„  ܗO_„ /_ &L( &L|&L`|%L0a„K0a„ "ܗO_„ ˷/a„ /a„ &/_~ &Lo_>} &L ?} &L0A~K0aK0a„ ӗ0a„ &L/_~ &,/_~ &L ?} &L0aA}%L|%L0a‚%L0a„/a„/a„ &4O_„ &L0@}%Lp @O@ D0!?} *TP!|)T|)TP)TPBӧPBPB OB *TX_|*4/_>~ *T ?} *TP?S0|*TP@~*TPBPaB})TPBSPB /? ˗ϟB *4OB *T8p_>} /_? *TH> *TР|*4/_~ *TP!A~*TPBӧP@})TPB)TPB̗,hP|}} ˗… .4O… .\xP_} ˗… .\X .\p!}!ԗo… .TO… .\X_|˧o… "o… .D/_> ӷp… o… .\_|˗… .\O… .\xP_}o… &? 4xaB ˗AcȐ!C ǐ!C 2$/>cȐ!CcȐ!C ˗@~cȐ!CcȐ!C ˧ |1dȐ!CcȐ!C#oÁ˗C 2LOC 䧏} ˗?}1dȐ!Ã1dȐaB~_|ӗoC 2dX> 2dxP_?1,/_> 2dP!?} 2dXP_?14/_|2dȐ!B~2dP~cHP_|1dȐ!2 ~8`A&T?}ch> 2d> 2,O@}䧏!C *䧏!C 'P>cȐ!C ǐ!C /?'_|䧏!C .䧏!C /?'_|䧏!C *䧏!C /?'_|䧏!C 2DOC 2L/'0@~ ǐ!C ǐ!C~ȯA~1dȐ!Ä1dȐ!}'>W> b!>$XA .$O |O? >$XA .,OC "'`>ӧ?/?$XA .OC 2O |O? O,h „ 2O_ 6,O`g 6lh 6$O`w_> 6lؐ ?} 6l0|˧ ?} 6lذ@~6lذa|/߿~3O_Æ 64O_Æ 'P_o`kذaÆkذaÄ_ W 6lP ?} 6lذ`>?}g 6lh 6$O|O_|_Æ 6$O_Æ &'_|`|'P ?} H*\P ?} 6lذaC~6lذA~6lذaC~5lذaC5lذaÆkذaÆ װaÆ 6䧯aÆ 䧯aÆ 6_Æ 6$O_Æ 6l8 6lP ?} 6lذaC~6lذA~6lذaC~5lذaC5lذaÆkذaÆ װaÆ 6䧯aÆ 䧯aÆ 6_Æ 6$O_Æ 6l8 6lP ?} 6lذaC~6lذA~6lذaC~5lذaC <0… װaÆ  䧯aÆ 6lO_Æ 64O_Æ 6l/ 6lH 6lp ?} 6lذ@~6lذaÆ5lذaC5lذaÆkذaÆkذaÆ װaÆ  䧯aÆ 6lO_Æ 64O_Æ 6l/ 6lH 6lp ?} 6lذ@~6lذaÃ䧯 6lO_Æ 6l/ 6lH 6lp ?} 6lذ@~6lذaC>? 4xaB [p… … .\8 .\p!A~.\p…[p… *̗/>}˗… .\O… .\xA~p… ӷp… .$O… .\p ?} .\pB8P>$XA .OC 2d_}_}1dȐ!C1dȐ!C cȐ!C ǐ!C 2$`>'p "Lp@~2dȐ!C˧/߾|cȐ!CcȐ!C ǐ!C "䧏!C 2dX`>'p "Lp@~2dȐ!Á 8? 4xaB [p… o… .\8 .\p˗O… .\8 .\0?? 4xaB [p… o… .\8 .\pBo… .O… .\p|O@ DP1dȐ!C cȐ!C ǐ!C 24/_>} 2dP!?} 2dȐaA O@ DPB1dȐ!C cȐ!C ǐ!C 2 2dȰ |cȐ!CcȐ!C ǐ!C "䧏!C 2dxp_> 2d> 2d`|2dȐaB~2dȐ!Ä1dȐ!CcȐ!C Ǐ!C .䧏!C 24/> 2d> 2d0!?} 2dȐ!B~2dȐ!Cǐ!C ǐ!C ܗC 2LOC 2d> 2d!?} 2d!* ˧,h „  䧏!C 24/C 2TOC 2d> 2d!?} 2dȐ!C_|2dȐA~2dȐ!Cǐ!C ǐ!C &䧏!C 2DOC 2dp| ˧!C 䧏!C 2,/|1dȐ!Ã1dȐ!C cȐ!C ǐ!C 2/_~׏!C 䧏!C 2/_>~ǐ!C ǐ!C &䧏!C 2DOC 2dp_>} ? 4xaB[p… ˗A~[p… [p… o… .\8 .\paA[(_>}.\pA~.\pƒ;O_>~ .\!?} .\pB-\p… ӷp… .$/_> ˗… .,O… .\X_|˧… .4O… .\ ?} .\p-\p… ˷o!}-\pB-\p…ӷP|-\p‚-\p…  <0… װaÆ ˗_CװaÆkذaÆǯ}5lذA~6lذaÁ5lذaÆkذaÆ p?}5lذ!A~6lذ@~kX_|6lذ ?} 6lذ@~6lذaC5lذaCװ?~kذaC5lذaCkП|6lؐ ?} 6lذ@~6lذaC5lذaÄGp߾ _>;/_>~ 6l 6d/_~ O?$XA ӧPB *T> *TPB~*TPBg>O|%ܗOB *OB *T8p_>} /? *T8> *TP)TPB ӧPB */_~ /}_>PB SPB ˗B ӧPB ӧPB *T> *TPB~*TPBS߿|߿|˧P|*T0!?} *TP!|)TP @~O@ D ?} *TPBSPB *OB *D/_>~ /?~O`/? *~ *,O_>~ *T> *TP)TPB ӧPB O?_?} '0߿| ˗B OB /_? "O? *~ &L?} &L0a„ K0a„ &DO_„ &L|%}G0_̗P`|%L0A~&L0a„ ӗ0a„ &L &L|%L0a„/a„ O,h „Oa>}ӗO S8P_} *T(> *TP)TPB ӧPB ˧ϟB)T|)T0!?} *T_|o} 'p_| ̧/_>~S8_|*T> *TP)TPB ӧPB ӗB)T|*T> *L/>'0߿|۷O| /‚P„)TPB OB *T> *Tx |'p ;h˗O_˧}o`>'P? /> 4X> 4h }4H0?~O`o` 4/_~ 4h ?} 4hРA 4hp ?} 4hРA 4hP ?} 4hРAϠ o? /|O`> gРA~3hРA3hРA /? 
/|۷O| /A˷ϠA ϠA 4hРA ϠA 4hРA ϠA 4h |4(| o|70| '0|8p?O@ $OB"/_> 'p>/_}_|/?"OB"D!BC!B"D>"Dp?}!D| /| '0|!DXp_>}""7_>/?"!B"D!!D!B"DOB"D(_|"O@~o| O`o`> ˗B Ӈ!B!B?}8`A ˧!BC!B"D?}"D!BӇ!BӗoB˷O| _| 70| ̇!B}!D ?}"Dxp_>}"D/@~C!!‚!D!B"D>"D!BC!B!B"D!!B!DaAC!B"D@}!D ?}"D!B!B"D!B~"D}!D!BB!/?$X A~} ?}O@ /'0˧;x_o`/_>$XA .d8 6lP ?} 6/_~ !7B pa ˗ϟB䧯aCװ|#O`>o`>acȯA~6lذaÁ5lذaÆkذ|˗/_>O_|ӧ/_>~ӗ/߿|˧?}'_|̧/_>˗_Ã0_>}װ@~kP>}ۧO} '_O`6/_> g@} H*\p ?} 6lذ}6\/_>~ /@~o` '0|O ?}O`>?} 짯aB~C/_|װ?kذ| ܗ/˗oO`6O_>~ W 6lp ?} 6lذ@}5l_|#O |߿|/߿_/߿|>}'P`'P ,hP`|0@ ,/_~ _O`o`G0|~'P 'p "Lp!Á5laÆ˗aÃ>}?_O` ܷ_70|'P_5d/_} 0'p ˗}_/߿|˧O`ӗ/>˧ϟ@p!|O@? 4(_>}8`A˧!B"D!BӗB ? 48_|} 0O@ DPBkذaÆ 8p,hA~C!B"D!‚|C!}!D| C!B |O@ DPBkذaÆ 'p`> H!B"D!Bۇp`|"Dx_|"D8p_|/_>/@}'0|/_>}ӗ/?!Bg_|!D!B"Dh>"D!B'p@$XA/a„ &L0a‚P`|&LhП|&<}O| O`>o? O`>0!AC/> &L0a„ ӗ0a„ &L8_|o_|&L|%La &'П|%$/>W &O #|߿|_?߿|>}'p ˗;xO<8p_>}~;HП|o`3O`>_'0|˧A;8_$Xp`|Ǐ`>$XA .dؐ 6lP ?} 6D/߾O_|˗/|O_|˗/O|+/_}/?˗O /_˷o?'0| '0߿|/_>} ϟ|kH_>}70_Æ 6lؐ 6lP ?} 6L/_> '0_?'O`|߾}O`?}/~/?|>$XAۗ0a„ &LР?}!̷O| &L0a„ /a„ &L!?} &L|!70|g0|˗O`#|߿/ O? ԧ_O@ Dx_|&L0a„ /'0߿}0a„ &L ?} &L0a„K0a„w0| /A /? 70߿|W0|/_>&L|%L0a„ &/߾߿| /a„ &L0!A~&L0a„ ӗ0a„ ˗_|o` _/|۷O|/߿}˗|%o |ܧ߿}8`A&/> SP@`|)O>}#OB *T ?} *TPB)TPB#/_}O| 70߿}_˗o_O|#| GP_>ӧPBO*T(0),/> /_>*TPB ӧPB *TOB *,/_> *T@*/? *TPA})T(0B O@SX0@~ۧ߿}8`A&TA~6lذaC5lذ!Akذ?~ |5lذaÆ|O_|˗/>O|/_| ǐ_|GP_װaÆ "䧯aÆ 6O_Æ ܗO_Æ ˗o_ÂװaÆ /? _/|o}O~-ԗo_Æ 6lذ!C~6lذaC5lذ~kذaÃװaÆ ܗO| o`>'0| |_>$_|8`A&TaC {Ç{{!C}=|C0A /? 70߿|ˇP}=|ÇÇ ÇÇÇ>ԗo}/| O`'p_󗯡|=|ÇÇ ÇÇÇ>/_>˷O`˗o_> 'p_|/>/>|O@ DPBkذaÆkذaBkذaÆ / /_CkذaÆ 6lh 6lP ?} 6l_|5l |5lذaÆ ˧aC Է~kذaÆ 6lx 6lP ?} 6lp|6lHP_} 6lذaCװ@~k(p_>} 6lذaÆ װaÆ  䧯aÆ / ˗_Æ 6lذ|6lh_|6lذaÆ *䧯aÆ 6 ~8`A&T|1d|1dȐ!C */? /? 2dȐ!C ǐ!C "䧏!C 2ܗOC Ǐ!C 2dȐ?cȐ!A}1dȐ!C 2d> 2d!?} 2dȐ~c |1dȐ!C 2$/> /_? 2dȐ!C ǐ!C "䧏!C 2/> ˷!C 2dȐ~cP|2dȐ!C 2d> 2d!?} 2dȐ!A~ $/_>~ H*\ȰÆ{|=|Ç:ÇÇ /Ç>|_|6/>|Ç=|Â=|C?}=|Ç/ Ç>|P ?}>| ?}>|`|=/_>>|Ça|=|Ç>OÇ>,OÇ>L/_~ ˧Ç>|aA~>|aA'p "Lp!Â_|6lذaÆ 6/߾ǯaÆ 6lذaÅ5lذaÆkذaÆ ӗ_B~kذaÆ 6lؐ |5/_> 6lذaÆ 2䧯aÆ 6O_Æ 6l_>} ӗ_Æ 6lذaÆǏ|6lذaÆ 6l 6lP ?} 6lذa~/_> 6lذaÆ *O?װaÆ 6lذaC5lذaÆkذaÆ ˷|8`A&TaC!/?1bĈ#F\O_Ĉ#2/bĈ#/_>~#F1bĈ 7_|"F1bĈ1bĈ 1C~_~#F1bĈ _|"F1bĈ1bĈ 1"D}Q`>#F1bĈ ˗o_Ĉ#F1C~"F!?}#FO@}" /bĈ#Fh>E䗯_Ĉ#F1"D~"F!?}#Fȏ_|˗ϟ@~w0#(": }? O,h „ 2l!ĈE1"C~"F߾|O|;_Ĉ#F?}#F1bĈE1"C~"FQ`> >}'p|8`A&TaC/?'_|"D!BbB~ B"?}!Bh0|˧`>!B"DO`?} ?}!B"D"D"D_ w0?!B"'PO@ DPB >#FdO_Ĉ# '_|/_}1bĈ#FTO`g#F1bĈE1"C~"F1|"F1bĈ | ?}#F1bĈ1bDE1băE1bĈ/_>}ӗ/>1bĈ#FO_Ĉ#2/bĈ#/bĈ#F!?}#F1bĈ1bDE1băE1bĈ#:/bĈ#F1"?}#F!O@ DPB СC:tСB~:tСC:lOC:TOC:t0?:tСCСC:taC~:tСB~:tСC9tСC:t>:tСCСCСC |СC:taA~:tСC:lOC:TOC:t8P_|O_|:tСC:,OC:tСC sСC sСCO8`A&TaC!*/bĈ#F1"?}#FȐ#F8| <0… :|q!?}#F1bĈ1bDE1bD8? 4xaB 6t"C~"F1bĈ#1bĈ 1bĈ'P ?$XA .dC 1bĈ#FO_Ĉ#2/bĈ#ܗ/_|#F1bĈ 1bĈ#FO_Ĉ#2/bĈ#̗/#F1bĆE1bĈ#F/bĈ1bĈ ˗/bĈ#F!?}#F1bĈ1bDE1bD1bĈ#FtO_Ĉ#F1bD~"F!?}#F`>#F1bĈ1bĈ#FO_Ĉ#2/bĈ#䗯_Ĉ#F1C~"F1bĈ" ~8`A&T@~6lذaÅ˗_Æ 6lذaÆ װaÆ 6lذaC5lذaÆkذaÆ ˗?~kذaÆ 6lذ!A~6lذaÆ 6l 6lP ?} 6lذaB}/ 6lذaÆ 䧯aÆ 6lذaÆ kذaÆ װaÆ /_kذaÆ 6lذ@~6lذaÆ 6l 6lP ?} 6lذ})/_ 6lذaÆ װaÆ jjH!O@ DPBkذaÆ ˗_C}5lذaÆ 6lȐ 6lذaÆ *䧯aÆ 6O_Æ 6l8_| / 6lذaÆ װaÆ 6lذaC5lذaÆkذaÆ ˷aAkذaÆ 6l0!?} 6lذaÆ 6TO_Æ 6l( 6lذ_|"ܗO_Æ 6lذaÆkذaÆ 6lذB~6lذaC5laCk_|6lذaÆ 64O_Æ 6lj!? 
4xaB  䧯aÆ */ ˷aÆ 6lذaÂ5lذaÆ 6lP!?} 6lذ@~6lذaB~kp |5lذaÆ 6l8 6lذaÆ *䧯aÆ 6O_Æ 6D/߾ ˗_Æ 6lذaÆkذaÆ 6lذB~6lذaC5lذaCװa}5lذaÆ 6l 6lذaÆ *䧯aÆ 6O_Æ 6,/ ˗_Æ 6lذaC5lذjjH!O@ DPBkذaÆ/}/| P_} 6lذaÆ װaÆ 6lذaC5lذaÆkذaÆOa?}'>k_|6lذaÆ &䧯aÆ 6lذaÆ kذaÆ װaÆ ǯa>@ P|5lذaÆ 6DO_Æ 6lذaÆ װaÆ  䧯aÆ ˗_/ /| ˧aÆ 6lذA~6ljjH!O@ DPBkذaÅk8p/| װ`|5lذaÆ 6,O_Æ 6lذaÆ װaÆ  䧯aÆ ˗_C~O`װaÆ 6lؐ ?} 6lذaÆ 6TO_Æ 6l( 6D/> O`}O`aÆ 6lذ@~6lذaÆ 6l 6lP ?} 6lhП|"̗o|O` ǯaÆ 6lذ?~6ljjH!O@ DPBkذaÂ0|#_>.O 6lذaC5lذaÆ 6lP!?} 6lذ@~6lؐ|6lذ~kذaÆ 6\O_Æ 6lذaÆ װaÆ  䧯aÆ ˗_Æ 6$/߾ 6lذaC5lذaÆ 6lP!?} 6lذ@~6l|5lذaCװaÆ 6l 6lذaÆ *䧯aÆ 6O_Æ ӗ_Æ 6LO_>~ 6lj O@ DPB >#FdO_Ĉ/bĈ˧_Ĉ#Fx#F1bĈE1"C~"F|/߾#Fd/_>~#F1A~"F1bĈ#1bĈ a|-q`Et/߾#F ?}#F1bĈ1bDEp_>} q`>_|"F1bāE1bĈ#F/bĈ1B[O`!̗0_D1bĈ# /bĈ#F1"?}#F!O@ Dx_| ԗ/߿|'_|ܗ/_|˗O|˗/O |/_|˗//a„ &L0a„K0a„ &L0a„ &LO_„ &L0!B~&LР|'0| 짏>~O`| '_O ? '_ӗp|%L0a„ &LP ?} &L0a„ &L0a„ K0a„ &DO_„ /_ 70@ ߿|߿ ߿/߿>߿'p AO@ DPB СC:taC~:tСB~a}9O`C`>___/ÂСC:ϡC:tСÆ9tСC9/C`>?'_>/|?~/߿~? /?:t!?}:tСC6ϡC*?~s8_|̗_˗O̷o߿ۗ_|˗o_>߿}ϟ|/_>}8` <0… ϡC:tСÆ9tСC5/_| ԗoC:tP|9tСC sСC:t!?}:tP!?} 'p O@ DPB />|p ?}>|ÇÇOa}(? /_> H*\ȰCÇÇ>|Á=|Â)O`|{(_|>|CÇÇ>|Á=|Â)䧏 |ܗOÇ>|Ȑ_|>|Ã=|Ç>|8>|X>!~{0Ç ˗C{ÇÇ>|@~>|aA~wC{0Ç OCÇ*Ç=C=@ <0… ӧ ˗_Ä6lH0| ˧_Æ 6lP!?} 6lذaÆ 6TO_Æ 6l(>5$|ǯ!|'_|ܗ/>˗ϟ@O|䗯AkذaÆ 2䧯aÆ 6lذaÆ kذaÆ ӧ70_}5 ˗_Æ 6lذ!?} 6lذaÆ 6TO_Æ 6l(@'p A~$H_}#HP`|$H| O߿߿| /߿_>$X|8`A&TaC{Ç>|p ?}>| ?} {(0߿|/_>} ˷| ̇0| '0_|p|=|ÇÇ>|@~>|aA~P`>~3/_>o@}Gp> }/߿|=>|Á=|Ç>|8>|X>=O>}G0AsO_>}˗O70|_|߿|? /?$XA .daA~>|Ç>OÇ>,OB~ 70|̗p_>}A~{ÇÇ>|Á=|Â)|o>)/_ {|>|Ç{Ç>|p ?}>| ?} {(0A}˗O_C}=|H0ÇÇ>|>|ÇÇ ӧ˗ÇO>|B~>|Çz衁? 4xaB  O!?} 64/_>~ 6lП|6lذaÆ װaÆ 6lذaC5lذaÆSO_Æ ˧aÆ ˗_Æ 6lذa5lذaÆ 6lP!?} 6lذ@~װaÄװaÆװaÆ 6lp ?} 6lذaÆ 6TO_Æ 6l(>5lp|6lP`|5lذaÆ 6$O_Æ 6lذaÆ װaÆ  O!?} 6d/_> 6d/> 6lذj O@ DPB >#FdOB~"Fϟ|"FtO_>~#F1bĂE1bĈ#F/bĈӧ#O˗_Ĉ#F ?}#F1bĈ1bD)/bĈ/bĄ1bĈ#"/bĈ#F1"?}#FȐ>EѠ|"F4/_>~#F1bĄE1bĈ#F/bĈӧ#/˧_Ĉ#FQ!?}#F1bĈ1bD'p A~8`A&THП|.\ϟ|.\p… .\ .\p… .\ .\p@~ӷp… ˧… o… .\p… [p… .\p… [p… O!?} .\`|-\xP_} .\p… .\( .\p… .\ .\p@~ӷp… ˷oBp… .\p…[p… .\p… [p… O!?} .\P!|O@ O?$XA .dCA"D!BLOD!BO!?}!B(П| /?!B"DA"D!BLOD!BO!?}!BH_>} /?!B"ąA"D!BLOD!BO!?}!BX_|˷"D!B!?}!B"D"DSOD!ܗO~"D!BtOD!B"ĄA"D~"DԷO!|%O?!B"ć? 4xaB 6tbD~"F!?} 1"A}П| ӗ_Ĉ#F1bB~"F1bĈ#1bĈ SO_Ĉ 'P>7_|"F1bĈ1bĈ#F#FdOB~"FX~˗/?o|ӗo_Ĉ#F1"C~"F1bĈ#1bĈ SO_Ĉ ? '_/bĈ#F!?}#F1bĈ1bD)/bĈ O?? 4xaB 6tC~"F1bĈ#1bĈ SO_Ĉ?~/bĈ#F!?}#F1bĈ1bD)/bĈ |Ϡ|#F1bĈ1bĈ#FO_Ĉ#2O!?}#F5lذaÆkذaÆ 6lذ!B~6lذaÆ 6l 6lP ?} kذaÆ װaÆ 6lذaC5lذaÆ 6lP!?} 6lذ@~װaÆ aÆ 6lذaÆkذaÆ 6lذB~6lذ>$O,h „ 2/_Æ 6lذaÆ װaÆ 6lذaC5lذaÆSO_Æ 6l_ 6lذaÆ "䧯aÆ 6lذaÆ kذaÆ ӧ 6l| 6lذaÆ 6DO_Æ 6lذaÆ װaÆ  O!?} 6lذ?6lذaÆ 6l 6lذaÆ *䧯aÆ 6OB~6lذa5lذaÆ 6l!?} 6lذaÆ 6TO_Æ 6l(>5lذaC ߿߿~8`A&TaC!W"F1bĈ1bĈ SO_Ĉ#O/_>#F1bĈ'>1bĈ#F#FdOB~"F|˧/_|"F1bĈ ˗/߾}˗_Ĉ#F1"B_}"F!?} 1bD (p,h „ 2l!ă8P>$XA .dC ˧/_|"F1!?} 1bD80,h „ 2l!Ą80,h „ 2l!D(,h „ .O!?} 2dȐa} ? 4xaB 6tB <0… :||1bĈ SO_Ĉ#*O@$XA .dC ˗/#F1bD1bĈ SO_Ĉ#*ܗ/_~#F1bĈ ˗_Ĉ#F1C1bą)/bĈ˗/bĈ#F!|E1bĈ#Bԗ/bĈӧ8`A&THP_>~ .\ ,h „ 2\/_ 6lذaÆ &췯aÆ 6OB~0 <0… :|(P,h „ 2\/_Æ 6lؐ@},h „ 2l~ H`A~ ϠA3hРA 4XA 4hРA4hР~ H*O… .<? 4xP>$XA .d|8`$OA <0… :|OÇ'p "Lp!Æ <,h „ 2l>~ ӧ{Ç>C}>|Ç{P>|C}O!?} Ç>|OÇ=|Ç>~>|Ç=,OB~oa`>{!?}w0_| {X0| =|p>~0_| {X0| =|PSOC-W0Á{X0|>OÇW0Á{X0|><Æ5W`>$Xp`> 4(0A 4hA3hР@~ ϠA3X0_| '0A|4hРAgРA3/_| '0A|4hРA gРA3h_ 4O`> 'П3hРA gРA$OAg`˗/_珠|ӗ/?||ӗ/|o˧ϟ@}'_|ӧO߿}g?O@ | /_~˗A}˧/_>~˗_}'0?~/_>˗O?O |O !!D>~ +O |/_|˗_>}/_| ˗/_>˗/?} o_| ԧ/_>~˗O?}O ?~!A~ !!`>~ o`>Gp? o@~o? O`>_~_>$H ,XP'0@~O_| '~'p_>ȏ߿|'> '~'_> ,8P ,8P+O`>'O ?O|'O ?}O ?O`|,(P 䧯`A~ ,H+O>} O@ '0|(߿_'p` O`ۗ/߿|O ? 4xP>~'P>~'P o`> ߿/߿80|'0|˗_o? 4xP'P>~'P o`> ߿/߿80|'0|˗_o8p7p 8p8p|_>} 80| 7p|//O`>~o?~8p@'p  `O߿|o`>#O`|70_>O`>~ ?~!D(P?a| /'0|!G0|o`| 70|/A~CHP? 
ӇP ?}@}_O~#O@~ۧO } ̷O}O`>G_|Ӈ!BԷo ?}~+O`_>_}/>}_/߿߿O@ gРA3h}O|/_| 'P_ܧO |'߾ۧO} '0@~#/_>gРA$OAg|O_|/>o`>|߾| ̗O`70|˷/|3hp ?} 4hP?˗@|+/߿| />˷o|˷O`|ۗ//_}@ 8pO@ }'П|/_>} /|˧_}'0߿}/|˧o`˗o_> wР>~@~,O wA~~~ ,O_,X ?} H*\Ȱ~9OCsСCϡA}:ϡC:tX_~sh>9,O|s_>:lA~:|sP`>O`&ϡC}:t0C _?} P?SOÂ94O`>s`ϡC9TO`>c!|9LC9a| a3`>94OB~?s_>:/a>$X ?} 4hP?4O`> 4Ϡ| 7߿| ,A ϠA 4/AgРA3(0| gР@} 4(>3h?~ 'P|ԗ/߿|'_|ܗ/>˗߿| ԧ/_>~˗_A~_|ӗ/?~'0}o_|OA `|˗A}˧/_>~˗_A}/_>˗/}W0˧߿}'P|O|/_| (>~ H/_> O_|_| /_>/_|˗/_>/_>} ˗/>|/_>O`?}4O /|?O ?~ O`>'0߿}ӷo}w`> W0| ' '}o?o?'p_> 70| O`> 8P?$XA}O?~O`~ }O ?O|O ?} 엯`/| '0?~;O wp ?}`>+O`'P߿|__7PO'0@~_>G ?} $H A$80|O`'P 0_@/_ o`'P#H A G`>˷߿߿ ? O/o|ۗ/@o_| O`>'P  Ǐ A G@~$Hp ?}o`> 70| #O`O`#H0A/|/|̗`>#H A Ǐ | '0A G0|G_G0O`>˧ $H A#|o`>~#/|`>/#(0|G A}$H ?} ? $O`>?'p>O`>?'߾'0|>}۷O|gРA3X0|'0|/_|/>}O} O}/_|'P_o`>3hРA|O`>/߿}/@}o>o>'_O |'0|/A4hP O@$H@~'0|/|˧70| '0|˧'p_|/߿}_>'p_| G?} $H A$80|/߿| />/_>}˗o>GP`>'p_|o`˗O˷߿|G A Ǐ|O~o|o_|_| ܗ/߾}o_| h_|߿|o_|/O`O`8p7p 8pO@ DPB 7_>=|Ç>$o|aC}>|Ç{X>=$OÇ>||COÇ=|C ԷÂ=lÇ>|>~ ӧ{Ç /߾=|Ç>,/_}aC}>|Ç{X>=$OÇ>|!?}Ç>|Æ=|Ç ?} {H>|C~>Ç>|!O@ ? 4xaB 6tPSOC'p "Lp!Æ <(P?"D!B"D>~"$B"D!B"DP? ӇP ?}0 <0… :|(P,h@}"D!B"D!B!D| H*\ȰÇ'p 3H>$XA .dC% ԧ ? 4xaB 6t0?$X~ H*\ȰÇ'p 3H>$XA .dC% P <0… :|(,h „ 2l!Ĉӧ'N8qĉ'N8qĉ'N8qĆ)oĉ'N8qĉ'N8qĉ'N8!?} 8qĉ'N8qĉ'N8qĉ'NlOB~&N8qĉ'N8qĉ'N8qĉӧ'N8qĉ'N8qĉ'N8qĆ)oĉ'N8qĉ'N8qĉ'N8!?} 8qĉ'N8qĉ'N8qĉ'NlOB~&N8qĉ'N8qĉ'N8qDM@'p A~8`A&TaC!F8bE1fԸcGSOǏ?~Ǐ?~ ?} Ǐ?~Ǐ?O!?}?~Ǐ?~G)Ǐ?~Ǐ?~H>}Ǐ?~Ǐ ӧ?~Ǐ?~#A~Ǐ?~Ǐ?~$OB~>~Ǐ?~ǏSOǏ?~Q?~#D~Ǐ?~/_?~C~Ǐ?H"˧,h „ 2l!Ĉ'ROB8`A&TaC!F8bł˧E-ZhѢEO@8`A&TaC!F8bE1fԸcG'p "Lp!ÆB(q"Ŋ(,h „ t2l!Ĉ'Rh"ƌ7rq|Ǐ?~Ǐ?6ܗǏ?~Ǐ?~>?~Ǐ?~Ǐ?~Ǐ?~Ǐ?~ǀ;;PK}x.PKN:AOEBPS/img/impfileopts.gif4?GIF87a4}}}{{{wwwqqqkkk[[[UUUSSSMMMKKK???===;;;777333111///---)))%%%### ppphhhfffddd```XXXVVVRRRPPPJJJHHHDDD@@@888222000...,,,(((&&&$$$"""  ,4="'ı˵X ?Y8H3<@ H*\ȰCPyHŋ3jܨpljB f(Sʗ0[!Ypbƫ 'ɳ);} zI< ʴ)&'JzHĮT/Nʵׯ\ٳhi pʝ HTS˷EdMa È+^̸ǐ#/Fڨ‹@Nf`JH `LӇ,xkXO .l۸sͻqc(kf<O~EPwWZjyElHWG"-h=Ta l1u"V!#pwT3|~v)2U4ء!VTH`ĉXɅdʆ${%/ "zd= Ȋ K @#!=t!I)IWc#`h^(22eH9" ],)N3TPA;)֓"0#0b"U*Bآz" YHiɚʛx pnZg!b bJHizrZ} M$j# :B*(ֵH0*!')k6GH|mP@!@`h ``)a9ڰ MR];. `pK Nܥ&'w 0!3Ȕ.(^a򪡔)TƬ3'2`j "!o+0HJ֥4>/!r`K)׶jf0Kڟmglܩ.K7bĩs<` ȹPΈ$"OV as:-@{!k-oe唋$ډ|~Rm)`aNj/>Hj"DN2"Ґ+njx' 7>/48J. L[|'\i D p> xG?.,rO4`!$IRd#&ץp'.@ b?4SIfŨ=N"``/#NLND< {uoSz`duuMWlf:q"Dx1Btbǥ/2 &;6!L;ID'{Y7/Xa: Ly:_Nurivj3&6lb49-W~/z-мsQd*9%D!1iD:a U'3f9ԣ-AgIFJ{JF%fQ D.`CIiUؔhu$*fE8)#Z$RҎRӢ'$ժBjlg%l1*-JVF!kwcbfGXk33Q;b#B{ T'2Q[=Y)͈JoܭD6j$&ZtG4b7UC{+6k/"RC3,[p'vR>?tj|9 A\q>|Hܶ|MsQ`wUZ1%@! ʠ!E\{ޞ$j+ Oɩtpf810GBٔǣu?,aׂr+/T 9IN27 2Kr a@y9,_{[\9IKgyCf(`zlVyٲ&[q*ѾXnF GR<¸pS7˘luI~&@|8=z4 ';9r?BH$|I*r2$<(@-q473$8:aHRR|h+u@0aFGV2y08&g"6x FLpxҢyp 8&X Rp7"8=",ĂÅ12q2 DPkQd@3|Ƿ0?pV2T66*?W7.@P!2gxBS))7/SzDC>)&G:7od42I%1;#8 "!d~ xH&qc(>G(N玕wW;Gse9wp ~sHq48KI/s5:9+'H:EK.QRwţI]iG 38T*0ut51M[䙀)'A}wCf@r&Gq')nD,ԈEn :b '2jނ#"#igR z .ta0X@{G1,A "Me ivv|/җ*|XLIr 1}KHqHB~;a) fh^cɊU7 nwsuR1; kBC9#V!Jxwb V׊vb:}0PKx6v? 
o` TuuF a23RbS&(pK % )1Badءw7bu1l@sh*MKȦTvړMDXNEWu;U {91p|koZG;{+e   0 fqƺS5h˭CP[pc1׼$:P$jKQ8`-Bc1dV"ի*VafېڻH -&3"?U/r`k9E&-l8ܽ:S%*O""Flػ's+632-W aܲ)ܺG6U<2[6|qz^Ŝ Uƚkks4`Ĺg)29 jlVU;MCȯgZ{/30h C;Ȋr 0AWє p-C+Tb`^q'L ]өխ Y {rG01;r08'01ҧ{%}1Ɖ68;GЬҌ 2Q€賃"(RSHX@|4$5[3"_5wD6Rقc7  )*0rX2??u2O To >{ .pV t٬#8HXhx(q))(p" +i3 0 *xH\쨈xji :**-t-]8.x|qRYqRdH?8c)2ZСDaaR @dA*Pò"FDZ)A%=⁹0e֕zhG 9aY %xˀD<xЉj+33ôVUX~bu 5v!arWܿN3ڕjuuiA/L5P /;i3& N' `ySrw!ǘD*tN7Z0[[&H7&8uR}Q+ rThН""NÇ"XkUA6L]"pB$"&[=׃ / _2C{ ǏLvδHv͈u!dBA`&0p3!& P@~ 90 7H ~.h> iNJi^&CC'xRT9(c_F<%"bJ@Z'pbe1">^6Kh3ntU1C#f@PŸ枋nn o0 &Z_:ۄcjB2.$^Zs0 `Lt3|n5h+vX$/M2.ȱI-}gch9`\4آL,)4M$`Дp/Y}&|o^8deoι+X5H(d@.{獬N- 9A +RcH'$yfC 0#:Ru!Y?.w~|C&]د(sE_b >D"X .>V |$n,\A@T`,8 B A] '-\TCEGHȷfcX@Iz!qd;w=Fl( *ŠN Pr,)OJSzX|XB狩-IY1*S@Xr,1=K& 0 hJ~ď&gNsf˪I)b9ωaCd: xgg;)|sʤP6 ws&8?fO@ rˈJt|.[Pp1 AAslНIr,m)J"J#dIMʏj-c)NaQ.]2|jdō\g30:&)T!RhPhjةPx*7gXkYCKGveDm) 'Qר՚)'B@WKFD' ֔b9&Yu#}BC;1p3lB~}AP;ڜ.Vui#[lq5B(oBBpۯ6mow(5 Lk︮0tr !(yϫJ'ETO "\ ]!Bwa$kM ĮLz@.emt'b"#FՅ t[DPڂs3"ee`cvUwpV:+q\! H,bD0'`k+c"7nmZ5A\ܪ½貝)M4?6l\'_s!l0s\\üu66ȫ<)=qd/FfEYIIz6 dYӚ&| Nz͒OЋ|\qyÒ&W9ڿkUbT{(ySj޽ycLƢhwX:qH7$t2]Q\d~1οu6o?ȷ!Z/תLmAzVJMDOx/~yyhy' 3{dx>Vh0xEpd!76 OWg<2@5\du @0Y͓׀HGn| e5 !G W7% $ <,w*'>u&S&EX`v~iVQcq5VL^RHH4^ȅKVYEAtȃ×kcCIaaql(?3c r2Y3SbG thg0{S W&p 5%[W`%]' QKCVs`!؆3S'S+1B` JB?n8Sb c'r9#=#U3Br܆$OgȈ'Pw8xX+(ggcEe~CwFy;>Hes>c&4e8 *.F!1&Mbi`Ñ42.ibWY[y!0.(f#0y""7ax7 @.Ph iHoD~& ("t `QKc 6pP0' 'Jə)**53023g>ӓ(9rαC(("Jc!o0'% EIL8F" K! h+b 6ry0ę0Ā&QH-ûQEGDSgz;!q$&AU j-G% rі7k]m;F^9M?T:iޞ6|S8[Nm>7.0~.#e q ". =sBӟnDZ>(4^{뼾Ƚ N`!F_sRs5,Q>-P~K~@z ^i "ZN=E$ ^m+qiU41AtYncj68=  򅾤V8o& ^՟ݝ-اNBp)a9qğ./u.zjza5PsxzKOMPaIr6 Jp` mWCmt+֥Y?ψt >/O_ ;] ^PbO^ɝ>o5.?;|!]onk4ށo˟@o٠П?O_ά_/qQ//oof= @XX͜ÑбҊԐ؎։XHbH! Bȝ*xŋ3jȱǏOYɓ"R8NQ\kf01@>81æѣ-C&SKB*9D pN6կ'z:H% ^8qdۻBC/ӚFbw Y7=B$Yp6-~аr<+:\4ie+R dNĕWWnRo\u5%¿oJܙ܋XV _Hr므0z aj^N̵/zAu$W^PYy'0g݁`J ]R7Ӄ,dC<g-pbH0( <*PR̉= ' 1΄6RUdY.򝓚h! "hyD\0DI"3ʙ^*c=@*9Hi!x基TNM\ #H Fz `!#T]6GPj.>JQY5!|y wqdk"#I/H ID^x d"B U]jmmW#|Fؐ8e"T["[I/7hG*"IXIRwUjꛄUc(l =FƋ 1d|D!9_ؘ1}rء8XRa)ucgˢ;Z4D2hpK%{9 /"LDl Qqii?7J"L|XeF{VFr?BNwY% MG*<.m7Rf` S0H2YG'2?B#5gn@FwJ*y $'-sng esX07D쌀)E5P,%RN"=a'[F b%2J.Z\ډ9$x-S> sܕ h_Cl;]:R7V&,!R9KxaDMW`l#rQ3WKmfE)υtҙjjqȔ)eD"@G*ɋƟg@TdD*k#";bbߜ `m-L!Yd8wq\۳#aGd ,`DtPX `rͭnwH@&|֦809jO<[.`Z.oqY11GBB0WL"jL$uu' @c90  -gäP[/`o# HjI $+"/W2ݢ P QsDfh_"JbY=P,,V%X$R'#JR,ibZBXH k^؁Vydh@qUPIB_)5V) b^ Q$ s{s(y~5]6Ø^!LOd R8UF-AR" F$pU^R3}Hѷ5(+Z|^jP&z87m!ɇQ5 U0Š164Eł ="UvJ86#(AVAvLŘgEKi3&e,Ԅ6#,#.)ACTҁ$d|^G'D`"(pd X(%$]DtXP eH !4v|hg`Fbu95}0>J}v Xɍ)H{AX88h&'=)Utx~6  i'!fp WS7jY~҄yvrzgtyT!|0(" x_9ZY=C7J)Mi fs>:CI[r`aWt1@ prVѝ,:m()Z0#st)F9sPO | DB!6hp%Va!"|Z4TJ=9,4)jyY} sڟtt$:Ol'Z*`p"uIQ80u@B<̛@U1ZUZzy4"W zLj9xζ0Cl OuK j%]tʶȲ@KKחPkh(}l$~zu"BG6me2|4ಝQXz{F4=D.[P[pTMZ\^`b=d]ָ;դW )֭Y' _$`F96|~׀؂=؄]؆}:BG]َӜ7lٜMvyw٢#;PKf=9?4?PKN:AOEBPS/img/et_init_spec.gifhGIF87aF}}}wwwqqqUUUKKK???;;;333))) ppphhhfff```XXXPPPHHHDDD@@@>>>888000(((&&&""" ,FpH,Ȥrl:ШtJZzxL.]znB`N~ခmroTYS!u0Bwu#C+CDE#vGÚG+(FDHξLOF+FED(+E#Y/I EV+b@.Bbf!IZkq^Îhq vK׌I@͇@SH dϴX;e8aFb&_Ǚ2Jw)!rŞxlm~R&4A"8DCҌ HsDŽt,SJjOh.uLn-1l'E컑BHW8Q)?L$_ăC@bVp}D W]xx@~.1)nCcM/A̺&~T(}FT\>N}LFZ7aTQ@I(tӊ\#5Uk1ZMUF ,cidg<9s쩆t:gE<w?L ~f-`D"ͨڄiZP5 zQ Γt  3N8,75u@ qI*C)9RZ'x>><<<888222(((&&&""" ,.pH,Ȥrl:ШtJZجvzW !zָxN|{~+<"&|j7 "<_n%b nm yt~ffT<`eBcG̻G,̓c!(?/c bd z @C=mEĶ]PŒ4 L\nƋ0 UXtKDsrzA@o * L>ߍAOL[ռ{Rߣ5f\,bmZŜ~FG `$"[koF{cAF6ح⍏LG6 l aVG0RHT.KgW`dc[Axf=_ fN0{59ͼ?A/Q: \Fh F(y Q@*\ ,Cvԛ<Ԅ]OȎƒ9cTa2,!!(`^Ѯ]PD~( B6dD-$*sHx`bu.z +JF%@y1 o7C yf|ABF m,oKV3r@GX:iGLT)C*d Ft(%XHEnr l/Z ]ɘ*<@RMAdZ8NE\—KHLig*! 

14 The ORACLE_LOADER Access Driver

This chapter describes the ORACLE_LOADER access driver, which provides a set of access parameters unique to external tables of the type ORACLE_LOADER. You can use the access parameters to modify the default behavior of the access driver. The information you provide through the access driver ensures that data from the data source is processed so that it matches the definition of the external table.

The following topics are discussed in this chapter:

To use the information in this chapter, you must have some knowledge of the file format and record format (including character sets and field datatypes) of the data files on your platform. You must also know enough about SQL to be able to create an external table and perform queries against it.

You may find it helpful to use the EXTERNAL_TABLE=GENERATE_ONLY parameter in SQL*Loader to get the proper access parameters for a given SQL*Loader control file. When you specify GENERATE_ONLY, all the SQL statements needed to do the load using external tables, as described in the control file, are placed in the SQL*Loader log file. These SQL statements can be edited and customized. The actual load can be done later without the use of SQL*Loader by executing these statements in SQL*Plus.
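
For example, a minimal sketch of such an invocation (the user name, control file name, and log file name are placeholders, not values taken from this chapter) might be:

sqlldr hr CONTROL=emp.ctl LOG=emp.log EXTERNAL_TABLE=GENERATE_ONLY

The SQL statements described above are then written to emp.log, where they can be reviewed, edited, and run in SQL*Plus.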


See Also:

"EXTERNAL_TABLE"


Notes:

  • It is sometimes difficult to describe syntax without using other syntax that is not documented until later in the chapter. If it is not clear what some syntax is supposed to do, then you might want to skip ahead and read about that particular element.

  • Many examples in this chapter show a CREATE TABLE...ORGANIZATION EXTERNAL statement followed by a sample of contents of the data file for the external table. These contents are not part of the CREATE TABLE statement, but are shown to help complete the example.

  • When identifiers (for example, column or table names) are specified in the external table access parameters, certain values are considered to be reserved words by the access parameter parser. If a reserved word is used as an identifier, then it must be enclosed in double quotation marks. See "Reserved Words for the ORACLE_LOADER Access Driver".


access_parameters Clause

The access parameters clause contains comments, record formatting, and field formatting information.

The description of the data in the data source is separate from the definition of the external table. This means that:

  • The source file can contain more or fewer fields than there are columns in the external table

  • The datatypes for fields in the data source can be different from the columns in the external table

The access driver ensures that data from the data source is processed so that it matches the definition of the external table.

The syntax for the access_parameters clause is as follows:

Description of et_access_param.gif follows
Description of the illustration et_access_param.gif


Note:

These access parameters are collectively referred to as the opaque_format_spec in the SQL CREATE TABLE...ORGANIZATION EXTERNAL statement.


comments

Comments are lines that begin with two hyphens followed by text. Comments must be placed before any access parameters, for example:

--This is a comment.
--This is another comment.
RECORDS DELIMITED BY NEWLINE

All text to the right of the double hyphen is ignored until the end of the line.

record_format_info

The record_format_info clause is an optional clause that contains information about the record, such as its format, the character set of the data, and what rules are used to exclude records from being loaded. For a full description of the syntax, see "record_format_info Clause".

field_definitions

The field_definitions clause is used to describe the fields in the data file. If a data file field has the same name as a column in the external table, then the data from the field is used for that column. For a full description of the syntax, see "field_definitions Clause".

column_transforms

The column_transforms clause is an optional clause used to describe how to load columns in the external table that do not map directly to columns in the data file. This is done using the following transforms: NULL, CONSTANT, CONCAT, and LOBFILE. For a full description of the syntax, see "column_transforms Clause".
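
As a brief, hedged sketch of a CONSTANT transform (the table, directory object, and file names are illustrative, and the data file is assumed to hold two comma-separated fields per line), a column that has no corresponding field in the data file can be filled like this:

CREATE TABLE emp_load (first_name CHAR(15), last_name CHAR(20), load_source CHAR(20))
  ORGANIZATION EXTERNAL (TYPE ORACLE_LOADER DEFAULT DIRECTORY ext_tab_dir
                         ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE
                                            FIELDS TERMINATED BY ','
                                              (first_name CHAR(7),
                                               last_name CHAR(8))
                                            COLUMN TRANSFORMS (load_source FROM CONSTANT 'info.dat'))
                         LOCATION ('info.dat'));

Every row returned by this external table then has the literal 'info.dat' in the load_source column.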

record_format_info Clause

The record_format_info clause contains information about the record, such as its format, the character set of the data, and what rules are used to exclude records from being loaded. Additionally, the PREPROCESSOR subclause allows you to optionally specify the name of a user-supplied program that will run and modify the contents of a data file so that the ORACLE_LOADER access driver can parse it.

The record_format_info clause is optional. If the clause is not specified, then the default value is RECORDS DELIMITED BY NEWLINE. The syntax for the record_format_info clause is as follows:

Description of et_record_spec.gif follows
Description of the illustration et_record_spec.gif

The et_record_spec_options clause allows you to optionally specify additional formatting information. You can specify as many of the formatting options as you wish, in any order. The syntax of the options is as follows:

Description of et_record_spec_options.gif follows
Description of the illustration et_record_spec_options.gif

FIXED length

The FIXED clause is used to identify the records as all having a fixed size of length bytes. The size specified for FIXED records must include any record termination characters, such as newlines. Compared to other record types, fixed-length fields in fixed-length records are the easiest field and record formats for the access driver to process.

The following is an example of using FIXED records. It assumes there is a 1-byte newline character at the end of each record in the data file. It is followed by a sample of the data file that can be used to load it.

CREATE TABLE emp_load (first_name CHAR(15), last_name CHAR(20), year_of_birth CHAR(4))
  ORGANIZATION EXTERNAL (TYPE ORACLE_LOADER DEFAULT DIRECTORY ext_tab_dir
                         ACCESS PARAMETERS (RECORDS FIXED 20 FIELDS (first_name CHAR(7),
                                                                    last_name CHAR(8),
                                                                    year_of_birth CHAR(4)))
                         LOCATION ('info.dat'));

Alvin  Tolliver1976
KennethBaer    1963
Mary   Dube    1973

VARIABLE size

The VARIABLE clause is used to indicate that the records have a variable length and that each record is preceded by a character string containing a number with the count of bytes for the record. The length of the character string containing the count field is the size argument that follows the VARIABLE parameter. Note that size indicates a count of bytes, not characters. The count at the beginning of the record must include any record termination characters, but it does not include the size of the count field itself. The number of bytes in the record termination characters can vary depending on how the file is created and on the platform on which it is created.

The following is an example of using VARIABLE records. It assumes there is a 1-byte newline character at the end of each record in the data file. It is followed by a sample of the data file that can be used to load it.

CREATE TABLE emp_load (first_name CHAR(15), last_name CHAR(20), year_of_birth CHAR(4))
  ORGANIZATION EXTERNAL (TYPE ORACLE_LOADER DEFAULT DIRECTORY ext_tab_dir
                         ACCESS PARAMETERS (RECORDS VARIABLE 2 FIELDS TERMINATED BY ','
                                             (first_name CHAR(7),
                                              last_name CHAR(8),
                                              year_of_birth CHAR(4)))
                         LOCATION ('info.dat'));

21Alvin,Tolliver,1976,
19Kenneth,Baer,1963,
16Mary,Dube,1973,

DELIMITED BY

The DELIMITED BY clause is used to indicate the characters that identify the end of a record.

If DELIMITED BY NEWLINE is specified, then the actual value used is platform-specific. On UNIX platforms, NEWLINE is assumed to be "\n". On Windows NT, NEWLINE is assumed to be "\r\n".

If DELIMITED BY string is specified, then string can be either text or a series of hexadecimal digits enclosed within quotation marks and prefixed by OX or X. If it is text, then the text is converted to the character set of the data file and the result is used for identifying record boundaries. See "string".

If the following conditions are true, then you must use hexadecimal digits to identify the delimiter:

  • The character set of the access parameters is different from the character set of the data file.

  • Some characters in the delimiter string cannot be translated into the character set of the data file.

The hexadecimal digits are converted into bytes, and there is no character set translation performed on the hexadecimal string.

If the end of the file is found before the record terminator, then the access driver proceeds as if a terminator was found, and all unprocessed data up to the end of the file is considered part of the record.


Caution:

Do not include any binary data, including binary counts for VARCHAR and VARRAW, in a record that has delimiters. Doing so could cause errors or corruption, because the binary data will be interpreted as characters during the search for the delimiter.

The following is an example of using DELIMITED BY records.

CREATE TABLE emp_load (first_name CHAR(15), last_name CHAR(20), year_of_birth CHAR(4))
  ORGANIZATION EXTERNAL (TYPE ORACLE_LOADER DEFAULT DIRECTORY ext_tab_dir
                         ACCESS PARAMETERS (RECORDS DELIMITED BY '|' FIELDS TERMINATED BY ','
                                              (first_name CHAR(7),
                                               last_name CHAR(8),
                                               year_of_birth CHAR(4)))
                         LOCATION ('info.dat'));

Alvin,Tolliver,1976|Kenneth,Baer,1963|Mary,Dube,1973

CHARACTERSET

The CHARACTERSET string clause identifies the character set of the data file. If a character set is not specified, then the data is assumed to be in the default character set for the database. See "string".
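
The following is a hedged sketch of specifying the clause (the character set name AL32UTF8 and the file layout are assumptions; substitute the actual character set of your data file):

CREATE TABLE emp_load (first_name CHAR(15), last_name CHAR(20), year_of_birth CHAR(4))
  ORGANIZATION EXTERNAL (TYPE ORACLE_LOADER DEFAULT DIRECTORY ext_tab_dir
                         ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE
                                            CHARACTERSET 'AL32UTF8'
                                            FIELDS TERMINATED BY ','
                                              (first_name CHAR(7),
                                               last_name CHAR(8),
                                               year_of_birth CHAR(4)))
                         LOCATION ('info.dat'));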


Note:

The settings of NLS environment variables on the client have no effect on the character set used for the database.


See Also:

Oracle Database Globalization Support Guide for a listing of Oracle-supported character sets

PREPROCESSOR


Caution:

There are security implications to consider when using the PREPROCESSOR clause. See Oracle Database Security Guide for more information.

If the file you want to load contains data records that are not in a format supported by the ORACLE_LOADER access driver, then use the PREPROCESSOR clause to specify a user-supplied preprocessor program that will execute for every data file. Note that the program specification must be enclosed in a shell script if it uses arguments (see the description of "file_spec").

The preprocessor program converts the data to a record format supported by the access driver and then writes the converted record data to standard output (stdout), which the access driver reads as input. The syntax of the PREPROCESSOR clause is as follows:

[Syntax diagram: et_preprocessor_spec.gif]

directory_spec

Specifies the directory object containing the name of the preprocessor program to execute for every data file. The user accessing the external table must have the EXECUTE privilege for the directory object that is used. If directory_spec is omitted, then the default directory specified for the external table is used.


Caution:

For security reasons, Oracle strongly recommends that a separate directory, not the default directory, be used to store preprocessor programs. Do not store any other files in the directory in which preprocessor programs are stored.

The preprocessor program must reside in a directory object so that access to it can be controlled for security reasons. The operating system administrator must create a directory corresponding to the directory object and must verify that the operating system user ORACLE has access to that directory. DBAs must ensure that only approved users are allowed access to the directory object associated with the directory path. Although multiple database users can have access to a directory object, only those with the EXECUTE privilege can run a preprocessor in that directory. A database user with only read or read-write privileges on a directory object cannot use the preprocessing feature. DBAs can prevent preprocessors from ever being used by never granting the EXECUTE privilege to anyone for a directory object.


See Also:

Oracle Database SQL Language Reference for information about granting the EXECUTE privilege

file_spec

The name of the preprocessor program. It is appended to the path name associated with the directory object that is being used (either the directory_spec or the default directory for the external table). The file_spec cannot contain an absolute or relative directory path.

If the preprocessor program requires any arguments (for example, gunzip -c), then you must specify the program name and its arguments in an executable shell script (or on Windows systems, in a batch (.bat) file). The shell script must reside in directory_spec. Keep the following in mind when you create a shell script for use with the PREPROCESSOR clause:

  • The full path name must be specified for system commands such as gunzip.

  • The preprocessor shell script must have EXECUTE permissions.

  • The data file listed in the external table LOCATION clause should be referred to as $1. (On Windows systems, the data file should be referred to as %1.)

  • On Windows systems, the first line in the .bat file must be the following:

    @echo off
    

    Otherwise, by default, Windows will echo the contents of the batch file (which will be treated as input by the external table access driver).

See Example 14-2 for an example of using a shell script.

It is important to verify that the correct version of the preprocessor program is in the operating system directory.

Example 14-1 shows a sample use of the PREPROCESSOR clause when creating an external table. Note that the preprocessor file is in a separate directory from the data files and log files.

Example 14-1 Specifying the PREPROCESSOR Clause

SQL> CREATE TABLE xtab (recno varchar2(2000))
     2    ORGANIZATION EXTERNAL (
     3    TYPE ORACLE_LOADER
     4    DEFAULT DIRECTORY data_dir
     5    ACCESS PARAMETERS (
     6    RECORDS DELIMITED BY NEWLINE
     7    PREPROCESSOR execdir:'zcat'
     8    FIELDS (recno char(2000)))
     9    LOCATION ('foo.dat.gz'))
   10    REJECT LIMIT UNLIMITED;
Table created.

Example 14-2 shows how to specify a shell script on the PREPROCESSOR clause when creating an external table.

Example 14-2 Using the PREPROCESSOR Clause with a Shell Script

SQL> CREATE TABLE xtab (recno varchar2(2000))
     2    ORGANIZATION EXTERNAL (
     3    TYPE ORACLE_LOADER
     4    DEFAULT DIRECTORY data_dir
     5    ACCESS PARAMETERS (
     6    RECORDS DELIMITED BY NEWLINE
     7    PREPROCESSOR execdir:'uncompress.sh'
     8    FIELDS (recno char(2000)))
     9    LOCATION ('foo.dat.gz'))
   10    REJECT LIMIT UNLIMITED;
Table created.

Using Parallel Processing with the PREPROCESSOR Clause

The external tables feature treats each data file specified in the LOCATION clause as a single granule. To make the best use of parallel processing with the PREPROCESSOR clause, the data to be loaded should be split into multiple files (granules), because external tables limit the degree of parallelism to the number of data files present. For example, if you specify a degree of parallelism of 16 but have only 10 data files, then the effective degree of parallelism is 10 because 10 slave processes will be busy and 6 will be idle. It is best not to have any idle slave processes, so any degree of parallelism you specify should ideally be no larger than the number of data files, ensuring that all slave processes are kept busy.
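
As a sketch of this approach, assuming four compressed data files and a zcat preprocessor stored in a separate execdir directory object, the degree of parallelism can be matched to the number of files:

CREATE TABLE sales_xt (recno VARCHAR2(2000))
  ORGANIZATION EXTERNAL
  (TYPE ORACLE_LOADER
   DEFAULT DIRECTORY data_dir
   ACCESS PARAMETERS
     (RECORDS DELIMITED BY NEWLINE
      PREPROCESSOR execdir:'zcat'
      FIELDS (recno char(2000)))
   LOCATION ('sales1.dat.gz', 'sales2.dat.gz', 'sales3.dat.gz', 'sales4.dat.gz'))
  PARALLEL 4;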


Restriction When Using the PREPROCESSOR Clause

  • The PREPROCESSOR clause is not available on databases that use the Database Vault feature.

LANGUAGE

The LANGUAGE clause allows you to specify a language name (for example, FRENCH), from which locale-sensitive information about the data can be derived. The following are some examples of the type of information that can be derived from the language name:

  • Day and month names and their abbreviations

  • Symbols for equivalent expressions for A.M., P.M., A.D., and B.C.

  • Default sorting sequence for character data when the ORDER BY SQL clause is specified

  • Writing direction (right to left or left to right)

  • Affirmative and negative response strings (for example, YES and NO)


See Also:

Oracle Database Globalization Support Guide for a listing of Oracle-supported languages

TERRITORY

The TERRITORY clause allows you to specify a territory name to further determine input data characteristics. For example, in some countries a decimal point is used in numbers rather than a comma (for example, 531.298 instead of 531,298).


See Also:

Oracle Database Globalization Support Guide for a listing of Oracle-supported territories
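
For illustration only, these globalization settings might appear together in the access parameters as follows; the character set, language, and territory names are assumptions, not requirements:

RECORDS DELIMITED BY NEWLINE
  CHARACTERSET WE8ISO8859P1
  LANGUAGE FRENCH
  TERRITORY FRANCE
  FIELDS TERMINATED BY ';'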

DATA IS...ENDIAN

The DATA IS...ENDIAN clause indicates the endianness of data whose byte order may vary depending on the platform that generated the data file. Fields of the following types are affected by this clause:

  • INTEGER

  • UNSIGNED INTEGER

  • FLOAT

  • BINARY_FLOAT

  • DOUBLE

  • BINARY_DOUBLE

  • VARCHAR (numeric count only)

  • VARRAW (numeric count only)

  • Any character datatype in the UTF16 character set

  • Any string specified by RECORDS DELIMITED BY string and in the UTF16 character set

A common platform that generates little-endian data is Windows NT. Big-endian platforms include Sun Solaris and IBM MVS. If the DATA IS...ENDIAN clause is not specified, then the data is assumed to have the same endianness as the platform where the access driver is running. UTF-16 data files may have a byte-order mark (BOM) at the beginning of the file indicating the endianness of the data. This mark will override the DATA IS...ENDIAN clause.

BYTEORDERMARK (CHECK | NOCHECK)

The BYTEORDERMARK clause is used to specify whether the data file should be checked for the presence of a byte-order mark (BOM). This clause is meaningful only when the character set is Unicode.

BYTEORDERMARK NOCHECK indicates that the data file should not be checked for a BOM and that all the data in the data file should be read as data.

BYTEORDERMARK CHECK indicates that the data file should be checked for a BOM. This is the default behavior for a data file in a Unicode character set.

The following are examples of some possible scenarios:

  • If the data is specified as being little-endian or big-endian, CHECK is specified, and it is determined that the specified endianness does not match the data file, then an error is returned. For example, suppose you specify the following:

    DATA IS LITTLE ENDIAN 
    BYTEORDERMARK CHECK 
    

    If the BOM is checked in the Unicode data file and the data is actually big-endian, then an error is returned because you specified little-endian.

  • If a BOM is not found and no endianness is specified with the DATA IS...ENDIAN parameter, then the endianness of the platform is used.

  • If BYTEORDERMARK NOCHECK is specified and the DATA IS...ENDIAN parameter specified an endianness, then that value is used. Otherwise, the endianness of the platform is used.


    See Also:

    "Byte Ordering"

STRING SIZES ARE IN

The STRING SIZES ARE IN clause is used to indicate whether the lengths specified for character strings are in bytes or characters. If this clause is not specified, then the access driver uses the mode that the database uses. Character types with embedded lengths (such as VARCHAR) are also affected by this clause. If this clause is specified, then the embedded lengths are a character count, not a byte count. Specifying STRING SIZES ARE IN CHARACTERS is needed only when loading multibyte character sets, such as UTF16.
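
As a sketch, the clause typically accompanies a multibyte character set specification in the access parameters; the character set name here is an assumption:

RECORDS DELIMITED BY NEWLINE
  CHARACTERSET AL16UTF16
  STRING SIZES ARE IN CHARACTERS
  FIELDS (first_name CHAR(15), last_name CHAR(20))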

LOAD WHEN

The LOAD WHEN condition_spec clause is used to identify the records that should be passed to the database. The evaluation method varies:

  • If the condition_spec references a field in the record, then the clause is evaluated only after all fields have been parsed from the record, but before any NULLIF or DEFAULTIF clauses have been evaluated.

  • If the condition specification references only ranges (and no field names), then the clause is evaluated before the fields are parsed. This is useful for cases where the records in the file that are not to be loaded cannot be parsed into the current record definition without errors.

See "condition_spec".

The following are some examples of using LOAD WHEN:

LOAD WHEN (empid != BLANKS)
LOAD WHEN ((dept_id = "SPORTING GOODS" OR dept_id = "SHOES") AND total_sales != 0)

BADFILE | NOBADFILE

The BADFILE clause names the file to which records are written when they cannot be loaded because of errors. For example, a record is written to the bad file if a field in the data file cannot be converted to the datatype of a column in the external table. Records that fail the LOAD WHEN clause are not written to the bad file but are written to the discard file instead. Also, any errors in using a record from an external table (such as a constraint violation when using INSERT INTO...AS SELECT... from an external table) will not cause the record to be written to the bad file.

The purpose of the bad file is to have one file where all rejected data can be examined and fixed so that it can be loaded. If you do not intend to fix the data, then you can use the NOBADFILE option to prevent creation of a bad file, even if there are bad records.

If you specify BADFILE, then you must specify a file name or you will receive an error.

If neither BADFILE nor NOBADFILE is specified, then the default is to create a bad file if at least one record is rejected. The name of the file will be the table name followed by _%p, and it will have an extension of .bad.

See "[directory object name:] filename".

DISCARDFILE | NODISCARDFILE

The DISCARDFILE clause names the file to which records are written that fail the condition in the LOAD WHEN clause. The discard file is created when the first record to be discarded is encountered. If the same external table is accessed multiple times, then the discard file is rewritten each time. If there is no need to save the discarded records in a separate file, then use NODISCARDFILE.

If you specify DISCARDFILE, then you must specify a file name or you will receive an error.

If neither DISCARDFILE nor NODISCARDFILE is specified, then the default is to create a discard file if at least one record fails the LOAD WHEN clause. The name of the file will be the table name followed by _%p and it will have an extension of .dsc.

See "[directory object name:] filename".

LOGFILE | NOLOGFILE

The LOGFILE clause names the file that contains messages generated by the external tables utility while it was accessing data in the data file. If a log file already exists by the same name, then the access driver reopens that log file and appends new log information to the end. This is different from bad files and discard files, which overwrite any existing file. NOLOGFILE is used to prevent creation of a log file.

If you specify LOGFILE, then you must specify a file name or you will receive an error.

If neither LOGFILE nor NOLOGFILE is specified, then the default is to create a log file. The name of the file will be the table name followed by _%p and it will have an extension of .log.

See "[directory object name:] filename".

SKIP

Skips the specified number of records in the data file before loading. SKIP can be specified only when nonparallel access is being made to the data.

READSIZE

The READSIZE parameter specifies the size of the read buffer used to process records. The size of the read buffer must be at least as big as the largest input record the access driver will encounter. The size is specified with an integer indicating the number of bytes. The default value is 512 KB (524288 bytes). You must specify a larger value if any of the records in the data file are larger than 512 KB. There is no limit on how large READSIZE can be, but practically, it is limited by the largest amount of memory that can be allocated by the access driver.

The amount of memory available for allocation is another limit, because additional buffers might be allocated. The additional buffer is used to correctly complete the processing of any records that may have been split, either in the data, at the delimiter, or, if multicharacter or multibyte delimiters are used, within the delimiter itself.

DISABLE_DIRECTORY_LINK_CHECK

By default, the ORACLE_LOADER access driver checks before opening data and log files to ensure that the directory being used is not a symbolic link. The DISABLE_DIRECTORY_LINK_CHECK parameter (which takes no arguments) directs the access driver to bypass this check, allowing you to use files for which the parent directory may be a symbolic link.


Caution:

Use of this parameter involves security risks because symbolic links can potentially be used to redirect the input/output of the external table load operation.

DATE_CACHE

DATE_CACHE specifies the date cache size (in entries). For example, DATE_CACHE=5000 specifies that each date cache created can contain a maximum of 5000 unique date entries. Every table has its own date cache, if one is needed. A date cache is created only if at least one date or timestamp value is loaded that requires datatype conversion in order to be stored in the table.

The date cache feature is enabled by default, with a size of 1000 elements. If the default size is used and the number of unique input values loaded exceeds 1000, then the date cache feature is automatically disabled for that table. However, if you override the default and specify a nonzero date cache size and that size is exceeded, then the cache is not disabled. To completely disable the date cache feature, set DATE_CACHE to 0.

You can use the date cache statistics (entries, hits, and misses) contained in the log file to tune the size of the cache for future similar loads.
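
As a minimal sketch, these record format parameters might be combined in the access parameters as follows; the specific values are arbitrary illustrations:

RECORDS DELIMITED BY NEWLINE
  SKIP 1
  READSIZE 1048576
  DATE_CACHE 5000
  FIELDS TERMINATED BY ','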

string

A string is a quoted series of characters or hexadecimal digits. If it is a series of characters, then those characters will be converted into the character set of the data file. If it is a series of hexadecimal digits, then there must be an even number of hexadecimal digits. The hexadecimal digits are converted into their binary translation, and the translation is treated as a character string in the character set of the data file. This means that once the hexadecimal digits have been converted into their binary translation, there is no other character set translation that occurs. The syntax for a string is as follows:

[Syntax diagram: et_string.gif]

condition_spec

The condition_spec is an expression that evaluates to either true or false. It specifies one or more conditions that are joined by Boolean operators. The conditions and Boolean operators are evaluated from left to right. (Boolean operators are applied after the conditions are evaluated.) Parentheses can be used to override the default order of evaluation of Boolean operators. The evaluation of condition_spec clauses slows record processing, so these clauses should be used sparingly. The syntax for condition_spec is as follows:

[Syntax diagram: et_cond_spec.gif]

Note that if the condition specification contains any conditions that reference field names, then the condition specifications are evaluated only after all fields have been found in the record and after blank trimming has been done. It is not useful to compare a field to BLANKS if blanks have been trimmed from the field.

The following are some examples of using condition_spec:

empid = BLANKS OR last_name = BLANKS
(dept_id = SPORTING GOODS OR dept_id = SHOES) AND total_sales != 0

See Also:

"condition"

[directory object name:] filename

This clause is used to specify the name of an output file (BADFILE, DISCARDFILE, or LOGFILE). The directory object name is the name of a directory object where the user accessing the external table has privileges to write. If the directory object name is omitted, then the value specified for the DEFAULT DIRECTORY clause in the CREATE TABLE...ORGANIZATION EXTERNAL statement is used.

The filename parameter is the name of the file to create in the directory object. The access driver does some symbol substitution to help make file names unique in parallel loads. The symbol substitutions supported for UNIX and Windows NT are as follows (other platforms may have different symbols):

  • %p is replaced by the process ID of the current process. For example, if the process ID of the access driver is 12345, then exttab_%p.log becomes exttab_12345.log.

  • %a is replaced by the agent number of the current process. The agent number is the unique number assigned to each parallel process accessing the external table. This number is padded to the left with zeros to fill three characters. For example, if the third parallel agent is creating a file and bad_data_%a.bad was specified as the file name, then the agent would create a file named bad_data_003.bad.

  • %% is replaced by %. If there is a need to have a percent sign in the file name, then this symbol substitution is used.

If the % character is encountered followed by anything other than one of the preceding characters, then an error is returned.

If %p or %a is not used to create unique file names for output files and an external table is being accessed in parallel, then output files may be corrupted or agents may be unable to write to the files.

If you specify BADFILE (or DISCARDFILE or LOGFILE), then you must specify a file name for it or you will receive an error. However, if you do not specify BADFILE (or DISCARDFILE or LOGFILE), then the access driver uses the name of the table followed by _%p as the name of the file. If no extension is supplied for the file, then a default extension will be used. For bad files, the default extension is .bad; for discard files, the default is .dsc; and for log files, the default is .log.

condition

A condition compares a range of bytes or a field from the record against a constant string. The source of the comparison can be either a field in the record or a byte range in the record. The comparison is done on a byte-by-byte basis. If a string is specified as the target of the comparison, then it will be translated into the character set of the data file. If the field has a noncharacter datatype, then no datatype conversion is performed on either the field value or the string. The syntax for a condition is as follows:

[Syntax diagram: et_condition.gif]

range start : range end

This clause describes a range of bytes or characters in the record to use for a condition. The value used for the STRING SIZES ARE clause determines whether range refers to bytes or characters. The range start and range end are byte or character offsets into the record. The range start must be less than or equal to the range end. Finding ranges of characters is faster for data in fixed-width character sets than it is for data in varying-width character sets. If the range refers to parts of the record that do not exist, then the record is rejected when an attempt is made to reference the range. The range start:range end must be enclosed in parentheses. For example, (10:13).


Note:

The data file should not mix binary data (including datatypes with binary counts, such as VARCHAR) and character data that is in a varying-width character set or more than one byte wide. In these cases, the access driver may not find the correct start for the field, because it treats the binary data as character data when trying to find the start.

The following are some examples of using condition:

LOAD WHEN empid != BLANKS
LOAD WHEN (10:13) = 0x'00000830'
LOAD WHEN PRODUCT_COUNT = "MISSING"

IO_OPTIONS clause

The IO_OPTIONS clause allows you to specify I/O options used by the operating system for reading the data files. The only options available for specification are DIRECTIO and NODIRECTIO (the default).

If the DIRECTIO option is specified, then an attempt is made to open the data file and read it using direct I/O. If successful, then the operating system and NFS server (if the file is on an NFS server) do not cache the data read from the file. This can improve the read performance for the data file, especially if the file is large. If the DIRECTIO option is not supported for the data file being read, then the file is opened and read but the DIRECTIO option is ignored.

If the NODIRECTIO option is specified or if the IO_OPTIONS clause is not specified at all, then direct I/O is not used to read the data files.
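
A sketch of requesting direct I/O in the access parameters follows; the surrounding clauses are illustrative:

RECORDS DELIMITED BY NEWLINE
  IO_OPTIONS (DIRECTIO)
  FIELDS TERMINATED BY ','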

field_definitions Clause

The field_definitions clause names the fields in the data file and specifies how to find them in records.

If the field_definitions clause is omitted, then the following is assumed:

  • The fields are delimited by ','

  • The fields are of datatype CHAR

  • The maximum length of the field is 255

  • The order of the fields in the data file is the order in which the fields were defined in the external table

  • No blanks are trimmed from the field

The following is an example of an external table created without any access parameters. It is followed by a sample data file, info.dat, that can be used to load it.

CREATE TABLE emp_load (first_name CHAR(15), last_name CHAR(20), year_of_birth CHAR(4))
 ORGANIZATION EXTERNAL (TYPE ORACLE_LOADER DEFAULT DIRECTORY ext_tab_dir LOCATION ('info.dat'));

Alvin,Tolliver,1976
Kenneth,Baer,1963

The syntax for the field_definitions clause is as follows:

[Syntax diagram: et_fields_clause.gif]

IGNORE_CHARS_AFTER_EOR

This optional parameter specifies that extraneous characters found after the end-of-record, and that do not satisfy the record definition, are to be ignored.

Error messages are written to the external tables log file if all four of the following conditions apply:

  • The IGNORE_CHARS_AFTER_EOR parameter is set or the field allows free formatting (that is, the field is specified by a delimiter or enclosure character(s) and/or the field is variable length)

  • Characters remain after the end-of-record

  • The access parameter MISSING FIELD VALUES ARE NULL is not set

  • The field does not have absolute positioning

The error messages that get written to the external tables log file are as follows:

KUP-04021: field formatting error for field Col1
KUP-04023: field start is after end of record
KUP-04101: record 2 rejected in file /home/oracle/datafiles/example.dat

delim_spec Clause

The delim_spec clause is used to identify how all fields are terminated in the record. The delim_spec specified for all fields can be overridden for a particular field as part of the field_list clause. For a full description of the syntax, see "delim_spec".

trim_spec Clause

The trim_spec clause specifies the type of whitespace trimming to be performed by default on all character fields. The trim_spec clause specified for all fields can be overridden for individual fields by specifying a trim_spec clause for those fields. For a full description of the syntax, see "trim_spec".

MISSING FIELD VALUES ARE NULL

MISSING FIELD VALUES ARE NULL indicates that if there is not enough data in a record for all fields, then those fields with missing data values are set to NULL. For a full description of the syntax, see "MISSING FIELD VALUES ARE NULL".

REJECT ROWS WITH ALL NULL FIELDS

REJECT ROWS WITH ALL NULL FIELDS indicates that a row will not be loaded into the external table if all referenced fields in the row are null. If this parameter is not specified, then the default value is to accept rows with all null fields. The setting of this parameter is written to the log file either as "reject rows with all null fields" or as "rows with all null fields are accepted."

field_list Clause

The field_list clause identifies the fields in the data file and their datatypes. For a full description of the syntax, see "field_list".

delim_spec

The delim_spec clause is used to find the end (and if ENCLOSED BY is specified, the start) of a field. Its syntax is as follows:

[Syntax diagram: et_delim_spec.gif]

If ENCLOSED BY is specified, then the access driver starts at the current position in the record and skips over all whitespace looking for the first delimiter. All whitespace between the current position and the first delimiter is ignored. Next, the access driver looks for the second enclosure delimiter (or looks for the first one again if a second one is not specified). Everything between those two delimiters is considered part of the field.

If TERMINATED BY string is specified with the ENCLOSED BY clause, then the terminator string must immediately follow the second enclosure delimiter. Any whitespace between the second enclosure delimiter and the terminating delimiter is skipped. If anything other than whitespace is found between the two delimiters, then the row is rejected for being incorrectly formatted.

If TERMINATED BY is specified without the ENCLOSED BY clause, then everything between the current position in the record and the next occurrence of the termination string is considered part of the field.

If OPTIONALLY is specified, then TERMINATED BY must also be specified. The OPTIONALLY parameter means the ENCLOSED BY delimiters can either both be present or both be absent. The terminating delimiter must be present regardless of whether the ENCLOSED BY delimiters are present. If OPTIONALLY is specified, then the access driver skips over all whitespace, looking for the first nonblank character. Once the first nonblank character is found, the access driver checks to see if the current position contains the first enclosure delimiter. If it does, then the access driver finds the second enclosure string and everything between the first and second enclosure delimiters is considered part of the field. The terminating delimiter must immediately follow the second enclosure delimiter (with optional whitespace allowed between the second enclosure delimiter and the terminating delimiter). If the first enclosure string is not found at the first nonblank character, then the access driver looks for the terminating delimiter. In this case, leading blanks are trimmed.


See Also:

Table 10-5 for a description of the access driver's default trimming behavior. You can override this behavior with LTRIM and RTRIM.

After the delimiters have been found, the current position in the record is set to the spot after the last delimiter for the field. If TERMINATED BY WHITESPACE was specified, then the current position in the record is set to after all whitespace following the field.

A missing terminator for the last field in the record is not an error. The access driver proceeds as if the terminator was found. It is an error if the second enclosure delimiter is missing.

The string used for the second enclosure can be included in the data field by including the second enclosure twice. For example, if a field is enclosed by single quotation marks, then it could contain a single quotation mark by specifying two single quotation marks in a row, as shown in the word don't in the following example:

'I don''t like green eggs and ham'

There is no way to quote a terminator string in the field data without using enclosing delimiters. Because the field parser does not look for the terminating delimiter until after it has found the enclosing delimiters, the field can contain the terminating delimiter.

In general, specifying single characters for the strings is faster than multiple characters. Also, searching data in fixed-width character sets is usually faster than searching data in varying-width character sets.


Note:

The use of the backslash character (\) within strings is not supported in external tables.

Example: External Table with Terminating Delimiters

The following is an example of an external table that uses terminating delimiters. It is followed by a sample of the data file that can be used to load it.

CREATE TABLE emp_load (first_name CHAR(15), last_name CHAR(20), year_of_birth CHAR(4))
  ORGANIZATION EXTERNAL (TYPE ORACLE_LOADER DEFAULT DIRECTORY ext_tab_dir
                         ACCESS PARAMETERS (FIELDS TERMINATED BY WHITESPACE)
                         LOCATION ('info.dat'));

Alvin Tolliver 1976
Kenneth Baer 1963
Mary Dube 1973

Example: External Table with Enclosure and Terminator Delimiters

The following is an example of an external table that uses both enclosure and terminator delimiters. Remember that all whitespace between a terminating string and the first enclosure string is ignored, as is all whitespace between a second enclosing delimiter and the terminator. The example is followed by a sample of the data file that can be used to load it.

CREATE TABLE emp_load (first_name CHAR(15), last_name CHAR(20), year_of_birth CHAR(4)) 
  ORGANIZATION EXTERNAL (TYPE ORACLE_LOADER DEFAULT DIRECTORY ext_tab_dir
                        ACCESS PARAMETERS (FIELDS TERMINATED BY "," ENCLOSED BY "("  AND ")")
                        LOCATION ('info.dat'));

(Alvin) ,   (Tolliver),(1976)
(Kenneth),  (Baer) ,(1963)
(Mary),(Dube) ,   (1973)

Example: External Table with Optional Enclosure Delimiters

The following is an example of an external table that uses optional enclosure delimiters. Note that LRTRIM is used to trim leading and trailing blanks from fields. The example is followed by a sample of the data file that can be used to load it.

CREATE TABLE emp_load (first_name CHAR(15), last_name CHAR(20), year_of_birth CHAR(4))
  ORGANIZATION EXTERNAL (TYPE ORACLE_LOADER DEFAULT DIRECTORY ext_tab_dir
                         ACCESS PARAMETERS (FIELDS TERMINATED BY ','
                                            OPTIONALLY ENCLOSED BY '(' and ')'
                                            LRTRIM)
                         LOCATION ('info.dat'));

Alvin ,   Tolliver , 1976
(Kenneth),  (Baer), (1963)
( Mary ), Dube ,    (1973)

trim_spec

The trim_spec clause is used to specify that spaces should be trimmed from the beginning of a text field, the end of a text field, or both. Spaces include blanks and other nonprinting characters such as tabs, line feeds, and carriage returns. The syntax for the trim_spec clause is as follows:

[Syntax diagram: et_trim_spec.gif]

NOTRIM indicates that no characters will be trimmed from the field.

LRTRIM, LTRIM, and RTRIM are used to indicate that characters should be trimmed from the field. LRTRIM means that both leading and trailing spaces are trimmed. LTRIM means that leading spaces will be trimmed. RTRIM means trailing spaces are trimmed.

LDRTRIM is used to provide compatibility with SQL*Loader trim features. It is the same as NOTRIM except in the following cases:

  • If the field is not a delimited field, then spaces will be trimmed from the right.

  • If the field is a delimited field with OPTIONALLY ENCLOSED BY specified, and the optional enclosures are missing for a particular instance, then spaces will be trimmed from the left.

The default is LDRTRIM. Specifying NOTRIM yields the fastest performance.

The trim_spec clause can be specified before the field list to set the default trimming for all fields. If trim_spec is omitted before the field list, then LDRTRIM is the default trim setting. The default trimming can be overridden for an individual field as part of the datatype_spec.

If trimming is specified for a field that is all spaces, then the field will be set to NULL.

In the following example, all data is fixed-length; however, the character data will not be loaded with leading spaces. The example is followed by a sample of the data file that can be used to load it.

CREATE TABLE emp_load (first_name CHAR(15), last_name CHAR(20),
year_of_birth CHAR(4))
  ORGANIZATION EXTERNAL (TYPE ORACLE_LOADER DEFAULT DIRECTORY ext_tab_dir
                         ACCESS PARAMETERS (FIELDS LTRIM)
                         LOCATION ('info.dat'));

Alvin,           Tolliver,1976
Kenneth,         Baer,    1963
Mary,            Dube,    1973

MISSING FIELD VALUES ARE NULL

MISSING FIELD VALUES ARE NULL indicates that if there is not enough data in a record for all fields, then those fields with missing data values are set to NULL. If MISSING FIELD VALUES ARE NULL is not specified, and there is not enough data in the record for all fields, then the row is rejected.

In the following example, the second record is stored with a NULL set for the year_of_birth column, even though the data for the year of birth is missing from the data file. If the MISSING FIELD VALUES ARE NULL clause was omitted from the access parameters, then the second row would be rejected because it did not have a value for the year_of_birth column. The example is followed by a sample of the data file that can be used to load it.

CREATE TABLE emp_load (first_name CHAR(15), last_name CHAR(20), year_of_birth INT)
  ORGANIZATION EXTERNAL (TYPE ORACLE_LOADER DEFAULT DIRECTORY ext_tab_dir
                         ACCESS PARAMETERS (FIELDS TERMINATED BY ","
                                            MISSING FIELD VALUES ARE NULL)
                         LOCATION ('info.dat'));
 
Alvin,Tolliver,1976
Baer,Kenneth
Mary,Dube,1973

field_list

The field_list clause identifies the fields in the data file and their datatypes. Evaluation criteria for the field_list clause are as follows:

  • If no datatype is specified for a field, then it is assumed to be CHAR(1) for a nondelimited field, and CHAR(255) for a delimited field.

  • If no field list is specified, then the fields in the data file are assumed to be in the same order as the fields in the external table. The datatype for all fields is CHAR(255) unless the column in the database is CHAR or VARCHAR. If the column in the database is CHAR or VARCHAR, then the datatype for the field is still CHAR but the length is either 255 or the length of the column, whichever is greater.

  • If no field list is specified and no delim_spec clause is specified, then the fields in the data file are assumed to be in the same order as fields in the external table. All fields are assumed to be CHAR(255) and terminated by a comma.

This example shows the definition for an external table with no field_list and a delim_spec. It is followed by a sample of the data file that can be used to load it.

CREATE TABLE emp_load (first_name CHAR(15), last_name CHAR(20), year_of_birth INT)
  ORGANIZATION EXTERNAL (TYPE ORACLE_LOADER DEFAULT DIRECTORY ext_tab_dir
                         ACCESS PARAMETERS (FIELDS TERMINATED BY "|")
                         LOCATION ('info.dat'));

Alvin|Tolliver|1976
Kenneth|Baer|1963
Mary|Dube|1973

The syntax for the field_list clause is as follows:

[Syntax diagram: et_field_list.gif]

field_name

The field_name is a string identifying the name of a field in the data file. If the string is not within quotation marks, then the name is uppercased when matching field names with column names in the external table.

If field_name matches the name of a column in the external table that is referenced in the query, then the field value is used for the value of that external table column. If the name does not match any referenced name in the external table, then the field is not loaded but can be used for clause evaluation (for example WHEN or NULLIF).

pos_spec

The pos_spec clause indicates the position of the column within the record. For a full description of the syntax, see "pos_spec Clause".

datatype_spec

The datatype_spec clause indicates the datatype of the field. If datatype_spec is omitted, then the access driver assumes the datatype is CHAR(255). For a full description of the syntax, see "datatype_spec Clause".

init_spec

The init_spec clause indicates when a field is NULL or has a default value. For a full description of the syntax, see "init_spec Clause".

pos_spec Clause

The pos_spec clause indicates the position of the column within the record. The setting of the STRING SIZES ARE IN clause determines whether pos_spec refers to byte positions or character positions. Using character positions with varying-width character sets takes significantly longer than using character positions with fixed-width character sets. Binary and multibyte character data should not be present in the same data file when pos_spec is used for character positions. If they are, then the results are unpredictable. The syntax for the pos_spec clause is as follows:

[Syntax diagram: et_position_spec.gif]

start

The start parameter is the number of bytes or characters from the beginning of the record to where the field begins. It positions the start of the field at an absolute spot in the record rather than relative to the position of the previous field.

*

The * parameter indicates that the field begins at the first byte or character after the end of the previous field. This is useful if you have a varying-length field followed by a fixed-length field. This option cannot be used for the first field in the record.

increment

The increment parameter positions the start of the field at a fixed number of bytes or characters from the end of the previous field. Use *-increment to indicate that the start of the field starts before the current position in the record (this is a costly operation for multibyte character sets). Use *+increment to move the start after the current position.

end

The end parameter indicates the absolute byte or character offset into the record for the last byte of the field. If start is specified along with end, then end cannot be less than start. If * or increment is specified along with end, and the start evaluates to an offset larger than the end for a particular record, then that record will be rejected.

length

The length parameter indicates that the end of the field is a fixed number of bytes or characters from the start. It is useful for fixed-length fields when the start is specified with *.

The following example shows various ways of using pos_spec. It is followed by a sample of the data file that can be used to load it.

CREATE TABLE emp_load (first_name CHAR(15),
                      last_name CHAR(20),
                      year_of_birth INT,
                      phone CHAR(12),
                      area_code CHAR(3),
                      exchange CHAR(3),
                      extension CHAR(4))
  ORGANIZATION EXTERNAL
  (TYPE ORACLE_LOADER
   DEFAULT DIRECTORY ext_tab_dir
   ACCESS PARAMETERS
     (FIELDS RTRIM
            (first_name (1:15) CHAR(15),
             last_name (*:+20),
             year_of_birth (36:39),
             phone (40:52),
             area_code (*-12: +3),
             exchange (*+1: +3),
             extension (*+1: +4)))
   LOCATION ('info.dat'));

Alvin          Tolliver            1976415-922-1982
Kenneth        Baer                1963212-341-7912
Mary           Dube                1973309-672-2341

datatype_spec Clause

The datatype_spec clause is used to describe the datatype of a field in the data file if the datatype is different than the default. The datatype of the field can be different than the datatype of a corresponding column in the external table. The access driver handles the necessary conversions. The syntax for the datatype_spec clause is as follows:

[Syntax diagram: et_datatype_spec.gif]

If the number of bytes or characters in any field is 0, then the field is assumed to be NULL. The optional DEFAULTIF clause specifies when the field is set to its default value. Also, the optional NULLIF clause specifies other conditions for when the column associated with the field is set to NULL. If the DEFAULTIF or NULLIF clause is true, then the actions of those clauses override whatever values are read from the data file.


[UNSIGNED] INTEGER [EXTERNAL] [(len)]

This clause defines a field as an integer. If EXTERNAL is specified, then the number is a character string. If EXTERNAL is not specified, then the number is a binary field. The valid values for len in binary integer fields are 1, 2, 4, and 8. If len is omitted for binary integers, then the default value is whatever the value of sizeof(int) is on the platform where the access driver is running. Use of the DATA IS {BIG | LITTLE} ENDIAN clause may cause the data to be byte-swapped before it is stored.

If EXTERNAL is specified, then the value of len is the number of bytes or characters in the number (depending on the setting of the STRING SIZES ARE IN BYTES or CHARACTERS clause). If no length is specified, then the default value is 255.

The default value of the [UNSIGNED] INTEGER [EXTERNAL] [(len)] datatype is determined as follows:

  • If no length is specified, then the default length is 1.

  • If no length is specified and the field is delimited with a DELIMITED BY NEWLINE clause, then the default length is 1.

  • If no length is specified and the field is delimited with a DELIMITED BY clause, then the default length is 255 (unless the delimiter is NEWLINE, as stated above).
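
For illustration, a delimited field list might declare character-form integers as follows; the field names and lengths are assumptions:

FIELDS TERMINATED BY ','
  (item_id INTEGER EXTERNAL(6),
   qty     INTEGER EXTERNAL)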

DECIMAL [EXTERNAL] and ZONED [EXTERNAL]

The DECIMAL clause is used to indicate that the field is a packed decimal number. The ZONED clause is used to indicate that the field is a zoned decimal number. The precision field indicates the number of digits in the number. The scale field is used to specify the location of the decimal point in the number. It is the number of digits to the right of the decimal point. If scale is omitted, then a value of 0 is assumed.

Note that there are different encoding formats of zoned decimal numbers depending on whether the character set being used is EBCDIC-based or ASCII-based. If the character set of the source data is EBCDIC-based, then the zoned decimal numbers in that file must match the EBCDIC encoding. If the character set is ASCII-based, then the numbers must match the ASCII encoding.

If the EXTERNAL parameter is specified, then the data field is a character string whose length matches the precision of the field.

ORACLE_DATE

ORACLE_DATE is a field containing a date in the Oracle binary date format. This is the format used by the DTYDAT datatype in Oracle Call Interface (OCI) programs. The field is a fixed length of 7.

ORACLE_NUMBER

ORACLE_NUMBER is a field containing a number in the Oracle number format. The field is a fixed length (the maximum size of an Oracle number field) unless COUNTED is specified, in which case the first byte of the field contains the number of bytes in the rest of the field.

ORACLE_NUMBER is a fixed-length 22-byte field. The length of an ORACLE_NUMBER COUNTED field is one for the count byte, plus the number of bytes specified in the count byte.

Floating-Point Numbers

The following four datatypes, DOUBLE, FLOAT, BINARY_DOUBLE, and BINARY_FLOAT, are floating-point numbers.

DOUBLE and FLOAT are the floating-point formats used natively on the platform in use. They are the same datatypes used by default for the DOUBLE and FLOAT datatypes in a C program on that platform. BINARY_FLOAT and BINARY_DOUBLE are floating-point numbers that conform substantially with the Institute for Electrical and Electronics Engineers (IEEE) Standard for Binary Floating-Point Arithmetic, IEEE Standard 754-1985. Because most platforms use the IEEE standard as their native floating-point format, FLOAT and BINARY_FLOAT are the same on those platforms and DOUBLE and BINARY_DOUBLE are also the same.


Note:

See Oracle Database SQL Language Reference for more information about floating-point numbers

DOUBLE

The DOUBLE clause indicates that the field is the same format as the C language DOUBLE datatype on the platform where the access driver is executing. Use of the DATA IS {BIG | LITTLE} ENDIAN clause may cause the data to be byte-swapped before it is stored. This datatype may not be portable between certain platforms.

FLOAT [EXTERNAL]

The FLOAT clause indicates that the field is the same format as the C language FLOAT datatype on the platform where the access driver is executing. Use of the DATA IS {BIG | LITTLE} ENDIAN clause may cause the data to be byte-swapped before it is stored. This datatype may not be portable between certain platforms.

If the EXTERNAL parameter is specified, then the field is a character string whose maximum length is 255.

BINARY_DOUBLE

BINARY_DOUBLE is a 64-bit, double-precision, floating-point number datatype. Each BINARY_DOUBLE value requires 9 bytes, including a length byte. See the information in the note provided for the FLOAT datatype for more details about floating-point numbers.

BINARY_FLOAT

BINARY_FLOAT is a 32-bit, single-precision, floating-point number datatype. Each BINARY_FLOAT value requires 5 bytes, including a length byte. See the information in the note provided for the FLOAT datatype for more details about floating-point numbers.

RAW

The RAW clause is used to indicate that the source data is binary data. The len for RAW fields is always in number of bytes. When a RAW field is loaded in a character column, the data that is written into the column is the hexadecimal representation of the bytes in the RAW field.

CHAR

The CHAR clause is used to indicate that a field is a character datatype. The length (len) for CHAR fields specifies the largest number of bytes or characters in the field. The len is in bytes or characters, depending on the setting of the STRING SIZES ARE IN clause.

If no length is specified for a field of datatype CHAR, then the size of the field is assumed to be 1, unless the field is delimited:

  • For a delimited CHAR field, if a length is specified, then that length is used as a maximum.

  • For a delimited CHAR field for which no length is specified, the default is 255 bytes.

  • For a delimited CHAR field that is greater than 255 bytes, you must specify a maximum length. Otherwise you will receive an error stating that the field in the data file exceeds maximum length.

The date_format_spec clause is used to indicate that the field contains a date or time in the specified format.

The following example shows the use of the CHAR clause.

SQL> CREATE TABLE emp_load
  2    (employee_number      CHAR(5),
  3     employee_dob         CHAR(20),
  4     employee_last_name   CHAR(20),
  5     employee_first_name  CHAR(15),
  6     employee_middle_name CHAR(15),
  7     employee_hire_date   DATE)
  8  ORGANIZATION EXTERNAL
  9    (TYPE ORACLE_LOADER
 10     DEFAULT DIRECTORY def_dir1
 11     ACCESS PARAMETERS
 12       (RECORDS DELIMITED BY NEWLINE
 13        FIELDS (employee_number      CHAR(2),
 14                employee_dob         CHAR(20),
 15                employee_last_name   CHAR(18),
 16                employee_first_name  CHAR(11),
 17                employee_middle_name CHAR(11),
 18                employee_hire_date   CHAR(10) date_format DATE mask "mm/dd/yyyy"
 19               )
 20       )
 21     LOCATION ('info.dat')
 22    );
 
Table created.

date_format_spec

The date_format_spec clause is used to indicate that a character string field contains date data, time data, or both, in a specific format. This information is used only when a character field is converted to a date or time datatype and only when a character string field is mapped into a date column.

For detailed information about the correct way to specify date and time formats, see Oracle Database SQL Language Reference.

The syntax for the date_format_spec clause is as follows:

[Syntax diagram: et_dateformat.gif]

DATE

The DATE clause indicates that the string contains a date.

MASK

The MASK clause is used to override the default globalization format mask for the datatype. If a date mask is not specified, then the settings of NLS parameters for the database (not the session settings) for the appropriate globalization parameter for the datatype are used. The NLS_DATABASE_PARAMETERS view shows these settings.

  • NLS_DATE_FORMAT for DATE datatypes

  • NLS_TIMESTAMP_FORMAT for TIMESTAMP datatypes

  • NLS_TIMESTAMP_TZ_FORMAT for TIMESTAMP WITH TIME ZONE datatypes

Please note the following:

  • The database setting for the NLS_NUMERIC_CHARACTERS initialization parameter (that is, from the NLS_DATABASE_PARAMETERS view) governs the decimal separator for implicit conversion from character to numeric datatypes.

  • A group separator is not allowed in the default format.

TIMESTAMP

The TIMESTAMP clause indicates that a field contains a formatted timestamp.

INTERVAL

The INTERVAL clause indicates that a field contains a formatted interval. The type of interval can be either YEAR TO MONTH or DAY TO SECOND.

The following example shows a sample use of a complex DATE character string and a TIMESTAMP character string. It is followed by a sample of the data file that can be used to load it.

SQL> CREATE TABLE emp_load
  2    (employee_number      CHAR(5),
  3     employee_dob         CHAR(20),
  4     employee_last_name   CHAR(20),
  5     employee_first_name  CHAR(15),
  6     employee_middle_name CHAR(15),
  7     employee_hire_date   DATE,
  8     rec_creation_date    TIMESTAMP WITH TIME ZONE)
  9  ORGANIZATION EXTERNAL
 10    (TYPE ORACLE_LOADER
 11     DEFAULT DIRECTORY def_dir1
 12     ACCESS PARAMETERS
 13       (RECORDS DELIMITED BY NEWLINE
 14        FIELDS (employee_number      CHAR(2),
 15                employee_dob         CHAR(20),
 16                employee_last_name   CHAR(18),
 17                employee_first_name  CHAR(11),
 18                employee_middle_name CHAR(11),
 19                employee_hire_date   CHAR(22) date_format DATE mask "mm/dd/yyyy hh:mi:ss AM",
 20                rec_creation_date    CHAR(35) date_format TIMESTAMP WITH TIME ZONE mask "DD-MON-RR HH.MI.SSXFF AM TZH:TZM"
 21               )
 22       )
 23     LOCATION ('infoc.dat')
 24    );
 
Table created.
SQL> SELECT * FROM emp_load;
 
EMPLO EMPLOYEE_DOB         EMPLOYEE_LAST_NAME   EMPLOYEE_FIRST_ EMPLOYEE_MIDDLE
----- -------------------- -------------------- --------------- ---------------
EMPLOYEE_
---------
REC_CREATION_DATE
---------------------------------------------------------------------------
56    november, 15, 1980   baker                mary            alice
01-SEP-04
01-DEC-04 11.22.03.034567 AM -08:00
 
87    december, 20, 1970   roper                lisa            marie
01-JAN-02
01-DEC-02 02.03.00.678573 AM -08:00
 
 
2 rows selected.

The infoc.dat file used in the example looks like the following. Note that it contains two long records. There is one space between the date fields (09/01/2004, 01/01/2002) and the time field that follows.

56november, 15, 1980  baker             mary       alice      09/01/2004 08:23:01 AM01-DEC-04 11.22.03.034567 AM -08:00
87december, 20, 1970  roper             lisa       marie      01/01/2002 02:44:55 PM01-DEC-02 02.03.00.678573 AM -08:00

VARCHAR and VARRAW

The VARCHAR datatype has a binary count field followed by character data. The value in the binary count field is either the number of bytes in the field or the number of characters. See "STRING SIZES ARE IN" for information about how to specify whether the count is interpreted as a count of characters or count of bytes.

The VARRAW datatype has a binary count field followed by binary data. The value in the binary count field is the number of bytes of binary data. The data in the VARRAW field is not affected by the DATA IS...ENDIAN clause.

The VARIABLE 2 clause in the ACCESS PARAMETERS clause specifies the size of the binary field that contains the length.

The optional length_of_length field in the specification is the number of bytes in the count field. Valid values for length_of_length for VARCHAR are 1, 2, 4, and 8. If length_of_length is not specified, then a value of 2 is used. The count field has the same endianness as specified by the DATA IS...ENDIAN clause.

The max_len field is used to indicate the largest size of any instance of the field in the data file. For VARRAW fields, max_len is number of bytes. For VARCHAR fields, max_len is either number of characters or number of bytes depending on the STRING SIZES ARE IN clause.

The following example shows various uses of VARCHAR and VARRAW. The content of the data file, info.dat, is shown following the example.

CREATE TABLE emp_load
             (first_name CHAR(15),
              last_name CHAR(20),
              resume CHAR(2000),
              picture RAW(2000))
  ORGANIZATION EXTERNAL
  (TYPE ORACLE_LOADER
   DEFAULT DIRECTORY ext_tab_dir
   ACCESS PARAMETERS
     (RECORDS
        VARIABLE 2
        DATA IS BIG ENDIAN
        CHARACTERSET US7ASCII
      FIELDS (first_name VARCHAR(2,12),
              last_name VARCHAR(2,20),
              resume VARCHAR(4,10000),
              picture VARRAW(4,100000)))
    LOCATION ('info.dat'));

Contents of info.dat Data File

The contents of the data file used in the example are as follows:

0005Alvin0008Tolliver0000001DAlvin Tolliver's Resume etc. 0000001013f4690a30bc29d7e40023ab4599ffff

It is important to understand that, for the purposes of readable documentation, the binary values for the count bytes and the values for the raw data are shown in the data file in italics, with 2 characters per binary byte. The values in an actual data file would be in binary format, not ASCII. Therefore, if you attempt to use this example by cutting and pasting, then you will receive an error.

VARCHARC and VARRAWC

The VARCHARC datatype has a character count field followed by character data. The value in the count field is either the number of bytes in the field or the number of characters. See "STRING SIZES ARE IN" for information about how to specify whether the count is interpreted as a count of characters or count of bytes. The optional length_of_length is either the number of bytes or the number of characters in the count field for VARCHARC, depending on whether lengths are being interpreted as characters or bytes.

The maximum value for length_of_lengths for VARCHARC is 10 if string sizes are in characters, and 20 if string sizes are in bytes. The default value for length_of_length is 5.

The VARRAWC datatype has a character count field followed by binary data. The value in the count field is the number of bytes of binary data. The length_of_length is the number of bytes in the count field.

The max_len field is used to indicate the largest size of any instance of the field in the data file. For VARRAWC fields, max_len is number of bytes. For VARCHARC fields, max_len is either number of characters or number of bytes depending on the STRING SIZES ARE IN clause.

The following example shows various uses of VARCHARC and VARRAWC. The length of the picture field is 0, which means the field is set to NULL.

CREATE TABLE emp_load
             (first_name CHAR(15),
              last_name CHAR(20),
              resume CHAR(2000),
              picture RAW (2000))
  ORGANIZATION EXTERNAL
  (TYPE ORACLE_LOADER
    DEFAULT DIRECTORY ext_tab_dir
    ACCESS PARAMETERS
      (FIELDS (first_name VARCHARC(5,12),
               last_name VARCHARC(2,20),
               resume VARCHARC(4,10000),
               picture VARRAWC(4,100000)))
  LOCATION ('info.dat'));

The contents of the data file used in the example are as follows:

00007William05Ricca0035Resume for William Ricca is missing0000

init_spec Clause

The init_spec clause is used to specify when a field should be set to NULL or when it should be set to a default value. The syntax for the init_spec clause is as follows:

Description of et_init_spec.gif follows
Description of the illustration et_init_spec.gif

Only one NULLIF clause and only one DEFAULTIF clause can be specified for any field. These clauses behave as follows:

  • If NULLIF condition_spec is specified and it evaluates to true, then the field is set to NULL.

  • If DEFAULTIF condition_spec is specified and it evaluates to true, then the value of the field is set to a default value. The default value depends on the datatype of the field, as follows:

    • For a character datatype, the default value is an empty string.

    • For a numeric datatype, the default value is 0.

    • For a date datatype, the default value is NULL.

  • If a NULLIF clause and a DEFAULTIF clause are both specified for a field, then the NULLIF clause is evaluated first and the DEFAULTIF clause is evaluated only if the NULLIF clause evaluates to false.
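
For example, the following sketch of an external table definition (the table, directory, and file names are illustrative, not taken from the preceding discussion) sets middle_name to NULL when that field is all blanks, and sets last_name to its default value (an empty string, because the field is a character type) when that field is all blanks:

CREATE TABLE emp_load
             (first_name  CHAR(15),
              middle_name CHAR(15),
              last_name   CHAR(20))
  ORGANIZATION EXTERNAL
  (TYPE ORACLE_LOADER
   DEFAULT DIRECTORY def_dir1
   ACCESS PARAMETERS
     (RECORDS DELIMITED BY NEWLINE
      FIELDS (first_name  CHAR(15),
              middle_name CHAR(15) NULLIF middle_name = BLANKS,
              last_name   CHAR(20) DEFAULTIF last_name = BLANKS))
   LOCATION ('info.dat'));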

column_transforms Clause

The optional column_transforms clause provides transforms that you can use to describe how to load columns in the external table that do not map directly to columns in the data file. The syntax for the column_transforms clause is as follows:

Description of et_column_trans.gif follows
Description of the illustration et_column_trans.gif

transform

Each transform specified in the transform clause identifies a column in the external table and then specifies how to calculate the value of that column. The syntax is as follows:

Description of et_transform.gif follows
Description of the illustration et_transform.gif

The NULL transform is used to set the external table column to NULL in every row. The CONSTANT transform is used to set the external table column to the same value in every row. The CONCAT transform is used to set the external table column to the concatenation of constant strings and/or fields in the current record from the data file. The LOBFILE transform is used to load data into a field for a record from another data file. Each of these transforms is explained further in the following sections.

column_name

The column_name uniquely identifies a column in the external table to be loaded. Note that if the name of a column is mentioned in the transform clause, then that name cannot be specified in the FIELDS clause as a field in the data file.

NULL

When the NULL transform is specified, every value of the field is set to NULL for every record.

CONSTANT

The CONSTANT transform uses the value of the string specified as the value of the column in the record. If the column in the external table is not a character string type, then the constant string will be converted to the datatype of the column. This conversion will be done for every row.

The character set of the string used for datatype conversions is the character set of the database.
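
As an illustrative sketch (the load_source column and the literal are hypothetical and not part of the preceding text), a column that has no counterpart in the data file can be filled with the same literal in every row:

CREATE TABLE emp_load
             (first_name  CHAR(15),
              last_name   CHAR(20),
              load_source CHAR(10))
  ORGANIZATION EXTERNAL
  (TYPE ORACLE_LOADER
   DEFAULT DIRECTORY def_dir1
   ACCESS PARAMETERS
     (RECORDS DELIMITED BY NEWLINE
      FIELDS (first_name CHAR(15),
              last_name  CHAR(20))
      COLUMN TRANSFORMS (load_source FROM CONSTANT 'info.dat'))
   LOCATION ('info.dat'));

Because load_source appears in the COLUMN TRANSFORMS clause, it is not listed in the FIELDS clause.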

CONCAT

The CONCAT transform concatenates constant strings and fields in the data file together to form one string. Only fields that are character datatypes and that are listed in the fields clause can be used as part of the concatenation. Other column transforms cannot be specified as part of the concatenation.
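
For example, assuming a hypothetical external table in which full_name is a column and first_name and last_name are character fields listed in the FIELDS clause, the transform might look as follows (only the COLUMN TRANSFORMS clause is shown):

COLUMN TRANSFORMS (full_name FROM CONCAT (first_name, CONSTANT ' ', last_name))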

LOBFILE

The LOBFILE transform is used to identify a file whose contents are to be used as the value for a column in the external table. All LOBFILEs are identified by an optional directory object and a file name in the form directory object:filename. The following rules apply to use of the LOBFILE transform:

  • Both the directory object and the file name can be either a constant string or the name of a field in the field clause.

  • If a constant string is specified, then that string is used to find the LOBFILE for every row in the table.

  • If a field name is specified, then the value of that field in the data file is used to find the LOBFILE.

  • If a field name is specified for either the directory object or the file name and if the value of that field is NULL, then the column being loaded by the LOBFILE is also set to NULL.

  • If the directory object is not specified, then the default directory specified for the external table is used.

  • If a field name is specified for the directory object, then the FROM clause also needs to be specified.

Note that the entire file is used as the value of the LOB column. If the same file is referenced in multiple rows, then that file is reopened and reread in order to populate each column.

lobfile_attr_list

The lobfile_attr_list lists additional attributes of the LOBFILE. The syntax is as follows:

Description of et_lobfile_attr.gif follows
Description of the illustration et_lobfile_attr.gif

The FROM clause lists the names of all directory objects that will be used for LOBFILEs. It is used only when a field name is specified for the directory object portion of the LOBFILE name. The purpose of the FROM clause is to determine the type of access allowed to the named directory objects during initialization. If the directory object in the value of the field is not a directory object in this list, then the row is rejected.

The CLOB attribute indicates that the data in the LOBFILE is character data (as opposed to RAW data). Character data may need to be translated into the character set used to store the LOB in the database.

The CHARACTERSET attribute contains the name of the character set for the data in the LOBFILEs.

The BLOB attribute indicates that the data in the LOBFILE is raw data.

If neither CLOB nor BLOB is specified, then CLOB is assumed. If no character set is specified for character LOBFILEs, then the character set of the data file is assumed.
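
Putting these attributes together, the following sketch (the column, field, and directory names are illustrative) loads each row's resume column from a separate character file whose name is supplied by the resume_file field in the data file. Because no field is used for the directory object, the table's default directory is used and no FROM clause is required:

CREATE TABLE emp_load
             (first_name CHAR(15),
              last_name  CHAR(20),
              resume     CLOB)
  ORGANIZATION EXTERNAL
  (TYPE ORACLE_LOADER
   DEFAULT DIRECTORY def_dir1
   ACCESS PARAMETERS
     (RECORDS DELIMITED BY NEWLINE
      FIELDS (first_name  CHAR(15),
              last_name   CHAR(20),
              resume_file CHAR(40))
      COLUMN TRANSFORMS (resume FROM LOBFILE (resume_file) CLOB))
   LOCATION ('info.dat'));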

Example: Creating and Loading an External Table Using ORACLE_LOADER

The steps in this section show an example of using the ORACLE_LOADER access driver to create and load an external table. A traditional table named emp is defined along with an external table named emp_load. The external data is then loaded into an internal table.

  1. Assume your data file, info.dat, looks as follows:

    56november, 15, 1980  baker             mary       alice     09/01/2004
    87december, 20, 1970  roper             lisa       marie     01/01/2002
    
  2. Execute the following SQL statements to set up a default directory (which contains the data source) and to grant access to it:

    CREATE DIRECTORY def_dir1 AS '/usr/apps/datafiles';
    GRANT READ ON DIRECTORY def_dir1 TO SCOTT;
    
  3. Create a traditional table named emp:

    CREATE TABLE emp (emp_no CHAR(6), last_name CHAR(25), first_name CHAR(20), middle_initial CHAR(1), hire_date DATE, dob DATE);
    
  4. Create an external table named emp_load:

    SQL> CREATE TABLE emp_load
      2    (employee_number      CHAR(5),
      3     employee_dob         CHAR(20),
      4     employee_last_name   CHAR(20),
      5     employee_first_name  CHAR(15),
      6     employee_middle_name CHAR(15),
      7     employee_hire_date   DATE)
      8  ORGANIZATION EXTERNAL
      9    (TYPE ORACLE_LOADER
     10     DEFAULT DIRECTORY def_dir1
     11     ACCESS PARAMETERS
     12       (RECORDS DELIMITED BY NEWLINE
     13        FIELDS (employee_number      CHAR(2),
     14                employee_dob         CHAR(20),
     15                employee_last_name   CHAR(18),
     16                employee_first_name  CHAR(11),
     17                employee_middle_name CHAR(11),
     18                employee_hire_date   CHAR(10) date_format DATE mask "mm/dd/yyyy"
     19               )
     20       )
     21     LOCATION ('info.dat')
     22    );
     
    Table created.
    
  5. Load the data from the external table emp_load into the table emp:

    SQL> INSERT INTO emp (emp_no,
      2                   first_name,
      3                   middle_initial,
      4                   last_name,
      5                   hire_date,
      6                   dob)
      7  (SELECT employee_number,
      8          employee_first_name,
      9          substr(employee_middle_name, 1, 1),
     10          employee_last_name,
     11          employee_hire_date,
     12          to_date(employee_dob,'month, dd, yyyy')
     13  FROM emp_load);
     
    2 rows created.
    
  6. Perform the following select operation to verify that the information in the .dat file was loaded into the emp table:

    SQL> SELECT * FROM emp;
     
    EMP_NO LAST_NAME                 FIRST_NAME           M HIRE_DATE DOB
    ------ ------------------------- -------------------- - --------- ---------
    56     baker                     mary                 a 01-SEP-04 15-NOV-80
    87     roper                     lisa                 m 01-JAN-02 20-DEC-70
     
    2 rows selected.
    

Notes about this example:

  • The employee_number field in the data file is converted to a character string for the employee_number field in the external table.

  • The data file contains an employee_dob field that is not loaded into any field in the table.

  • The substr function is used on the employee_middle_name column in the external table to generate the value for middle_initial in table emp.

  • The character string for employee_hire_date in info.dat is automatically converted into a DATE datatype at external table access time, using the format mask specified in the external table definition.

  • Unlike employee_hire_date, the DATE datatype conversion for employee_dob is done at SELECT time and is not part of the external table definition.


See Also:

Oracle Database SQL Language Reference for detailed information about the correct way to specify date and time formats

Parallel Loading Considerations for the ORACLE_LOADER Access Driver

The ORACLE_LOADER access driver attempts to divide large data files into chunks that can be processed separately.

The following file, record, and data characteristics make it impossible for a file to be processed in parallel:

  • Sequential data sources (such as a tape drive or pipe)

  • Data in any multibyte character set whose character boundaries cannot be determined starting at an arbitrary byte in the middle of a string

    This restriction does not apply to any data file with a fixed number of bytes per record.

  • Records with the VAR format

Specifying a PARALLEL clause is of value only when large amounts of data are involved.
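
For example, the PARALLEL clause is specified on the external table itself. The following sketch (the degree of 4 and the file names are arbitrary illustrations) gives the access driver two location files and up to four parallel workers to process them:

CREATE TABLE emp_load
             (employee_number CHAR(5),
              last_name       CHAR(20))
  ORGANIZATION EXTERNAL
  (TYPE ORACLE_LOADER
   DEFAULT DIRECTORY def_dir1
   ACCESS PARAMETERS
     (RECORDS DELIMITED BY NEWLINE
      FIELDS (employee_number CHAR(5),
              last_name       CHAR(20)))
   LOCATION ('info1.dat', 'info2.dat'))
  PARALLEL 4;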

Performance Hints When Using the ORACLE_LOADER Access Driver

When you monitor performance, the most important measurement is the elapsed time for a load. Other important measurements are CPU usage, memory usage, and I/O rates.

You can alter performance by increasing or decreasing the degree of parallelism. The degree of parallelism indicates the number of access drivers that can be started to process the data files. The degree of parallelism enables you to choose on a scale between slower load with little resource usage and faster load with all resources utilized. The access driver cannot automatically tune itself, because it cannot determine how many resources you want to dedicate to the access driver.

An additional consideration is that the access drivers use large I/O buffers for better performance (you can use the READSIZE clause in the access parameters to specify the size of the buffers). On databases with shared servers, all memory used by the access drivers comes out of the system global area (SGA). For this reason, you should be careful when using external tables on shared servers.

Performance can also sometimes be increased with use of date cache functionality. By using the date cache to specify the number of unique dates anticipated during the load, you can reduce the number of date conversions done when many duplicate date or timestamp values are present in the input data. The date cache functionality provided by external tables is identical to the date cache functionality provided by SQL*Loader. See "DATE_CACHE" for a detailed description.
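
Both settings appear in the record format portion of the access parameters. The following sketch (the 1 MB read buffer and the cache of 5000 entries are arbitrary illustrations) shows where READSIZE and DATE_CACHE are placed:

CREATE TABLE emp_load
             (employee_number CHAR(5),
              hire_date       DATE)
  ORGANIZATION EXTERNAL
  (TYPE ORACLE_LOADER
   DEFAULT DIRECTORY def_dir1
   ACCESS PARAMETERS
     (RECORDS DELIMITED BY NEWLINE
      READSIZE 1048576
      DATE_CACHE 5000
      FIELDS (employee_number CHAR(5),
              hire_date       CHAR(10) date_format DATE mask "mm/dd/yyyy"))
   LOCATION ('info.dat'));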

In addition to changing the degree of parallelism and using the date cache to improve performance, consider the following information:

  • Fixed-length records are processed faster than records terminated by a string.

  • Fixed-length fields are processed faster than delimited fields.

  • Single-byte character sets are the fastest to process.

  • Fixed-width character sets are faster to process than varying-width character sets.

  • Byte-length semantics for varying-width character sets are faster to process than character-length semantics.

  • Single-character delimiters for record terminators and field delimiters are faster to process than multicharacter delimiters.

  • Having the character set in the data file match the character set of the database is faster than a character set conversion.

  • Having datatypes in the data file match the datatypes in the database is faster than datatype conversion.

  • Not writing rejected rows to a reject file is faster because of the reduced overhead.

  • Condition clauses (including WHEN, NULLIF, and DEFAULTIF) slow down processing.

  • The access driver takes advantage of multithreading to streamline the work as much as possible.

Restrictions When Using the ORACLE_LOADER Access Driver

This section lists restrictions to be aware of when you use the ORACLE_LOADER access driver.

  • SQL strings cannot be specified in access parameters for the ORACLE_LOADER access driver. As a workaround, you can use the DECODE clause in the SELECT clause of the statement that is reading the external table. Alternatively, you can create a view of the external table that uses the DECODE clause and select from that view rather than the external table, as sketched after this list.

  • The use of the backslash character (\) within strings is not supported in external tables. See "Use of the Backslash Escape Character".

  • When identifiers (for example, column or table names) are specified in the external table access parameters, certain values are considered to be reserved words by the access parameter parser. If a reserved word is used as an identifier, then it must be enclosed in double quotation marks.
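
The workaround mentioned in the first item might look like the following sketch (the emp_load external table and the employee_status codes are hypothetical): the status codes are decoded in a view rather than in the access parameters.

CREATE VIEW emp_load_v AS
  SELECT employee_number,
         DECODE(employee_status, 'A', 'ACTIVE',
                                 'T', 'TERMINATED',
                                      'UNKNOWN') AS employee_status
  FROM emp_load;

Queries then select from emp_load_v instead of selecting directly from the external table.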

Reserved Words for the ORACLE_LOADER Access Driver

When identifiers (for example, column or table names) are specified in the external table access parameters, certain values are considered to be reserved words by the access parameter parser. If a reserved word is used as an identifier, then it must be enclosed in double quotation marks. The following are the reserved words for the ORACLE_LOADER access driver:

  • ALL

  • AND

  • ARE

  • ASTERISK

  • AT

  • ATSIGN

  • BADFILE

  • BADFILENAME

  • BACKSLASH

  • BENDIAN

  • BIG

  • BLANKS

  • BY

  • BYTES

  • BYTESTR

  • CHAR

  • CHARACTERS

  • CHARACTERSET

  • CHARSET

  • CHARSTR

  • CHECK

  • CLOB

  • COLLENGTH

  • COLON

  • COLUMN

  • COMMA

  • CONCAT

  • CONSTANT

  • COUNTED

  • DATA

  • DATE

  • DATE_CACHE

  • DATE_FORMAT

  • DATEMASK

  • DAY

  • DEBUG

  • DECIMAL

  • DEFAULTIF

  • DELIMITBY

  • DELIMITED

  • DISCARDFILE

  • DOT

  • DOUBLE

  • DOUBLETYPE

  • DQSTRING

  • DQUOTE

  • DSCFILENAME

  • ENCLOSED

  • ENDIAN

  • ENDPOS

  • EOF

  • EQUAL

  • EXIT

  • EXTENDED_IO_PARAMETERS

  • EXTERNAL

  • EXTERNALKW

  • EXTPARM

  • FIELD

  • FIELDS

  • FILE

  • FILEDIR

  • FILENAME

  • FIXED

  • FLOAT

  • FLOATTYPE

  • FOR

  • FROM

  • HASH

  • HEXPREFIX

  • IN

  • INTEGER

  • INTERVAL

  • LANGUAGE

  • IS

  • LEFTCB

  • LEFTTXTDELIM

  • LEFTP

  • LENDIAN

  • LDRTRIM

  • LITTLE

  • LOAD

  • LOBFILE

  • LOBPC

  • LOBPCCONST

  • LOCAL

  • LOCALTZONE

  • LOGFILE

  • LOGFILENAME

  • LRTRIM

  • LTRIM

  • MAKE_REF

  • MASK

  • MINUSSIGN

  • MISSING

  • MISSINGFLD

  • MONTH

  • NEWLINE

  • NO

  • NOCHECK

  • NOT

  • NOBADFILE

  • NODISCARDFILE

  • NOLOGFILE

  • NOTEQUAL

  • NOTERMBY

  • NOTRIM

  • NULL

  • NULLIF

  • OID

  • OPTENCLOSE

  • OPTIONALLY

  • OPTIONS

  • OR

  • ORACLE_DATE

  • ORACLE_NUMBER

  • PLUSSIGN

  • POSITION

  • PROCESSING

  • QUOTE

  • RAW

  • READSIZE

  • RECNUM

  • RECORDS

  • REJECT

  • RIGHTCB

  • RIGHTTXTDELIM

  • RIGHTP

  • ROW

  • ROWS

  • RTRIM

  • SCALE

  • SECOND

  • SEMI

  • SETID

  • SIGN

  • SIZES

  • SKIP

  • STRING

  • TERMBY

  • TERMEOF

  • TERMINATED

  • TERMWS

  • TERRITORY

  • TIME

  • TIMESTAMP

  • TIMEZONE

  • TO

  • TRANSFORMS

  • UNDERSCORE

  • UINTEGER

  • UNSIGNED

  • VALUES

  • VARCHAR

  • VARCHARC

  • VARIABLE

  • VARRAW

  • VARRAWC

  • VLENELN

  • VMAXLEN

  • WHEN

  • WHITESPACE

  • WITH

  • YEAR

  • ZONED

Description of the illustration sut81007.eps

The figure in sut81007.gif shows a representation of two fields.

The first field contains two blanks followed by the letters aaaa. The field is terminated by ',' and enclosed by '"'. Before the terminator, there are two blank spaces.

After the terminator, the next field starts. It contains two blanks followed by the letters bbbb and is terminated by ','.

Description of the illustration et_record_spec_options.eps
et_record_spec_options ::= 
 
    CHARACTERSET string
    |PREPROCESSOR [directory_spec:] file_spec 
    |DATA IS { LITTLE | BIG } ENDIAN
    |BYTEORDERMARK { CHECK | NOCHECK } 
    |STRING SIZES ARE IN { BYTES | CHARACTERS }
    |LOAD WHEN condition_spec 
    |{ NOBADFILE |  BADFILE [directory object name:] filename } 
    |{ NODISCARDFILE | DISCARDFILE [directory object name:] filename } 
    |{ NOLOGFILE | LOGFILE [directory object name:] filename } 
    |{READSIZE integer | DISABLE_DIRECTORY_LINK_CHECK | DATE_CACHE integer | SKIP integer}
    |IO_OPTIONS {DIRECTIO | NODIRECTIO}
   [,]...
Description of the illustration raw.eps
raw ::=
 
RAW [(length)]
Description of the illustration fields_spec.eps
fields_spec ::= 
 
FIELDS { enclosure_spec | termination_spec [ [OPTIONALLY] enclosure_spec ] }
Description of the illustration into_table3.eps
into_table3 ::=
 
  [OPTIONS (FILE=database_filename)]
Description of the illustration datatype_spec_cont.eps
datatype_spec_cont ::= 
           
 CHAR [(length)] [delim_spec]                             
 |VARCHARC (length_of_length [, max_size_bytes])       
 |VARRAWC  (length_of_length [, max_size_bytes])        
 |[LONG] VARRAW [(max_bytes)]                            
 |DATE [EXTERNAL] [(length)] [\"mask\"] [delim_spec] 
 |{ TIME|TIMESTAMP } [(fractional_second_precision)] [WITH [LOCAL] TIME ZONE] [\"mask\"]        
 |INTERVAL [ { YEAR [(year_precision)] TO MONTH | DAY [(day_precision)] TO SECOND [(fractional_second_precision)]} ]    
PKcPKN:AOEBPS/img_text/sid_spec.htm; Description of the illustration sid_spec.eps
sid_spec ::= 
 
  SID ( { fieldname | CONSTANT SID_val } )
PK@;PKN:AOEBPS/img_text/graphic_ext.htm> Description of the illustration graphic_ext.eps
graphic_ext ::=
 
GRAPHIC EXTERNAL [(graphic_char_length)]
PKqVJC>PKN:AOEBPS/img_text/expinit.htm_ Description of the illustration expinit.eps
expinit ::= 
 
 expdp   [HELP = {YES | NO}] | username/password [@connect_identifier] [AS SYSDBA] [ExpStart]
PKo<]PKN:AOEBPS/img_text/varray.htm? Description of the illustration varray.eps
varray ::= 
 
  VARRAY (SDF_spec count_spec init_spec |count_spec field_list | delim_spec)
PK%3 D?PKN:A OEBPS/img_text/et_field_list.htm[ Description of the illustration et_field_list.eps
et_field_list ::= 
 
(field_name [pos_spec] [datatype_spec] [init_spec]) 
[,  (field_name [pos_spec] [datatype_spec] [init_spec])]...
   
PK~O`[PKN:AOEBPS/img_text/infile.htm) Description of the illustration infile.eps
infile ::= 
 
  INFILE { * | input_filename } [os_file_proc_clause] 
PKi!&.)PKN:A OEBPS/img_text/et_dateformat.html Description of the illustration et_dateformat.eps
et_dateformat ::=
 
[DATE_FORMAT]
{
{  DATE 
| TIMESTAMP [WITH [LOCAL] TIME ZONE] } MASK \"date/time mask\"  
| INTERVAL { YEAR_TO_MONTH | DAY_TO_SECOND }
}
PK5`qlPKN:AOEBPS/img_text/badfile.htm  Description of the illustration badfile.eps
badfile ::= 
 
  [BADFILE  filename]
PK/ PKN:AOEBPS/img_text/concatenate.htm7 Description of the illustration concatenate.eps
concatenate ::= 
 
  CONCATENATE { integer | (integer) }
  |CONTINUEIF 
  { { THIS | NEXT } [PRESERVE] [(] (pos_spec)
  |[LAST [PRESERVE] [(] } operator { str | X'hex_str' } [)]
Description of the illustration et_lobfile_attr.eps
et_lobfile_attr ::=
 
  FROM (directory object name,) ... |
  CLOB |
  BLOB |
  CHARACTERSET = character set name
  
PKtأPKN:AOEBPS/img_text/enclose.htmp Description of the illustration enclose.eps
enclose ::= 
 
  ENCLOSED [BY] [ 'string' | X'hexstr' ] [AND] [ 'string' | X'hexstr' ]
PKupPKN:AOEBPS/img_text/vargraphic.htm Description of the illustration vargraphic.eps
vargraphic ::=
 
  VARGRAPHIC [ (max_length) ]
PK PKN:AOEBPS/img_text/impracopt.htm{ Description of the illustration impracopt.eps
impracopt ::=

[
|CLUSTER = {YES | NO}
|SERVICE_NAME = service_name
]
PK`D{PKN:AOEBPS/img_text/char_length.htmj Description of the illustration char_length.eps
char_length ::=
 
  [LENGTH [SEMANTICS] {BYTE | CHAR | CHARACTER}] 
PKÚPKN:AOEBPS/img_text/date.htm Description of the illustration date.eps
date ::=
 
DATE [(length)] [mask] [delim_spec]
PKPKN:AOEBPS/img_text/expdynopts.htm\ Description of the illustration expdynopts.eps
expdynopts ::= 
 
 ADD_FILE = [directory_object:] file_name 
  [, [directory_object:] file_name] ...
[ CONTINUE_CLIENT 
| EXIT_CLIENT
| FILESIZE = integer 
| HELP 
| KILL_JOB 
| PARALLEL = integer
| START_JOB  [= SKIP_CURRENT = {YES | NO} ]
| STATUS [= integer] 
| STOP_JOB [= IMMEDIATE] ]
PK)Ja\PKN:AOEBPS/img_text/expencrypt.htmD Description of the illustration expencrypt.eps
expencrypt ::=  
[ 
| ENCRYPTION = {ALL | DATA_ONLY | METADATA_ONLY | ENCRYPTED_COLUMNS_ONLY | NONE}
| ENCRYPTION_ALGORITHM = {AES128 | AES192 | AES256}
| ENCRYPTION_MODE = {PASSWORD | TRANSPARENT | DUAL}
| ENCRYPTION_PASSWORD = password
]
PKsIDPKN:A#OEBPS/img_text/et_position_spec.htmP Description of the illustration et_position_spec.eps
et_position_spec ::=
 
[POSITION] ({ start | *  | { + | - } increment } { : | - } {end | length})
Description of the illustration et_oracle_datapump.eps

et_oracle_datapump ::=

[comments]

([ {ENCRYPTION {ENABLED | DISABLED} |

{NOLOGFILE | LOGFILE [directory object name :] file name} |

COMPRESSION {ENABLED | DISABLED}}|

VERSION {COMPATIBLE | LATEST | version number}])

PKaPKN:AOEBPS/img_text/delim_spec.htmA Description of the illustration delim_spec.eps
delim_spec ::= 
 
   { termination_spec [[OPTIONALLY] enclosure_spec] | enclosure_spec }
Description of the illustration et_column_trans.eps
et_column_trans ::=
 
 COLUMN TRANSFORMS (transform,)...
PK0SE@PKN:AOEBPS/img_text/decimal.htm Description of the illustration decimal.eps
decimal ::= 
 
  DECIMAL (precision [ , scale ])
PKPKN:AOEBPS/img_text/expopts.htm Description of the illustration expopts.eps
expopts ::= 
 
[ 
| COMPRESSION = {ALL | DATA_ONLY | METADATA_ONLY | NONE}
| CONTENT = {ALL | DATA_ONLY | METADATA_ONLY}
| DATA_OPTIONS = XML_CLOBS
| ESTIMATE = {BLOCKS | STATISTICS}  
| ESTIMATE_ONLY = {YES | NO}
| ExpEncrypt
| ExpFilter
| FLASHBACK_SCN = scn_value
| FLASHBACK_TIME = timestamp
| JOB_NAME = jobname_string
| NETWORK_LINK = database_link
| PARALLEL = integer
| ExpRacOpt
| ExpRemap
| SOURCE_EDITION = source_edition_name
| STATUS = integer
| TRANSPORTABLE = {ALWAYS | NEVER}
| VERSION = {COMPATIBLE | LATEST | version_string}
| ExpDiagnostics
PKj5 PKN:AOEBPS/img_text/dbverify_seg.html Description of the illustration dbverify_seg.eps
dbverify_seg ::= 
 
  dbv USERID = username/password 
  | SEGMENT_ID = tsn.segfile.segblock
  | LOGFILE = filename
  | FEEDBACK = integer
  | HELP  = { Y | N }
  | PARFILE = filename 
  | HIGH_SCN = integer 
 
End of description.
PKqlPKN:A!OEBPS/img_text/et_record_spec.htmV Description of the illustration et_record_spec.eps
et_record_spec ::= 
 
  RECORDS { FIXED integer | VARIABLE integer | DELIMITED BY { NEWLINE | string } }
  [et_record_spec_options]
PK`[VPKN:A"OEBPS/img_text/et_access_param.htm Description of the illustration et_access_param.eps

et_access_param ::=

[comments] [record_format_info] [field_definitions] [column_transforms]

PK-r!PKN:AOEBPS/img_text/et_string.html Description of the illustration et_string.eps
et_string ::=
 
{ 
 "text" 
|'text' 
|{X | 0X} "hex digit hex digit" ["hex digit hex digit"]... 
|{X | 0X} 'hex digit hex digit' ['hex digit hex digit']... 
}
PKEd"qlPKN:AOEBPS/img_text/continueif.htm Description of the illustration continueif.eps
continueif ::= 
 
  CONTINUEIF { [THIS | NEXT] [PRESERVE]  [(]  pos_spec | LAST [PRESERVE] [(] }  
             operator { str | X'hex_str' } [)] 
PKD;IPKN:A OEBPS/img_text/datatype_spec.htm7 Description of the illustration datatype_spec.eps
datatype_spec ::= 
 
  delim_spec                                                      
 |INTEGER { [(length)] [{ SIGNED | UNSIGNED }] |               
 [EXTERNAL [(length)] [delim_spec]] }        
 |FLOAT [EXTERNAL [(length)] [delim_spec] ]                
 |{ DECIMAL | ZONED } { [EXTERNAL [(length)] [delim_spec]] | 
 (precision [, scale]) }            
 |{ DOUBLE | BYTEINT | SMALLINT { SIGNED | UNSIGNED } }                
 |RAW [(length)]                                         
 |GRAPHIC [EXTERNAL] [(graphic_char_length)]                
 |{ VARGRAPHIC | VARCHAR } [(max_length)]                      
 datatype_spec_cont
Description of the illustration sut81003.eps

The figure in sut81003.gif shows an example of field conversion.

There is a datafile that contains two fields, both defined in the SQL*Loader control file as CHAR(5). Field 1 contains the letters aaa as data. Field 2 contains the letters bbb as data.

In the database into which the fields are to be inserted, there are two columns. Column 1 is defined as a fixed-length CHAR column of length 5. Therefore, when the data from Field 1 in the datafile is inserted, the data is shown as left-justified in that column, which remains 5 bytes wide. The extra space on the right is padded with blanks.

Column 2 is defined as a varying-length field with a maximum length of 5 bytes. The data from the datafile for Field 2 is left-justified. Therefore, when the data is inserted into column 2 in the database, it is shown as left-justified as well, but the length remains 3 bytes.

PKM2-PKN:AOEBPS/img_text/fieldname.htm Description of the illustration fieldname.eps
fieldname ::= 
 
  full_fieldname
PKĜPKN:AOEBPS/img_text/et_cond_spec.htmf Description of the illustration et_cond_spec.eps
et_cond_spec ::= 
 
{
 { condition | condition_spec { AND | OR } condition_spec } 
 | ( { condition | condition_spec {AND | OR } condition_spec } )
}
PK-Ӌ8kfPKN:AOEBPS/img_text/dgen_fld.htmZ Description of the illustration dgen_fld.eps
dgen_fld ::= 
  
  { RECNUM 
  | SYSDATE 
  | CONSTANT val 
  | SEQUENCE [( { COUNT | MAX | integer } [, incr] )] 
  | { REF_spec init_spec | SID_spec init_spec | BFILE_spec init_spec }
  | EXPRESSION "sql_string" }
PK2l#_ZPKN:AOEBPS/img_text/discard.htme Description of the illustration discard.eps
discard ::= 
 
  [DISCARDFILE filename] [{ DISCARDS | DISCARDMAX } integer]
PKטjePKN:AOEBPS/img_text/et_trim_spec.html Description of the illustration et_trim_spec.eps
et_trim_spec ::=
 
{ LRTRIM | NOTRIM | LTRIM | RTRIM | LDRTRIM }
PK( PKN:A OEBPS/img_text/infile_clause.htm4 Description of the illustration infile_clause.eps
infile_clause ::= 
 
  INFILE { * | input_filename } [os_file_proc_clause] [ BADFILE filename ] 
 [DISCARDFILE filename] [{ DISCARDS | DISCARDMAX } integer]
 [" { var | fix | [str [ 'string' | X'hex_string ] ] | integer } "]
PKآ1u94PKN:A!OEBPS/img_text/impdiagnostics.htm1 Description of the illustration impdiagnostics.eps
impdiagnostics ::= 
 
[ 
 ABORT_STEP = {YES | NO}
| DATA_ACCESS_METHOD = {EXT_TAB | DIRECT_PATH | CONVENTIONAL}
| KEEP_MASTER = {YES | NO}
| MASTER_ONLY = {YES | NO}
| METRICS = {YES | NO}
]
PK61PKN:AOEBPS/img_text/pos_spec.htm  Description of the illustration pos_spec.eps
pos_spec ::= 
 
  ( { start | * [+integer] } [{ : | - } end] )
PK/%P PKN:AOEBPS/img_text/nid.htm_ Description of the illustration nid.eps
nid ::=
 
nid TARGET = [username] / [password] [@service_name]
 
[REVERT = { YES | NO }
|DBNAME = new_db_name [SETNAME = { YES | NO }]]
 
[LOGFILE = logfile [APPEND = { YES | NO }] [HELP = { YES | NO }]]
 
End of description.
PKgd_PKN:AOEBPS/img_text/expracopt.htm{ Description of the illustration expracopt.eps
expracopt ::=

[
|CLUSTER = {YES | NO}
|SERVICE_NAME = service_name
]
Description of the illustration field_list.eps
field_list ::=
 
(column_name { dgen_fld_spec 
| scalar_fld_spec 
| col_obj_fld_spec 
| collection_fld_spec
| filler_fld_spec }
)
[(, column_name { d_gen_fld_spec 
| scalar_fld_spec 
| col_obj_fld_spec 
| collection_fld_spec
| filler_fld_spec }
)]...
Description of the illustration et_init_spec.eps
et_init_spec ::=
 
[{ DEFAULTIF | NULLIF } condition_spec]
PKD?PKN:AOEBPS/img_text/sdf.htmf Description of the illustration sdf.eps
sdf ::=
 
  SDF ( [ field_name | CONSTANT filename ] [os_file_proc_clause] [READSIZE size] 
 
 [CHARACTERSET name] [LENGTH [SEMANTICS] {BYTE | CHAR | CHARACTER}] 
 
 [BYTEORDER { BIG | LITTLE } [ENDIAN]] [BYTEORDERMARK { CHECK | NOCHECK }] [delim_spec] )
PKlkfPKN:AOEBPS/img_text/impinit.htm} Description of the illustration impinit.eps
impinit ::= 
 
 impdp [HELP = {YES | NO}] | username/password [@connect_identifier] [AS SYSDBA] ImpStart
PKX-cePKN:AOEBPS/img_text/sut81008.htmE Description of the illustration sut81008.eps

The figure in sut81008.gif shows a representation of two fields terminated by whitespace. Field 1 contains 5 blanks, followed by the letters aaaa. Five blanks follow Field 1.

Field 2 starts at the next nonwhitespace character. It contains the letters bbbb.

PKcPKN:A#OEBPS/img_text/col_obj_fld_spec.htm^ Description of the illustration col_obj_fld_spec.eps
col_obj_fld_spec ::= 
 
  COLUMN OBJECT [TREAT AS typename] [init_spec] field_list [sql_string_spec]
PKPPKN:A OEBPS/img_text/byteordermark.htmW Description of the illustration byteordermark.eps
byteordermark ::=
 
  [BYTEORDERMARK {CHECK | NOCHECK}]
PKwW\WPKN:AOEBPS/img_text/scalar.htmP Description of the illustration scalar.eps
scalar ::=
 
 { [LOBFILE_spec] | [POSITION pos_spec] } [datatype_spec] [PIECED]
 [init_spec] ["sql_string"]
PK\.UPPKN:AOEBPS/img_text/expfileopts.htm4 Description of the illustration expfileopts.eps
expfileopts ::= 
 
[
DIRECTORY = directory_object 
|DUMPFILE = [directory_object:]file_name 
 [, [directory_object:]file_name] ...
|FILESIZE = number_of_bytes 
| { LOGFILE = [directory_object:]file_name | NOLOGFILE = {YES | NO} }
|PARFILE =  [directory_path] file_name]
|REUSE_DUMPFILE = {YES | NO}
]
PKxvt]94PKN:AOEBPS/img_text/sequence.htme Description of the illustration sequence.eps
sequence ::=
 
column_name SEQUENCE ( {COUNT | MAX | integer} [, incr]  ) 
PKǓjePKN:AOEBPS/img_text/count.htm> Description of the illustration count.eps
count ::= 
 
  COUNT ({ fieldname | CONSTANT positive_integer })
PKi C>PKN:A'OEBPS/img_text/et_preprocessor_spec.htm5 Description of the illustration et_preprocessor_spec.eps
et_preprocessor_spec ::=

PREPROCESSOR
[directory_spec:]
file_spec
PK+:5PKN:AOEBPS/img_text/oid_spec.htm Description of the illustration oid_spec.eps
oid_spec ::= 
 
  OID (fieldname)
PKp\ PKN:AOEBPS/img_text/recsize_spec.htm< Description of the illustration recsize_spec.eps
recsize_spec ::=
 
 [RECSIZE integer] [BUFFERS integer]
PK1tA<PKN:AOEBPS/img_text/expmodes.htm6 Description of the illustration expmodes.eps
expmodes ::= 
 
[ FULL = {YES | NO}
| SCHEMAS = schema_name [, schema_name] ...
| TABLES = [schema_name.] table_name [:partition_name] [, ...]
| TABLESPACES = tablespace_name [, tablespace_name] ...
| TRANSPORT_TABLESPACES = tablespace_name [, tablespace_name] ... 
  [TRANSPORT_FULL_CHECK = {YES | NO}]
]
PKS;6PKN:AOEBPS/img_text/into_table7.htm, Description of the illustration into_table7.eps
into_table7 ::= 
 
  [TREAT AS typename]
PK|q1,PKN:AOEBPS/img_text/graphic.htm Description of the illustration graphic.eps
graphic ::=
 
GRAPHIC [(graphic_char_length)]
PK!PKN:AOEBPS/img_text/into_table5.htmB Description of the illustration into_table5.eps
into_table5 ::= 
 
  [EXCEPTIONS table] [WHEN field_condition]
PKU,GBPKN:AOEBPS/img_text/sut81088.htm Description of the illustration sut81088.eps

The figure in sut81088.gif shows SQL*Loader receiving input datafiles and a SQL*Loader control file as input.

SQL*Loader then outputs a log file, bad files, and discard files. Also, the figure shows that the database into which SQL*Loader loaded the input data now contains tables and indexes.

PK<PKN:AOEBPS/img_text/into_table6.htm( Description of the illustration into_table6.eps
into_table6 ::=
 
  [ OID_spec | SID_spec ] [FIELDS [delim_spec]] [TRAILING [NULLCOLS]] 
PK?-(PKN:A OEBPS/img_text/remote_config.htm! Description of the illustration remote_config.eps

This is a text description of remote_config.gif. This image is described in the text preceding the image.

PK\r&!PKN:AOEBPS/img_text/zoned.htm Description of the illustration zoned.eps
zoned ::= 
 
  ZONED (precision [, scale])
Description of the illustration intotab_clause.eps
intotab_clause ::= 
 
 INTO TABLE name [SORTED [INDEXES] (name)]
 [SINGLEROW] [ ( { PARTITION name | SUBPARTITION name } ) ]  
 [ {RESUME [ { YES | NO [REPLACE] } ] | INSERT | REPLACE [ USING  {DELETE | TRUNCATE} ]
 | TRUNCATE | APPEND} ]
[OPTIONS (STORAGE=storage_spec, FILE=database_filename)]
[ [EVALUATE CHECK_CONSTRAINTS] 
[REENABLE] [DISABLED_CONSTRAINTS] ]
[EXCEPTIONS table]
 [WHEN field_condition]
[ OID_spec | SID_spec | XMLTYPE_spec] [FIELDS [delim_spec]] [TRAILING [NULLCOLS]]
[TREAT AS typename]
[SKIP n] field_list
PK"B=PKN:AOEBPS/img_text/sut81009.htmM Description of the illustration sut81009.eps

The figure in sut81009.gif shows a representation of two fields terminated by ',' and optionally enclosed by '"'. Field 1 is enclosed by '"', begins with three whitespace characters, and then contains the letters aaaa. A nonwhitespace character (in this case, a comma) signals the end of the field.

Field 2 starts after four whitespace characters. It contains the letters bbbb, and is terminated by ','.

PKv6bRMPKN:AOEBPS/img_text/impnetopts.htm8 Description of the illustration impnetopts.eps
impnetopts ::= 
 
[ ESTIMATE = {BLOCKS | STATISTICS}
| { FLASHBACK_SCN = scn_number | FLASHBACK_TIME = timestamp }
| { TRANSPORTABLE = {ALWAYS | NEVER} | TRANSPORT_TABLESPACES = tablespace_name [,...] }
   TRANSPORT_DATAFILES=datafile_name
| [TRANSPORT_FULL_CHECK = {YES | NO}]
]
PKL=8PKN:AOEBPS/img_text/terminat.htm} Description of the illustration terminat.eps
terminat ::= 
 
  TERMINATED [BY] { WHITESPACE | X'hexstr' | 'string' | EOF }
PKBPKN:AOEBPS/img_text/fld_cond.htm Description of the illustration fld_cond.eps
fld_cond  ::= 
 
  [(] 
  {full_fieldname | pos_spec } operator { 'char_string' | X'hex_string' | BLANKS} 
  [)] AND
  [(]
  {full_fieldname | pos_spec } operator { 'char_string' | X'hex_string' | BLANKS}
  [)]...
PKާPKN:AOEBPS/img_text/init.htm2 Description of the illustration init.eps
init ::= 
 
  { NULLIF | DEFAULTIF } field_condition 
PKS72PKN:AOEBPS/img_text/varchar.htm Description of the illustration varchar.eps
varchar ::= 
 
  VARCHAR [ (max_length) ]
PK.DwPKN:AOEBPS/img_text/into_table1.htm Description of the illustration into_table1.eps
into_table1 ::= 
 
  INTO TABLE name [ ( { PARTITION name | SUBPARTITION name } ) ] 
  { INSERT | REPLACE | TRUNCATE | APPEND }
PK?$PKN:AOEBPS/img_text/impdynopts.htmF Description of the illustration impdynopts.eps
impdynopts ::= 
 
  CONTINUE_CLIENT
| EXIT_CLIENT
| HELP
| KILL_JOB
| PARALLEL = integer
| START_JOB [= SKIP_CURRENT = { YES | NO }]
| STATUS [= integer]
| STOP_JOB [= IMMEDIATE]
PK*PKN:A!OEBPS/img_text/expdiagnostics.htm7 Description of the illustration expdiagnostics.eps
expdiagnostics ::= 
 
[ 
 DATA_ACCESS_METHOD = {EXT_TAB | DIRECT_PATH}
| KEEP_MASTER = {YES | NO}
| METRICS = {YES | NO}
]
PK<7PKN:AOEBPS/img_text/xmltype_spec.htm Description of the illustration xmltype_spec.eps
xmltype_spec ::=

   XMLTYPE (fieldname)
PK.l$PKN:AOEBPS/img_text/sut81005.htm\ Description of the illustration sut81005.eps

The figure in sut81005.gif shows a representation of a CHAR(9) field that contains 5 blanks followed by the letters aaaa.

The next field begins immediately afterwards and contains 5 blanks, the letters bbbb, and is terminated by ','.

PKPKN:AOEBPS/img_text/impopts.htm h Description of the illustration impopts.eps
impopts ::= 
 
[ 
| CONTENT = {ALL | DATA_ONLY | METADATA_ONLY}
| DATA_OPTIONS = {DISABLE_APPEND_HINT | SKIP_CONSTRAINT_ERRORS}
| ENCRYPTION_PASSWORD = password
| ImpFilter
| JOB_NAME = jobname_string
| PARALLEL = integer
| ImpRacOpt
| ImpRemap
| REUSE_DATAFILES  = {YES | NO}
| PARTITION_OPTIONS = {NONE | DEPARTITION | EXCHANGE | MERGE}
| SKIP_UNUSABLE_INDEXES  = {YES | NO}
| STATUS = [integer]
| STREAMS_CONFIGURATION  = {YES | NO}
| TABLE_EXISTS_ACTION = {SKIP | APPEND | TRUNCATE | REPLACE}
| TARGET_EDITION = target_edition_name
| TRANSFORM = {SEGMENT_ATTRIBUTES | STORAGE | OID | PARTITION | PCTSPACE}:value[:object_type]
| VERSION = {COMPATIBLE | LATEST | version_string}
| ImpDiagnostics
]
PK.h PKN:AOEBPS/img_text/impmodes.htm Description of the illustration impmodes.eps
impmodes ::= 
 
[ FULL = {YES | NO}
| SCHEMAS = schema_name [,...]
| TABLES = [schema_name.]table_name[:partition_name] [,...]
| TABLESPACES = tablespace_name [, ...]
PK{6:PKN:AOEBPS/img_text/impremap.htm[ Description of the illustration impremap.eps
impremap ::= 
 
[ 
REMAP_DATA = [[schema.]table.column : [schema.]pkg.function] [,...]
| REMAP_DATAFILE = source_datafile:target_datafile [,...]
| REMAP_SCHEMA = source_schema:target_schema [,...]
| REMAP_TABLE = [schema_name.]old_table_name[:partition]:new_table_name [,...]
| REMAP_TABLESPACE = source_tablespace:target_tablespace [,...]
]
PKekPKN:A#OEBPS/img_text/et_datatype_spec.htmb Description of the illustration et_datatype_spec.eps
et_datatype_spec ::=
{
[UNSIGNED] INTEGER [EXTERNAL] [(len)] [delim_spec]                          
|{ DECIMAL | ZONED } [EXTERNAL] [(len)] [delim_spec] (precision [ , scale]) 
|ORACLE_DATE                                                    
|ORACLE_NUMBER [COUNTED]                                      
|FLOAT [EXTERNAL] [(len)] [delim_spec] 
|DOUBLE 
|BINARY_FLOAT [EXTERNAL] [(len)] [delim_spec] 
|BINARY_DOUBLE                           
|RAW [(len)]                                                 
|CHAR [(len)] [delim_spec] [trim_spec] [date_format_spec]       
|{ VARCHAR | VARRAW | VARCHARC | VARRAWC } ([length_of_length ,] max_len)
}
PKtrgbPKN:AOEBPS/img_text/sut81006.htm8 Description of the illustration sut81006.eps

The figure in sut81006.gif shows a representation of a field that contains 5 blanks, followed by the letters aaaa, and terminated by ','.

The next field begins immediately after the delimiter. It contains 4 blanks followed by the letters bbbb, and is terminated by ','.

PK{1PKN:A#OEBPS/img_text/et_fields_clause.htm2 Description of the illustration et_fields_clause.eps
et_fields_clause ::=
 
 FIELDS
 [IGNORE_CHARS_AFTER_EOR]
 [delim_spec]
 [trim_spec]
 [MISSING FIELD VALUES ARE NULL]
 [REJECT ROWS WITH ALL NULL FIELDS]
 [field_list]
PKPKN:AOEBPS/img_text/expstart.htmp Description of the illustration expstart.eps
expstart ::= 

 ExpModes ExpOpts ExpFileOpts
| ATTACH [= [schema_name.]job_name] [ENCRYPTION_PASSWORD=password]
PK8upPKN:AOEBPS/img_text/byteorder.html Description of the illustration byteorder.eps
byteorder ::=
 
  [BYTEORDER  {BIG | LITTLE} [ENDIAN]]
PKuAqlPKN:A OEBPS/img_text/et_delim_spec.htmh Description of the illustration et_delim_spec.eps
et_delim_spec ::=
 
{
 ENCLOSED BY string [AND string] 
 | TERMINATED BY { string | WHITESPACE } 
[[OPTIONALLY] ENCLOSED BY string [AND string]]
}
PKmhPKN:AOEBPS/img_text/coll_fld.htm* Description of the illustration coll_fld.eps
coll_fld ::= 
 
  { nested_table_spec | [BOUNDFILLER] varray_spec }
Description of the illustration decision_tree.eps

This is a text description of decision_tree.gif, which shows a flow chart that is designed to help you decide on the type of LogMiner dictionary to use. The flow chart indicates that if your answer to each of the following questions is yes, then you should use the LogMiner dictionary in the online catalog:

  • Will LogMiner have access to the source database?

  • Will column definitions be unchanged?

  • Will the database be open?

If your answer to the first or second of the preceding questions is no, then the flow chart poses these questions:

  • Will the database be open for write access?

  • Might column definitions change?

If your answer to both of these questions is yes, then the flow chart indicates that you should use the dictionary in the redo log files. If the answer to either of these questions is no, or if the answer to the question "Will the database be open" is no, then the flow chart poses this question:

Will the instance be started?

If the answer to this question is yes, then you should use the dictionary extracted to a flat file.

If the answer to this question is no, then you cannot use LogMiner.

PK(+PKN:AOEBPS/img_text/filler_fld.htmm Description of the illustration filler_fld.eps
filler_fld ::= 
 
 { FILLER | BOUNDFILLER } [pos_spec] [datatype_spec] [PIECED] 
PKcrmPKN:AOEBPS/img_text/expremap.htm? Description of the illustration expremap.eps
expremap ::= 
 
[ 
REMAP_DATA = [[schema.]table.column : [schema.]pkg.function] [,...]
]
PK}vD?PKN:A!OEBPS/img_text/load_statement.htmV Description of the illustration load_statement.eps
load_statement ::=
 
  [{ UNRECOVERABLE | RECOVERABLE }] { LOAD | CONTINUE_LOAD } [DATA] [CHARACTERSET char_set_name]
[LENGTH [SEMANTICS] { BYTE | CHAR | CHARACTER }] [BYTEORDER { BIG | LITTLE } [ENDIAN]]
[BYTEORDERMARK { CHECK | NOCHECK }]
[infile_clause] [, infile_clause]... [READSIZE size] [READBUFFERS integer] 
[{ INSERT | APPEND | REPLACE | TRUNCATE }] 
[concatenate_clause][PRESERVE BLANKS] into_table_clause [, into_table_clause]... [BEGINDATA]
PKْ[VPKN:AOEBPS/img_text/impstart.htmp Description of the illustration impstart.eps
impstart ::= 
 
ImpModes ImpOpts ImpFileOpts
| ATTACH [= [schema_name.]job_name] [ENCRYPTION_PASSWORD=password]
PK˜upPKN:AOEBPS/img_text/sut81018.htm5 Description of the illustration sut81018.eps

The figure in sut81018.gif illustrates the concepts described in this section about how conventional path and direct path operate.

PKY:5PKN:AOEBPS/img_text/expfilter.htm Description of the illustration expfilter.eps
expfilter ::= 
 
[ 
| EXCLUDE = object_type[:name_clause] [,...]
| INCLUDE = object_type[:name_clause] [,...]
| QUERY = [[schema_name.]table_name:] query_clause [,...]
| SAMPLE = [[schema_name.]table_name:] sample_percent [,...]
]
PK™ܻ!PKN:AOEBPS/img_text/into_table4.htmp Description of the illustration into_table4.eps
into_table4 ::= 
 
  [ [EVALUATE CHECK_CONSTRAINTS] [REENABLE] [DISABLED_CONSTRAINTS] ]
PKᵌ)PKN:AOEBPS/img_text/ref.htmk Description of the illustration ref.eps
ref ::= 
 
  REF ( { fieldname | CONSTANT val } [, { fieldname | CONSTANT val }]... )
PKDAFpkPKN:AOEBPS/img_text/nested_table.html Description of the illustration nested_table.eps
nested_table ::= 
 
  NESTED TABLE (SDF_spec count_spec init_spec  | count_spec field_list |delim_spec)
PKΕqlPKN:AOEBPS/img_text/impfileopts.htm& Description of the illustration impfileopts.eps
impfileopts ::= 
 
[ DIRECTORY = directory_object
| NETWORK_LINK = database_link ImpNetworkOpts
| DUMPFILE   = [directory_object:]file_name [,[directory_object:]file_name] [,...]
| LOGFILE = [directory_object:]file_name [,[directory_object:]file_name] [,...] 
| NOLOGFILE= {YES | NO}
| PARFILE = [directory_path]file_name
| SQLFILE = [directory_object:]file_name
]
PKO^E+&PKN:AOEBPS/img_text/char.htm  Description of the illustration char.eps
char ::=
 
CHAR [(length)] [delim_spec]
PK%3 PKN:AOEBPS/img_text/bfile.html Description of the illustration bfile.eps
bfile ::= 
 
  BFILE ( { fieldname | CONSTANT val } , { fieldname | CONSTANT val } )
PKwqlPKN:AOEBPS/img_text/dbverify.htm, Description of the illustration dbverify.eps
dbverify ::= 
 
  dbv [ USERID=username/password ]
    FILE = filename
  | { START = block_address | END = block_address }
  | BLOCKSIZE = integer
  | HIGH_SCN = integer
  | LOGFILE = filename
  | FEEDBACK = integer
  | HELP  = { Y | N } 
  | PARFILE = filename
  
End of description.
PKptPKN:AOEBPS/img_text/impfilter.htm Description of the illustration impfilter.eps
impfilter ::= 
 
[ 
| EXCLUDE = object_type[:name_clause] [,object_type[:name_clause]] [,...]
| INCLUDE = object_type[:name_clause] [,object_type[:name_clause]] [,...]
| QUERY = [[schema_name.]table_name:]query_clause
]
PK)}PKN:AOEBPS/img_text/parallel.htm Description of the illustration parallel.eps
parallel ::=
 
PARALLEL = { true | false }
PK嗫PKN:AOEBPS/img_text/lobfile_spec.htm) Description of the illustration lobfile_spec.eps
lobfile_spec ::= 
 
  LOBFILE ({ fieldname | CONSTANT filename } [CHARACTERSET name]
 [LENGTH [SEMANTICS] { BYTE | CHAR | CHARACTER }] [BYTEORDER { BIG | LITTLE } [ENDIAN]]
[BYTEORDERMARK { CHECK | NOCHECK }] )
PKϾ.)PKN:AOEBPS/img_text/options.htm Description of the illustration options.eps
options ::=
  
  OPTIONS (options)
Description of the illustration et_transform.eps
et_transform ::=
 
 column_name FROM 
 {
  NULL |
 
  CONSTANT string |
  
  CONCAT (field_name | CONSTANT string ,) ... |             
               
  LOBFILE (fieldname | CONSTANT string: ,) ...
                 
   [lobfile_attr_list] 
 }
PKVzPKN:AOEBPS/img_text/et_condition.htmR Description of the illustration et_condition.eps
et_condition ::=
 
{
  { FIELDNAME |  (range start : range end) }
  { operator } 
  { string | hexstring | BLANKS  } 
 |
  { FIELDNAME |  (range start : range end) } 
  { operator } 
  { string | hexstring | BLANKS  } 
}
Other Utilities

Part IV

Other Utilities

This part contains the following chapters:

Chapter 16, "ADRCI: ADR Command Interpreter"

This chapter describes the Automatic Diagnostic Repository Command Interpreter (ADRCI), a command-line tool used to manage Oracle Database diagnostic data.

Chapter 17, "DBVERIFY: Offline Database Verification Utility"

This chapter describes how to use the offline database verification utility, DBVERIFY.

Chapter 18, "DBNEWID Utility"

This chapter describes how to use the DBNEWID utility to change the name or ID, or both, for a database.

Chapter 19, "Using LogMiner to Analyze Redo Log Files"

This chapter describes the Oracle LogMiner utility, which enables you to query redo logs through a SQL interface.

Chapter 20, "Using the Metadata APIs"

This chapter describes the Metadata API, which you can use to extract and manipulate complete representations of the metadata for database objects.

Chapter 21, "Original Export"

This chapter describes how to use the original Export utility to write data from an Oracle database into dump files for use by the original Import utility.

Chapter 22, "Original Import"

This chapter describes how to use the original Import utility to import dump files created by the original Export utility.

Overview of Oracle Data Pump

1 Overview of Oracle Data Pump

Oracle Data Pump technology enables very high-speed movement of data and metadata from one database to another.

This chapter discusses the following topics:

Data Pump Components

Oracle Data Pump is made up of three distinct parts:

  • The command-line clients, expdp and impdp

  • The DBMS_DATAPUMP PL/SQL package (also known as the Data Pump API)

  • The DBMS_METADATA PL/SQL package (also known as the Metadata API)

The Data Pump clients, expdp and impdp, invoke the Data Pump Export utility and Data Pump Import utility, respectively.

The expdp and impdp clients use the procedures provided in the DBMS_DATAPUMP PL/SQL package to execute export and import commands, using the parameters entered at the command line. These parameters enable the exporting and importing of data and metadata for a complete database or for subsets of a database.

When metadata is moved, Data Pump uses functionality provided by the DBMS_METADATA PL/SQL package. The DBMS_METADATA package provides a centralized facility for the extraction, manipulation, and re-creation of dictionary metadata.

The DBMS_DATAPUMP and DBMS_METADATA PL/SQL packages can be used independently of the Data Pump clients.
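
For example, a schema-mode export can be defined and run entirely from PL/SQL with DBMS_DATAPUMP, without invoking the expdp client. The following is a minimal sketch only; the directory object DPUMP_DIR1, the file names, and the HR schema are assumptions and must exist and be accessible in your database:

DECLARE
  hdl       NUMBER;
  job_state VARCHAR2(30);
BEGIN
  -- Create a handle for a schema-mode export job.
  hdl := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');

  -- Dump file and log file are written through a directory object on the server.
  DBMS_DATAPUMP.ADD_FILE(handle => hdl, filename => 'hr_exp.dmp',
                         directory => 'DPUMP_DIR1');
  DBMS_DATAPUMP.ADD_FILE(handle => hdl, filename => 'hr_exp.log',
                         directory => 'DPUMP_DIR1',
                         filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

  -- Restrict the job to the HR schema.
  DBMS_DATAPUMP.METADATA_FILTER(handle => hdl, name => 'SCHEMA_EXPR',
                                value  => 'IN (''HR'')');

  -- Start the job and wait for it to complete.
  DBMS_DATAPUMP.START_JOB(hdl);
  DBMS_DATAPUMP.WAIT_FOR_JOB(hdl, job_state);
  DBMS_OUTPUT.PUT_LINE('Export job finished in state: ' || job_state);
END;
/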


Note:

All Data Pump Export and Import processing, including the reading and writing of dump files, is done on the system (server) selected by the specified database connect string. This means that for unprivileged users, the database administrator (DBA) must create directory objects for the Data Pump files that are read and written on that server file system. (For security reasons, DBAs must ensure that only approved users are allowed access to directory objects.) For privileged users, a default directory object is available. See "Default Locations for Dump, Log, and SQL Files" for more information about directory objects.


See Also:


How Does Data Pump Move Data?

For information about how Data Pump moves data in and out of databases, see the following sections:


Note:

Data Pump does not load tables with disabled unique indexes. To load data into the table, the indexes must be either dropped or reenabled.

The following sections briefly explain how and when each of these data movement mechanisms is used.

Using Data File Copying to Move Data

The fastest method of moving data is to copy the database data files to the target database without interpreting or altering the data. With this method, Data Pump Export is used to unload only structural information (metadata) into the dump file. This method is used in the following situations:

  • The TRANSPORT_TABLESPACES parameter is used to specify a transportable mode export. Only metadata for the specified tablespaces is exported.

  • The TRANSPORTABLE=ALWAYS parameter is supplied on a table mode export (specified with the TABLES parameter). Only metadata for the tables, partitions, and subpartitions specified on the TABLES parameter is exported.

When an export operation uses data file copying, the corresponding import job always also uses data file copying. During the ensuing import operation, both the data files and the export dump file must be loaded.

When data is moved by using data file copying, the character sets must be identical on both the source and target databases.

In addition to copying the data, you may need to prepare it by using the Recovery Manager (RMAN) CONVERT command to perform some data conversions. You can do this at either the source or target database.
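
As a sketch of the command-line side (the directory object, dump file names, tablespace name, and table name are illustrative, and the tablespace must be made read-only before the export), a transportable tablespace export and a transportable table-mode export could be invoked as follows:

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp LOGFILE=tts.log
    TRANSPORT_TABLESPACES=tbs_1 TRANSPORT_FULL_CHECK=YES

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=emp_tts.dmp
    TABLES=hr.employees TRANSPORTABLE=ALWAYS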


See Also:


Using Direct Path to Move Data

After data file copying, direct path is the fastest method of moving data. In this method, the SQL layer of the database is bypassed and rows are moved to and from the dump file with only minimal interpretation. Data Pump automatically uses the direct path method for loading and unloading data when the structure of a table allows it. For example, if a table contains a column of type BFILE, then direct path cannot be used to load that table and external tables is used instead.

The following sections describe situations in which direct path cannot be used for loading and unloading:

Situations in Which Direct Path Load Is Not Used

If any of the following conditions exist for a table, then Data Pump uses external tables rather than direct path to load the data for that table:

  • A global index on multipartition tables exists during a single-partition load. This includes object tables that are partitioned.

  • A domain index exists for a LOB column.

  • A table is in a cluster.

  • There is an active trigger on a preexisting table.

  • Fine-grained access control is enabled in insert mode on a preexisting table.

  • A table contains BFILE columns or columns of opaque types.

  • A referential integrity constraint is present on a preexisting table.

  • A table contains VARRAY columns with an embedded opaque type.

  • The table has encrypted columns.

  • The table into which data is being imported is a preexisting table and at least one of the following conditions exists:

    • There is an active trigger

    • The table is partitioned

    • Fine-grained access control is in insert mode

    • A referential integrity constraint exists

    • A unique index exists

  • Supplemental logging is enabled and the table has at least one LOB column.

  • The Data Pump command for the specified table used the QUERY, SAMPLE, or REMAP_DATA parameter.

  • A table contains a column (including a VARRAY column) with a TIMESTAMP WITH TIME ZONE datatype and the version of the time zone data file is different between the export and import systems.

Situations in Which Direct Path Unload Is Not Used

If any of the following conditions exist for a table, then Data Pump uses the external table method to unload data, rather than the direct path method:

  • Fine-grained access control for SELECT is enabled.

  • The table is a queue table.

  • The table contains one or more columns of type BFILE or opaque, or an object type containing opaque columns.

  • The table contains encrypted columns.

  • The table contains a column of an evolved type that needs upgrading.

  • The table contains a column of type LONG or LONG RAW that is not last.

  • The Data Pump command for the specified table used the QUERY, SAMPLE, or REMAP_DATA parameter.

Using External Tables to Move Data

When data file copying is not selected and the data cannot be moved using direct path, the external table mechanism is used. The external table mechanism creates an external table that maps to the dump file data for the database table. The SQL engine is then used to move the data. If possible, the APPEND hint is used on import to speed the copying of the data into the database. The representation of data for direct path data and external table data is the same in a dump file. Therefore, Data Pump might use the direct path mechanism at export time, but use external tables when the data is imported into the target database. Similarly, Data Pump might use external tables for the export, but use direct path for the import.

In particular, Data Pump uses external tables in the following situations:

  • Loading and unloading very large tables and partitions in situations where it is advantageous to use parallel SQL capabilities

  • Loading tables with global or domain indexes defined on them, including partitioned object tables

  • Loading tables with active triggers or clustered tables

  • Loading and unloading tables with encrypted columns

  • Loading tables with fine-grained access control enabled for inserts

  • Loading tables that are partitioned differently at load time and unload time

  • Loading a table not created by the import operation (the table exists before the import starts)


Note:

When Data Pump uses external tables as the data access mechanism, it uses the ORACLE_DATAPUMP access driver. However, it is important to understand that the files that Data Pump creates when it uses external tables are not compatible with files created when you manually create an external table using the SQL CREATE TABLE ... ORGANIZATION EXTERNAL statement.


See Also:


Using Conventional Path to Move Data

In situations where there are conflicting table attributes, Data Pump is not able to load data into a table using either direct path or external tables. In such cases, conventional path is used, which can affect performance.

Using Network Link Import to Move Data

When the Import NETWORK_LINK parameter is used to specify a network link for an import operation, SQL is directly used to move the data using an INSERT SELECT statement. The SELECT clause retrieves the data from the remote database over the network link. The INSERT clause uses SQL to insert the data into the target database. There are no dump files involved.

When the Export NETWORK_LINK parameter is used to specify a network link for an export operation, the data from the remote database is written to dump files on the target database. (Note that to export from a read-only database, the NETWORK_LINK parameter is required.)

Because the link can identify a remotely networked database, the terms database link and network link are used interchangeably.

Because reading over a network is generally slower than reading from a disk, network link is the slowest of the four access methods used by Data Pump and may be undesirable for very large jobs.

Supported Link Types

The following types of database links are supported for use with Data Pump Export and Import:

  • Public (both public and shared)

  • Fixed user

  • Connected user

Unsupported Link Types

The database link type, Current User, is not supported for use with Data Pump Export or Import.


Required Roles for Data Pump Export and Import Operations

Many Data Pump Export and Import operations require the user to have the DATAPUMP_EXP_FULL_DATABASE role and/or the DATAPUMP_IMP_FULL_DATABASE role. These roles are automatically defined for Oracle databases when you run the standard scripts that are part of database creation. (Note that although the names of these roles contain the word FULL, these roles are actually required for all export and import modes, not only Full mode.)

The DATAPUMP_EXP_FULL_DATABASE role affects only export operations. The DATAPUMP_IMP_FULL_DATABASE role affects import operations and operations that use the Import SQLFILE parameter. These roles allow users performing exports and imports to do the following:

  • Perform the operation outside the scope of their schema

  • Monitor jobs that were initiated by another user

  • Export objects (such as tablespace definitions) and import objects (such as directory definitions) that unprivileged users cannot reference

These are powerful roles. Database administrators should use caution when granting these roles to users.
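
For example, a DBA might grant both roles to a designated administrative user as follows (the user name dp_admin is hypothetical):

SQL> GRANT DATAPUMP_EXP_FULL_DATABASE, DATAPUMP_IMP_FULL_DATABASE TO dp_admin;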

Although the SYS schema does not have either of these roles assigned to it, all security checks performed by Data Pump that require these roles also grant access to the SYS schema.


See Also:

Oracle Database Security Guide for more information about predefined roles in an Oracle Database installation

What Happens During Execution of a Data Pump Job?

Data Pump jobs use a master table, a master process, and worker processes to perform the work and keep track of progress.

Coordination of a Job

For every Data Pump Export job and Data Pump Import job, a master process is created. The master process controls the entire job, including communicating with the clients, creating and controlling a pool of worker processes, and performing logging operations.

Tracking Progress Within a Job

While the data and metadata are being transferred, a master table is used to track the progress within a job. The master table is implemented as a user table within the database. The specific function of the master table for export and import jobs is as follows:

  • For export jobs, the master table records the location of database objects within a dump file set. Export builds and maintains the master table for the duration of the job. At the end of an export job, the content of the master table is written to a file in the dump file set.

  • For import jobs, the master table is loaded from the dump file set and is used to control the sequence of operations for locating objects that need to be imported into the target database.

The master table is created in the schema of the current user performing the export or import operation. Therefore, that user must have the CREATE TABLE system privilege and a sufficient tablespace quota for creation of the master table. The name of the master table is the same as the name of the job that created it. Therefore, you cannot explicitly give a Data Pump job the same name as a preexisting table or view.

For all operations, the information in the master table is used to restart a job.

The master table is either retained or dropped, depending on the circumstances, as follows:

  • Upon successful job completion, the master table is dropped.

  • If a job is stopped using the STOP_JOB interactive command, then the master table is retained for use in restarting the job.

  • If a job is killed using the KILL_JOB interactive command, then the master table is dropped and the job cannot be restarted.

  • If a job terminates unexpectedly, then the master table is retained. You can delete it if you do not intend to restart the job.

  • If a job stops before it starts running (that is, before any database objects have been copied), then the master table is dropped.
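
For example, assuming a stopped export job named HR_EXPORT (a hypothetical job name), you could reattach to the job and restart it from interactive-command mode:

> expdp hr ATTACH=hr_export
Export> START_JOB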


See Also:

"JOB_NAME" for more information about how job names are formed

Filtering Data and Metadata During a Job

Within the master table, specific objects are assigned attributes such as name or owning schema. Objects also belong to a class of objects (such as TABLE, INDEX, or DIRECTORY). The class of an object is called its object type. You can use the EXCLUDE and INCLUDE parameters to restrict the types of objects that are exported and imported. The objects can be based upon the name of the object or the name of the schema that owns the object. You can also specify data-specific filters to restrict the rows that are exported and imported.
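
For example, the following command (a sketch; the directory object and file name are hypothetical) exports the hr schema while excluding all index and statistics metadata:

> expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_filtered.dmp EXCLUDE=INDEX EXCLUDE=STATISTICS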

Transforming Metadata During a Job

When you are moving data from one database to another, it is often useful to perform transformations on the metadata for remapping storage between tablespaces or redefining the owner of a particular set of objects. This is done using the following Data Pump Import parameters: REMAP_DATAFILE, REMAP_SCHEMA, REMAP_TABLE, REMAP_TABLESPACE, TRANSFORM, and PARTITION_OPTIONS.
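
For example, the following import (a sketch; the schema, tablespace, and file names are hypothetical) re-creates the exported hr objects under the hr_test schema and moves them from the users tablespace to the example tablespace:

> impdp system DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp REMAP_SCHEMA=hr:hr_test REMAP_TABLESPACE=users:example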

Maximizing Job Performance

Data Pump can employ multiple worker processes, running in parallel, to increase job performance. Use the PARALLEL parameter to set a degree of parallelism that takes maximum advantage of current conditions. For example, to limit the effect of a job on a production system, the database administrator (DBA) might want to restrict the parallelism. The degree of parallelism can be reset at any time during a job. For example, PARALLEL could be set to 2 during production hours to restrict a particular job to only two degrees of parallelism, and during nonproduction hours it could be reset to 8. The parallelism setting is enforced by the master process, which allocates work to be executed to worker processes that perform the data and metadata processing within an operation. These worker processes operate in parallel. In general, the degree of parallelism should be set to no more than twice the number of CPUs on an instance.
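
For example, you might start a job with a low degree of parallelism during production hours and raise it later from interactive-command mode (the job and file names shown are hypothetical):

> expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr%U.dmp PARALLEL=2 JOB_NAME=hr_export

Export> PARALLEL=8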


Note:

The ability to adjust the degree of parallelism is available only in the Enterprise Edition of Oracle Database.

Loading and Unloading of Data

The worker processes unload and load metadata and table data. During import they also rebuild indexes. Some of these operations may be done in parallel: unloading and loading table data, rebuilding indexes, and loading package bodies. Worker processes are created as needed until the number of worker processes equals the value supplied for the PARALLEL command-line parameter. The number of active worker processes can be reset throughout the life of a job. Worker processes can be started on different nodes in an Oracle Real Application Clusters (Oracle RAC) environment.


Note:

The value of PARALLEL is restricted to 1 in the Standard Edition of Oracle Database.

When a worker process is assigned the task of loading or unloading a very large table or partition, it may choose to use the external tables access method to make maximum use of parallel execution. In such a case, the worker process becomes a parallel execution coordinator. The actual loading and unloading work is divided among some number of parallel I/O execution processes (sometimes called slaves) allocated from a pool of available processes in an Oracle RAC environment.


Monitoring Job Status

The Data Pump Export and Import utilities can attach to a job in either logging mode or interactive-command mode.

In logging mode, real-time detailed status about the job is automatically displayed during job execution. The information displayed can include the job and parameter descriptions, an estimate of the amount of data to be exported, a description of the current operation or item being processed, files used during the job, any errors encountered, and the final job state (Stopped or Completed).


See Also:

  • The Export STATUS parameter for information about changing the frequency of the status display in command-line Export

  • The Import STATUS parameter for information about changing the frequency of the status display in command-line Import


In interactive-command mode, job status can be displayed on request. The information displayed can include the job description and state, a description of the current operation or item being processed, files being written, and a cumulative status.


See Also:

  • The interactive Export STATUS command

  • The interactive Import STATUS command


A log file can also be optionally written during the execution of a job. The log file summarizes the progress of the job, lists any errors that were encountered during execution of the job, and records the completion status of the job.


See Also:

  • The Export LOGFILE parameter for information on how to set the file specification for an export log file

  • The Import LOGFILE parameter for information on how to set the file specification for an import log file


An alternative way to determine job status or to get other information about Data Pump jobs is to query the DBA_DATAPUMP_JOBS, USER_DATAPUMP_JOBS, or DBA_DATAPUMP_SESSIONS views. See Oracle Database Reference for descriptions of these views.
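
For example, a query such as the following lists the owner, name, operation, and current state of the Data Pump jobs visible to you:

SQL> SELECT owner_name, job_name, operation, state
  2  FROM dba_datapump_jobs;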

Monitoring the Progress of Executing Jobs

Data Pump operations that transfer table data (export and import) maintain an entry in the V$SESSION_LONGOPS dynamic performance view indicating the job progress (in megabytes of table data transferred). The entry contains the estimated transfer size and is periodically updated to reflect the actual amount of data transferred.

Use of the COMPRESSION, ENCRYPTION, ENCRYPTION_ALGORITHM, ENCRYPTION_MODE, ENCRYPTION_PASSWORD, QUERY, and REMAP_DATA parameters is not reflected in the determination of estimate values.

The usefulness of the estimate value for export operations depends on the type of estimation requested when the operation was initiated, and it is updated as required if exceeded by the actual transfer amount. The estimate value for import operations is exact.

The V$SESSION_LONGOPS columns that are relevant to a Data Pump job are as follows:

  • USERNAME - job owner

  • OPNAME - job name

  • TARGET_DESC - job operation

  • SOFAR - megabytes transferred thus far during the job

  • TOTALWORK - estimated number of megabytes in the job

  • UNITS - megabytes (MB)

  • MESSAGE - a formatted status message of the form:

    'job_name: operation_name : nnn out of mmm MB done'
    
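For example, the following query (a sketch; HR_EXPORT is a hypothetical job name) reports the progress of a running job:

SQL> SELECT sofar, totalwork, units, message
  2  FROM v$session_longops
  3  WHERE opname = 'HR_EXPORT' AND sofar <> totalwork;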

File Allocation

Data Pump jobs manage the following types of files:

  • Dump files to contain the data and metadata that is being moved.

  • Log files to record the messages associated with an operation.

  • SQL files to record the output of a SQLFILE operation. A SQLFILE operation is invoked using the Data Pump Import SQLFILE parameter; it writes to a SQL file all of the SQL DDL that Import would otherwise execute, based on the other parameters specified for the job.

  • Files specified by the DATA_FILES parameter during a transportable import.

An understanding of how Data Pump allocates and handles these files will help you to use Export and Import to their fullest advantage.

Specifying Files and Adding Additional Dump Files

For export operations, you can specify dump files at the time the job is defined, and also at a later time during the operation. For example, if you discover that space is running low during an export operation, then you can add additional dump files by using the Data Pump Export ADD_FILE command in interactive mode.

For import operations, all dump files must be specified at the time the job is defined.

Log files and SQL files overwrite previously existing files. For dump files, you can use the Export REUSE_DUMPFILES parameter to specify whether to overwrite a preexisting dump file.
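
For example, the following export (the file and directory names are hypothetical) overwrites the dump file emp.dmp if it already exists:

> expdp hr TABLES=employees DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp REUSE_DUMPFILES=YES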

Default Locations for Dump, Log, and SQL Files

Because Data Pump is server-based rather than client-based, dump files, log files, and SQL files are accessed relative to server-based directory paths. Data Pump requires that directory paths be specified as directory objects. A directory object maps a name to a directory path on the file system. DBAs must ensure that only approved users are allowed access to the directory object associated with the directory path.

The following example shows a SQL statement that creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

The reason that a directory object is required is to ensure data security and integrity. For example:

  • If you were allowed to specify a directory path location for an input file, then you might be able to read data that the server has access to but to which you should not have access.

  • If you were allowed to specify a directory path location for an output file, then the server might overwrite a file that you might not normally have privileges to delete.

On UNIX and Windows NT systems, a default directory object, DATA_PUMP_DIR, is created at database creation or whenever the database dictionary is upgraded. By default, it is available only to privileged users. (The user SYSTEM has read and write access to the DATA_PUMP_DIR directory, by default.)

If you are not a privileged user, then before you can run Data Pump Export or Data Pump Import, a directory object must be created by a database administrator (DBA) or by any user with the CREATE ANY DIRECTORY privilege.

After a directory is created, the user creating the directory object must grant READ or WRITE permission on the directory to other users. For example, to allow the Oracle database to read and write files on behalf of user hr in the directory named by dpump_dir1, the DBA must execute the following command:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO hr;

Note that READ or WRITE permission to a directory object only means that the Oracle database reads or writes that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories.

Data Pump Export and Import use the following order of precedence to determine a file's location:

  1. If a directory object is specified as part of the file specification, then the location specified by that directory object is used. (The directory object must be separated from the file name by a colon.)

  2. If a directory object is not specified as part of the file specification, then the directory object named by the DIRECTORY parameter is used.

  3. If a directory object is not specified as part of the file specification, and if no directory object is named by the DIRECTORY parameter, then the value of the environment variable, DATA_PUMP_DIR, is used. This environment variable is defined using operating system commands on the client system where the Data Pump Export and Import utilities are run. The value assigned to this client-based environment variable must be the name of a server-based directory object, which must first be created on the server system by a DBA. For example, the following SQL statement creates a directory object on the server system. The name of the directory object is DUMP_FILES1, and it is located at '/usr/apps/dumpfiles1'.

    SQL> CREATE DIRECTORY DUMP_FILES1 AS '/usr/apps/dumpfiles1';
    

    Then, a user on a UNIX-based client system using csh can assign the value DUMP_FILES1 to the environment variable DATA_PUMP_DIR. The DIRECTORY parameter can then be omitted from the command line. The dump file employees.dmp and the log file export.log are written to '/usr/apps/dumpfiles1'.

    %setenv DATA_PUMP_DIR DUMP_FILES1
    %expdp hr TABLES=employees DUMPFILE=employees.dmp
    
  4. If none of the previous three conditions yields a directory object and you are a privileged user, then Data Pump attempts to use the value of the default server-based directory object, DATA_PUMP_DIR. This directory object is automatically created at database creation or when the database dictionary is upgraded. You can use the following SQL query to see the path definition for DATA_PUMP_DIR:

    SQL> SELECT directory_name, directory_path FROM dba_directories
    2 WHERE directory_name='DATA_PUMP_DIR';
    

    If you are not a privileged user, then access to the DATA_PUMP_DIR directory object must have previously been granted to you by a DBA.

    Do not confuse the default DATA_PUMP_DIR directory object with the client-based environment variable of the same name.

Oracle RAC Considerations

Keep the following considerations in mind when working in an Oracle RAC environment.

  • To use Data Pump or external tables in an Oracle RAC configuration, you must ensure that the directory object path is on a cluster-wide file system.

    The directory object must point to shared physical storage that is visible to, and accessible from, all instances where Data Pump and/or external tables processes may run.

  • The default Data Pump behavior is that worker processes can run on any instance in an Oracle RAC configuration. Therefore, workers on those Oracle RAC instances must have physical access to the location defined by the directory object, such as shared storage media. If the configuration does not have shared storage for this purpose, but you still require parallelism, then you can use the CLUSTER=no parameter to constrain all worker processes to the instance where the Data Pump job was started.

  • Under certain circumstances, Data Pump uses parallel query slaves to load or unload data. In an Oracle RAC environment, Data Pump does not control where these slaves run, and they may run on other instances in the Oracle RAC, regardless of what is specified for CLUSTER and SERVICE_NAME for the Data Pump job. Controls for parallel query operations are independent of Data Pump. When parallel query slaves run on other instances as part of a Data Pump job, they also require access to the physical storage of the dump file set.
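
For example, to constrain all worker processes to the instance where the job was started while still running in parallel, you could specify a command along the following lines (the names are hypothetical):

> expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr%U.dmp PARALLEL=4 CLUSTER=NO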

Using Directory Objects When Oracle Automatic Storage Management Is Enabled

If you use Data Pump Export or Import with Oracle Automatic Storage Management (Oracle ASM) enabled, then you must define the directory object used for the dump file so that the Oracle ASM disk group name is used (instead of an operating system directory path). A separate directory object, which points to an operating system directory path, should be used for the log file. For example, you would create a directory object for the Oracle ASM dump file as follows:

SQL> CREATE or REPLACE DIRECTORY dpump_dir as '+DATAFILES/';

Then you would create a separate directory object for the log file:

SQL> CREATE or REPLACE DIRECTORY dpump_log as '/homedir/user1/';

To enable user hr to have access to these directory objects, you would assign the necessary privileges, for example:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir TO hr;
SQL> GRANT READ, WRITE ON DIRECTORY dpump_log TO hr;

You would then use the following Data Pump Export command (you will be prompted for a password):

> expdp hr DIRECTORY=dpump_dir DUMPFILE=hr.dmp LOGFILE=dpump_log:hr.log

Using Substitution Variables

Instead of, or in addition to, listing specific file names, you can use the DUMPFILE parameter during export operations to specify multiple dump files, by using a substitution variable (%U) in the file name. This is called a dump file template. The new dump files are created as they are needed, beginning with 01 for %U, then using 02, 03, and so on. Enough dump files are created to allow all processes specified by the current setting of the PARALLEL parameter to be active. If one of the dump files becomes full because its size has reached the maximum size specified by the FILESIZE parameter, then it is closed and a new dump file (with a new generated name) is created to take its place.

If multiple dump file templates are provided, they are used to generate dump files in a round-robin fashion. For example, if expa%U, expb%U, and expc%U were all specified for a job having a parallelism of 6, then the initial dump files created would be expa01.dmp, expb01.dmp, expc01.dmp, expa02.dmp, expb02.dmp, and expc02.dmp.
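
A command along the following lines (a sketch; the directory object and file names are hypothetical) would produce the dump files described in the preceding example:

> expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=expa%U.dmp,expb%U.dmp,expc%U.dmp PARALLEL=6 FILESIZE=2GB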

For import and SQLFILE operations, if dump file specifications expa%U, expb%U, and expc%U are specified, then the operation begins by attempting to open the dump files expa01.dmp, expb01.dmp, and expc01.dmp. It is possible for the master table to span multiple dump files, so until all pieces of the master table are found, dump files continue to be opened by incrementing the substitution variable and looking up the new file names (for example, expa02.dmp, expb02.dmp, and expc02.dmp). If a dump file does not exist, then the operation stops incrementing the substitution variable for the dump file specification that was in error. For example, if expb01.dmp and expb02.dmp are found but expb03.dmp is not found, then no more files are searched for using the expb%U specification. Once the entire master table is found, it is used to determine whether all dump files in the dump file set have been located.

Moving Data Between Different Database Releases

Because most Data Pump operations are performed on the server side, if you are targeting a database release other than the one identified by the COMPATIBLE database initialization parameter, then you must provide the server with the specific release information. Otherwise, errors may occur. To specify release information, use the VERSION parameter.
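
For example, to create a dump file set that can be imported into an Oracle Database 10g release 2 (10.2) database, you might specify the following (the file name is hypothetical):

> expdp hr TABLES=employees VERSION=10.2 DIRECTORY=dpump_dir1 DUMPFILE=emp_102.dmp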


Keep the following information in mind when you are using Data Pump Export and Import to move data between different database releases:

  • If you specify a database release that is older than the current database release, then certain features may be unavailable. For example, specifying VERSION=10.1 causes an error if data compression is also specified for the job because compression was not supported in Oracle Database 10g release 1 (10.1).

  • On a Data Pump export, if you specify a database release that is older than the current database release, then a dump file set is created that you can import into that older release of the database. However, the dump file set does not contain any objects that the older database release does not support.

  • Data Pump Import can always read dump file sets created by older releases of the database.

  • Data Pump Import cannot read dump file sets created by a database release that is newer than the current database release, unless those dump file sets were created with the VERSION parameter set to the release of the target database. Therefore, the best way to perform a downgrade is to perform your Data Pump export with the VERSION parameter set to the release of the target database.

  • When operating across a network link, Data Pump requires that the source and target databases differ by no more than one version. For example, if one database is Oracle Database 11g, then the other database must be either 11g or 10g. Note that Data Pump checks only the major version number (for example, 10g and 11g), not specific release numbers (for example, 10.1, 10.2, 11.1, or 11.2).

SecureFiles LOB Considerations

When you use Data Pump Export to export SecureFiles LOBs, the resulting behavior depends on several things, including the value of the Export VERSION parameter, whether ContentType is present, and whether the LOB is archived and data is cached. The following scenarios cover different combinations of these variables:

  • If a table contains SecureFiles LOBs with ContentType and the Export VERSION parameter is set to a value earlier than 11.2.0.0.0, then the ContentType is not exported.

  • If a table contains SecureFiles LOBs with ContentType and the Export VERSION parameter is set to a value of 11.2.0.0.0 or later, then the ContentType is exported and restored on a subsequent import.

  • If a table contains a SecureFiles LOB that is currently archived and the data is cached, and the Export VERSION parameter is set to a value earlier than 11.2.0.0.0, then the SecureFiles LOB data is exported and the archive metadata is dropped. In this scenario, if VERSION is set to 11.1 or later, then the SecureFiles LOB becomes a vanilla SecureFiles LOB. But if VERSION is set to a value earlier than 11.1, then the SecureFiles LOB becomes a BasicFiles LOB.

  • If a table contains a SecureFiles LOB that is currently archived but the data is not cached, and the Export VERSION parameter is set to a value earlier than 11.2.0.0.0, then an ORA-45001 error is returned.

  • If a table contains a SecureFiles LOB that is currently archived and the data is cached, and the Export VERSION parameter is set to a value of 11.2.0.0.0 or later, then both the cached data and the archive metadata are exported.


See Also:

Oracle Database SecureFiles and Large Objects Developer's Guide for more information about SecureFiles

Data Pump Exit Codes

Oracle Data Pump provides the results of export and import operations immediately upon completion. In addition to recording the results in a log file, Data Pump may also report the outcome in a process exit code. This allows you to check the outcome of a Data Pump job from the command line or a script.
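
For example, on a UNIX system you could examine the exit code of the most recent command immediately after the job completes (a minimal sketch):

> expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp
> echo $?
0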

Table 1-1 describes the Data Pump exit codes for UNIX and Windows NT.

Table 1-1 Data Pump Exit Codes

Exit Code    Meaning

EX_SUCC 0

The export or import job completed successfully. No errors are displayed to the output device or recorded in the log file, if there is one.

EX_SUCC_ERR 5

The export or import job completed successfully but there were errors encountered during the job. The errors are displayed to the output device and recorded in the log file, if there is one.

EX_FAIL 1

The export or import job encountered one or more fatal errors, including the following:

  • Errors on the command line or in command syntax

  • Oracle database errors from which export or import cannot recover

  • Operating system errors (such as malloc)

  • Invalid parameter values that prevent the job from starting (for example, an invalid directory object specified in the DIRECTORY parameter)

A fatal error is displayed to the output device but may not be recorded in the log file. Whether it is recorded in the log file can depend on several factors, including:

  • Was a log file specified at the start of the job?

  • Did the processing of the job proceed far enough for a log file to be opened?



10 SQL*Loader Field List Reference

This chapter describes the field-list portion of the SQL*Loader control file. The following topics are discussed:

Field List Contents

The field-list portion of a SQL*Loader control file provides information about fields being loaded, such as position, datatype, conditions, and delimiters.

Example 10-1 shows the field list section of the sample control file that was introduced in Chapter 9.

Example 10-1 Field List Section of Sample Control File

.
.
.
1  (hiredate  SYSDATE,
2     deptno  POSITION(1:2)  INTEGER EXTERNAL(2)
              NULLIF deptno=BLANKS,
3       job   POSITION(7:14)  CHAR  TERMINATED BY WHITESPACE
              NULLIF job=BLANKS  "UPPER(:job)",
       mgr    POSITION(28:31) INTEGER EXTERNAL 
              TERMINATED BY WHITESPACE, NULLIF mgr=BLANKS,
       ename  POSITION(34:41) CHAR 
              TERMINATED BY WHITESPACE  "UPPER(:ename)",
       empno  POSITION(45) INTEGER EXTERNAL 
              TERMINATED BY WHITESPACE,
       sal    POSITION(51) CHAR  TERMINATED BY WHITESPACE
              "TO_NUMBER(:sal,'$99,999.99')",
4      comm   INTEGER EXTERNAL  ENCLOSED BY '(' AND '%'
              ":comm * 100"
    )

In this sample control file, the numbers that appear to the left would not appear in a real control file. They are keyed in this sample to the explanatory notes in the following list:

  1. SYSDATE sets the column to the current system date. See "Setting a Column to the Current Date".

  2. POSITION specifies the position of a data field. See "Specifying the Position of a Data Field".

    INTEGER EXTERNAL is the datatype for the field. See "Specifying the Datatype of a Data Field" and "Numeric EXTERNAL".

    The NULLIF clause is one of the clauses that can be used to specify field conditions. See "Using the WHEN, NULLIF, and DEFAULTIF Clauses".

    In this sample, the field is being compared to blanks, using the BLANKS parameter. See "Comparing Fields to BLANKS".

  3. The TERMINATED BY WHITESPACE clause is one of the delimiters it is possible to specify for a field. See "Specifying Delimiters".

  4. The ENCLOSED BY clause is another possible field delimiter. See "Specifying Delimiters".

Specifying the Position of a Data Field

To load data from the data file, SQL*Loader must know the length and location of the field. To specify the position of a field in the logical record, use the POSITION clause in the column specification. The position may either be stated explicitly or relative to the preceding field. Arguments to POSITION must be enclosed in parentheses. The start, end, and integer values are always in bytes, even if character-length semantics are used for a data file.

The syntax for the position specification (pos_spec) clause is as follows:

Description of pos_spec.gif follows
Description of the illustration pos_spec.gif

Table 10-1 describes the parameters for the position specification clause.

Table 10-1 Parameters for the Position Specification Clause

Parameter    Description

start

The starting column of the data field in the logical record. The first byte position in a logical record is 1.

end

The ending position of the data field in the logical record. Either start-end or start:end is acceptable. If you omit end, then the length of the field is derived from the datatype in the data file. Note that CHAR data specified without start or end, and without a length specification (CHAR(n)), is assumed to have a length of 1. If it is impossible to derive a length from the datatype, then an error message is issued.

*


Specifies that the data field follows immediately after the previous field. If you use * for the first data field in the control file, then that field is assumed to be at the beginning of the logical record. When you use * to specify position, the length of the field is derived from the datatype.

+integer

You can use an offset, specified as +integer, to offset the current field from the next position after the end of the previous field. A number of bytes, as specified by +integer, are skipped before reading the value for the current field.


You may omit POSITION entirely. If you do, then the position specification for the data field is the same as if POSITION(*) had been used.

Using POSITION with Data Containing Tabs

When you are determining field positions, be alert for tabs in the data file. Suppose you use the SQL*Loader advanced SQL string capabilities to load data from a formatted report. You would probably first look at a printed copy of the report, carefully measure all character positions, and then create your control file. In such a situation, it is highly likely that when you attempt to load the data, the load will fail with multiple "invalid number" and "missing field" errors.

These kinds of errors occur when the data contains tabs. When printed, each tab expands to consume several columns on the paper. In the data file, however, each tab is still only one character. As a result, when SQL*Loader reads the data file, the POSITION specifications are wrong.

To fix the problem, inspect the data file for tabs and adjust the POSITION specifications, or else use delimited fields.

Using POSITION with Multiple Table Loads

In a multiple table load, you specify multiple INTO TABLE clauses. When you specify POSITION(*) for the first column of the first table, the position is calculated relative to the beginning of the logical record. When you specify POSITION(*) for the first column of subsequent tables, the position is calculated relative to the last column of the last table loaded.

Thus, when a subsequent INTO TABLE clause begins, the position is not set to the beginning of the logical record automatically. This allows multiple INTO TABLE clauses to process different parts of the same physical record. For an example, see "Extracting Multiple Logical Records".

A logical record might contain data for one of two tables, but not both. In this case, you would reset POSITION. Instead of omitting the position specification or using POSITION(*+n) for the first field in the INTO TABLE clause, use POSITION(1) or POSITION(n).

Examples of Using POSITION

siteid  POSITION (*) SMALLINT 
siteloc POSITION (*) INTEGER 

If these were the first two column specifications, then siteid would begin in column 1, and siteloc would begin in the column immediately following.

ename  POSITION (1:20)  CHAR 
empno  POSITION (22-26) INTEGER EXTERNAL 
allow  POSITION (*+2)   INTEGER EXTERNAL TERMINATED BY "/" 

Column ename is character data in positions 1 through 20, followed by column empno, which is presumably numeric data in columns 22 through 26. Column allow is offset from the next position (27) after the end of empno by +2, so it starts in column 29 and continues until a slash is encountered.

Specifying Columns and Fields

You may load any number of a table's columns. Columns defined in the database, but not specified in the control file, are assigned null values.

A column specification is the name of the column, followed by a specification for the value to be put in that column. The list of columns is enclosed by parentheses and separated with commas as follows:

(columnspec,columnspec, ...)

Each column name (unless it is marked FILLER) must correspond to a column of the table named in the INTO TABLE clause. A column name must be enclosed in quotation marks if it is a SQL or SQL*Loader reserved word, contains special characters, or is case sensitive.

If the value is to be generated by SQL*Loader, then the specification includes the RECNUM, SEQUENCE, or CONSTANT parameter. See "Using SQL*Loader to Generate Data for Input".

If the column's value is read from the data file, then the data field that contains the column's value is specified. In this case, the column specification includes a column name that identifies a column in the database table, and a field specification that describes a field in a data record. The field specification includes position, datatype, null restrictions, and defaults.

It is not necessary to specify all attributes when loading column objects. Any missing attributes will be set to NULL.

Specifying Filler Fields

A filler field, specified by BOUNDFILLER or FILLER, is a data file mapped field that does not correspond to a database column. Filler fields are assigned values from the data fields to which they are mapped.

Keep the following in mind regarding filler fields:

  • The syntax for a filler field is the same as that for a column-based field, except that a filler field's name is followed by FILLER.

  • Filler fields have names but they are not loaded into the table.

  • Filler fields can be used as arguments to init_specs (for example, NULLIF and DEFAULTIF).

  • Filler fields can be used as arguments to directives (for example, SID, OID, REF, and BFILE).

    To avoid ambiguity, if a Filler field is referenced in a directive, such as BFILE, and that field is declared in the control file inside of a column object, then the field name must be qualified with the name of the column object. This is illustrated in the following example:

    LOAD DATA 
    INFILE * 
    INTO TABLE BFILE1O_TBL REPLACE 
    FIELDS TERMINATED BY ',' 
    ( 
       emp_number char, 
       emp_info_b column object 
       ( 
       bfile_name FILLER char(12), 
       emp_b BFILE(constant "SQLOP_DIR", emp_info_b.bfile_name) NULLIF 
      emp_info_b.bfile_name = 'NULL' 
       ) 
    ) 
    BEGINDATA 
    00001,bfile1.dat, 
    00002,bfile2.dat, 
    00003,bfile3.dat, 
    
  • Filler fields can be used in field condition specifications in NULLIF, DEFAULTIF, and WHEN clauses. However, they cannot be used in SQL strings.

  • Filler field specifications cannot contain a NULLIF or DEFAULTIF clause.

  • Filler fields are initialized to NULL if TRAILING NULLCOLS is specified and applicable. If another field references a nullified filler field, then an error is generated.

  • Filler fields can occur anyplace in the data file, including inside the field list for an object or inside the definition of a VARRAY.

  • SQL strings cannot be specified as part of a filler field specification, because no space is allocated for fillers in the bind array.


    Note:

    The information in this section also applies to specifying bound fillers by using BOUNDFILLER. The only exception is that with bound fillers, SQL strings can be specified as part of the field, because space is allocated for them in the bind array.

A sample filler field specification looks as follows:

 field_1_count FILLER char,
 field_1 varray count(field_1_count)
 (
    filler_field1  char(2),
    field_1  column object
    (
      attr1 char(2),
      filler_field2  char(2),
      attr2 char(2),
    )
    filler_field3  char(3),
 )
 filler_field4 char(6)

Specifying the Datatype of a Data Field

The datatype specification of a field tells SQL*Loader how to interpret the data in the field. For example, a datatype of INTEGER specifies binary data, while INTEGER EXTERNAL specifies character data that represents a number. A CHAR field can contain any character data.

Only one datatype can be specified for each field; if a datatype is not specified, then CHAR is assumed.

"SQL*Loader Datatypes" describes how SQL*Loader datatypes are converted into Oracle datatypes and gives detailed information about each SQL*Loader datatype.

Before you specify the datatype, you must specify the position of the field.

SQL*Loader Datatypes

SQL*Loader datatypes can be grouped into portable and nonportable datatypes. Within each of these two groups, the datatypes are subgrouped into value datatypes and length-value datatypes.

Portable versus nonportable refers to whether the datatype is platform dependent. Platform dependency can exist for several reasons, including differences in the byte ordering schemes of different platforms (big-endian versus little-endian), differences in the number of bits in a platform (16-bit, 32-bit, 64-bit), differences in signed number representation schemes (2's complement versus 1's complement), and so on. In some cases, such as with byte ordering schemes and platform word length, SQL*Loader provides mechanisms to help overcome platform dependencies. These mechanisms are discussed in the descriptions of the appropriate datatypes.

Both portable and nonportable datatypes can be values or length-values. Value datatypes assume that a data field has a single part. Length-value datatypes require that the data field consist of two subfields where the length subfield specifies how long the value subfield can be.


See Also:

Chapter 11, "Loading Objects, LOBs, and Collections" for information about loading a variety of datatypes including column objects, object tables, REF columns, and LOBs (BLOBs, CLOBs, NCLOBs, and BFILEs)

Nonportable Datatypes

Nonportable datatypes are grouped into value datatypes and length-value datatypes. The nonportable value datatypes are as follows:

  • INTEGER(n)

  • SMALLINT

  • FLOAT

  • DOUBLE

  • BYTEINT

  • ZONED

  • (packed) DECIMAL

The nonportable length-value datatypes are as follows:

  • VARGRAPHIC

  • VARCHAR

  • VARRAW

  • LONG VARRAW

The syntax for the nonportable datatypes is shown in the syntax diagram for "datatype_spec".

INTEGER(n)

The data is a full-word binary integer, where n is an optionally supplied length of 1, 2, 4, or 8. If no length specification is given, then the length, in bytes, is based on the size of a LONG INT in the C programming language on your particular platform.

INTEGERs are not portable because their byte size, their byte order, and the representation of signed values may be different between systems. However, if the representation of signed values is the same between systems, then SQL*Loader may be able to access INTEGER data with correct results. If INTEGER is specified with a length specification (n), and the appropriate technique is used (if necessary) to indicate the byte order of the data, then SQL*Loader can access the data with correct results between systems. If INTEGER is specified without a length specification, then SQL*Loader can access the data with correct results only if the size of a LONG INT in the C programming language is the same length in bytes on both systems. In that case, the appropriate technique must still be used (if necessary) to indicate the byte order of the data.

Specifying an explicit length for binary integers is useful in situations where the input data was created on a platform whose word length differs from that on which SQL*Loader is running. For instance, input data containing binary integers might be created on a 64-bit platform and loaded into a database using SQL*Loader on a 32-bit platform. In this case, use INTEGER(8) to instruct SQL*Loader to process the integers as 8-byte quantities, not as 4-byte quantities.

By default, INTEGER is treated as a SIGNED quantity. If you want SQL*Loader to treat it as an unsigned quantity, then specify UNSIGNED. To return to the default behavior, specify SIGNED.

SMALLINT

The data is a half-word binary integer. The length of the field is the length of a half-word integer on your system. By default, it is treated as a SIGNED quantity. If you want SQL*Loader to treat it as an unsigned quantity, then specify UNSIGNED. To return to the default behavior, specify SIGNED.

SMALLINT can be loaded with correct results only between systems where a SHORT INT has the same length in bytes. If the byte order is different between the systems, then use the appropriate technique to indicate the byte order of the data. See "Byte Ordering".


Note:

This is the SHORT INT datatype in the C programming language. One way to determine its length is to make a small control file with no data and look at the resulting log file. This length cannot be overridden in the control file.

FLOAT

The data is a single-precision, floating-point, binary number. If you specify end in the POSITION clause, then end is ignored. The length of the field is the length of a single-precision, floating-point binary number on your system. (The datatype is FLOAT in C.) This length cannot be overridden in the control file.

FLOAT can be loaded with correct results only between systems where the representation of FLOAT is compatible and of the same length. If the byte order is different between the two systems, then use the appropriate technique to indicate the byte order of the data. See "Byte Ordering".

DOUBLE

The data is a double-precision, floating-point binary number. If you specify end in the POSITION clause, then end is ignored. The length of the field is the length of a double-precision, floating-point binary number on your system. (The datatype is DOUBLE or LONG FLOAT in C.) This length cannot be overridden in the control file.

DOUBLE can be loaded with correct results only between systems where the representation of DOUBLE is compatible and of the same length. If the byte order is different between the two systems, then use the appropriate technique to indicate the byte order of the data. See "Byte Ordering".

BYTEINT

The decimal value of the binary representation of the byte is loaded. For example, the input character x"1C" is loaded as 28. The length of a BYTEINT field is always 1 byte. If POSITION(start:end) is specified, then end is ignored. (The datatype is UNSIGNED CHAR in C.)

An example of the syntax for this datatype is:

(column1 position(1) BYTEINT, 
column2 BYTEINT, 
... 
) 

ZONED

ZONED data is in zoned decimal format: a string of decimal digits, one per byte, with the sign included in the last byte. (In COBOL, this is a SIGN TRAILING field.) The length of this field equals the precision (number of digits) that you specify.

The syntax for the ZONED datatype is:

Description of zoned.gif follows
Description of the illustration zoned.gif

In this syntax, precision is the number of digits in the number, and scale (if given) is the number of digits to the right of the (implied) decimal point. The following example specifies an 8-digit integer starting at position 32:

sal  POSITION(32)  ZONED(8), 
 

The Oracle database uses the VAX/VMS zoned decimal format when the zoned data is generated on an ASCII-based platform. It is also possible to load zoned decimal data that is generated on an EBCDIC-based platform. In this case, Oracle uses the IBM format as specified in the ESA/390 Principles of Operations, version 8.1 manual. The format that is used depends on the character set encoding of the input data file. See "CHARACTERSET Parameter" for more information.

DECIMAL

DECIMAL data is in packed decimal format: two digits per byte, except for the last byte, which contains a digit and sign. DECIMAL fields allow the specification of an implied decimal point, so fractional values can be represented.

The syntax for the DECIMAL datatype is:

Description of decimal.gif follows
Description of the illustration decimal.gif

The precision parameter is the number of digits in a value. The length of the field in bytes, as computed from digits, is (N+1)/2 rounded up.

The scale parameter is the scaling factor, or number of digits to the right of the decimal point. The default is zero (indicating an integer). The scaling factor can be greater than the number of digits but cannot be negative.

An example is:

sal DECIMAL (7,2) 

This example would load a number equivalent to +12345.67. In the data record, this field would take up 4 bytes. (The byte length of a DECIMAL field is equivalent to (N+1)/2, rounded up, where N is the number of digits in the value, and 1 is added for the sign.)

VARGRAPHIC

The data is a varying-length, double-byte character set (DBCS). It consists of a length subfield followed by a string of double-byte characters. The Oracle database does not support double-byte character sets; however, SQL*Loader reads them as single bytes and loads them as RAW data. Like RAW data, VARGRAPHIC fields are stored without modification in whichever column you specify.


Note:

The size of the length subfield is the size of the SQL*Loader SMALLINT datatype on your system (C type SHORT INT). See "SMALLINT" for more information.

VARGRAPHIC data can be loaded with correct results only between systems where a SHORT INT has the same length in bytes. If the byte order is different between the systems, then use the appropriate technique to indicate the byte order of the length subfield. See "Byte Ordering".

The syntax for the VARGRAPHIC datatype is:

Description of vargraphic.gif follows
Description of the illustration vargraphic.gif

The length of the current field is given in the first 2 bytes. A maximum length specified for the VARGRAPHIC datatype does not include the size of the length subfield. The maximum length specifies the number of graphic (double-byte) characters. It is multiplied by 2 to determine the maximum length of the field in bytes.

The default maximum field length is 2 KB graphic characters, or 4 KB (2 * 2KB). To minimize memory requirements, specify a maximum length for such fields whenever possible.

If a position specification is specified (using pos_spec) before the VARGRAPHIC statement, then it provides the location of the length subfield, not of the first graphic character. If you specify pos_spec(start:end), then the end location determines a maximum length for the field. Both start and end identify single-character (byte) positions in the file. Start is subtracted from (end + 1) to give the length of the field in bytes. If a maximum length is specified, then it overrides any maximum length calculated from the position specification.

If a VARGRAPHIC field is truncated by the end of the logical record before its full length is read, then a warning is issued. Because the length of a VARGRAPHIC field is embedded in every occurrence of the input data for that field, it is assumed to be accurate.

VARGRAPHIC data cannot be delimited.

VARCHAR

A VARCHAR field is a length-value datatype. It consists of a binary length subfield followed by a character string of the specified length. The length is in bytes unless character-length semantics are used for the data file. In that case, the length is in characters. See "Character-Length Semantics".

VARCHAR fields can be loaded with correct results only between systems where a SHORT INT has the same length in bytes. If the byte order is different between the systems, or if the VARCHAR field contains data in the UTF16 character set, then use the appropriate technique to indicate the byte order of the length subfield and of the data. The byte order of the data is only an issue for the UTF16 character set. See "Byte Ordering".


Note:

The size of the length subfield is the size of the SQL*Loader SMALLINT datatype on your system (C type SHORT INT). See "SMALLINT" for more information.

The syntax for the VARCHAR datatype is:

Description of varchar.gif follows
Description of the illustration varchar.gif

A maximum length specified in the control file does not include the size of the length subfield. If you specify the optional maximum length for a VARCHAR datatype, then a buffer of that size, in bytes, is allocated for these fields. However, if character-length semantics are used for the data file, then the buffer size in bytes is the max_length times the size in bytes of the largest possible character in the character set. See "Character-Length Semantics".

The default maximum size is 4 KB. Specifying the smallest maximum length that is needed to load your data can minimize SQL*Loader's memory requirements, especially if you have many VARCHAR fields.

The POSITION clause, if used, gives the location, in bytes, of the length subfield, not of the first text character. If you specify POSITION(start:end), then the end location determines a maximum length for the field. Start is subtracted from (end + 1) to give the length of the field in bytes. If a maximum length is specified, then it overrides any length calculated from POSITION.
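
For example, the following field specification (the field name is hypothetical) locates the length subfield at byte 20 and allocates a buffer for a value subfield of at most 500 bytes:

comments  POSITION(20)  VARCHAR(500)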

If a VARCHAR field is truncated by the end of the logical record before its full length is read, then a warning is issued. Because the length of a VARCHAR field is embedded in every occurrence of the input data for that field, it is assumed to be accurate.

VARCHAR data cannot be delimited.

VARRAW

VARRAW is made up of a 2-byte binary length subfield followed by a RAW string value subfield.

VARRAW results in a VARRAW with a 2-byte length subfield and a maximum size of 4 KB (that is, the default). VARRAW(65000) results in a VARRAW with a length subfield of 2 bytes and a maximum size of 65000 bytes.

VARRAW fields can be loaded between systems with different byte orders if the appropriate technique is used to indicate the byte order of the length subfield. See "Byte Ordering".

LONG VARRAW

LONG VARRAW is a VARRAW with a 4-byte length subfield instead of a 2-byte length subfield.

LONG VARRAW results in a VARRAW with 4-byte length subfield and a maximum size of 4 KB (that is, the default). LONG VARRAW(300000) results in a VARRAW with a length subfield of 4 bytes and a maximum size of 300000 bytes.

LONG VARRAW fields can be loaded between systems with different byte orders if the appropriate technique is used to indicate the byte order of the length subfield. See "Byte Ordering".

Portable Datatypes

The portable datatypes are grouped into value datatypes and length-value datatypes. The portable value datatypes are as follows:

  • CHAR

  • Datetime and Interval

  • GRAPHIC

  • GRAPHIC EXTERNAL

  • Numeric EXTERNAL (INTEGER, FLOAT, DECIMAL, ZONED)

  • RAW

The portable length-value datatypes are as follows:

  • VARCHARC

  • VARRAWC

The syntax for these datatypes is shown in the diagram for "datatype_spec".

The character datatypes are CHAR, DATE, and the numeric EXTERNAL datatypes. These fields can be delimited and can have lengths (or maximum lengths) specified in the control file.

CHAR

The data field contains character data. The length, which is optional, is a maximum length. Note the following regarding length:

  • If a length is not specified, then it is derived from the POSITION specification.

  • If a length is specified, then it overrides the length in the POSITION specification.

  • If no length is given and there is no POSITION specification, then CHAR data is assumed to have a length of 1, unless the field is delimited:

    • For a delimited CHAR field, if a length is specified, then that length is used as a maximum.

    • For a delimited CHAR field for which no length is specified, the default is 255 bytes.

    • For a delimited CHAR field that is greater than 255 bytes, you must specify a maximum length. Otherwise you will receive an error stating that the field in the data file exceeds maximum length.

The syntax for the CHAR datatype is:

Description of char.gif follows
Description of the illustration char.gif
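
For example, to load a delimited character field that can be longer than the 255-byte default, specify an explicit maximum length (the field name is hypothetical):

resume  CHAR(4000) TERMINATED BY ','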

Datetime and Interval Datatypes

Both datetimes and intervals are made up of fields. The values of these fields determine the value of the datatype.

The datetime datatypes are:

  • DATE

  • TIME

  • TIME WITH TIME ZONE

  • TIMESTAMP

  • TIMESTAMP WITH TIME ZONE

  • TIMESTAMP WITH LOCAL TIME ZONE

Values of datetime datatypes are sometimes called datetimes. In the following descriptions of the datetime datatypes you will see that, except for DATE, you are allowed to optionally specify a value for fractional_second_precision. The fractional_second_precision specifies the number of digits stored in the fractional part of the SECOND datetime field. When you create a column of this datatype, the value can be a number in the range 0 to 9. The default is 6.

The interval datatypes are:

  • INTERVAL YEAR TO MONTH

  • INTERVAL DAY TO SECOND

Values of interval datatypes are sometimes called intervals. The INTERVAL YEAR TO MONTH datatype lets you optionally specify a value for year_precision. The year_precision value is the number of digits in the YEAR datetime field. The default value is 2.

The INTERVAL DAY TO SECOND datatype lets you optionally specify values for day_precision and fractional_second_precision. The day_precision is the number of digits in the DAY datetime field. Accepted values are 0 to 9. The default is 2. The fractional_second_precision specifies the number of digits stored in the fractional part of the SECOND datetime field. When you create a column of this datatype, the value can be a number in the range 0 to 9. The default is 6.


See Also:

Oracle Database SQL Language Reference for more detailed information about specifying datetime and interval datatypes, including the use of fractional_second_precision, year_precision, and day_precision

DATE

The DATE field contains character data that should be converted to an Oracle date using the specified date mask. The syntax for the DATE field is:

Description of date.gif follows
Description of the illustration date.gif

For example:

LOAD DATA 
INTO TABLE dates (col_a POSITION (1:15) DATE "DD-Mon-YYYY") 
BEGINDATA 
1-Jan-2008 
1-Apr-2008 28-Feb-2008 

Whitespace is ignored and dates are parsed from left to right unless delimiters are present. (A DATE field that consists entirely of whitespace is loaded as a NULL field.)

The length specification is optional, unless a varying-length date mask is specified. The length is in bytes unless character-length semantics are used for the data file. In that case, the length is in characters. See "Character-Length Semantics".

In the preceding example, the date mask, "DD-Mon-YYYY" contains 11 bytes, with byte-length semantics. Therefore, SQL*Loader expects a maximum of 11 bytes in the field, so the specification works properly. But, suppose a specification such as the following is given:

DATE "Month dd, YYYY" 

In this case, the date mask contains 14 bytes. If a value with a length longer than 14 bytes is specified, such as "September 30, 2008", then a length must be specified.

Similarly, a length is required for any Julian dates (date mask "J"). A field length is required any time the length of the date string could exceed the length of the mask (that is, the count of bytes in the mask).

If an explicit length is not specified, then it can be derived from the POSITION clause. It is a good idea to specify the length whenever you use a mask, unless you are absolutely sure that the length of the data is less than, or equal to, the length of the mask.

An explicit length specification, if present, overrides the length in the POSITION clause. Either of these overrides the length derived from the mask. The mask may be any valid Oracle date mask. If you omit the mask, then the default Oracle date mask of "dd-mon-yy" is used.

The length must be enclosed in parentheses and the mask in quotation marks.

A field of datatype DATE may also be specified with delimiters. For more information, see "Specifying Delimiters".
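
For example, a delimited DATE field might be declared as follows (a sketch; the column name is hypothetical):

hire_date  DATE "DD-Mon-YYYY" TERMINATED BY ','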

TIME

The TIME datatype stores hour, minute, and second values. It is specified as follows:

TIME [(fractional_second_precision)]

TIME WITH TIME ZONE

The TIME WITH TIME ZONE datatype is a variant of TIME that includes a time zone displacement in its value. The time zone displacement is the difference (in hours and minutes) between local time and UTC (coordinated universal time, formerly Greenwich mean time). It is specified as follows:

TIME [(fractional_second_precision)] WITH [LOCAL] TIME ZONE

If the LOCAL option is specified, then data stored in the database is normalized to the database time zone, and time zone displacement is not stored as part of the column data. When the data is retrieved, it is returned in the user's local session time zone.

TIMESTAMP

The TIMESTAMP datatype is an extension of the DATE datatype. It stores the year, month, and day of the DATE datatype, plus the hour, minute, and second values of the TIME datatype. It is specified as follows:

TIMESTAMP [(fractional_second_precision)]

If you specify a date value without a time component, then the default time is 12:00:00 a.m. (midnight).

TIMESTAMP WITH TIME ZONE

The TIMESTAMP WITH TIME ZONE datatype is a variant of TIMESTAMP that includes a time zone displacement in its value. The time zone displacement is the difference (in hours and minutes) between local time and UTC (coordinated universal time, formerly Greenwich mean time). It is specified as follows:

TIMESTAMP [(fractional_second_precision)] WITH TIME ZONE

TIMESTAMP WITH LOCAL TIME ZONE

The TIMESTAMP WITH LOCAL TIME ZONE datatype is another variant of TIMESTAMP that includes a time zone offset in its value. Data stored in the database is normalized to the database time zone, and time zone displacement is not stored as part of the column data. When the data is retrieved, it is returned in the user's local session time zone. It is specified as follows:

TIMESTAMP [(fractional_second_precision)] WITH LOCAL TIME ZONE

INTERVAL YEAR TO MONTH

The INTERVAL YEAR TO MONTH datatype stores a period of time using the YEAR and MONTH datetime fields. It is specified as follows:

INTERVAL YEAR [(year_precision)] TO MONTH

INTERVAL DAY TO SECOND

The INTERVAL DAY TO SECOND datatype stores a period of time using the DAY and SECOND datetime fields. It is specified as follows:

INTERVAL DAY [(day_precision)] TO SECOND [(fractional_second_precision)]
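
The following field-list fragment is a minimal sketch (the column names are hypothetical) of how datetime and interval fields might be declared in a control file, with a format mask on the TIMESTAMP field:

-- sketch only; event_start and event_length are hypothetical columns
(event_start   TIMESTAMP "YYYY-MM-DD HH24:MI:SS.FF" TERMINATED BY ',',
 event_length  INTERVAL DAY(3) TO SECOND(2) TERMINATED BY ',')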

GRAPHIC

The data is in the form of a double-byte character set (DBCS). The Oracle database does not support double-byte character sets; however, SQL*Loader reads them as single bytes. Like RAW data, GRAPHIC fields are stored without modification in whichever column you specify.

The syntax for the GRAPHIC datatype is:

Description of the illustration graphic.gif

For GRAPHIC and GRAPHIC EXTERNAL, specifying POSITION(start:end) gives the exact location of the field in the logical record.

If you specify a length for the GRAPHIC (EXTERNAL) datatype, however, then you give the number of double-byte graphic characters. That value is multiplied by 2 to find the length of the field in bytes. If the number of graphic characters is specified, then any length derived from POSITION is ignored. No delimited data field specification is allowed with GRAPHIC datatype specification.

GRAPHIC EXTERNAL

If the DBCS field is surrounded by shift-in and shift-out characters, then use GRAPHIC EXTERNAL. This is identical to GRAPHIC, except that the first and last characters (the shift-in and shift-out) are not loaded.

The syntax for the GRAPHIC EXTERNAL datatype is:

Description of the illustration graphic_ext.gif

GRAPHIC indicates that the data is double-byte characters. EXTERNAL indicates that the first and last characters are ignored. The graphic_char_length value specifies the length in DBCS (see "GRAPHIC").

For example, let [ ] represent shift-in and shift-out characters, and let # represent any double-byte character.

To describe ####, use POSITION(1:4) GRAPHIC or POSITION(1) GRAPHIC(2).

To describe [####], use POSITION(1:6) GRAPHIC EXTERNAL or POSITION(1) GRAPHIC EXTERNAL(2).

Numeric EXTERNAL

The numeric EXTERNAL datatypes are the numeric datatypes (INTEGER, FLOAT, DECIMAL, and ZONED) specified as EXTERNAL, with optional length and delimiter specifications. The length is in bytes unless character-length semantics are used for the data file. In that case, the length is in characters. See "Character-Length Semantics".

These datatypes are the human-readable, character form of numeric data. The same rules that apply to CHAR data regarding length, position, and delimiters apply to numeric EXTERNAL data. See "CHAR" for a complete description of these rules.

The syntax for the numeric EXTERNAL datatypes is shown as part of "datatype_spec".


Note:

The data is a number in character form, not binary representation. Therefore, these datatypes are identical to CHAR and are treated identically, except for the use of DEFAULTIF. If you want the default to be null, then use CHAR; if you want it to be zero, then use EXTERNAL. See "Using the WHEN, NULLIF, and DEFAULTIF Clauses".

FLOAT EXTERNAL data can be given in either scientific or regular notation. Both "5.33" and "533E-2" are valid representations of the same value.

RAW

When raw, binary data is loaded "as is" into a RAW database column, it is not converted by the Oracle database. If it is loaded into a CHAR column, then the Oracle database converts it to hexadecimal. It cannot be loaded into a DATE or number column.

The syntax for the RAW datatype is as follows:

Description of the illustration raw.gif

The length of this field is the number of bytes specified in the control file. This length is limited only by the length of the target column in the database and by memory resources. The length is always in bytes, even if character-length semantics are used for the data file. RAW data fields cannot be delimited.
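
For example, a fixed-position RAW field might be declared as follows (a sketch; the column name is hypothetical):

signature  POSITION(21:36) RAW(16)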

VARCHARC

The datatype VARCHARC consists of a character length subfield followed by a character string value-subfield.

The declaration for VARCHARC specifies the length of the length subfield, optionally followed by the maximum size of any string. If byte-length semantics are in use for the data file, then the length and the maximum size are both in bytes. If character-length semantics are in use for the data file, then the length and maximum size are in characters. If a maximum size is not specified, then 4 KB is the default regardless of whether byte-length semantics or character-length semantics are in use.

For example:

  • VARCHARC results in an error because you must at least specify a value for the length subfield.

  • VARCHARC(7) results in a VARCHARC whose length subfield is 7 bytes long and whose maximum size is 4 KB (the default) if byte-length semantics are used for the data file. If character-length semantics are used, then it results in a VARCHARC with a length subfield that is 7 characters long and a maximum size of 4 KB (the default). Remember that when a maximum size is not specified, the default of 4 KB is always used, regardless of whether byte-length or character-length semantics are in use.

  • VARCHARC(3,500) results in a VARCHARC whose length subfield is 3 bytes long and whose maximum size is 500 bytes if byte-length semantics are used for the data file. If character-length semantics are used, then it results in a VARCHARC with a length subfield that is 3 characters long and a maximum size of 500 characters.

See "Character-Length Semantics".

VARRAWC

The datatype VARRAWC consists of a character length subfield followed by a RAW string value subfield.

For example:

  • VARRAWC results in an error.

  • VARRAWC(7) results in a VARRAWC whose length subfield is 7 bytes long and whose maximum size is 4 KB (that is, the default).

  • VARRAWC(3,500) results in a VARRAWC whose length subfield is 3 bytes long and whose maximum size is 500 bytes.

Conflicting Native Datatype Field Lengths

There are several ways to specify a length for a field. If multiple lengths are specified and they conflict, then one of the lengths takes precedence. A warning is issued when a conflict exists. The following rules determine which field length is used:

  1. The size of SMALLINT, FLOAT, and DOUBLE data is fixed, regardless of the number of bytes specified in the POSITION clause.

  2. If the length (or precision) specified for a DECIMAL, INTEGER, ZONED, GRAPHIC, GRAPHIC EXTERNAL, or RAW field conflicts with the size calculated from a POSITION(start:end) specification, then the specified length (or precision) is used.

  3. If the maximum size specified for a character or VARGRAPHIC field conflicts with the size calculated from a POSITION(start:end) specification, then the specified maximum is used.

For example, assume that the native datatype INTEGER is 4 bytes long and the following field specification is given:

column1 POSITION(1:6) INTEGER 

In this case, a warning is issued, and the proper length (4) is used. The log file shows the actual length used under the heading "Len" in the column table:

Column Name             Position   Len  Term Encl Datatype 
----------------------- --------- ----- ---- ---- --------- 
COLUMN1                       1:6     4             INTEGER 

Field Lengths for Length-Value Datatypes

A control file can specify a maximum length for the following length-value datatypes: VARCHAR, VARCHARC, VARGRAPHIC, VARRAW, and VARRAWC. The specified maximum length is in bytes if byte-length semantics are used for the field, and in characters if character-length semantics are used for the field. If no length is specified, then the maximum length defaults to 4096 bytes. If the length of the field exceeds the maximum length, then the record is rejected with the following error:

Variable length field exceed maximum length

Datatype Conversions

The datatype specifications in the control file tell SQL*Loader how to interpret the information in the data file. The server defines the datatypes for the columns in the database. The link between these two is the column name specified in the control file.

SQL*Loader extracts data from a field in the input file, guided by the datatype specification in the control file. SQL*Loader then sends the field to the server to be stored in the appropriate column (as part of an array of row inserts).

SQL*Loader or the server does any necessary data conversion to store the data in the proper internal format. This includes converting data from the data file character set to the database character set when they differ.


Note:

When you use SQL*Loader conventional path to load character data from the data file into a LONG RAW column, the character data is interpreted as a HEX string. SQL converts the HEX string into its binary representation. Be aware that any string longer than 4000 bytes exceeds the byte limit for the SQL HEXTORAW conversion operator. Therefore, SQL returns the Oracle error ORA-01461. SQL*Loader will reject that row and continue loading.

The datatype of the data in the file does not need to be the same as the datatype of the column in the Oracle table. The Oracle database automatically performs conversions, but you need to ensure that the conversion makes sense and does not generate errors. For instance, when a data file field with datatype CHAR is loaded into a database column with datatype NUMBER, you must ensure that the contents of the character field represent a valid number.
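
For example, the following control file is a minimal sketch (the table and column names are hypothetical) in which the salary field is described as CHAR in the control file but is loaded into a NUMBER column; the load succeeds only if every salary value is a valid number:

LOAD DATA
INFILE *
INTO TABLE emp_stage
FIELDS TERMINATED BY ','
(emp_name  CHAR(20),
 salary    CHAR(10))
BEGINDATA
Smith,2500
Jones,3100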


Note:

SQL*Loader does not contain datatype specifications for Oracle internal datatypes such as NUMBER or VARCHAR2. The SQL*Loader datatypes describe data that can be produced with text editors (character datatypes) and with standard programming languages (native datatypes). However, although SQL*Loader does not recognize datatypes like NUMBER and VARCHAR2, any data that the Oracle database can convert can be loaded into these or other database columns.

Datatype Conversions for Datetime and Interval Datatypes

Table 10-2 shows which conversions between Oracle database datatypes and SQL*Loader control file datetime and interval datatypes are supported and which are not.

In the table, the abbreviations for the Oracle Database Datatypes are as follows:

N = NUMBER

C = CHAR or VARCHAR2

D = DATE

T = TIME and TIME WITH TIME ZONE

TS = TIMESTAMP and TIMESTAMP WITH TIME ZONE

YM = INTERVAL YEAR TO MONTH

DS = INTERVAL DAY TO SECOND

For the SQL*Loader datatypes, the definitions for the abbreviations in the table are the same for D, T, TS, YM, and DS. However, as noted in the previous section, SQL*Loader does not contain datatype specifications for Oracle internal datatypes such as NUMBER, CHAR, and VARCHAR2. Nevertheless, any data that the Oracle database can convert can be loaded into these or other database columns.

For an example of how to read this table, look at the row for the SQL*Loader datatype DATE (abbreviated as D). Reading across the row, you can see that datatype conversion is supported for the Oracle database datatypes of CHAR, VARCHAR2, DATE, TIMESTAMP, and TIMESTAMP WITH TIME ZONE datatypes. However, conversion is not supported for the Oracle database datatypes NUMBER, TIME, TIME WITH TIME ZONE, INTERVAL YEAR TO MONTH, or INTERVAL DAY TO SECOND datatypes.

Table 10-2 Datatype Conversions for Datetime and Interval Datatypes

SQL*Loader Datatype    Oracle Database Datatype (Conversion Support)
                       N     C     D     T     TS    YM    DS
-------------------    ---   ---   ---   ---   ---   ---   ---
N                      Yes   Yes   No    No    No    No    No
C                      Yes   Yes   Yes   Yes   Yes   Yes   Yes
D                      No    Yes   Yes   No    Yes   No    No
T                      No    Yes   No    Yes   Yes   No    No
TS                     No    Yes   Yes   Yes   Yes   No    No
YM                     No    Yes   No    No    No    Yes   No
DS                     No    Yes   No    No    No    No    Yes


Specifying Delimiters

The boundaries of CHAR, datetime, interval, or numeric EXTERNAL fields can also be marked by delimiter characters contained in the input data record. The delimiter characters are specified using various combinations of the TERMINATED BY, ENCLOSED BY, and OPTIONALLY ENCLOSED BY clauses (the TERMINATED BY clause, if used, must come first). The delimiter specification comes after the datatype specification.

For a description of how data is processed when various combinations of delimiter clauses are used, see "How Delimited Data Is Processed".


Note:

The RAW datatype can also be marked by delimiters, but only if it is in an input LOBFILE, and only if the delimiter is TERMINATED BY EOF (end of file).

Syntax for Termination and Enclosure Specification

The following diagram shows the syntax for termination_spec and enclosure_spec.

Description of the illustration terminat.gif

Description of the illustration enclose.gif

Table 10-3 describes the syntax for the termination and enclosure specifications used to specify delimiters.

Table 10-3 Parameters Used for Specifying Delimiters

  • TERMINATED: Data is read until the first occurrence of a delimiter.

  • BY: An optional word to increase readability.

  • WHITESPACE: Delimiter is any whitespace character including spaces, tabs, blanks, line feeds, form feeds, or carriage returns. (Only used with TERMINATED, not with ENCLOSED.)

  • OPTIONALLY: Data can be enclosed by the specified character. If SQL*Loader finds a first occurrence of the character, then it reads the data value until it finds the second occurrence. If the data is not enclosed, then the data is read as a terminated field. If you specify an optional enclosure, then you must specify a TERMINATED BY clause (either locally in the field definition or globally in the FIELDS clause).

  • ENCLOSED: The data will be found between two delimiters.

  • string: The delimiter is a string.

  • X'hexstr': The delimiter is a string that has the value specified by X'hexstr' in the character encoding scheme, such as X'1F' (equivalent to 31 decimal). "X" can be either lowercase or uppercase.

  • AND: Specifies a trailing enclosure delimiter that may be different from the initial enclosure delimiter. If AND is not present, then the initial and trailing delimiters are assumed to be the same.

  • EOF: Indicates that the entire file has been loaded into the LOB. This is valid only when data is loaded from a LOB file. Fields terminated by EOF cannot be enclosed.


Here are some examples, with samples of the data they describe:

TERMINATED BY ','                      a data string, 
ENCLOSED BY '"'                        "a data string" 
TERMINATED BY ',' ENCLOSED BY '"'      "a data string", 
ENCLOSED BY '(' AND ')'                (a data string) 

Delimiter Marks in the Data

Sometimes the punctuation mark that is a delimiter must also be included in the data. To make that possible, two adjacent delimiter characters are interpreted as a single occurrence of the character, and this character is included in the data. For example, this data:

(The delimiters are left parentheses, (, and right parentheses, )).) 

with this field specification:

ENCLOSED BY "(" AND ")" 

puts the following string into the database:

The delimiters are left parentheses, (, and right parentheses, ). 

For this reason, problems can arise when adjacent fields use the same delimiters. For example, with the following specification:

field1 TERMINATED BY "/" 
field2 ENCLOSED by "/" 

the following data will be interpreted properly:

This is the first string/      /This is the second string/ 

But if field1 and field2 were adjacent, then the results would be incorrect, because

This is the first string//This is the second string/ 

would be interpreted as a single character string with a "/" in the middle, and that string would belong to field1.

Maximum Length of Delimited Data

The default maximum length of delimited data is 255 bytes. Therefore, delimited fields can require significant amounts of storage for the bind array. A good policy is to specify the smallest possible maximum value if the fields are shorter than 255 bytes. If the fields are longer than 255 bytes, then you must specify a maximum length for the field, either with a length specifier or with the POSITION clause.

For example, if you have a string literal that is longer than 255 bytes, then in addition to using SUBSTR(), use CHAR() to specify the longest string in any record for the field. An example of how this would look is as follows, assuming that 600 bytes is the longest string in any record for field1:

field1 CHAR(600) SUBSTR(:field, 1, 240)

Loading Trailing Blanks with Delimiters

Trailing blanks are not loaded with nondelimited datatypes unless you specify PRESERVE BLANKS. If a data field is 9 characters long and contains the value DANIELbbb, where bbb is three blanks, then it is loaded into the Oracle database as "DANIEL" if declared as CHAR(9).

If you want the trailing blanks, then you could declare it as CHAR(9) TERMINATED BY ':', and add a colon to the data file so that the field is DANIELbbb:. This field is loaded as "DANIEL ", with the trailing blanks. You could also specify PRESERVE BLANKS without the TERMINATED BY clause and obtain the same results.
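
For example (a sketch; the column name is hypothetical), the field declaration and matching data record described above would look like the following, with three blanks between DANIEL and the colon:

-- field declaration (sketch)
ename  CHAR(9) TERMINATED BY ':'
-- corresponding data record (three trailing blanks before the colon)
-- DANIEL   :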

How Delimited Data Is Processed

To specify delimiters, field definitions can use various combinations of the TERMINATED BY, ENCLOSED BY, and OPTIONALLY ENCLOSED BY clauses. Each combination is processed differently, as described in the following sections.

Fields Using Only TERMINATED BY

If TERMINATED BY is specified for a field without ENCLOSED BY, then the data for the field is read from the starting position of the field up to, but not including, the first occurrence of the TERMINATED BY delimiter. If the terminator delimiter is found in the first column position of a field, then the field is null. If the end of the record is found before the TERMINATED BY delimiter, then all data up to the end of the record is considered part of the field.

If TERMINATED BY WHITESPACE is specified, then data is read until the first occurrence of a whitespace character (spaces, tabs, blanks, line feeds, form feeds, or carriage returns). Then the current position is advanced until no more adjacent whitespace characters are found. This allows field values to be delimited by varying amounts of whitespace. However, unlike non-whitespace terminators, if a whitespace terminator is found in the first column position of a field, then the field is not treated as null and can result in record rejection or fields loaded into incorrect columns.
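
For example, the following sketch (the column names are hypothetical) loads fields separated by varying amounts of whitespace, so a record such as "John     Smith" loads John into first_name and Smith into last_name:

(first_name  CHAR TERMINATED BY WHITESPACE,
 last_name   CHAR TERMINATED BY WHITESPACE)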

Fields Using ENCLOSED BY Without TERMINATED BY

The following steps take place when a field uses an ENCLOSED BY clause without also using a TERMINATED BY clause.

  1. Any whitespace at the beginning of the field is skipped.

  2. The first non-whitespace character found must be the start of a string that matches the first ENCLOSED BY delimiter. If it is not, then the row is rejected.

  3. If the first ENCLOSED BY delimiter is found, then the search for the second ENCLOSED BY delimiter begins.

  4. If two of the second ENCLOSED BY delimiters are found adjacent to each other, then they are interpreted as a single occurrence of the delimiter and included as part of the data for the field. The search then continues for another instance of the second ENCLOSED BY delimiter.

  5. If the end of the record is found before the second ENCLOSED BY delimiter is found, then the row is rejected.

Fields Using ENCLOSED BY With TERMINATED BY

The following steps take place when a field uses an ENCLOSED BY clause and also uses a TERMINATED BY clause.

  1. Any whitespace at the beginning of the field is skipped.

  2. The first non-whitespace character found must be the start of a string that matches the first ENCLOSED BY delimiter. If it is not, then the row is rejected.

  3. If the first ENCLOSED BY delimiter is found, then the search for the second ENCLOSED BY delimiter begins.

  4. If two of the second ENCLOSED BY delimiters are found adjacent to each other, then they are interpreted as a single occurrence of the delimiter and included as part of the data for the field. The search then continues for the second instance of the ENCLOSED BY delimiter.

  5. If the end of the record is found before the second ENCLOSED BY delimiter is found, then the row is rejected.

  6. If the second ENCLOSED BY delimiter is found, then the parser looks for the TERMINATED BY delimiter. If the TERMINATED BY delimiter is anything other than WHITESPACE, then whitespace found between the end of the second ENCLOSED BY delimiter and the TERMINATED BY delimiter is skipped over.


    Note:

    Only WHITESPACE is allowed between the second ENCLOSED BY delimiter and the TERMINATED BY delimiter. Any other characters will cause an error.

  7. The row is not rejected if the end of the record is found before the TERMINATED BY delimiter is found.

Fields Using OPTIONALLY ENCLOSED BY With TERMINATED BY

The following steps take place when a field uses an OPTIONALLY ENCLOSED BY clause and a TERMINATED BY clause.

  1. Any whitespace at the beginning of the field is skipped.

  2. The parser checks to see if the first non-whitespace character found is the start of a string that matches the first OPTIONALLY ENCLOSED BY delimiter. If it is not, and the OPTIONALLY ENCLOSED BY delimiters are not present in the data, then the data for the field is read from the current position of the field up to, but not including, the first occurrence of the TERMINATED BY delimiter. If the TERMINATED BY delimiter is found in the first column position, then the field is null. If the end of the record is found before the TERMINATED BY delimiter, then all data up to the end of the record is considered part of the field.

  3. If the first OPTIONALLY ENCLOSED BY delimiter is found, then the search for the second OPTIONALLY ENCLOSED BY delimiter begins.

  4. If two of the second OPTIONALLY ENCLOSED BY delimiters are found adjacent to each other, then they are interpreted as a single occurrence of the delimiter and included as part of the data for the field. The search then continues for the second OPTIONALLY ENCLOSED BY delimiter.

  5. If the end of the record is found before the second OPTIONALLY ENCLOSED BY delimiter is found, then the row is rejected.

  6. If the OPTIONALLY ENCLOSED BY delimiter is present in the data, then the parser looks for the TERMINATED BY delimiter. If the TERMINATED BY delimiter is anything other than WHITESPACE, then whitespace found between the end of the second OPTIONALLY ENCLOSED BY delimiter and the TERMINATED BY delimiter is skipped over.

  7. The row is not rejected if the end of record is found before the TERMINATED BY delimiter is found.


Caution:

Be careful when you specify whitespace characters as the TERMINATED BY delimiter and are also using OPTIONALLY ENCLOSED BY. SQL*Loader strips off leading whitespace when looking for an OPTIONALLY ENCLOSED BY delimiter. If the data contains two adjacent TERMINATED BY delimiters in the middle of a record (usually done to set a field in the record to NULL), then the whitespace for the first TERMINATED BY delimiter will be used to terminate a field, but the remaining whitespace will be considered as leading whitespace for the next field rather than the TERMINATED BY delimiter for the next field. If you want to load a NULL value, then you must include the ENCLOSED BY delimiters in the data.

Conflicting Field Lengths for Character Datatypes

A control file can specify multiple lengths for the character-data fields CHAR, DATE, and numeric EXTERNAL. If conflicting lengths are specified, then one of the lengths takes precedence. A warning is also issued when a conflict exists. This section explains which length is used.

Predetermined Size Fields

If you specify a starting position and ending position for one of these fields, then the length of the field is determined by these specifications. If you specify a length as part of the datatype and do not give an ending position, the field has the given length. If starting position, ending position, and length are all specified, and the lengths differ, then the length given as part of the datatype specification is used for the length of the field, as follows:

POSITION(1:10) CHAR(15) 

In this example, the length of the field is 15.

Delimited Fields

If a delimited field is specified with a length, or if a length can be calculated from the starting and ending positions, then that length is the maximum length of the field. The specified maximum length is in bytes if byte-length semantics are used for the field, and in characters if character-length semantics are used for the field. If no length is specified or can be calculated from the start and end positions, then the maximum length defaults to 255 bytes. The actual length can vary up to that maximum, based on the presence of the delimiter.

If delimiters and also starting and ending positions are specified for the field, then only the position specification has any effect. Any enclosure or termination delimiters are ignored.

If the expected delimiter is absent, then the end of record terminates the field. If TRAILING NULLCOLS is specified, then remaining fields are null. If either the delimiter or the end of record produces a field that is longer than the maximum, then SQL*Loader rejects the record and returns an error.
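
For example, the following sketch (the table and column names are hypothetical) uses TRAILING NULLCOLS so that a short record such as Smith,Sales loads NULL into the phone column instead of causing the record to be rejected:

INTO TABLE contacts
TRAILING NULLCOLS
(last_name  CHAR(20) TERMINATED BY ',',
 dept       CHAR(10) TERMINATED BY ',',
 phone      CHAR(15) TERMINATED BY ',')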

Date Field Masks

The length of a date field depends on the mask, if a mask is specified. The mask provides a format pattern, telling SQL*Loader how to interpret the data in the record. For example, assume the mask is specified as follows:

"Month dd, yyyy" 

Then "May 3, 2008" would occupy 11 bytes in the record (with byte-length semantics), while "January 31, 2009" would occupy 16.

If starting and ending positions are specified, however, then the length calculated from the position specification overrides a length derived from the mask. A specified length such as DATE(12) overrides either of those. If the date field is also specified with terminating or enclosing delimiters, then the length specified in the control file is interpreted as a maximum length for the field.
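
For example, in the following sketch (the column name is hypothetical), the explicit length 16 overrides the 14-byte length derived from the mask, and because the field is delimited, 16 is treated as a maximum:

start_date  DATE(16) "Month dd, yyyy" TERMINATED BY ','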


See Also:

"Datetime and Interval Datatypes" for more information about the DATE field

Specifying Field Conditions

A field condition is a statement about a field in a logical record that evaluates as true or false. It is used in the WHEN, NULLIF, and DEFAULTIF clauses.


Note:

If a field used in a clause evaluation has a NULL value, then that clause will always evaluate to FALSE. This feature is illustrated in Example 10-5.

A field condition is similar to the condition in the CONTINUEIF clause, with two important differences. First, positions in the field condition refer to the logical record, not to the physical record. Second, you can specify either a position in the logical record or the name of a field in the data file (including filler fields).


Note:

A field condition cannot be based on fields in a secondary data file (SDF).

The syntax for the field_condition clause is as follows:

Description of the illustration fld_cond.gif

The syntax for the pos_spec clause is as follows:

Description of the illustration pos_spec.gif

Table 10-4 describes the parameters used for the field condition clause. For a full description of the position specification parameters, see Table 10-1.

Table 10-4 Parameters for the Field Condition Clause

  • pos_spec: Specifies the starting and ending position of the comparison field in the logical record. It must be surrounded by parentheses. Either start-end or start:end is acceptable. The starting location can be specified as a column number, or as * (next column), or as *+n (next column plus an offset). If you omit an ending position, then the length of the field is determined by the length of the comparison string. If the lengths are different, then the shorter field is padded. Character strings are padded with blanks, hexadecimal strings with zeros.

  • start: Specifies the starting position of the comparison field in the logical record.

  • end: Specifies the ending position of the comparison field in the logical record.

  • full_fieldname: The full name of a field specified using dot notation. If the field col2 is an attribute of a column object col1, then when referring to col2 in one of the directives, you must use the notation col1.col2. The column name and the field name referencing or naming the same entity can be different, because the column name never includes the full name of the entity (no dot notation).

  • operator: A comparison operator for either equal or not equal.

  • char_string: A string of characters enclosed in single or double quotation marks that is compared to the comparison field. If the comparison is true, then the current record is inserted into the table.

  • X'hex_string': A string of hexadecimal digits, where each pair of digits corresponds to one byte in the field. It is enclosed in single or double quotation marks. If the comparison is true, then the current record is inserted into the table.

  • BLANKS: Enables you to test a field to see if it consists entirely of blanks. BLANKS is required when you are loading delimited data and you cannot predict the length of the field, or when you use a multibyte character set that has multiple blanks.


Comparing Fields to BLANKS

The BLANKS parameter makes it possible to determine if a field of unknown length is blank.

For example, use the following clause to load a blank field as null:

full_fieldname ... NULLIF column_name=BLANKS 

The BLANKS parameter recognizes only blanks, not tabs. It can be used in place of a literal string in any field comparison. The condition is true whenever the column is entirely blank.

The BLANKS parameter also works for fixed-length fields. Using it is the same as specifying an appropriately sized literal string of blanks. For example, the following specifications are equivalent:

fixed_field CHAR(2) NULLIF fixed_field=BLANKS 
fixed_field CHAR(2) NULLIF fixed_field="  " 

There can be more than one blank in a multibyte character set. It is a good idea to use the BLANKS parameter with these character sets instead of specifying a string of blank characters.

The character string will match only a specific sequence of blank characters, while the BLANKS parameter will match combinations of different blank characters. For more information about multibyte character sets, see "Multibyte (Asian) Character Sets".

Comparing Fields to Literals

When a data field is compared to a literal string that is shorter than the data field, the string is padded. Character strings are padded with blanks, for example:

NULLIF (1:4)=" " 

This example compares the data in position 1:4 with 4 blanks. If position 1:4 contains 4 blanks, then the clause evaluates as true.

Hexadecimal strings are padded with hexadecimal zeros, as in the following clause:

NULLIF (1:4)=X'FF' 

This clause compares position 1:4 to hexadecimal 'FF000000'.

Using the WHEN, NULLIF, and DEFAULTIF Clauses

The following information applies to scalar fields. For nonscalar fields (column objects, LOBs, and collections), the WHEN, NULLIF, and DEFAULTIF clauses are processed differently because nonscalar fields are more complex.

The results of a WHEN, NULLIF, or DEFAULTIF clause can be different depending on whether the clause specifies a field name or a position.

  • If the WHEN, NULLIF, or DEFAULTIF clause specifies a field name, then SQL*Loader compares the clause to the evaluated value of the field. The evaluated value takes trimmed whitespace into consideration. See "Trimming Whitespace" for information about trimming blanks and tabs.

  • If the WHEN, NULLIF, or DEFAULTIF clause specifies a position, then SQL*Loader compares the clause to the original logical record in the data file. No whitespace trimming is done on the logical record in that case.

Different results are more likely if the field has whitespace that is trimmed, or if the WHEN, NULLIF, or DEFAULTIF clause contains blanks or tabs or uses the BLANKS parameter. If you require the same results for a field specified by name and for the same field specified by position, then use the PRESERVE BLANKS option. The PRESERVE BLANKS option instructs SQL*Loader not to trim whitespace when it evaluates the values of the fields.

The results of a WHEN, NULLIF, or DEFAULTIF clause are also affected by the order in which SQL*Loader operates, as described in the following steps. SQL*Loader performs these steps in order, but it does not always perform all of them. Once a field is set, any remaining steps in the process are ignored. For example, if the field is set in Step 5, then SQL*Loader does not move on to Step 6.

  1. SQL*Loader evaluates the value of each field for the input record and trims any whitespace that should be trimmed (according to existing guidelines for trimming blanks and tabs).

  2. For each record, SQL*Loader evaluates any WHEN clauses for the table.

  3. If the record satisfies the WHEN clauses for the table, or no WHEN clauses are specified, then SQL*Loader checks each field for a NULLIF clause.

  4. If a NULLIF clause exists, then SQL*Loader evaluates it.

  5. If the NULLIF clause is satisfied, then SQL*Loader sets the field to NULL.

  6. If the NULLIF clause is not satisfied, or if there is no NULLIF clause, then SQL*Loader checks the length of the field from field evaluation. If the field has a length of 0 from field evaluation (for example, it was a null field, or whitespace trimming resulted in a null field), then SQL*Loader sets the field to NULL. In this case, any DEFAULTIF clause specified for the field is not evaluated.

  7. If any specified NULLIF clause is false or there is no NULLIF clause, and if the field does not have a length of 0 from field evaluation, then SQL*Loader checks the field for a DEFAULTIF clause.

  8. If a DEFAULTIF clause exists, then SQL*Loader evaluates it.

  9. If the DEFAULTIF clause is satisfied, then the field is set to 0 if the field in the data file is a numeric field. It is set to NULL if the field is not a numeric field. The following fields are numeric fields and will be set to 0 if they satisfy the DEFAULTIF clause:

    • BYTEINT

    • SMALLINT

    • INTEGER

    • FLOAT

    • DOUBLE

    • ZONED

    • (packed) DECIMAL

    • Numeric EXTERNAL (INTEGER, FLOAT, DECIMAL, and ZONED)

  10. If the DEFAULTIF clause is not satisfied, or if there is no DEFAULTIF clause, then SQL*Loader sets the field with the evaluated value from Step 1.

The order in which SQL*Loader operates could cause results that you do not expect. For example, the DEFAULTIF clause may look like it is setting a numeric field to NULL rather than to 0.


Note:

As demonstrated in these steps, the presence of NULLIF and DEFAULTIF clauses results in extra processing that SQL*Loader must perform. This can affect performance. Note that during Step 1, SQL*Loader will set a field to NULL if its evaluated length is zero. To improve performance, consider whether it might be possible for you to change your data to take advantage of this. The detection of NULLs as part of Step 1 occurs much more quickly than the processing of a NULLIF or DEFAULTIF clause.

For example, a CHAR(5) will have zero length if it falls off the end of the logical record or if it contains all blanks and blank trimming is in effect. A delimited field will have zero length if there are no characters between the start of the field and the terminator.

Also, for character fields, NULLIF is usually faster to process than DEFAULTIF (the default for character fields is NULL).


Examples of Using the WHEN, NULLIF, and DEFAULTIF Clauses

Example 10-2 through Example 10-5 clarify the results for different situations in which the WHEN, NULLIF, and DEFAULTIF clauses might be used. In the examples, a blank or space is indicated with a period (.). Assume that col1 and col2 are VARCHAR2(5) columns in the database.

Example 10-2 DEFAULTIF Clause Is Not Evaluated

The control file specifies:

(col1 POSITION (1:5),
 col2 POSITION (6:8) INTEGER EXTERNAL DEFAULTIF col1 = 'aname')

The data file contains:

aname...

In Example 10-2, col1 for the row evaluates to aname. col2 evaluates to NULL with a length of 0 (it is ... but the trailing blanks are trimmed for a positional field).

When SQL*Loader determines the final loaded value for col2, it finds no WHEN clause and no NULLIF clause. It then checks the length of the field, which is 0 from field evaluation. Therefore, SQL*Loader sets the final value for col2 to NULL. The DEFAULTIF clause is not evaluated, and the row is loaded as aname for col1 and NULL for col2.

Example 10-3 DEFAULTIF Clause Is Evaluated

The control file specifies:

.
.
.
PRESERVE BLANKS
.
.
.
(col1 POSITION (1:5),
 col2 POSITION (6:8) INTEGER EXTERNAL DEFAULTIF col1 = 'aname')

The data file contains:

aname...

In Example 10-3, col1 for the row again evaluates to aname. col2 evaluates to '...' because trailing blanks are not trimmed when PRESERVE BLANKS is specified.

When SQL*Loader determines the final loaded value for col2, it finds no WHEN clause and no NULLIF clause. It then checks the length of the field from field evaluation, which is 3, not 0.

Then SQL*Loader evaluates the DEFAULTIF clause, which evaluates to true because the value of col1, aname, matches the comparison string 'aname'.

Because col2 is a numeric field, SQL*Loader sets the final value for col2 to 0. The row is loaded as aname for col1 and as 0 for col2.

Example 10-4 DEFAULTIF Clause Specifies a Position

The control file specifies:

(col1 POSITION (1:5), 
 col2 POSITION (6:8) INTEGER EXTERNAL DEFAULTIF (1:5) = BLANKS)

The data file contains:

.....123

In Example 10-4, col1 for the row evaluates to NULL with a length of 0 (it is ..... but the trailing blanks are trimmed). col2 evaluates to 123.

When SQL*Loader sets the final loaded value for col2, it finds no WHEN clause and no NULLIF clause. It then checks the length of the field from field evaluation, which is 3, not 0.

Then SQL*Loader evaluates the DEFAULTIF clause. It compares positions (1:5), which contain ....., to BLANKS, and the comparison evaluates to true. Therefore, because col2 is a numeric field (INTEGER EXTERNAL is numeric), SQL*Loader sets the final value for col2 to 0. The row is loaded as NULL for col1 and 0 for col2.

Example 10-5 DEFAULTIF Clause Specifies a Field Name

The control file specifies:

(col1 POSITION (1:5), 
 col2 POSITION(6:8) INTEGER EXTERNAL DEFAULTIF col1 = BLANKS)

The data file contains:

.....123

In Example 10-5, col1 for the row evaluates to NULL with a length of 0 (it is ..... but the trailing blanks are trimmed). col2 evaluates to 123.

When SQL*Loader determines the final value for col2, it finds no WHEN clause and no NULLIF clause. It then checks the length of the field from field evaluation, which is 3, not 0.

Then SQL*Loader evaluates the DEFAULTIF clause. As part of the evaluation, it checks to see that col1 is NULL from field evaluation. It is NULL, so the DEFAULTIF clause evaluates to false. Therefore, SQL*Loader sets the final value for col2 to 123, its original value from field evaluation. The row is loaded as NULL for col1 and 123 for col2.

Loading Data Across Different Platforms

When a data file created on one platform is to be loaded on a different platform, the data must be written in a form that the target system can read. For example, if the source system has a native, floating-point representation that uses 16 bytes, and the target system's floating-point numbers are 12 bytes, then the target system cannot directly read data generated on the source system.

The best solution is to load data across an Oracle Net database link, taking advantage of the automatic conversion of datatypes. This is the recommended approach, whenever feasible, and means that SQL*Loader must be run on the source system.

Problems with interplatform loads typically occur with native datatypes. In some situations, it is possible to avoid problems by lengthening a field by padding it with zeros, or to read only part of the field to shorten it (for example, when an 8-byte integer is to be read on a system that uses 4-byte integers, or the reverse). Note, however, that incompatible datatype implementation may prevent this.

If you cannot use an Oracle Net database link and the data file must be accessed by SQL*Loader running on the target system, then it is advisable to use only the portable SQL*Loader datatypes (for example, CHAR, DATE, VARCHARC, and numeric EXTERNAL). Data files written using these datatypes may be longer than those written with native datatypes. They may take more time to load, but they transport more readily across platforms.

If you know in advance that the byte-ordering schemes or native integer lengths differ between the platform on which the input data will be created and the platform on which SQL*Loader will be run, then investigate the appropriate technique for indicating the byte order of the data or the length of the native integer. Possible techniques for indicating the byte order are to use the BYTEORDER parameter or to place a byte-order mark (BOM) in the file. Both methods are described in "Byte Ordering". It may then be possible to eliminate the incompatibilities and achieve a successful cross-platform data load. If the byte order is different from the SQL*Loader default, then you must indicate a byte order.

Byte Ordering


Note:

The information in this section is only applicable if you are planning to create input data on a system that has a different byte-ordering scheme than the system on which SQL*Loader will be run. Otherwise, you can skip this section.

SQL*Loader can load data from a data file that was created on a system whose byte ordering is different from the byte ordering on the system where SQL*Loader is running, even if the data file contains certain nonportable datatypes.

By default, SQL*Loader uses the byte order of the system where it is running as the byte order for all data files. For example, on a Sun Solaris system, SQL*Loader uses big-endian byte order. On an Intel or an Intel-compatible PC, SQL*Loader uses little-endian byte order.

Byte order affects the results when data is written and read an even number of bytes at a time (typically 2 bytes, 4 bytes, or 8 bytes). The following are some examples of this:

  • The 2-byte integer value 1 is written as 0x0001 on a big-endian system and as 0x0100 on a little-endian system.

  • The 4-byte integer 66051 is written as 0x00010203 on a big-endian system and as 0x03020100 on a little-endian system.

Byte order also affects character data in the UTF16 character set if it is written and read as 2-byte entities. For example, the character 'a' (0x61 in ASCII) is written as 0x0061 in UTF16 on a big-endian system, but as 0x6100 on a little-endian system.

All Oracle-supported character sets, except UTF16, are written one byte at a time. So, even for multibyte character sets such as UTF8, the characters are written and read the same way on all systems, regardless of the byte order of the system. Data in the UTF16 character set, however, is byte-order dependent and therefore nonportable. Data in all other Oracle-supported character sets is portable.

Byte order in a data file is only an issue if the data file that contains the byte-order-dependent data is created on a system that has a different byte order from the system on which SQL*Loader is running. If SQL*Loader knows the byte order of the data, then it swaps the bytes as necessary to ensure that the data is loaded correctly in the target database. Byte swapping means that data in big-endian format is converted to little-endian format, or the reverse.

To indicate the byte order of the data to SQL*Loader, you can use the BYTEORDER parameter, or you can place a byte-order mark (BOM) in the file. If you do not use one of these techniques, then SQL*Loader will not correctly load the data into the database.


See Also:

Case study 11, Loading Data in the Unicode Character Set, for an example of how SQL*Loader handles byte swapping. (See "SQL*Loader Case Studies" for information on how to access case studies.)

Specifying Byte Order

To specify the byte order of data in the input data files, use the following syntax in the SQL*Loader control file:

Description of the illustration byteorder.gif

The BYTEORDER parameter has the following characteristics:

  • BYTEORDER is placed after the LENGTH parameter in the SQL*Loader control file.

  • It is possible to specify a different byte order for different data files. However, the BYTEORDER specification before the INFILE parameters applies to the entire list of primary data files.

  • The BYTEORDER specification for the primary data files is also used as the default for LOBFILEs and SDFs. To override this default, specify BYTEORDER with the LOBFILE or SDF specification.

  • The BYTEORDER parameter is not applicable to data contained within the control file itself.

  • The BYTEORDER parameter applies to the following:

    • Binary INTEGER and SMALLINT data

    • Binary lengths in varying-length fields (that is, for the VARCHAR, VARGRAPHIC, VARRAW, and LONG VARRAW datatypes)

    • Character data for data files in the UTF16 character set

    • FLOAT and DOUBLE datatypes, if the system where the data was written has a compatible floating-point representation with that on the system where SQL*Loader is running

  • The BYTEORDER parameter does not apply to any of the following:

    • Raw datatypes (RAW, VARRAW, or VARRAWC)

    • Graphic datatypes (GRAPHIC, VARGRAPHIC, or GRAPHIC EXTERNAL)

    • Character data for data files in any character set other than UTF16

    • ZONED or (packed) DECIMAL datatypes
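
For example, the opening of a control file for a UTF16 data file written on a little-endian system might look like the following minimal sketch (the file, table, and column names are hypothetical):

LOAD DATA
CHARACTERSET UTF16
BYTEORDER LITTLE ENDIAN
INFILE 'trans.dat'
INTO TABLE trans_stage
FIELDS TERMINATED BY ','
(trans_id    INTEGER EXTERNAL,
 trans_desc  CHAR(40))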

Using Byte Order Marks (BOMs)

Data files that use a Unicode encoding (UTF-16 or UTF-8) may contain a byte-order mark (BOM) in the first few bytes of the file. For a data file that uses the character set UTF16, the values {0xFE,0xFF} in the first two bytes of the file are the BOM indicating that the file contains big-endian data. The values {0xFF,0xFE} are the BOM indicating that the file contains little-endian data.

If the first primary data file uses the UTF16 character set and it also begins with a BOM, then that mark is read and interpreted to determine the byte order for all primary data files. SQL*Loader reads and interprets the BOM, skips it, and begins processing data with the byte immediately after the BOM. The BOM setting overrides any BYTEORDER specification for the first primary data file. BOMs in data files other than the first primary data file are read and used for checking for byte-order conflicts only. They do not change the byte-order setting that SQL*Loader uses in processing the data file.

In summary, the precedence of the byte-order indicators for the first primary data file is as follows:

  • BOM in the first primary data file, if the data file uses a Unicode character set that is byte-order dependent (UTF16) and a BOM is present

  • BYTEORDER parameter value, if specified before the INFILE parameters

  • The byte order of the system where SQL*Loader is running

For a data file that uses a UTF8 character set, a BOM of {0xEF,0xBB,0xBF} in the first 3 bytes indicates that the file contains UTF8 data. It does not indicate the byte order of the data, because data in UTF8 is not byte-order dependent. If SQL*Loader detects a UTF8 BOM, then it skips it but does not change any byte-order settings for processing the data files.

SQL*Loader first establishes a byte-order setting for the first primary data file using the precedence order just defined. This byte-order setting is used for all primary data files. If another primary data file uses the character set UTF16 and also contains a BOM, then the BOM value is compared to the byte-order setting established for the first primary data file. If the BOM value matches the byte-order setting of the first primary data file, then SQL*Loader skips the BOM, and uses that byte-order setting to begin processing data with the byte immediately after the BOM. If the BOM value does not match the byte-order setting established for the first primary data file, then SQL*Loader issues an error message and stops processing.

If any LOBFILEs or secondary data files are specified in the control file, then SQL*Loader establishes a byte-order setting for each LOBFILE and secondary data file (SDF) when it is ready to process the file. The default byte-order setting for LOBFILEs and SDFs is the byte-order setting established for the first primary data file. This is overridden if the BYTEORDER parameter is specified with a LOBFILE or SDF. In either case, if the LOBFILE or SDF uses the UTF16 character set and contains a BOM, the BOM value is compared to the byte-order setting for the file. If the BOM value matches the byte-order setting for the file, then SQL*Loader skips the BOM, and uses that byte-order setting to begin processing data with the byte immediately after the BOM. If the BOM value does not match, then SQL*Loader issues an error message and stops processing.

In summary, the precedence of the byte-order indicators for LOBFILEs and SDFs is as follows:

  • BYTEORDER parameter value specified with the LOBFILE or SDF

  • The byte-order setting established for the first primary data file


    Note:

    If the character set of your data file is a unicode character set and there is a byte-order mark in the first few bytes of the file, then do not use the SKIP parameter. If you do, then the byte-order mark will not be read and interpreted as a byte-order mark.

Suppressing Checks for BOMs

A data file in a Unicode character set may contain binary data that matches the BOM in the first bytes of the file. For example, the INTEGER(2) value 0xFEFF (65279 decimal) matches the big-endian BOM in UTF16. In that case, you can tell SQL*Loader to read the first bytes of the data file as data and not check for a BOM by specifying the BYTEORDERMARK parameter with the value NOCHECK. The syntax for the BYTEORDERMARK parameter is:

Description of the illustration byteordermark.gif

BYTEORDERMARK NOCHECK indicates that SQL*Loader should not check for a BOM and should read all the data in the data file as data.

BYTEORDERMARK CHECK tells SQL*Loader to check for a BOM. This is the default behavior for a data file in a Unicode character set. But this specification may be used in the control file for clarification. It is an error to specify BYTEORDERMARK CHECK for a data file that uses a non-Unicode character set.

The BYTEORDERMARK parameter has the following characteristics:

  • It is placed after the optional BYTEORDER parameter in the SQL*Loader control file.

  • It applies to the syntax specification for primary data files, and also to LOBFILEs and secondary data files (SDFs).

  • It is possible to specify a different BYTEORDERMARK value for different data files; however, the BYTEORDERMARK specification before the INFILE parameters applies to the entire list of primary data files.

  • The BYTEORDERMARK specification for the primary data files is also used as the default for LOBFILEs and SDFs, except that the value CHECK is ignored in this case if the LOBFILE or SDF uses a non-Unicode character set. This default setting for LOBFILEs and secondary data files can be overridden by specifying BYTEORDERMARK with the LOBFILE or SDF specification.
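
For example, if the first bytes of a UTF16 data file are known to be data rather than a BOM, the control file might begin as in the following sketch (the file, table, and column names are hypothetical):

LOAD DATA
CHARACTERSET UTF16
BYTEORDER BIG ENDIAN
BYTEORDERMARK NOCHECK
INFILE 'readings.dat'
INTO TABLE readings_stage
FIELDS TERMINATED BY ','
(probe_id  INTEGER EXTERNAL,
 reading   CHAR(20))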

Loading All-Blank Fields

Fields that are totally blank cause the record to be rejected. To load one of these fields as NULL, use the NULLIF clause with the BLANKS parameter.

If an all-blank CHAR field is surrounded by enclosure delimiters, then the blanks within the enclosures are loaded. Otherwise, the field is loaded as NULL.

A DATE or numeric field that consists entirely of blanks is loaded as a NULL field.


Trimming Whitespace

Blanks, tabs, and other nonprinting characters (such as carriage returns and line feeds) constitute whitespace. Leading whitespace occurs at the beginning of a field. Trailing whitespace occurs at the end of a field. Depending on how the field is specified, whitespace may or may not be included when the field is inserted into the database. This is illustrated in Figure 10-1, where two CHAR fields are defined for a data record.

The field specifications are contained in the control file. The control file CHAR specification is not the same as the database CHAR specification. A data field defined as CHAR in the control file simply tells SQL*Loader how to create the row insert. The data could then be inserted into a CHAR, VARCHAR2, NCHAR, NVARCHAR2, or even a NUMBER or DATE column in the database, with the Oracle database handling any necessary conversions.

By default, SQL*Loader removes trailing spaces from CHAR data before passing it to the database. So, in Figure 10-1, both Field 1 and Field 2 are passed to the database as 3-byte fields. However, when the data is inserted into the table, there is a difference.

Figure 10-1 Example of Field Conversion

Description of "Figure 10-1 Example of Field Conversion"

Column 1 is defined in the database as a fixed-length CHAR column of length 5. So the data (aaa) is left-justified in that column, which remains 5 bytes wide. The extra space on the right is padded with blanks. Column 2, however, is defined as a varying-length field with a maximum length of 5 bytes. The data for that column (bbb) is left-justified as well, but the length remains 3 bytes.

Table 10-5 summarizes when and how whitespace is removed from input data fields when PRESERVE BLANKS is not specified. See "How the PRESERVE BLANKS Option Affects Whitespace Trimming" for details on how to prevent whitespace trimming.

Table 10-5 Behavior Summary for Trimming Whitespace

Specification                              Data         Result            Leading Whitespace        Trailing Whitespace
                                                                          Present (Footnote 1)      Present (Footnote 1)

Predetermined size                         __aa__       __aa              Yes                       No
Terminated                                 __aa__,      __aa__            Yes                       Yes (Footnote 2)
Enclosed                                   "__aa__"     __aa__            Yes                       Yes
Terminated and enclosed                    "__aa__",    __aa__            Yes                       Yes
Optional enclosure (present)               "__aa__",    __aa__            Yes                       Yes
Optional enclosure (absent)                __aa__,      aa__              No                        Yes
Previous field terminated by whitespace    __aa__       aa (Footnote 3)   No                        (Footnote 3)



Footnote 1 When an all-blank field is trimmed, its value is NULL.

Footnote 2 Except for fields that are terminated by whitespace.

Footnote 3 Presence of trailing whitespace depends on the current field's specification, as shown by the other entries in the table.
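
The following control file sketch illustrates the first two rows of Table 10-5 (the table name trim_demo and the field names are hypothetical). Underscores in the comments represent blanks in the input record:

LOAD DATA
INFILE *
APPEND INTO TABLE trim_demo
(fld1  POSITION(1:6)  CHAR,          -- predetermined size: __aa__ is loaded as __aa
 fld2  CHAR TERMINATED BY ',')       -- terminated: __bb__ is loaded as __bb__
BEGINDATA
  aa    bb  ,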

The rest of this section discusses the following topics with regard to trimming whitespace:

  • Datatypes for Which Whitespace Can Be Trimmed

  • Specifying Field Length for Datatypes for Which Whitespace Can Be Trimmed

  • Relative Positioning of Fields

  • Leading Whitespace

  • Trimming Trailing Whitespace

  • Trimming Enclosed Fields

Datatypes for Which Whitespace Can Be Trimmed

The information in this section applies only to fields specified with one of the character-data datatypes:

  • CHAR datatype

  • Datetime and interval datatypes

  • Numeric EXTERNAL datatypes:

    • INTEGER EXTERNAL

    • FLOAT EXTERNAL

    • (packed) DECIMAL EXTERNAL

    • ZONED (decimal) EXTERNAL


      Note:

      Although VARCHAR and VARCHARC fields also contain character data, these fields are never trimmed. These fields include all whitespace that is part of the field in the data file.

Specifying Field Length for Datatypes for Which Whitespace Can Be Trimmed

There are two ways to specify field length. If a field has a constant length that is defined in the control file with a position specification or the datatype and length, then it has a predetermined size. If a field's length is not known in advance, but depends on indicators in the record, then the field is delimited, using either enclosure or termination delimiters.

If a position specification with start and end values is defined for a field that also has enclosure or termination delimiters defined, then only the position specification has any effect. The enclosure and termination delimiters are ignored.

Predetermined Size Fields

Fields that have a predetermined size are specified with a starting position and ending position, or with a length, as in the following examples:

loc POSITION(19:31) 
loc CHAR(14) 

In the second case, even though the exact position of the field is not specified, the length of the field is predetermined.

Delimited Fields

Delimiters are characters that demarcate field boundaries.

Enclosure delimiters surround a field, like the quotation marks in the following example, where "__" represents blanks or tabs:

"__aa__"

Termination delimiters signal the end of a field, like the comma in the following example:

__aa__, 

Delimiters are specified with the control clauses TERMINATED BY and ENCLOSED BY, as shown in the following example:

loc TERMINATED BY "." OPTIONALLY ENCLOSED BY '|' 

Relative Positioning of Fields

This section describes how SQL*Loader determines the starting position of a field in the following situations:

  • No start position is specified for the field

  • The previous field is terminated by a delimiter

  • The previous field has both enclosure and termination delimiters

No Start Position Specified for a Field

When a starting position is not specified for a field, it begins immediately after the end of the previous field. Figure 10-2 illustrates this situation when the previous field (Field 1) has a predetermined size.

Figure 10-2 Relative Positioning After a Fixed Field


Previous Field Terminated by a Delimiter

If the previous field (Field 1) is terminated by a delimiter, then the next field begins immediately after the delimiter, as shown in Figure 10-3.

Figure 10-3 Relative Positioning After a Delimited Field


Previous Field Has Both Enclosure and Termination Delimiters

When a field is specified with both enclosure delimiters and a termination delimiter, then the next field starts after the termination delimiter, as shown in Figure 10-4. If a nonwhitespace character is found after the enclosure delimiter, but before the terminator, then SQL*Loader generates an error.

Figure 10-4 Relative Positioning After Enclosure Delimiters


Leading Whitespace

In Figure 10-4, both fields are stored with leading whitespace. Fields do not include leading whitespace in the following cases:

  • When the previous field is terminated by whitespace, and no starting position is specified for the current field

  • When optional enclosure delimiters are specified for the field, and the enclosure delimiters are not present

These cases are illustrated in the following sections.

Previous Field Terminated by Whitespace 

If the previous field is TERMINATED BY WHITESPACE, then all whitespace after the field acts as the delimiter. The next field starts at the next nonwhitespace character. Figure 10-5 illustrates this case.

Figure 10-5 Fields Terminated by Whitespace


This situation occurs when the previous field is explicitly specified with the TERMINATED BY WHITESPACE clause, as shown in the example. It also occurs when you use the global FIELDS TERMINATED BY WHITESPACE clause.

Optional Enclosure Delimiters

Leading whitespace is also removed from a field when optional enclosure delimiters are specified but not present.

Whenever optional enclosure delimiters are specified, SQL*Loader scans forward, looking for the first enclosure delimiter. If an enclosure delimiter is not found, then SQL*Loader skips over whitespace, eliminating it from the field. The first nonwhitespace character signals the start of the field. This situation is shown in Field 2 in Figure 10-6. (In Field 1 the whitespace is included because SQL*Loader found enclosure delimiters for the field.)

Figure 10-6 Fields Terminated by Optional Enclosure Delimiters


Unlike the case when the previous field is TERMINATED BY WHITESPACE, this specification removes leading whitespace even when a starting position is specified for the current field.


Note:

If enclosure delimiters are present, then leading whitespace after the initial enclosure delimiter is kept, but whitespace before this delimiter is discarded. See the first quotation mark in Field 1, Figure 10-6.

Trimming Trailing Whitespace

Trailing whitespace is always trimmed from character-data fields that have a predetermined size. These are the only fields for which trailing whitespace is always trimmed.

Trimming Enclosed Fields

If a field is enclosed, or terminated and enclosed, like the first field shown in Figure 10-6, then any whitespace outside the enclosure delimiters is not part of the field. Any whitespace between the enclosure delimiters belongs to the field, whether it is leading or trailing whitespace.

How the PRESERVE BLANKS Option Affects Whitespace Trimming

To prevent whitespace trimming in all CHAR, DATE, and numeric EXTERNAL fields, you specify PRESERVE BLANKS as part of the LOAD statement in the control file. However, there may be times when you do not want to preserve blanks for all CHAR, DATE, and numeric EXTERNAL fields. Therefore, SQL*Loader also enables you to specify PRESERVE BLANKS as part of the datatype specification for individual fields, rather than specifying it globally as part of the LOAD statement.

In the following example, assume that PRESERVE BLANKS has not been specified as part of the LOAD statement, but you want the c1 field to default to zero when blanks are present. You can achieve this by specifying PRESERVE BLANKS on the individual field. Only that field is affected; blanks will still be removed on other fields.

c1 INTEGER EXTERNAL(10) PRESERVE BLANKS DEFAULTIF c1=BLANKS

In this example, if PRESERVE BLANKS were not specified for the field, then it would result in the field being improperly loaded as NULL (instead of as 0).

There may be times when you want to specify PRESERVE BLANKS as an option to the LOAD statement and have it apply to most CHAR, DATE, and numeric EXTERNAL fields. You can override it for an individual field by specifying NO PRESERVE BLANKS as part of the datatype specification for that field, as follows:

c1 INTEGER EXTERNAL(10) NO PRESERVE BLANKS

How [NO] PRESERVE BLANKS Works with Delimiter Clauses

The PRESERVE BLANKS option is affected by the presence of the delimiter clauses, as follows:

  • Leading whitespace is left intact when optional enclosure delimiters are not present

  • Trailing whitespace is left intact when fields are specified with a predetermined size

For example, consider the following field, where underscores represent blanks:

__aa__, 

Suppose this field is loaded with the following delimiter clause:

TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' 

In such a case, if PRESERVE BLANKS is specified, then both the leading whitespace and the trailing whitespace are retained. If PRESERVE BLANKS is not specified, then the leading whitespace is trimmed.

Now suppose the field is loaded with the following clause:

TERMINATED BY WHITESPACE

In such a case, if PRESERVE BLANKS is specified, then it does not retain the space at the beginning of the next field, unless that field is specified with a POSITION clause that includes some of the whitespace. Otherwise, SQL*Loader scans past all whitespace at the end of the previous field until it finds a nonblank, nontab character.

Applying SQL Operators to Fields

A wide variety of SQL operators can be applied to field data with the SQL string. This string can contain any combination of SQL expressions that are recognized by the Oracle database as valid for the VALUES clause of an INSERT statement. In general, any SQL function that returns a single value that is compatible with the target column's datatype can be used. SQL strings can be applied to simple scalar column types and also to user-defined complex types such as column object and collections. See the information about expressions in the Oracle Database SQL Language Reference.

The column name and the name of the column in a SQL string bind variable must, under SQL identifier rules, refer to the same column. However, the two names do not necessarily have to be written exactly the same way, as shown in the following control file example:

LOAD DATA 
INFILE * 
APPEND INTO TABLE XXX 
( "Last"   position(1:7)     char   "UPPER(:\"Last\")" 
   first   position(8:15)    char   "UPPER(:first || :FIRST || :\"FIRST\")" 
) 
BEGINDATA 
Phil Grant 
Jason Taylor 

Note the following about the preceding example:

  • If, during table creation, a column identifier is declared using double quotation marks because it contains lowercase and/or special-case letters (as in the column named "Last" above), then the column name in the bind variable must exactly match the column name used in the CREATE TABLE statement.

  • If a column identifier is declared without double quotation marks during table creation (as in the column name first above), then because first, FIRST, and "FIRST" all point to the same column, any of these written formats in a SQL string bind variable would be acceptable.

The following requirements and restrictions apply when you are using SQL strings:

  • If your control file specifies character input that has an associated SQL string, then SQL*Loader makes no attempt to modify the data. This is because SQL*Loader assumes that character input data that is modified using a SQL operator will yield results that are correct for database insertion.

  • The SQL string appears after any other specifications for a given column.

  • The SQL string must be enclosed in double quotation marks.

  • To enclose a column name in quotation marks within a SQL string, you must use escape characters.

    In the preceding example, Last is enclosed in double quotation marks to preserve the mixed case, and the double quotation marks necessitate the use of the backslash (escape) character.

  • If a SQL string contains a column name that references a column object attribute, then the full object attribute name must be used in the bind variable. Each attribute name in the full name is an individual identifier. Each identifier is subject to the SQL identifier quoting rules, independent of the other identifiers in the full name. For example, suppose you have a column object named CHILD with an attribute name of "HEIGHT_%TILE". (Note that the attribute name is in double quotation marks.) To use the full object attribute name in a bind variable, any one of the following formats would work:

    • :CHILD.\"HEIGHT_%TILE\"

    • :child.\"HEIGHT_%TILE\"

    Enclosing the full name (:\"CHILD.HEIGHT_%TILE\") generates a warning message that the quoting rule on an object attribute name used in a bind variable has changed. The warning is only to suggest that the bind variable be written correctly; it will not cause the load to abort. The quoting rule was changed because enclosing the full name in quotation marks would have caused SQL to interpret the name as one identifier rather than a full column object attribute name consisting of multiple identifiers.

  • The SQL string is evaluated after any NULLIF or DEFAULTIF clauses, but before a date mask.

  • If the Oracle database does not recognize the string, then the load terminates in error. If the string is recognized, but causes a database error, then the row that caused the error is rejected.

  • SQL strings are required when using the EXPRESSION parameter in a field specification.

  • The SQL string cannot reference fields that are loaded using OID, SID, REF, or BFILE. Also, it cannot reference filler fields.

  • In direct path mode, a SQL string cannot reference a VARRAY, nested table, or LOB column. This also includes a VARRAY, nested table, or LOB column that is an attribute of a column object.

  • The SQL string cannot be used on RECNUM, SEQUENCE, CONSTANT, or SYSDATE fields.

  • The SQL string cannot be used on LOBs, BFILEs, XML columns, or a file that is an element of a collection.

  • In direct path mode, the final result that is returned after evaluation of the expression in the SQL string must be a scalar datatype. That is, the expression may not return an object or collection datatype when performing a direct path load.

Referencing Fields

To refer to fields in the record, precede the field name with a colon (:). Field values from the current record are substituted. A field name preceded by a colon (:) in a SQL string is also referred to as a bind variable. Note that bind variables enclosed in single quotation marks are treated as text literals, not as bind variables.

The following example illustrates how a reference is made to both the current field and to other fields in the control file. It also illustrates how enclosing bind variables in single quotation marks causes them to be treated as text literals. Be sure to read the notes following this example to help you fully understand the concepts it illustrates.

LOAD DATA
INFILE *
APPEND INTO TABLE YYY
(
 field1  POSITION(1:6) CHAR "LOWER(:field1)"
 field2  CHAR TERMINATED BY ','
         NULLIF ((1) = 'a') DEFAULTIF ((1)= 'b')
         "RTRIM(:field2)"
 field3  CHAR(7) "TRANSLATE(:field3, ':field1', ':1')",
 field4  COLUMN OBJECT
 (
  attr1  CHAR(3)  "UPPER(:field4.attr3)",
  attr2  CHAR(2),
  attr3  CHAR(3)  ":field4.attr1 + 1"
 ),
 field5  EXPRESSION "MYFUNC(:FIELD4, SYSDATE)"
)
BEGINDATA
ABCDEF1234511  ,:field1500YYabc
abcDEF67890    ,:field2600ZZghl

Notes About This Example: 

  • In the following line, :field1 is not enclosed in single quotation marks and is therefore interpreted as a bind variable:

    field1 POSITION(1:6) CHAR "LOWER(:field1)"

  • In the following line, ':field1' and ':1' are enclosed in single quotation marks and are therefore treated as text literals and passed unchanged to the TRANSLATE function:

    field3 CHAR(7) "TRANSLATE(:field3, ':field1', ':1')"

    For more information about the use of quotation marks inside quoted strings, see "Specifying File Names and Object Names".

  • For each input record read, the value of the field referenced by the bind variable will be substituted for the bind variable. For example, the value ABCDEF in the first record is mapped to the first field :field1. This value is then passed as an argument to the LOWER function.

  • A bind variable in a SQL string need not reference the current field. In the preceding example, the bind variable in the SQL string for field FIELD4.ATTR1 references field FIELD4.ATTR3. The field FIELD4.ATTR1 is still mapped to the values 500 and 600 in the input records, but the final values stored in its corresponding columns are ABC and GHL.

  • field5 is not mapped to any field in the input record. The value that is stored in the target column is the result of executing the MYFUNC PL/SQL function, which takes two arguments. The use of the EXPRESSION parameter requires that a SQL string be used to compute the final value of the column because no input data is mapped to the field.

Common Uses of SQL Operators in Field Specifications

SQL operators are commonly used for the following tasks:

  • Loading external data with an implied decimal point:

     field1 POSITION(1:9) DECIMAL EXTERNAL(8) ":field1/1000"

  • Truncating fields that could be too long:

     field1 CHAR TERMINATED BY "," "SUBSTR(:field1, 1, 10)"

Combinations of SQL Operators

Multiple operators can also be combined, as in the following examples:

field1 POSITION(*+3) INTEGER EXTERNAL
       "TRUNC(RPAD(:field1,6,'0'), -2)"
field1 POSITION(1:8) INTEGER EXTERNAL
       "TRANSLATE(RTRIM(:field1),'N/A', '0')"
field1 CHAR(10)
       "NVL( LTRIM(RTRIM(:field1)), 'unknown' )"

Using SQL Strings with a Date Mask

When a SQL string is used with a date mask, the date mask is evaluated after the SQL string. Consider a field specified as follows:

field1 DATE "dd-mon-yy" "RTRIM(:field1)"

SQL*Loader internally generates and inserts the following:

TO_DATE(RTRIM(<field1_value>), 'dd-mon-yy')

Note that when using the DATE field datatype, it is not possible to have a SQL string without a date mask. This is because SQL*Loader assumes that the first quoted string it finds after the DATE parameter is a date mask. For instance, the following field specification would result in an error (ORA-01821: date format not recognized):

field1 DATE "RTRIM(TO_DATE(:field1, 'dd-mon-yyyy'))"

In this case, a simple workaround is to use the CHAR datatype.

Interpreting Formatted Fields

It is possible to use the TO_CHAR operator to store formatted dates and numbers. For example:

field1 ... "TO_CHAR(:field1, '$09999.99')"

This example could store numeric input data in formatted form, where field1 is a character column in the database. This field would be stored with the formatting characters (dollar sign, period, and so on) already in place.

You have even more flexibility, however, if you store such values as numeric quantities or dates. You can then apply arithmetic functions to the values in the database, and still select formatted values for your reports.

An example of using the SQL string to load data from a formatted report is shown in case study 7, Extracting Data from a Formatted Report. (See "SQL*Loader Case Studies" for information on how to access case studies.)

Using SQL Strings to Load the ANYDATA Database Type

The ANYDATA database type can contain data of different types. To load the ANYDATA type using SQL*Loader, you must explicitly construct it with a function call. The function is invoked using the SQL string support described in this section.

For example, suppose you have a table with a column named miscellaneous which is of type ANYDATA. You could load the column by doing the following, which would create an ANYDATA type containing a number.

LOAD DATA
INFILE *
APPEND INTO TABLE  ORDERS
(
miscellaneous CHAR "SYS.ANYDATA.CONVERTNUMBER(:miscellaneous)"
)
BEGINDATA
4

There can also be more complex situations in which you create an ANYDATA type that contains a different type depending upon the values in the record. To do this, you could write your own PL/SQL function that would determine what type should be in the ANYDATA type, based on the value in the record, and then call the appropriate ANYDATA.Convert*() function to create it.
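
The following is a minimal sketch of such a function (the name make_anydata is hypothetical); it stores numeric-looking input as a NUMBER and anything else as a VARCHAR2 by calling the corresponding SYS.ANYDATA conversion functions:

CREATE OR REPLACE FUNCTION make_anydata (p_value IN VARCHAR2)
  RETURN SYS.ANYDATA
IS
  n NUMBER;
BEGIN
  n := TO_NUMBER(p_value);                        -- try a numeric conversion first
  RETURN SYS.ANYDATA.CONVERTNUMBER(n);            -- numeric input is stored as a NUMBER
EXCEPTION
  WHEN VALUE_ERROR OR INVALID_NUMBER THEN
    RETURN SYS.ANYDATA.CONVERTVARCHAR2(p_value);  -- anything else is stored as text
END;
/

The control file would then invoke the function in a SQL string, for example:

miscellaneous CHAR "MAKE_ANYDATA(:miscellaneous)"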




Using SQL*Loader to Generate Data for Input

The parameters described in this section provide the means for SQL*Loader to generate the data stored in the database record, rather than reading it from a data file. The following parameters are described:

  • CONSTANT

  • EXPRESSION

  • RECNUM

  • SYSDATE

  • SEQUENCE

Loading Data Without Files

It is possible to use SQL*Loader to generate data by specifying only sequences, record numbers, system dates, constants, and SQL string expressions as field specifications.

SQL*Loader inserts as many records as are specified by the LOAD statement. The SKIP parameter is not permitted in this situation.

SQL*Loader is optimized for this case. Whenever SQL*Loader detects that only generated specifications are used, it ignores any specified data file—no read I/O is performed.

In addition, no memory is required for a bind array. If there are any WHEN clauses in the control file, then SQL*Loader assumes that data evaluation is necessary, and input records are read.

Setting a Column to a Constant Value

This is the simplest form of generated data. It does not vary during the load or between loads.

CONSTANT Parameter

To set a column to a constant value, use CONSTANT followed by a value:

CONSTANT  value

CONSTANT data is interpreted by SQL*Loader as character input. It is converted, as necessary, to the database column type.

You may enclose the value within quotation marks, and you must do so if it contains whitespace or reserved words. Be sure to specify a legal value for the target column. If the value is bad, then every record is rejected.

Numeric values larger than 2^32 - 1 (4,294,967,295) must be enclosed in quotation marks.


Note:

Do not use the CONSTANT parameter to set a column to null. To set a column to null, do not specify that column at all. Oracle automatically sets that column to null when loading the record. The combination of CONSTANT and a value is a complete column specification.

Setting a Column to an Expression Value

Use the EXPRESSION parameter after a column name to set that column to the value returned by a SQL operator or specially written PL/SQL function. The operator or function is indicated in a SQL string that follows the EXPRESSION parameter. Any arbitrary expression may be used in this context provided that any parameters required for the operator or function are correctly specified and that the result returned by the operator or function is compatible with the datatype of the column being loaded.

EXPRESSION Parameter

The combination of column name, EXPRESSION parameter, and a SQL string is a complete field specification:

column_name EXPRESSION "SQL string"

In both conventional path mode and direct path mode, the EXPRESSION parameter can be used to load the default value into column_name:

column_name EXPRESSION "DEFAULT"

Note that if DEFAULT is used and the mode is direct path, then use of a sequence as a default will not work.
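
As an illustrative sketch (the field names qty and unit_price and the column name total_price are hypothetical), an EXPRESSION field can also combine values from other fields in the record through bind variables:

total_price  EXPRESSION "ROUND(:qty * :unit_price, 2)"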

Setting a Column to the Data File Record Number

Use the RECNUM parameter after a column name to set that column to the number of the logical record from which that record was loaded. Records are counted sequentially from the beginning of the first data file, starting with record 1. RECNUM is incremented as each logical record is assembled. Thus it increments for records that are discarded, skipped, rejected, or loaded. If you use the option SKIP=10, then the first record loaded has a RECNUM of 11.

RECNUM Parameter

The combination of column name and RECNUM is a complete column specification.

column_name RECNUM

Setting a Column to the Current Date

A column specified with SYSDATE gets the current system date, as defined by the SQL language SYSDATE parameter. See the section on the DATE datatype in Oracle Database SQL Language Reference.

SYSDATE Parameter

The combination of column name and the SYSDATE parameter is a complete column specification.

column_name SYSDATE

The database column must be of type CHAR or DATE. If the column is of type CHAR, then the date is loaded in the form 'dd-mon-yy.' After the load, it can be accessed only in that form. If the system date is loaded into a DATE column, then it can be accessed in a variety of forms that include the time and the date.

A new system date/time is used for each array of records inserted in a conventional path load and for each block of records loaded during a direct path load.
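
The following control file fragment is a sketch showing how CONSTANT, RECNUM, and SYSDATE can be combined with an ordinary data field (the data file, table, and column names are hypothetical):

LOAD DATA
INFILE 'emp.dat'
APPEND INTO TABLE emp_audit
(ename      POSITION(1:20)  CHAR,
 batch_tag  CONSTANT "NIGHTLY",
 src_recno  RECNUM,
 loaded_on  SYSDATE)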

Setting a Column to a Unique Sequence Number

The SEQUENCE parameter ensures a unique value for a particular column. SEQUENCE increments for each record that is loaded or rejected. It does not increment for records that are discarded or skipped.

SEQUENCE Parameter

The combination of column name and the SEQUENCE parameter is a complete column specification.

column_name SEQUENCE({COUNT | MAX | integer}[, incr])

Table 10-6 describes the parameters used for column specification.

Table 10-6 Parameters Used for Column Specification

Parameter        Description

column_name

The name of the column in the database to which to assign the sequence.

SEQUENCE

Use the SEQUENCE parameter to specify the value for a column.

COUNT

The sequence starts with the number of records already in the table plus the increment.

MAX

The sequence starts with the current maximum value for the column plus the increment.

integer

Specifies the specific sequence number to begin with.

incr

The value by which the sequence number is incremented after a record is loaded or rejected. This is optional. The default is 1.


If a record is rejected (that is, it has a format error or causes an Oracle error), then the generated sequence numbers are not reshuffled to mask this. If four rows are assigned sequence numbers 10, 12, 14, and 16 in a particular column, and the row with 12 is rejected, then the three rows inserted are numbered 10, 14, and 16, not 10, 12, and 14. This allows the sequence of inserts to be preserved despite data errors. When you correct the rejected data and reinsert it, you can manually set the columns to agree with the sequence.

Case study 3, Loading a Delimited Free-Format File, provides an example of using the SEQUENCE parameter. (See "SQL*Loader Case Studies" for information on how to access case studies.)

Generating Sequence Numbers for Multiple Tables

Because a unique sequence number is generated for each logical input record, rather than for each table insert, the same sequence number can be used when inserting data into multiple tables. This is frequently useful.

Sometimes, however, you might want to generate different sequence numbers for each INTO TABLE clause. For example, your data format might define three logical records in every input record. In that case, you can use three INTO TABLE clauses, each of which inserts a different part of the record into the same table. When you use SEQUENCE(MAX), SQL*Loader will use the maximum from each table, which can lead to inconsistencies in sequence numbers.

To generate sequence numbers for these records, you must generate unique numbers for each of the three inserts. Use the number of table-inserts per record as the sequence increment, and start the sequence numbers for each insert with successive numbers.

Example: Generating Different Sequence Numbers for Each Insert

Suppose you want to load the following department names into the dept table. Each input record contains three department names, and you want to generate the department numbers automatically.

Accounting     Personnel      Manufacturing
Shipping       Purchasing     Maintenance 
... 

You could use the following control file entries to generate unique department numbers:

INTO TABLE dept 
(deptno  SEQUENCE(1, 3), 
 dname   POSITION(1:14) CHAR) 
INTO TABLE dept 
(deptno  SEQUENCE(2, 3), 
 dname   POSITION(16:29) CHAR) 
INTO TABLE dept 
(deptno  SEQUENCE(3, 3), 
 dname   POSITION(31:44) CHAR) 

The first INTO TABLE clause generates department number 1, the second number 2, and the third number 3. They all use 3 as the sequence increment (the number of department names in each record). This control file loads Accounting as department number 1, Personnel as 2, and Manufacturing as 3.

The sequence numbers are then incremented for the next record, so Shipping loads as 4, Purchasing as 5, and so on.


Part V

Appendixes

This section contains the following appendix:

Appendix A, "SQL*Loader Syntax Diagrams"

This appendix provides diagrams of the SQL*Loader syntax.


16 ADRCI: ADR Command Interpreter

The Automatic Diagnostic Repository Command Interpreter (ADRCI) utility is a command-line tool that you use to manage Oracle Database diagnostic data.

This chapter contains the following sections:


See Also:

Oracle Database Administrator's Guide for more information about managing diagnostic data.

About the ADR Command Interpreter (ADRCI) Utility

ADRCI is a command-line tool that is part of the fault diagnosability infrastructure introduced in Oracle Database 11g. ADRCI enables you to:

  • View diagnostic data within the Automatic Diagnostic Repository (ADR).

  • View Health Monitor reports.

  • Package incident and problem information into a zip file for transmission to Oracle Support.

Diagnostic data includes incident and problem descriptions, trace files, dumps, health monitor reports, alert log entries, and more.

ADR data is secured by operating system permissions on the ADR directories, hence there is no need to log in to ADRCI.

ADRCI has a rich command set, and can be used in interactive mode or within scripts.


Note:

The easier and recommended way to manage diagnostic data is with the Oracle Enterprise Manager Support Workbench (Support Workbench). ADRCI provides a command-line alternative to most of the functionality of the Support Workbench, and adds capabilities such as listing and querying trace files.

See Oracle Database Administrator's Guide for complete information about the Support Workbench.


Definitions

The following are definitions of terms used for ADRCI and the Oracle Database fault diagnosability infrastructure:

Automatic Diagnostic Repository (ADR)

The Automatic Diagnostic Repository (ADR) is a file-based repository for database diagnostic data such as traces, dumps, the alert log, health monitor reports, and more. It has a unified directory structure across multiple instances and multiple products. Beginning with release 11g, the database, Oracle Automatic Storage Management (Oracle ASM), and other Oracle products or components store all diagnostic data in the ADR. Each instance of each product stores diagnostic data underneath its own ADR home directory (see "ADR Home"). For example, in an Oracle Real Application Clusters (Oracle RAC) environment with shared storage and Oracle ASM, each database instance and each Oracle ASM instance has a home directory within the ADR. The ADR's unified directory structure enables customers and Oracle Support to correlate and analyze diagnostic data across multiple instances and multiple products.

Problem

A problem is a critical error in the database. Critical errors include internal errors such as ORA-00600 and other severe errors such as ORA-07445 (operating system exception) or ORA-04031 (out of memory in the shared pool). Problems are tracked in the ADR. Each problem has a problem key and a unique problem ID. (See "Problem Key".)

Incident

An incident is a single occurrence of a problem. When a problem occurs multiple times, an incident is created for each occurrence. Incidents are tracked in the ADR. Each incident is identified by a numeric incident ID, which is unique within the ADR. When an incident occurs, the database makes an entry in the alert log, sends an incident alert to Oracle Enterprise Manager, gathers diagnostic data about the incident in the form of dump files (incident dumps), tags the incident dumps with the incident ID, and stores the incident dumps in an ADR subdirectory created for that incident.

Diagnosis and resolution of a critical error usually starts with an incident alert. You can obtain a list of all incidents in the ADR with an ADRCI command. Each incident is mapped to a single problem only.

Incidents are flood-controlled so that a single problem does not generate too many incidents and incident dumps. See Oracle Database Administrator's Guide for more information about incident flood control.

Problem Key

Every problem has a problem key, which is a text string that includes an error code (such as ORA 600) and in some cases, one or more error parameters. Two incidents are considered to have the same root cause if their problem keys match.

Incident Package

An incident package (package) is a collection of data about incidents for one or more problems. Before sending incident data to Oracle Support it must be collected into a package using the Incident Packaging Service (IPS). After a package is created, you can add external files to the package, remove selected files from the package, or scrub (edit) selected files in the package to remove sensitive data.

A package is a logical construct only, until you create a physical file from the package contents. That is, an incident package starts out as a collection of metadata in the ADR. As you add and remove package contents, only the metadata is modified. When you are ready to upload the data to Oracle Support, you create a physical package using ADRCI, which saves the data into a zip file. You can then upload the zip file to Oracle Support.

Finalizing

Before ADRCI can generate a physical package from a logical package, the package must be finalized. This means that other components are called to add any correlated diagnostic data files to the incidents already in this package. Finalizing also adds recent trace files, alert log entries, Health Monitor reports, SQL test cases, and configuration information. This step is run automatically when a physical package is generated, and can also be run manually using the ADRCI utility. After manually finalizing a package, you can review the files that were added and then remove or edit any that contain sensitive information.


See Also:

Oracle Database Administrator's Guide for more information about correlated diagnostic data

ADR Home

An ADR home is the root directory for all diagnostic data—traces, dumps, alert log, and so on—for a particular instance of a particular Oracle product or component. For example, in an Oracle RAC environment with Oracle ASM, each database instance and each Oracle ASM instance has an ADR home. All ADR homes share the same hierarchical directory structure. Some of the standard subdirectories in each ADR home include alert (for the alert log), trace (for trace files), and incident (for incident information). All ADR homes are located within the ADR base directory. (See "ADR Base".)

Some ADRCI commands can work with multiple ADR homes simultaneously. The current ADRCI homepath determines the ADR homes that are searched for diagnostic data when an ADRCI command is issued. See "Homepath" for more information.

ADR Base

To permit correlation of diagnostic data across multiple ADR homes, ADR homes are grouped together under the same root directory called the ADR base. For example, in an Oracle RAC environment, the ADR base could be on a shared disk, and the ADR home for each Oracle RAC instance could be located under this ADR base.

The location of the ADR base for a database instance is set by the DIAGNOSTIC_DEST initialization parameter. If this parameter is omitted or is null, the database sets it to a default value. See Oracle Database Administrator's Guide for details.

When multiple database instances share an Oracle home, whether they are multiple single instances or the instances of an Oracle RAC database, and when one or more of these instances set ADR base in different locations, the last instance to start up determines the default ADR base for ADRCI.

Homepath

All ADRCI commands operate on diagnostic data in the current ADR homes. More than one ADR home can be current at any one time. Some ADRCI commands (such as SHOW INCIDENT) search for and display diagnostic data from all current ADR homes, while other commands require that only one ADR home be current, and display an error message if more than one are current.

The ADRCI homepath determines the ADR homes that are current. It does so by pointing to a directory within the ADR base hierarchy. If it points to a single ADR home directory, then that ADR home is the only current ADR home. If the homepath points to a directory that is above the ADR home directory level in the hierarchy, then all ADR homes that are below this directory become current.

The homepath is null by default when ADRCI starts. This means that all ADR homes under ADR base are current.

The SHOW HOME and SHOW HOMEPATH commands display a list of the ADR homes that are current, and the SET HOMEPATH command sets the homepath.




Starting ADRCI and Getting Help

You can use ADRCI in interactive mode or batch mode. Details are provided in the following sections:

  • Using ADRCI in Interactive Mode

  • Getting Help

  • Using ADRCI in Batch Mode

Using ADRCI in Interactive Mode

Interactive mode prompts you to enter individual commands one at a time.

To use ADRCI in interactive mode:

  1. Ensure that the ORACLE_HOME and PATH environment variables are set properly.

    On the Windows platform, these environment variables are set in the Windows registry automatically upon installation. On other platforms, you must set and check environment variables with operating system commands.

    The PATH environment variable must include ORACLE_HOME/bin.

  2. Enter the following command at the operating system command prompt:

    ADRCI
    

    The utility starts and displays the following prompt:

    adrci>
    
  3. Enter ADRCI commands, following each with the Enter key.

  4. Enter one of the following commands to exit ADRCI:

    EXIT
    QUIT
    

Getting Help

With the ADRCI help system, you can:

  • View a list of ADR commands.

  • View help for an individual command.

  • View a list of ADRCI command line options.

To view a list of ADRCI commands:

  1. Start ADRCI in interactive mode.

    See "Using ADRCI in Interactive Mode" for instructions.

  2. At the ADRCI prompt, enter the following command:

    HELP
    

To get help for a specific ADRCI command:

  1. Start ADRCI in interactive mode.

    See "Using ADRCI in Interactive Mode" for instructions.

  2. At the ADRCI prompt, enter the following command:

    HELP command
    

    For example, to get help on the SHOW TRACEFILE command, enter the following:

    HELP SHOW TRACEFILE
    

To view a list of command line options:

  • Enter the following command at the operating system command prompt:

    ADRCI -HELP
    

    The utility displays output similar to the following:

    Syntax:
       adrci [-help] [script=script_filename] [exec="command [;command;...]"]
     
    Options      Description                     (Default)
    -----------------------------------------------------------------
    script       script file name                (None)
    help         help on the command options     (None)
    exec         exec a set of commands          (None)
    -----------------------------------------------------------------
    

Using ADRCI in Batch Mode

Batch mode enables you to run a series of ADRCI commands at once, without being prompted for input. To use batch mode, you add a command line parameter to the ADRCI command when you start ADRCI. Batch mode enables you to include ADRCI commands in shell scripts or Windows batch files. Like interactive mode, the ORACLE_HOME and PATH environment variables must be set before starting ADRCI.

The following command line parameters are available for batch operation:

Table 16-1 ADRCI Command Line Parameters for Batch Operation

Parameter        Description

EXEC

Enables you to submit one or more ADRCI commands on the operating system command line that starts ADRCI. Commands are separated by semicolons (;).

SCRIPT

Enables you to run a script containing ADRCI commands.


To submit ADRCI commands on the command line:

  • Enter the following command at the operating system command prompt:

    ADRCI EXEC="COMMAND[; COMMAND]..."
    

    For example, to run the SHOW HOMES command in batch mode, enter the following command at the operating system command prompt:

    ADRCI EXEC="SHOW HOMES"
    

    To run the SHOW HOMES command followed by the SHOW INCIDENT command, enter the following:

    ADRCI EXEC="SHOW HOMES; SHOW INCIDENT"
    

To run ADRCI scripts:

  • Enter the following command at the operating system command prompt:

    ADRCI SCRIPT=SCRIPT_FILE_NAME
    

    For example, to run a script file named adrci_script.txt, enter the following command at the operating system command prompt:

    ADRCI SCRIPT=adrci_script.txt
    

    A script file contains a series of commands separated by semicolons (;) or line breaks, such as:

    SET HOMEPATH diag/rdbms/orcl/orcl; SHOW ALERT -term
    

Setting the ADRCI Homepath Before Using ADRCI Commands

When diagnosing a problem, you may want to work with diagnostic data from multiple database instances or components, or you may want to focus on diagnostic data from one instance or component. To work with diagnostic data from multiple instances or components, you must ensure that the ADR homes for all of these instances or components are current. To work with diagnostic data from only one instance or component, you must ensure that only the ADR home for that instance or component is current. You control the ADR homes that are current by setting the ADRCI homepath.

If multiple homes are current, this means that the homepath points to a directory in the ADR directory structure that contains multiple ADR home directories underneath it. To focus on a single ADR home, you must set the homepath to point lower in the directory hierarchy, to a single ADR home directory.

For example, if the Oracle RAC database with database name orclbi has two instances, where the instances have SIDs orclbi1 and orclbi2, and Oracle RAC is using a shared Oracle home, the following two ADR homes exist:

/diag/rdbms/orclbi/orclbi1/
/diag/rdbms/orclbi/orclbi2/

In all ADRCI commands and output, ADR home directory paths (ADR homes) are always expressed relative to ADR base. So if ADR base is currently /u01/app/oracle, the absolute paths of these two ADR homes are the following:

/u01/app/oracle/diag/rdbms/orclbi/orclbi1/
/u01/app/oracle/diag/rdbms/orclbi/orclbi2/

You use the SET HOMEPATH command to set one or more ADR homes to be current. If ADR base is /u01/app/oracle and you want to set the homepath to /u01/app/oracle/diag/rdbms/orclbi/orclbi2/, you use this command:

adrci> set homepath diag/rdbms/orclbi/orclbi2

When ADRCI starts, the homepath is null by default, which means that all ADR homes under ADR base are current. In the previously cited example, therefore, the ADR homes for both Oracle RAC instances would be current.

adrci> show homes
ADR Homes:
diag/rdbms/orclbi/orclbi1
diag/rdbms/orclbi/orclbi2

In this case, any ADRCI command that you run, assuming that the command supports more than one current ADR home, works with diagnostic data from both ADR homes. If you were to set the homepath to /diag/rdbms/orclbi/orclbi2, only the ADR home for the instance with SID orclbi2 would be current.

adrci> set homepath diag/rdbms/orclbi/orclbi2
adrci> show homes
ADR Homes:
diag/rdbms/orclbi/orclbi2

In this case, any ADRCI command that you run would work with diagnostic data from this single ADR home only.




Viewing the Alert Log

Beginning with Oracle Database 11g, the alert log is written as both an XML-formatted file and as a text file. You can view either format of the file with any text editor, or you can run an ADRCI command to view the XML-formatted alert log with the XML tags omitted. By default, ADRCI displays the alert log in your default editor. You can use the SET EDITOR command to change your default editor.

To view the alert log with ADRCI:

  1. Start ADRCI in interactive mode.

    See "Starting ADRCI and Getting Help" for instructions.

  2. (Optional) Use the SET HOMEPATH command to select (make current) a single ADR home.

    You can use the SHOW HOMES command first to see a list of current ADR homes. See "Homepath" and "Setting the ADRCI Homepath Before Using ADRCI Commands" for more information.

  3. At the ADRCI prompt, enter the following command:

    SHOW ALERT
    

    If more than one ADR home is current, you are prompted to select a single ADR home from a list. The alert log is displayed, with XML tags omitted, in your default editor.

  4. Exit the editor to return to the ADRCI command prompt.

The following are variations on the SHOW ALERT command:

SHOW ALERT -TAIL

This displays the last portion of the alert log (the last 10 entries) in your terminal session.

SHOW ALERT -TAIL 50

This displays the last 50 entries in the alert log in your terminal session.

SHOW ALERT -TAIL -F

This displays the last 10 entries in the alert log, and then waits for more messages to arrive in the alert log. As each message arrives, it is appended to the display. This command enables you to perform live monitoring of the alert log. Press CTRL+C to stop waiting and return to the ADRCI prompt.

SPOOL /home/steve/MYALERT.LOG
SHOW ALERT -TERM
SPOOL OFF

This outputs the alert log, without XML tags, to the file /home/steve/MYALERT.LOG.

SHOW ALERT -P "MESSAGE_TEXT LIKE '%ORA-600%'"

This displays only alert log messages that contain the string 'ORA-600'. The output looks something like this:

ADR Home = /u01/app/oracle/product/11.1.0/db_1/log/diag/rdbms/orclbi/orclbi:
******************************************************************************
01-SEP-06 09.17.44.849000000 PM -07:00
AlertMsg1: ORA-600 dbgris01, addr=0xa9876541



Finding Trace Files

ADRCI enables you to view the names of trace files that are currently in the automatic diagnostic repository (ADR). You can view the names of all trace files in the ADR, or you can apply filters to view a subset of names. For example, ADRCI has commands that enable you to:

  • Obtain a list of trace files whose file name matches a search string.

  • Obtain a list of trace files in a particular directory.

  • Obtain a list of trace files that pertain to a particular incident.

You can combine filtering functions by using the proper command line parameters.

The SHOW TRACEFILE command displays a list of the trace files that are present in the trace directory and in all incident directories under the current ADR home. When multiple ADR homes are current, the trace file lists from all ADR homes are output one after another.

The following statement lists the names of all trace files in the current ADR homes, without any filtering:

SHOW TRACEFILE

The following statement lists the name of every trace file that has the string mmon in its file name. The percent sign (%) is used as a wildcard character, and the search string is case sensitive.

SHOW TRACEFILE %mmon%

This statement lists the name of every trace file that is located in the /home/steve/temp directory and that has the string mmon in its file name:

SHOW TRACEFILE %mmon% -PATH /home/steve/temp

This statement lists the names of trace files in reverse order of last modified time. That is, the most recently modified trace files are listed first.

SHOW TRACEFILE -RT

This statement lists the names of all trace files related to incident number 1681:

SHOW TRACEFILE -I 1681



Viewing Incidents

The ADRCI SHOW INCIDENT command displays information about open incidents. For each incident, the incident ID, problem key, and incident creation time are shown. If the ADRCI homepath is set so that there are multiple current ADR homes, the report includes incidents from all of them.

To view a report of all open incidents:

  1. Start ADRCI in interactive mode, and ensure that the homepath points to the correct directory within the ADR base directory hierarchy.

    See "Starting ADRCI and Getting Help" and "Homepath" for details.

  2. At the ADRCI prompt, enter the following command:

    SHOW INCIDENT
    

    ADRCI generates output similar to the following:

ADR Home = /u01/app/oracle/product/11.1.0/db_1/log/diag/rdbms/orclbi/orclbi:
*****************************************************************************
INCIDENT_ID       PROBLEM_KEY               CREATE_TIME
----------------- ------------------------- ---------------------------------
3808              ORA 603                   2010-06-18 21:35:49.322161 -07:00
3807              ORA 600 [4137]            2010-06-18 21:35:47.862114 -07:00
3805              ORA 600 [4136]            2010-06-18 21:35:25.012579 -07:00
3804              ORA 1578                  2010-06-18 21:35:08.483156 -07:00
4 rows fetched

The following are variations on the SHOW INCIDENT command:

SHOW INCIDENT -MODE BRIEF
SHOW INCIDENT -MODE DETAIL

These commands produce more detailed versions of the incident report.

SHOW INCIDENT -MODE DETAIL -P "INCIDENT_ID=1681"

This shows a detailed incident report for incident 1681 only.

Packaging Incidents

You can use ADRCI commands to package one or more incidents for transmission to Oracle Support for analysis. Background information and instructions are presented in the following topics:

About Packaging Incidents

Packaging incidents is a three-step process:

Step 1: Create a logical incident package.

The incident package (package) is denoted as logical because it exists only as metadata in the automatic diagnostic repository (ADR). It has no content until you generate a physical package from the logical package. The logical package is assigned a package number, and you refer to it by that number in subsequent commands.

You can create the logical package as an empty package, or as a package based on an incident number, a problem number, a problem key, or a time interval. If you create the package as an empty package, you can add diagnostic information to it in step 2.

Creating a package based on an incident means including diagnostic data—dumps, health monitor reports, and so on—for that incident. Creating a package based on a problem number or problem key means including in the package diagnostic data for incidents that reference that problem number or problem key. Creating a package based on a time interval means including diagnostic data on incidents that occurred in the time interval.

Step 2: Add diagnostic information to the incident package

If you created a logical package based on an incident number, a problem number, a problem key, or a time interval, this step is optional. You can add additional incidents to the package or you can add any file within the ADR to the package. If you created an empty package, you must use ADRCI commands to add incidents or files to the package.

Step 3: Generate the physical incident package

When you submit the command to generate the physical package, ADRCI gathers all required diagnostic files and adds them to a zip file in a designated directory. You can generate a complete zip file or an incremental zip file. An incremental file contains all the diagnostic files that were added or changed since the last zip file was created for the same logical package. You can create incremental files only after you create a complete file, and you can create as many incremental files as you want. Each zip file is assigned a sequence number so that the files can be analyzed in the correct order.

Zip files are named according to the following scheme:

packageName_mode_sequence.zip

where:

  • packageName consists of a portion of the problem key followed by a timestamp

  • mode is either COM or INC, for complete or incremental

  • sequence is an integer

For example, if you generate a complete zip file for a logical package that was created on September 6, 2006 at 4:53 p.m., and then generate an incremental zip file for the same logical package, you would create files with names similar to the following:

ORA603_20060906165316_COM_1.zip
ORA603_20060906165316_INC_2.zip
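
Putting the three steps together, an interactive session might look like the following sketch, in which the incident number, package number, trace file path, and output directory are placeholders:

adrci> IPS CREATE PACKAGE INCIDENT 3805
Created package 1 based on incident id 3805, correlation level typical
adrci> IPS ADD FILE /u01/app/oracle/diag/rdbms/orclbi/orclbi1/trace/orclbi1_ora_1234.trc PACKAGE 1
adrci> IPS GENERATE PACKAGE 1 IN /home/steve/diagnostics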

Creating Incident Packages

The following sections present the ADRCI commands that you use to create a logical incident package (package) and generate a physical package:

Creating a Logical Incident Package

You use variants of the IPS CREATE PACKAGE command to create a logical package (package).

To create a package based on an incident:

  1. Start ADRCI in interactive mode, and ensure that the homepath points to the correct directory within the ADR base directory hierarchy.

    See "Starting ADRCI and Getting Help" and "Homepath" for details.

  2. At the ADRCI prompt, enter the following command:

    IPS CREATE PACKAGE INCIDENT incident_number
    

    For example, the following command creates a package based on incident 3:

    IPS CREATE PACKAGE INCIDENT 3
    

    ADRCI generates output similar to the following:

    Created package 10 based on incident id 3, correlation level typical
    

    The package number assigned to this logical package is 10.

The following are variations on the IPS CREATE PACKAGE command:

IPS CREATE PACKAGE

This creates an empty package. You must use the IPS ADD INCIDENT or IPS ADD FILE commands to add diagnostic data to the package before generating it.

IPS CREATE PACKAGE PROBLEM problem_ID

This creates a package and includes diagnostic information for incidents that reference the specified problem ID. (Problem IDs are integers.) You can obtain the problem ID for an incident from the report displayed by the SHOW INCIDENT -MODE BRIEF command. Because there can be many incidents with the same problem ID, ADRCI adds to the package the diagnostic information for the first three incidents (early incidents) that occurred and last three incidents (late incidents) that occurred with this problem ID, excluding any incidents that are older than 90 days.


Note:

The number of early and late incidents, and the 90-day age limit are defaults that can be changed. See "IPS SET CONFIGURATION".

ADRCI may also add other incidents that correlate closely in time or in other criteria with the already added incidents.

IPS CREATE PACKAGE PROBLEMKEY "problem_key"

This creates a package and includes diagnostic information for incidents that reference the specified problem key. You can obtain problem keys from the report displayed by the SHOW INCIDENT command. Because there can be many incidents with the same problem key, ADRCI adds to the package only the diagnostic information for the first three early incidents and last three late incidents with this problem key, excluding incidents that are older than 90 days.


Note:

The number of early and late incidents, and the 90-day age limit are defaults that can be changed. See "IPS SET CONFIGURATION".

ADRCI may also add other incidents that correlate closely in time or in other criteria with the already added incidents.

The problem key must be enclosed in single quotation marks (') or double quotation marks (") if it contains spaces or quotation marks.
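
For example, the following command (the problem key is illustrative) creates a package for incidents whose problem key is "ORA 600":

IPS CREATE PACKAGE PROBLEMKEY "ORA 600"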

IPS CREATE PACKAGE SECONDS sec

This creates a package and includes diagnostic information for all incidents that occurred from sec seconds ago until now. sec must be an integer.
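
For example, the following command creates a package containing all incidents that occurred during the last hour (3600 seconds):

IPS CREATE PACKAGE SECONDS 3600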

IPS CREATE PACKAGE TIME 'start_time' TO 'end_time'

This creates a package and includes diagnostic information for all incidents that occurred within the specified time range. start_time and end_time must be in the format 'YYYY-MM-DD HH24:MI:SS.FF TZR'. This is a valid format string for the NLS_TIMESTAMP_TZ_FORMAT initialization parameter. The fraction (FF) portion of the time is optional, and the HH24:MI:SS delimiters can be colons or periods.

For example, the following command creates a package with incidents that occurred between July 24th and July 30th of 2010:

IPS CREATE PACKAGE TIME '2010-07-24 00:00:00 -07:00' to '2010-07-30 23.59.59 -07:00'

Adding Diagnostic Information to a Logical Incident Package

You can add the following diagnostic information to an existing logical package (package):

  • All diagnostic information for a particular incident

  • A named file within the ADR

To add an incident to an existing package:

  1. Start ADRCI in interactive mode, and ensure that the homepath points to the correct directory within the ADR base directory hierarchy.

    See "Starting ADRCI and Getting Help" and "Homepath" for details.

  2. At the ADRCI prompt, enter the following command:

    IPS ADD INCIDENT incident_number PACKAGE package_number
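
    For example, the following command (the incident and package numbers are illustrative) adds incident 4 to package 10:

    IPS ADD INCIDENT 4 PACKAGE 10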
    

To add a file in the ADR to an existing package:

  • At the ADRCI prompt, enter the following command:

    IPS ADD FILE filespec PACKAGE package_number
    

    filespec must be a fully qualified file name (with path). Only files that are within the ADR base directory hierarchy may be added.
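
    For example, the following command (the path is illustrative) adds a trace file that resides under the ADR base to package 10:

    IPS ADD FILE /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_13579.trc PACKAGE 10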

Generating a Physical Incident Package

When you generate a package, you create a physical package (a zip file) for an existing logical package.

To generate a physical incident package:

  1. Start ADRCI in interactive mode, and ensure that the homepath points to the correct directory within the ADR base directory hierarchy.

    See "Starting ADRCI and Getting Help" and "Homepath" for details.

  2. At the ADRCI prompt, enter the following command:

    IPS GENERATE PACKAGE package_number IN path
    

    This generates a complete physical package (zip file) in the designated path. For example, the following command creates a complete physical package in the directory /home/steve/diagnostics from logical package number 2:

    IPS GENERATE PACKAGE 2 IN /home/steve/diagnostics
    

You can also generate an incremental package containing only the incidents that have occurred since the last package generation.

To generate an incremental physical incident package:

  • At the ADRCI prompt, enter the following command:

    IPS GENERATE PACKAGE package_number IN path INCREMENTAL
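
    For example, the following command creates an incremental physical package in the directory /home/steve/diagnostics from logical package number 2, containing only the files added or changed since the complete package was generated:

    IPS GENERATE PACKAGE 2 IN /home/steve/diagnostics INCREMENTAL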
    

ADRCI Command Reference

There are four command types in ADRCI:

  • Commands that work with one or more current ADR homes

  • Commands that work with only one current ADR home, and that issue an error message if there is more than one current ADR home

  • Commands that prompt you to select an ADR home when there are multiple current ADR homes

  • Commands that do not need a current ADR home

All ADRCI commands support the case where there is a single current ADR home.

Table 16-2 lists the set of ADRCI commands.

Table 16-2 List of ADRCI commands

Command    Description

CREATE REPORT


Creates a report for the specified report type and ID.

ECHO


Echoes the input string.

EXIT


Exits the current ADRCI session.

HOST


Executes operating system commands from ADRCI.

IPS


Invokes the IPS utility. See Table 16-3 for the IPS commands available within ADRCI.

PURGE


Purges diagnostic data in the current ADR home, according to current purging policies.

QUIT


Exits the current ADRCI session.

RUN


Runs an ADRCI script.

SELECT


Retrieves qualified records from the specified incident or problem.

SET BASE


Sets the ADR base for the current ADRCI session.

SET BROWSER


Reserved for future use.

SET CONTROL


Sets purging policies for ADR contents.

SET ECHO


Toggles command output.

SET EDITOR


Sets the default editor for displaying trace and alert log contents.

SET HOMEPATH


Makes one or more ADR homes current.

SET TERMOUT


Toggles terminal output.

SHOW ALERT


Shows alert log messages.

SHOW BASE


Shows the current ADR base.

SHOW CONTROL


Shows ADR information, including the current purging policy.

SHOW HM_RUN


Shows Health Monitor run information.

SHOW HOMEPATH


Shows the current homepath.

SHOW HOMES


Lists the current ADR homes.

SHOW INCDIR


Lists the trace files created for the specified incidents.

SHOW INCIDENT


Outputs a list of incidents.

SHOW PROBLEM


Outputs a list of problems.

SHOW REPORT


Shows a report for the specified report type and ID.

SHOW TRACEFILE


Lists qualified trace file names.

SPOOL


Directs output to a file.



Note:

Unless otherwise specified, all commands work with multiple current ADR homes.

CREATE REPORT

Purpose

Creates a report for the specified report type and run ID and stores the report in the ADR. Currently, only the hm_run (Health Monitor) report type is supported.


Note:

Results of Health Monitor runs are stored in the ADR in an internal format. To view these results, you must create a Health Monitor report from them and then view the report. You need to create the report only once. You can then view it multiple times.

Syntax and Description

create report report_type run_name

report_type must be hm_run. run_name is a Health Monitor run name. Obtain run names with the SHOW HM_RUN command.

If the report already exists, it is overwritten. Use the SHOW REPORT command to view the report.

This command does not support multiple ADR homes.

Example

This example creates a report for the Health Monitor run with run name hm_run_1421:

create report hm_run hm_run_1421

Note:

CREATE REPORT does not work when multiple ADR homes are set. For information about setting a single ADR home, see "Setting the ADRCI Homepath Before Using ADRCI Commands".

ECHO

Purpose

Prints the input string. You can use this command to print custom text from ADRCI scripts.

Syntax and Description

echo quoted_string

The string must be enclosed in single or double quotation marks.

This command does not require an ADR home to be set before you can use it.

Example

These examples print the string "Hello, world!":

echo "Hello, world!"
echo 'Hello, world!'

EXIT

Purpose

Exits the ADRCI utility.

Syntax and Description

exit

EXIT is a synonym for the QUIT command.

This command does not require an ADR home to be set before you can use it.

HOST

Purpose

Executes operating system commands without leaving ADRCI.

Syntax and Description

host ["host_command_string"]

Use host by itself to enter an operating system shell, which allows you to enter multiple operating system commands. Enter EXIT to leave the shell and return to ADRCI.

You can also specify the command on the same line (host_command_string) enclosed in double quotation marks.

This command does not require an ADR home to be set before you can use it.

Examples

host
host "ls -l *.pl"

IPS

Purpose

Invokes the Incident Packaging Service (IPS). The IPS command provides options for creating logical incident packages (packages), adding diagnostic data to packages, and generating physical packages for transmission to Oracle Support.


See Also:

"Packaging Incidents" for more information about packaging

The IPS command set contains the following commands:

Table 16-3 IPS Command Set

Command    Description

IPS ADD


Adds an incident, problem, or problem key to a package.

IPS ADD FILE


Adds a file to a package.

IPS ADD NEW INCIDENTS


Finds and adds new incidents for the problems in the specified package.

IPS COPY IN FILE


Copies files into the ADR from the external file system.

IPS COPY OUT FILE


Copies files out of the ADR to the external file system.

IPS CREATE PACKAGE


Creates a new (logical) package.

IPS DELETE PACKAGE


Deletes a package and its contents from the ADR.

IPS FINALIZE


Finalizes a package before uploading.

IPS GENERATE PACKAGE


Generates a zip file of the specified package contents in the target directory.

IPS GET MANIFEST


Retrieves and displays the manifest from a package zip file.

IPS GET METADATA


Extracts metadata from a package zip file and displays it.

IPS PACK


Creates a physical package (zip file) directly from incidents, problems, or problem keys.

IPS REMOVE


Removes incidents from an existing package.

IPS REMOVE FILE


Removes a file from an existing package.

IPS SET CONFIGURATION


Changes the value of an IPS configuration parameter.

IPS SHOW CONFIGURATION


Displays the values of IPS configuration parameters.

IPS SHOW FILES


Lists the files in a package.

IPS SHOW INCIDENTS


Lists the incidents in a package.

IPS SHOW PACKAGE


Displays information about the specified package.

IPS UNPACK FILE


Unpackages a package zip file into a specified path.



Note:

IPS commands do not work when multiple ADR homes are set. For information about setting a single ADR home, see "Setting the ADRCI Homepath Before Using ADRCI Commands".

Using the <ADR_HOME> and <ADR_BASE> Variables in IPS Commands

The IPS command set provides shortcuts for referencing the current ADR home and ADR base directories. To access the current ADR home directory, use the <ADR_HOME> variable as follows:

ips add file <ADR_HOME>/trace/orcl_ora_13579.trc package 12

Use the <ADR_BASE> variable to access the ADR base directory as follows:

ips add file <ADR_BASE>/diag/rdbms/orcl/orcl/trace/orcl_ora_13579.trc package 12

Note:

Type the angle brackets (< >) as shown.

IPS ADD

Purpose

Adds incidents to a package.

Syntax and Description

ips add {incident first [n] | incident inc_id | incident last [n] | 
     problem first [n] | problem prob_id | problem last [n] |
     problemkey pr_key | seconds secs | time start_time to end_time} 
     package package_id

Table 16-4 describes the arguments of IPS ADD.

Table 16-4 Arguments of IPS ADD command

Argument    Description

incident first [n]

Adds the first n incidents to the package, where n is a positive integer. For example, if n is set to 5, then the first five incidents are added. If n is omitted, then the default is 1, and the first incident is added.

incident inc_id

Adds an incident with ID inc_id to the package.

incident last [n]

Adds the last n incidents to the package, where n is a positive integer. For example, if n is set to 5, then the last five incidents are added. If n is omitted, then the default is 1, and the last incident is added.

problem first [n]

Adds the incidents for the first n problems to the package, where n is a positive integer. For example, if n is set to 5, then the incidents for the first five problems are added. If n is omitted, then the default is 1, and the incidents for the first problem are added.

Adds only the first three early incidents and last three late incidents for each problem, excluding any older than 90 days. (Note: These limits are defaults and can be changed. See "IPS SET CONFIGURATION".)

problem prob_id

Adds all incidents with problem ID prob_id to the package. Adds only the first three early incidents and last three late incidents for the problem, excluding any older than 90 days. (Note: These limits are defaults and can be changed. See "IPS SET CONFIGURATION".)

problem last [n]

Adds the incidents for the last n problems to the package, where n is a positive integer. For example, if n is set to 5, then the incidents for the last five problems are added. If n is omitted, then the default is 1, and the incidents for the last problem are added.

Adds only the first three early incidents and last three late incidents for each problem, excluding any older than 90 days. (Note: These limits are defaults and can be changed. See "IPS SET CONFIGURATION".)

problemkey pr_key

Adds incidents with problem key pr_key to the package. Adds only the first three early incidents and last three late incidents for the problem key, excluding any older than 90 days. (Note: These limits are defaults and can be changed.)

seconds secs

Adds all incidents that have occurred within secs seconds of the present time.

time start_time to end_time

Adds all incidents between start_time and end_time to the package. The time format is 'YYYY-MM-DD HH24:MI:SS.FF TZR'. The fractional part (FF) is optional.

package package_id

Specifies the package to which to add incidents.


Example

This example adds incident 22 to package 12:

ips add incident 22 package 12

This example adds the first three early incidents and the last three late incidents with problem ID 6 to package 2, excluding any incidents older than 90 days:

ips add problem 6 package 2

This example adds all incidents taking place during the last minute to package 5:

ips add seconds 60 package 5

This example adds all incidents taking place between 10:00 a.m. and 11:00 p.m. on May 1, 2010:

ips add  time '2010-05-01 10:00:00.00 -07:00' to '2010-05-01 23:00:00.00 -07:00'

IPS ADD FILE

Purpose

Adds a file to an existing package.

Syntax and Description

ips add file file_name package package_id

file_name is the full path name of the file. You can use the <ADR_HOME> and <ADR_BASE> variables if desired. The file must be under the same ADR base as the package.

package_id is the package ID.

Example

This example adds a trace file to package 12:

ips add file <ADR_HOME>/trace/orcl_ora_13579.trc package 12

See Also:

See "Using the <ADR_HOME> and <ADR_BASE> Variables in IPS Commands" for information about the <ADR_HOME> directory syntax

IPS ADD NEW INCIDENTS

Purpose

Finds and adds new incidents for all of the problems in the specified package.

Syntax and Description

ips add new incidents package package_id

package_id is the ID of the package to update. Only new incidents of the problems in the package are added.

Example

This example adds up to three of the new late incidents for the problems in package 12:

ips add new incidents package 12

Note:

The number of late incidents added is a default that can be changed. See "IPS SET CONFIGURATION".

IPS COPY IN FILE

Purpose

Copies a file into the ADR from the external file system.

To edit a file in a package, you must copy the file out to a designated directory, edit the file, and copy it back into the package. You may want to do this to delete sensitive data in the file before sending the package to Oracle Support.

Syntax and Description

ips copy in file filename [to new_name][overwrite] package package_id
     [incident incid]

Copies an external file, filename (specified with full path name) into the ADR, associating it with an existing package, package_id, and optionally an incident, incid. Use the to new_name option to give the copied file a new file name within the ADR. Use the overwrite option to overwrite a file that exists already.

Example

This example copies a trace file from the file system into the ADR, associating it with package 2 and incident 4:

ips copy in file /home/nick/trace/orcl_ora_13579.trc to <ADR_HOME>/trace/orcl_ora_13579.trc package 2 incident 4

IPS COPY OUT FILE

Purpose

Copies a file from the ADR to the external file system.

To edit a file in a package, you must copy the file out to a designated directory, edit the file, and copy it back into the package. You may want to do this to delete sensitive data in the file before sending the package to Oracle Support.

Syntax and Description

ips copy out file source to target [overwrite]

Copies a file, source, to a location outside the ADR, target (specified with full path name). Use the overwrite option to overwrite the file that exists already.

Example

This example copies the file orcl_ora_13579.trc, in the trace subdirectory of the current ADR home, to a local folder.

ips copy out file <ADR_HOME>/trace/orcl_ora_13579.trc to /home/nick/trace/orcl_ora_13579.trc

IPS CREATE PACKAGE

Purpose

Creates a new package. ADRCI automatically assigns the package number for the new package.

Syntax and Description

ips create package {incident first [n] | incident inc_id | 
     incident last [n] | problem first [n] | problem prob_id |
     problem last [n] | problemkey prob_key | seconds secs | 
     time start_time to end_time} [correlate {basic |typical | all}]

Optionally, you can add incidents to the new package using the provided options.

Table 16-5 describes the arguments for IPS CREATE PACKAGE.

Table 16-5 Arguments of IPS CREATE PACKAGE command

Argument    Description

incident first [n]

Adds the first n incidents to the package, where n is a positive integer. For example, if n is set to 5, then the first five incidents are added. If n is omitted, then the default is 1, and the first incident is added.

incident inc_id

Adds an incident with ID inc_id to the package.

incident last [n]

Adds the last n incidents to the package, where n is a positive integer. For example, if n is set to 5, then the last five incidents are added. If n is omitted, then the default is 1, and the last incident is added.

problem first [n]

Adds the incidents for the first n problems to the package, where n is a positive integer. For example, if n is set to 5, then the incidents for the first five problems are added. If n is omitted, then the default is 1, and the incidents for the first problem are added.

Adds only the first three early incidents and last three late incidents for each problem, excluding any older than 90 days. (Note: These limits are defaults and can be changed. See "IPS SET CONFIGURATION".)

problem prob_id

Adds all incidents with problem ID prob_id to the package. Adds only the first three early incidents and last three late incidents for the problem, excluding any older than 90 days. (Note: These limits are defaults and can be changed. See "IPS SET CONFIGURATION".)

problem last [n]

Adds the incidents for the last n problems to the package, where n is a positive integer. For example, if n is set to 5, then the incidents for the last five problems are added. If n is omitted, then the default is 1, and the incidents for the last problem are added.

Adds only the first three early incidents and last three late incidents for each problem, excluding any older than 90 days. (Note: These limits are defaults and can be changed. See "IPS SET CONFIGURATION".)

problemkey pr_key

Adds all incidents with problem key pr_key to the package. Adds only the first three early incidents and last three late incidents for the problem key, excluding any older than 90 days. (Note: These limits are defaults and can be changed.)

seconds secs

Adds all incidents that have occurred within secs seconds of the present time.

time start_time to end_time

Adds all incidents taking place between start_time and end_time to the package. The time format is 'YYYY-MM-DD HH24:MI:SS.FF TZR'. The fractional part (FF) is optional.

correlate {basic | typical | all}

Selects a method of including correlated incidents in the package. There are three options for this argument:

  • correlate basic includes incident dumps and incident process trace files.

  • correlate typical includes incident dumps and any trace files that were modified within five minutes of each incident. You can alter the time interval by modifying the INCIDENT_TIME_WINDOW configuration parameter.

  • correlate all includes the incident dumps, and all trace files that were modified between the time of the first selected incident and the last selected incident.

The default value is correlate typical.


Examples

This example creates a package with no incidents:

ips create package

Output:

Created package 5 without any contents, correlation level typical

This example creates a package containing all incidents between 10 AM and 11 PM on the given day:

ips create package time '2010-05-01 10:00:00.00 -07:00' to '2010-05-01 23:00:00.00 -07:00'

Output:

Created package 6 based on time range 2010-05-01 10:00:00.00 -07:00 to 2010-05-01 23:00:00.00 -07:00, correlation level typical

This example creates a package and adds the first three early incidents and the last three late incidents with problem ID 3, excluding incidents that are older than 90 days:

ips create package problem 3

Output:

Created package 7 based on problem id 3, correlation level typical

Note:

The number of early and late incidents added, and the 90-day age limit are defaults that can be changed. See "IPS SET CONFIGURATION".

IPS DELETE PACKAGE

Purpose

Drops a package and its contents from the ADR.

Syntax and Description

ips delete package package_id

package_id is the package to delete.

Example

ips delete package 12

IPS FINALIZE

Purpose

Finalizes a package before uploading.

Syntax and Description

ips finalize package package_id

package_id is the package ID to finalize.

Example

ips finalize package 12

See Also:

Oracle Database Administrator's Guide for more information about finalizing packages

IPS GENERATE PACKAGE

Purpose

Creates a physical package (a zip file) in the target directory.

Syntax and Description

ips generate package package_id [in path] [complete | incremental]

package_id is the ID of the package to generate. Optionally, you can save the file in the directory path. Otherwise, the package is generated in the current working directory.

The complete option forces ADRCI to include all package files. This is the default behavior.

The incremental option includes only files that have been added or changed since the last time that this package was generated. With the incremental option, the command finishes more quickly.

Example

This example generates a physical package file in path /home/steve:

ips generate package 12 in /home/steve

This example generates a physical package from files added or changed since the last generation:

ips generate package 14 incremental

IPS GET MANIFEST

Purpose

Extracts the manifest from a package zip file and displays it.

Syntax and Description

ips get manifest from file filename

filename is a package zip file. The manifest is an XML-formatted set of metadata for the package file, including information about ADR configuration, correlated files, incidents, and how the package was generated.

This command does not require an ADR home to be set before you can use it.

Example

ips get manifest from file /home/steve/ORA603_20060906165316_COM_1.zip

IPS GET METADATA

Purpose

Extracts ADR-related metadata from a package file and displays it.

Syntax and Description

ips get metadata {from file filename | from adr}

filename is a package zip file. The metadata in a package file (stored in the file metadata.xml) contains information about the ADR home, ADR base, and product.

Use the from adr option to get the metadata from a package zip file that has been unpacked into an ADR home using IPS UNPACK.

The from adr option requires an ADR home to be set.

Example

This example displays metadata from a package file:

ips get metadata from file /home/steve/ORA603_20060906165316_COM_1.zip

This next example displays metadata from a package file that was unpacked into the directory /scratch/oracle/package1:

set base /scratch/oracle/package1
ips get metadata from adr

In the previous example, upon receiving the SET BASE command, ADRCI automatically adds to the homepath the ADR home that was created in /scratch/oracle/package1 by the IPS UNPACK FILE command.


See Also:

"IPS UNPACK FILE" for more information about unpacking package files

IPS PACK

Purpose

Creates a package and generates the physical package immediately.

Syntax and Description

ips pack [incident first [n] | incident inc_id | incident last [n] | 
     problem first [n] | problem prob_id | problem last [n] | 
     problemkey prob_key | seconds secs | time start_time to end_time] 
     [correlate {basic |typical | all}] [in path]

ADRCI automatically generates the package number for the new package. IPS PACK creates an empty package if no package contents are specified.

Table 16-6 describes the arguments for IPS PACK.

Table 16-6 Arguments of IPS PACK command

Argument    Description

incident first [n]

Adds the first n incidents to the package, where n is a positive integer. For example, if n is set to 5, then the first five incidents are added. If n is omitted, then the default is 1, and the first incident is added.

incident inc_id

Adds an incident with ID inc_id to the package.

incident last [n]

Adds the last n incidents to the package, where n is a positive integer. For example, if n is set to 5, then the last five incidents are added. If n is omitted, then the default is 1, and the last incident is added.

problem first [n]

Adds the incidents for the first n problems to the package, where n is a positive integer. For example, if n is set to 5, then the incidents for the first five problems are added. If n is omitted, then the default is 1, and the incidents for the first problem are added.

Adds only the first three early incidents and last three late incidents for each problem, excluding any older than 90 days. (Note: These limits are defaults and can be changed. See "IPS SET CONFIGURATION".)

problem prob_id

Adds all incidents with problem ID prob_id to the package. Adds only the first three early incidents and last three late incidents for the problem, excluding any older than 90 days. (Note: These limits are defaults and can be changed. See "IPS SET CONFIGURATION".)

problem last [n]

Adds the incidents for the last n problems to the package, where n is a positive integer. For example, if n is set to 5, then the incidents for the last five problems are added. If n is omitted, then the default is 1, and the incidents for the last problem are added.

Adds only the first three early incidents and last three late incidents for each problem, excluding any older than 90 days. (Note: These limits are defaults and can be changed. See "IPS SET CONFIGURATION".)

problemkey pr_key

Adds incidents with problem key pr_key to the package. Adds only the first three early incidents and last three late incidents for the problem key, excluding any older than 90 days. (Note: These limits are defaults and can be changed.)

seconds secs

Adds all incidents that have occurred within secs seconds of the present time.

time start_time to end_time

Adds all incidents taking place between start_time and end_time to the package. The time format is 'YYYY-MM-DD HH24:MI:SS.FF TZR'. The fractional part (FF) is optional.

correlate {basic | typical | all}

Selects a method of including correlated incidents in the package. There are three options for this argument:

  • correlate basic includes incident dumps and incident process trace files.

  • correlate typical includes incident dumps and any trace files that were modified within five minutes of each incident. You can alter the time interval by modifying the INCIDENT_TIME_WINDOW configuration parameter.

  • correlate all includes the incident dumps, and all trace files that were modified between the time of the first selected incident and the last selected incident.

The default value is correlate typical.

in path

Saves the physical package to directory path.


Example

This example creates an empty package:

ips pack

This example creates a physical package containing all information for incident 861:

ips pack incident 861

This example creates a physical package for all incidents in the last minute, fully correlated:

ips pack seconds 60 correlate all

See Also:

"IPS SET CONFIGURATION" for more information about setting configuration parameters.

IPS REMOVE

Purpose

Removes incidents from an existing package.

Syntax and Description

ips remove {incident inc_id | problem prob_id | problemkey prob_key} 
     package package_id

After removing incidents from a package, the incidents continue to be tracked within the package metadata to prevent ADRCI from automatically including them later (such as with ADD NEW INCIDENTS).

Table 16-7 describes the arguments of IPS REMOVE.

Table 16-7 Arguments of IPS REMOVE command

Argument    Description

incident inc_id

Removes the incident with ID inc_id from the package

problem prob_id

Removes all incidents with problem ID prob_id from the package

problemkey pr_key

Removes all incidents with problem key pr_key from the package

package package_id

Removes incidents from the package with ID package_id.


Example

This example removes incident 22 from package 12:

ips remove incident 22 package 12
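
This example (reusing the illustrative problem and package numbers from IPS ADD) removes all incidents with problem ID 6 from package 2:

ips remove problem 6 package 2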

See Also:

"IPS GET MANIFEST" for information about package metadata

IPS REMOVE FILE

Purpose

Removes a file from an existing package.

Syntax and Description

ips remove file file_name package package_id

file_name is the file to remove from package package_id. The complete path of the file must be specified. (You can use the <ADR_HOME> and <ADR_BASE> variables if desired.)

After removal, the file continues to be tracked within the package metadata to prevent ADRCI from automatically including it later (such as with ADD NEW INCIDENTS). Removing a file, therefore, only sets the EXCLUDE flag for the file to Explicitly excluded.

Example

This example removes a trace file from package 12:

ips remove file <ADR_HOME>/trace/orcl_ora_13579.trc package 12
Removed file <ADR_HOME>/trace/orcl_ora_13579.trc from package 12
ips show files package 12

.
.
.
FILE_ID                4
FILE_LOCATION          <ADR_HOME>/trace
FILE_NAME              orcl_ora_13579.trc
LAST_SEQUENCE          0
EXCLUDE                Explicitly excluded
.
.
.

IPS SET CONFIGURATION

Purpose

Changes the value of an IPS configuration parameter.

Syntax and Description

ips set configuration {parameter_id | parameter_name} value

parameter_id is the ID of the parameter to change, and parameter_name is the name of the parameter to change. value is the new value. For a list of the configuration parameters and their IDs, use "IPS SHOW CONFIGURATION".

Example

ips set configuration 3 10
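
Parameter 3 is NUM_LATE_INCIDENTS by default (see "IPS SHOW CONFIGURATION"), so this example sets the number of late incidents to 10. You can also refer to the parameter by name; for example, the following command (a sketch assuming the default parameter names) sets NUM_LATE_INCIDENTS to 5:

ips set configuration num_late_incidents 5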

IPS SHOW CONFIGURATION

Purpose

Displays a list of IPS configuration parameters and their values. These parameters control various thresholds for IPS data, such as timeouts and incident inclusion intervals.

Syntax and Description

ips show configuration [{parameter_id | parameter_name}]

IPS SHOW CONFIGURATION lists the following information for each configuration parameter:

  • Parameter ID

  • Name

  • Description

  • Unit used by parameter (such as days or hours)

  • Value

  • Default value

  • Minimum Value

  • Maximum Value

  • Flags

Optionally, you can get information about a specific parameter by supplying a parameter_id or a parameter_name.

Example

This command describes all IPS configuration parameters:

ips show configuration

Output:

PARAMETER INFORMATION:
   PARAMETER_ID           1
   NAME                   CUTOFF_TIME
   DESCRIPTION            Maximum age for an incident to be considered for 
                          inclusion
   UNIT                   Days
   VALUE                  90
   DEFAULT_VALUE          90
   MINIMUM                1
   MAXIMUM                4294967295
   FLAGS                  0
 
PARAMETER INFORMATION:
   PARAMETER_ID           2
   NAME                   NUM_EARLY_INCIDENTS
   DESCRIPTION            How many incidents to get in the early part of the range
   UNIT                   Number
   VALUE                  3
   DEFAULT_VALUE          3
   MINIMUM                1
   MAXIMUM                4294967295
   FLAGS                  0
 
PARAMETER INFORMATION:
   PARAMETER_ID           3
   NAME                   NUM_LATE_INCIDENTS
   DESCRIPTION            How many incidents to get in the late part of the range
   UNIT                   Number
   VALUE                  3
   DEFAULT_VALUE          3
   MINIMUM                1
   MAXIMUM                4294967295
   FLAGS                  0
 
PARAMETER INFORMATION:
   PARAMETER_ID           4
   NAME                   INCIDENT_TIME_WINDOW
   DESCRIPTION            Incidents this close to each other are considered 
                          correlated
   UNIT                   Minutes
   VALUE                  5
   DEFAULT_VALUE          5
   MINIMUM                1
   MAXIMUM                4294967295
   FLAGS                  0
 
PARAMETER INFORMATION:
   PARAMETER_ID           5
   NAME                   PACKAGE_TIME_WINDOW
   DESCRIPTION            Time window for content inclusion is from x hours 
                          before first included incident to x hours after last 
                          incident
   UNIT                   Hours
   VALUE                  24
   DEFAULT_VALUE          24
   MINIMUM                1
   MAXIMUM                4294967295
   FLAGS                  0
 
PARAMETER INFORMATION:
   PARAMETER_ID           6
   NAME                   DEFAULT_CORRELATION_LEVEL
   DESCRIPTION            Default correlation level for packages
   UNIT                   Number
   VALUE                  2
   DEFAULT_VALUE          2
   MINIMUM                1
   MAXIMUM                4
   FLAGS                  0

Examples

This command describes configuration parameter NUM_EARLY_INCIDENTS:

ips show configuration num_early_incidents

This command describes configuration parameter 3:

ips show configuration 3

Configuration Parameter Descriptions

Table 16-8 describes the IPS configuration parameters in detail.

Table 16-8 IPS Configuration Parameters

Parameter    ID    Description

CUTOFF_TIME

1

Maximum age, in days, for an incident to be considered for inclusion.

NUM_EARLY_INCIDENTS

2

Number of incidents to include in the early part of the range when creating a package based on a problem. By default, ADRCI adds the three earliest incidents and three most recent incidents to the package.

NUM_LATE_INCIDENTS

3

Number of incidents to include in the late part of the range when creating a package based on a problem. By default, ADRCI adds the three earliest incidents and three most recent incidents to the package.

INCIDENT_TIME_WINDOW

4

Number of minutes between two incidents in order for them to be considered correlated.

PACKAGE_TIME_WINDOW

5

Number of hours to use as a time window for including incidents in a package. For example, a value of 5 includes incidents five hours before the earliest incident in the package, and five hours after the most recent incident in the package.

DEFAULT_CORRELATION_LEVEL

6

The default correlation level to use for correlating incidents in a package. The correlation levels are:

  • 1 (basic): includes incident dumps and incident process trace files.

  • 2 (typical): includes incident dumps and any trace files that were modified within the time window specified by INCIDENT_TIME_WINDOW (see above).

  • 4 (all): includes the incident dumps, and all trace files that were modified between the first selected incident and the last selected incident. Additional incidents can be included automatically if they occurred in the same time range.


IPS SHOW FILES

Purpose

Lists files included in the specified package.

Syntax and Description

ips show files package package_id

package_id is the package ID to display.

Example

This example shows all files associated with package 1:

ips show files package 1

Output:

   FILE_ID                1
   FILE_LOCATION          <ADR_HOME>/alert
   FILE_NAME              log.xml
   LAST_SEQUENCE          1
   EXCLUDE                Included
 
   FILE_ID                2
   FILE_LOCATION          <ADR_HOME>/trace
   FILE_NAME              alert_adcdb.log
   LAST_SEQUENCE          1
   EXCLUDE                Included
 
   FILE_ID                27
   FILE_LOCATION          <ADR_HOME>/incident/incdir_4937
   FILE_NAME              adcdb_ora_692_i4937.trm
   LAST_SEQUENCE          1
   EXCLUDE                Included
 
   FILE_ID                28
   FILE_LOCATION          <ADR_HOME>/incident/incdir_4937
   FILE_NAME              adcdb_ora_692_i4937.trc
   LAST_SEQUENCE          1
   EXCLUDE                Included
 
   FILE_ID                29
   FILE_LOCATION          <ADR_HOME>/trace
   FILE_NAME              adcdb_ora_692.trc
   LAST_SEQUENCE          1
   EXCLUDE                Included
 
   FILE_ID                30
   FILE_LOCATION          <ADR_HOME>/trace
   FILE_NAME              adcdb_ora_692.trm
   LAST_SEQUENCE          1
   EXCLUDE                Included
.
.
.

IPS SHOW INCIDENTS

Purpose

Lists incidents included in the specified package.

Syntax and Description

ips show incidents package package_id

package_id is the package ID to display.

Example

This example lists the incidents in package 1:

ips show incidents package 1

Output:

MAIN INCIDENTS FOR PACKAGE 1:
   INCIDENT_ID            4985
   PROBLEM_ID             1
   EXCLUDE                Included
 
CORRELATED INCIDENTS FOR PACKAGE 1:

IPS SHOW PACKAGE

Purpose

Displays information about the specified package.

Syntax and Description

ips show package [package_id] [{basic | brief | detail}]

package_id is the ID of the package to display.

Use the basic option to display a minimal amount of information. It is the default when no package_id is specified.

Use the brief option to display more information about the package than the basic option. It is the default when a package_id is specified.

Use the detail option to show the information displayed by the brief option, as well as some package history and information about the included incidents and files.

Example

ips show package 12
ips show package 12 brief

IPS UNPACK FILE

Purpose

Unpackages a physical package file into the specified path.

Syntax and Description

ips unpack file file_name [into path]

file_name is the full path name of the physical package (zip file) to unpack. Optionally, you can unpack the file into directory path, which must exist and be writable. If you omit the path, the current working directory is used. The destination directory is treated as an ADR base, and the entire ADR base directory hierarchy is created, including a valid ADR home.

This command does not require an ADR home to be set before you can use it.

Example

ips unpack file /tmp/ORA603_20060906165316_COM_1.zip into /tmp/newadr

PURGE

Purpose

Purges diagnostic data in the current ADR home, according to current purging policies. Only ADR contents that are due to be purged are purged.

Diagnostic data in the ADR has a default lifecycle. For example, information about incidents and problems is subject to purging after one year, whereas the associated dump files (dumps) are subject to purging after only 30 days.

Some Oracle products, such as Oracle Database, automatically purge diagnostic data at the end of its life cycle. Other products and components require you to purge diagnostic data manually with this command. You can also use this command to purge data that is due to be automatically purged.

The SHOW CONTROL command displays the default purging policies for short-lived ADR contents and long-lived ADR contents.

Syntax and Description

purge [-i {id | start_id end_id} | 
  -age mins [-type {ALERT|INCIDENT|TRACE|CDUMP|HM}]]

Table 16-9 describes the flags for PURGE.

Table 16-9 Flags for the PURGE command

Flag    Description

-i {id1 | start_id end_id}

Purges either a specific incident ID (id) or a range of incident IDs (start_id and end_id)

-age mins

Purges only data older than mins minutes.

-type {ALERT|INCIDENT|TRACE|CDUMP|HM}

Specifies the type of diagnostic data to purge (alert log messages, incident data, trace files (including dumps), core files, or Health Monitor run data and reports). Used with the -age clause.


Examples

This example purges all diagnostic data in the current ADR home based on the default purging policies:

purge

This example purges all diagnostic data for all incidents between 123 and 456:

purge -i 123 456

This example purges all incident data from the last hour:

purge -age 60 -type incident

Note:

PURGE does not work when multiple ADR homes are set. For information about setting a single ADR home, see "Setting the ADRCI Homepath Before Using ADRCI Commands".

QUIT

See "EXIT".

RUN

Purpose

Runs an ADRCI script.

Syntax and Description

run script_name

@ script_name

@@ script_name

script_name is the file containing the ADRCI commands to execute. ADRCI looks for the script in the current directory unless a full path name is supplied. If the file name is given without a file extension, ADRCI uses the default extension .adi.

The run and @ commands are synonyms. The @@ command is similar to run and @ except that when used inside a script, @@ uses the path of the calling script to locate script_name, rather than the current directory.

This command does not require an ADR home to be set before you can use it.

Example

run my_script
@my_script

SELECT

Purpose

Retrieves qualified records for the specified incident or problem.

Syntax and Description

select {*|[field1, [field2, ...]]} FROM {incident|problem}
  [WHERE predicate_string]
  [ORDER BY field1 [, field2, ...] [ASC|DSC|DESC]]
  [GROUP BY field1 [, field2, ...]]
  [HAVING having_predicate_string]

Table 16-10 Flags for the SELECT command

Flag    Description

field1, field2, ...

Lists the fields to retrieve. If * is specified, then all fields are retrieved.

incident|problem

Indicates whether to query incidents or problems.

WHERE "predicate_string"

Uses a SQL-like predicate string to show only the incident or problem for which the predicate is true. The predicate string must be enclosed in double quotation marks.

Table 16-16 lists the fields that can be used in the predicate string for incidents.

Table 16-18 lists the fields that can be used in the predicate string for problems.

ORDER BY field1, field2, ... [ASC|DSC|DESC]

Show results sorted by the specified fields, in ascending (ASC) or descending (DSC or DESC) order. When the ORDER BY clause is specified, results are shown in ascending order by default.

GROUP BY field1, field2, ...

Show results grouped by the specified fields.

The GROUP BY flag groups rows but does not guarantee the order of the result set. To order the groupings, use the ORDER BY flag.

HAVING "having_predicate_string"

Restrict the groups of returned rows to those groups for which the having predicate is true. The HAVING flag must be used in combination with the GROUP BY flag.



Note:

The WHERE, ORDER BY, GROUP BY, and HAVING flags are similar to the clauses with the same names in a SELECT SQL statement. See Oracle Database SQL Language Reference for more information about the clauses in a SELECT SQL statement.

Examples

The following example retrieves the incident_id and create_time for incidents with an incident_id greater than 1:

select incident_id, create_time from incident where incident_id > 1

The following is sample output for this query:

INCIDENT_ID          CREATE_TIME                              
-------------------- ---------------------------------------- 
4801                 2011-05-27 10:10:26.541656 -07:00       
4802                 2011-05-27 10:11:02.456066 -07:00       
4803                 2011-05-27 10:11:04.759654 -07:00       

The following example retrieves the problem_id and first_incident for each problem with a problem_key that includes 600:

select problem_id, first_incident from problem where problem_key like '%600%'

The following is sample output for this query:

PROBLEM_ID           FIRST_INCIDENT       
-------------------- -------------------- 
1                    4801                
2                    4802                
3                    4803                

Functions

This section describes functions that you can use with the SELECT command.

The purpose and syntax of these functions are similar to the corresponding SQL functions, but there are some differences. This section notes the differences between the functions used with the ADRCI utility and the SQL functions.

The following restrictions apply to all of the functions:

  • The expressions must be simple expressions. See Oracle Database SQL Language Reference for information about simple expressions.

  • You cannot combine function calls. For example, the following combination of function calls is not supported:

    sum(length(column_name))
    
  • No functions are overloaded.

  • All function arguments are mandatory.

  • The functions cannot be used with other ADRCI Utility commands.

Table 16-11 ADRCI Utility Functions for the SELECT Command

Function    Description

AVG


Returns the average value of an expression.

CONCAT


Returns the concatenation of two character strings.

COUNT


Returns the number of rows returned by the query.

DECODE


Compares an expression to each search value one by one.

LENGTH


Returns the length of a character string as defined by the input character set.

MAX


Returns the maximum value of an expression.

MIN


Returns the minimum value of an expression.

NVL


Replaces null (returned as a blank) with character data in the results of a query.

REGEXP_LIKE


Returns rows that match a specified pattern in a specified regular expression.

SUBSTR


Returns a portion of character data.

SUM


Returns the sum of values of an expression.

TIMESTAMP_TO_CHAR


Converts a value of TIMESTAMP data type to a value of VARCHAR2 data type in a specified format.

TOLOWER


Returns character data, with all letters lowercase.

TOUPPER


Returns character data, with all letters uppercase.


AVG

Returns the average value of an expression.

Syntax

See Oracle Database SQL Language Reference.

Restrictions

The following restrictions apply when you use the AVG function in the SELECT command:

  • The expression must be a numeric column or a positive numeric constant.

  • The function does not support the DISTINCT or ALL keywords.

  • The function does not support the OVER clause.

CONCAT

Returns a concatenation of two character strings. The character data can be of the data types CHAR and VARCHAR2. The return value is the same data type as the character data.

Syntax

See Oracle Database SQL Language Reference.

Restrictions

The following restrictions apply when you use the CONCAT function in the SELECT command:

  • The function does not support LOB data types, including BLOB, CLOB, NCLOB, and BFILE data types.

  • The function does not support national character set data types, including NCHAR, NVARCHAR2, and NCLOB data types.
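
Example

This example is an illustrative sketch that assumes a string constant can be supplied as one of the two arguments; it prefixes each problem_key with the text "Key: ".

select concat('Key: ', problem_key) from problem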

COUNT

Returns the number of rows returned by the query.

Syntax

See Oracle Database SQL Language Reference.

Restrictions

The following restrictions apply when you use the COUNT function in the SELECT command:

  • The expression must be a column, a numeric constant, or a string constant.

  • The function does not support the DISTINCT or ALL keywords.

  • The function does not support the OVER clause.

  • The function always counts all rows for the query, including duplicates and nulls.

Examples

This example returns the number of incidents for which flood_controlled is 0 (zero):

select count(*) from incident where flood_controlled = 0

This example returns the number of problems for which problem_key includes ORA-600:

select count(*) from problem where problem_key like '%ORA-600%'


DECODE

Compares an expression to each search value one by one. If the expression is equal to a search, then Oracle Database returns the corresponding result. If no match is found, then Oracle Database returns the specified default value.

Syntax

See Oracle Database SQL Language Reference.

Restrictions

The following restrictions apply when you use the DECODE function in the SELECT command:

  • The search arguments must be character data.

  • A default value must be specified.

Example

This example shows each incident_id and whether or not the incident is flood-controlled. The example uses the DECODE function to display text instead of numbers for the flood_controlled field.

select incident_id, decode(flood_controlled, 0, \
  "Not flood-controlled", "Flood-controlled") from incident

LENGTH

Returns the length of a character string as defined by the input character set.

The character string can be any of the data types CHAR, VARCHAR2, NCHAR, NVARCHAR2, CLOB, or NCLOB. The return value is of data type NUMBER. If the character string has data type CHAR, then the length includes all trailing blanks. If the character string is null, then this function returns 0 (zero).


Note:

The SQL function returns null if the character string is null.

Syntax

See Oracle Database SQL Language Reference.

Restrictions

The SELECT command does not support the following functions: LENGTHB, LENGTHC, LENGTH2, and LENGTH4.

Example

This example shows the problem_id and the length of the problem_key for each problem.

select problem_id, length(problem_key) from problem

MAX

Returns the maximum value of an expression.

Syntax

See Oracle Database SQL Language Reference.

Restrictions

The following restrictions apply when you use the MAX function in the SELECT command:

  • The function does not support the DISTINCT or ALL keywords.

  • The function does not support the OVER clause.

Example

This example shows the maximum last_incident value for all of the recorded problems.

select max(last_incident) from problem

MIN

Returns the minimum value of an expression.

Syntax

See Oracle Database SQL Language Reference.

Restrictions

The following restrictions apply when you use the MIN function in the SELECT command:

  • The function does not support the DISTINCT or ALL keywords.

  • The function does not support the OVER clause.

Example

This example shows the minimum first_incident value for all of the recorded problems.

select min(first_incident) from problem

NVL

Replaces null (returned as a blank) with character data in the results of a query. If the first expression specified is null, then NVL returns the second expression specified. If the first expression specified is not null, then NVL returns the value of the first expression.

Syntax

See Oracle Database SQL Language Reference.

Restrictions

The following restrictions apply when you use the NVL function in the SELECT command:

  • The replacement value (second expression) must be specified as character data.

  • The function does not support data conversions.

Example

This example replaces NULL in the output for signalling_component with the text "No component."

select nvl(signalling_component, 'No component') from incident

REGEXP_LIKE

Returns rows that match a specified pattern in a specified regular expression.


Note:

In SQL, REGEXP_LIKE is a condition instead of a function.

Syntax

See Oracle Database SQL Language Reference.

Restrictions

The following restrictions apply when you use the REGEXP_LIKE function in the SELECT command:

  • The pattern match is always case-sensitive.

  • The function does not support the match_param argument.

Example

This example shows the problem_id and problem_key for all problems where the problem_key ends with a number.

select problem_id, problem_key from problem \
  where regexp_like(problem_key, '[0-9]$') = true

SUBSTR

Returns a portion of character data. The portion of data returned begins at the specified position and is the specified substring length of characters long. SUBSTR calculates lengths using characters as defined by the input character set.

Syntax

See Oracle Database SQL Language Reference.

Restrictions

The following restrictions apply when you use the SUBSTR function in the SELECT command:

  • The function supports only positive integers. It does not support negative values or floating-point numbers.

  • The SELECT command does not support the following functions: SUBSTRB, SUBSTRC, SUBSTR2, and SUBSTR4.

Example

This example shows each problem_key starting with the fifth character in the key.

select substr(problem_key, 5) from problem

SUM

Returns the sum of values of an expression.

Syntax

See Oracle Database SQL Language Reference.

Restrictions

The following restrictions apply when you use the SUM function in the SELECT command:

  • The expression must be a numeric column or a numeric constant.

  • The function does not support the DISTINCT or ALL keywords.

  • The function does not support the OVER clause.
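
Example

This example is a sketch that assumes the flood_controlled field holds 0 or 1, as in the COUNT example; summing the field counts the flood-controlled incidents.

select sum(flood_controlled) from incident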

TIMESTAMP_TO_CHAR

Converts a value of TIMESTAMP data type to a value of VARCHAR2 data type in a specified format. If you do not specify a format, then the function converts values to the default timestamp format.

Syntax

See the syntax of the TO_CHAR function in Oracle Database SQL Language Reference.

Restrictions

The following restrictions apply when you use the TIMESTAMP_TO_CHAR function in the SELECT command:

  • The function converts only TIMESTAMP data type. TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE, and other data types are not supported.

  • The function does not support the nlsparm argument. The function uses the default language for your session.

Example

This example converts the create_time for each incident from a TIMESTAMP data type to a VARCHAR2 data type in the DD-MON-YYYY format.

select timestamp_to_char(create_time, 'DD-MON-YYYY') from incident

TOLOWER

Returns character data, with all letters lowercase. The character data can be of the data types CHAR and VARCHAR2. The return value is the same data type as the character data. The database sets the case of the characters based on the binary mapping defined for the underlying character set.

Syntax

See the syntax of the LOWER function in Oracle Database SQL Language Reference.

Restrictions

The following restrictions apply when you use the TOLOWER function in the SELECT command:

  • The function does not support LOB data types, including BLOB, CLOB, NCLOB, and BFILE data types.

  • The function does not support national character set data types, including NCHAR, NVARCHAR2, and NCLOB data types.

Example

This example shows each problem_key in all lowercase letters.

select tolower(problem_key) from problem

TOUPPER

Returns character data, with all letters uppercase. The character data can be of the data types CHAR and VARCHAR2. The return value is the same data type as the character data. The database sets the case of the characters based on the binary mapping defined for the underlying character set.

Syntax

See the syntax of the UPPER function in Oracle Database SQL Language Reference.

Restrictions

The following restrictions apply when you use the TOUPPER function in the SELECT command:

  • The function does not support LOB data types, including BLOB, CLOB, NCLOB, and BFILE data types.

  • The function does not support national character set data types, including NCHAR, NVARCHAR2, and NCLOB data types.

Example

This example shows each problem_key in all uppercase letters.

select toupper(problem_key) from problem

SET BASE

Purpose

Sets the ADR base to use in the current ADRCI session.

Syntax and Description

set base base_str

base_str is a full path to a directory. The format for base_str depends on the operating system. If there are valid ADR homes under the base directory, these homes are added to the homepath of the current ADRCI session.

This command does not require an ADR home to be set before you can use it.

Example

set base /u01/app/oracle

See Also:

"ADR Base"

SET BROWSER

Purpose

Sets the default browser for displaying reports.


Note:

This command is reserved for future use. At this time ADRCI does not support HTML-formatted reports in a browser.

Syntax and Description

set browser browser_program

browser_program is the browser program name (it is assumed the browser can be started from the current ADR working directory). If no browser is set, ADRCI will display reports to the terminal or spool file.

This command does not require an ADR home to be set before you can use it.

Example

set browser mozilla

See Also:

  • "SHOW REPORT" for more information about showing reports

  • "SPOOL" for more information about spooling


SET CONTROL

Purpose

Sets purging policies for ADR contents.

Syntax and Description

set control (purge_policy = value, ...)

purge_policy is either SHORTP_POLICY or LONGP_POLICY. See "SHOW CONTROL" for more information.

value is the number of hours after which the ADR contents become eligible for purging.

The SHORTP_POLICY and LONGP_POLICY are not mutually exclusive. Each policy controls different types of content.

This command works with a single ADR home only.

Example

set control (SHORTP_POLICY = 360)
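
The following example (the values are illustrative) sets both policies in one command, making short-lived content eligible for purging after 360 hours and long-lived content after 4380 hours:

set control (SHORTP_POLICY = 360, LONGP_POLICY = 4380)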

SET ECHO

Purpose

Turns command output on or off. This command only affects output being displayed in a script or using the spool mode.

Syntax and Description

set echo on|off

This command does not require an ADR home to be set before you can use it.

Example

set echo off

See Also:

"SPOOL" for more information about spooling

SET EDITOR

Purpose

Sets the editor for displaying the alert log and the contents of trace files.

Syntax and Description

set editor editor_program

editor_program is the editor program name. If no editor is set, ADRCI uses the editor specified by the operating system environment variable EDITOR. If EDITOR is not set, ADRCI uses vi as the default editor.

This command does not require an ADR home to be set before you can use it.

Example

set editor xemacs

SET HOMEPATH

Purpose

Makes one or more ADR homes current. Many ADR commands work with the current ADR homes only.

Syntax and Description

set homepath homepath_str1 homepath_str2 ...

The homepath_strn strings are the paths of the ADR homes relative to the current ADR base. The diag directory name can be omitted from the path. If the specified path contains multiple ADR homes, all of the homes are added to the homepath.

If a desired new ADR home is not within the current ADR base, use SET BASE to set a new ADR base and then use SET HOMEPATH.

This command does not require an ADR home to be set before you can use it.

Example

set homepath diag/rdbms/orcldw/orcldw1  diag/rdbms/orcldw/orcldw2

The following command sets the same homepath as the previous example:

set homepath rdbms/orcldw/orcldw1  rdbms/orcldw/orcldw2
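If the desired ADR home is under a different ADR base, a typical sequence (the paths here are assumptions) would be:

set base /u02/app/oracle
set homepath diag/rdbms/salesdb/salesdb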

See Also:

"Homepath"

SET TERMOUT

Purpose

Turns output to the terminal on or off.

Syntax and Description

set termout on|off

This setting is independent of spooling. That is, the output can be directed to both terminal and a file at the same time.

This command does not require an ADR home to be set before you can use it.


See Also:

"SPOOL" for more information about spooling

Example

set termout on

SHOW ALERT

Purpose

Shows the contents of the alert log in the default editor.

Syntax and Description

show alert [-p "predicate_string"] [-tail [num] [-f]] [-term]
  [-file alert_file_name]

Except when using the -term flag, this command works with only a single current ADR home. If more than one ADR home is set, ADRCI prompts you to choose the ADR home to use.

Table 16-12 Flags for the SHOW ALERT command

Flag                     Description

-p "predicate_string"

Uses a SQL-like predicate string to show only the alert log entries for which the predicate is true. The predicate string must be enclosed in double quotation marks.

Table 16-13 lists the fields that can be used in the predicate string.

-tail [num] [-f]

Displays the most recent entries in the alert log.

Use the num option to display the last num entries in the alert log. If num is omitted, the last 10 entries are displayed.

If the -f option is given, after displaying the requested messages, the command does not return. Instead, it remains active and continuously displays new alert log entries to the terminal as they arrive in the alert log. You can use this command to perform live monitoring of the alert log. To terminate the command, press CTRL+C.

-term

Directs results to the terminal. Outputs the entire alert log from each of the current ADR homes, one after another. If this option is not given, the results are displayed in the default editor.

-file alert_file_name

Enables you to specify an alert file outside the ADR. alert_file_name must be specified with a full path name. Note that this option cannot be used with the -tail option.


Table 16-13 Alert Fields for SHOW ALERT

Field                          Type
ORIGINATING_TIMESTAMP          timestamp
NORMALIZED_TIMESTAMP           timestamp
ORGANIZATION_ID                text(65)
COMPONENT_ID                   text(65)
HOST_ID                        text(65)
HOST_ADDRESS                   text(17)
MESSAGE_TYPE                   number
MESSAGE_LEVEL                  number
MESSAGE_ID                     text(65)
MESSAGE_GROUP                  text(65)
CLIENT_ID                      text(65)
MODULE_ID                      text(65)
PROCESS_ID                     text(33)
THREAD_ID                      text(65)
USER_ID                        text(65)
INSTANCE_ID                    text(65)
DETAILED_LOCATION              text(161)
UPSTREAM_COMP_ID               text(101)
DOWNSTREAM_COMP_ID             text(101)
EXECUTION_CONTEXT_ID           text(101)
EXECUTION_CONTEXT_SEQUENCE     number
ERROR_INSTANCE_ID              number
ERROR_INSTANCE_SEQUENCE        number
MESSAGE_TEXT                   text(2049)
MESSAGE_ARGUMENTS              text(129)
SUPPLEMENTAL_ATTRIBUTES        text(129)
SUPPLEMENTAL_DETAILS           text(129)
PROBLEM_KEY                    text(65)


Example

This example shows all alert messages for the current ADR home in the default editor:

show alert

This example shows all alert messages for the current ADR home and directs the output to the terminal instead of the default editor:

show alert -term

This example shows all alert messages for the current ADR home with message text describing an incident:

show alert -p "message_text like '%incident%'"

This example shows the last twenty alert messages, and then keeps the alert log open, displaying new alert log entries as they arrive:

show alert -tail 20 -f
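This example, using an assumed file path, views an alert log stored outside the ADR with the -file flag and directs the output to the terminal:

show alert -term -file /u01/app/oracle/alert_backup/alert_orcl.log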

This example shows all alert messages for a single ADR home in the default editor when multiple ADR homes have been set:

show alert

Choose the alert log from the following homes to view:

1: diag/tnslsnr/dbhost1/listener
2: diag/asm/+asm/+ASM
3: diag/rdbms/orcl/orcl
4: diag/clients/user_oracle/host_9999999999_11
Q: to quit

Please select option:
3

See Also:

"SET EDITOR"

SHOW BASE

Purpose

Shows the current ADR base.

Syntax and Description

show base [-product product_name]

Optionally, you can show the ADR base location for a specific product. The products currently supported are CLIENT and ADRCI.

This command does not require an ADR home to be set before you can use it.

Example

This example shows the current ADR base:

show base

Output:

ADR base is "/u01/app/oracle"

This example shows the current ADR base for Oracle Database clients:

show base -product client

SHOW CONTROL

Purpose

Displays information about the ADR, including the purging policy.

Syntax and Description

show control

Displays various attributes of the ADR, including the following purging policy attributes:

Attribute Name    Description

SHORTP_POLICY

Number of hours after which to purge ADR contents that have a short life. Default is 720 (30 days).

A setting of 0 (zero) means that all contents that have a short life can be purged. The maximum setting is 35791394. If a value greater than 35791394 is specified, then this attribute is set to 0 (zero).

The ADR contents that have a short life include the following:

  • Trace files

  • Core dump files

  • Packaging information

LONGP_POLICY

Number of hours after which to purge ADR contents that have a long life. Default is 8760 (365 days).

A setting of 0 (zero) means that all contents that have a long life can be purged. The maximum setting is 35791394. If a value greater than 35791394 is specified, then this attribute is set to 0 (zero).

The ADR contents that have a long life include the following:

  • Incident information

  • Incident dumps

  • Alert logs



Note:

The SHORTP_POLICY and LONGP_POLICY attributes are not mutually exclusive. Each policy controls different types of content.

SHOW HM_RUN

Purpose

Shows all information for Health Monitor runs.

Syntax and Description

show hm_run [-p "predicate_string"]

predicate_string is a SQL-like predicate string that restricts the output to the Health Monitor runs for which the predicate is true. Table 16-14 lists the field names that can be used in the predicate string.

Table 16-14 Fields for Health Monitor Runs

Field                Type
RUN_ID               number
RUN_NAME             text(31)
CHECK_NAME           text(31)
NAME_ID              number
MODE                 number
START_TIME           timestamp
RESUME_TIME          timestamp
END_TIME             timestamp
MODIFIED_TIME        timestamp
TIMEOUT              number
FLAGS                number
STATUS               number
SRC_INCIDENT_ID      number
NUM_INCIDENTS        number
ERR_NUMBER           number
REPORT_FILE          bfile


Example

This example displays data for all Health Monitor runs:

show hm_run

This example displays data for the Health Monitor run with ID 123:

show hm_run -p "run_id=123"

See Also:

Oracle Database Administrator's Guide for more information about Health Monitor

SHOW HOMEPATH

Purpose

Identical to the SHOW HOMES command.

Syntax and Description

show homepath | show homes | show home

This command does not require an ADR home to be set before you can use it.

Example

show homepath

Output:

ADR Homes:
diag/tnslsnr/dbhost1/listener
diag/asm/+asm/+ASM
diag/rdbms/orcl/orcl
diag/clients/user_oracle/host_9999999999_11

See Also:

"SET HOMEPATH" for information about how to set the homepath

SHOW HOMES

Purpose

Show the ADR homes in the current ADRCI session.

Syntax and Description

show homes | show home | show homepath

This command does not require an ADR home to be set before you can use it.

Example

show homes

Output:

ADR Homes:
diag/tnslsnr/dbhost1/listener
diag/asm/+asm/+ASM
diag/rdbms/orcl/orcl
diag/clients/user_oracle/host_9999999999_11

SHOW INCDIR

Purpose

Shows trace files for the specified incident.

Syntax and Description

show incdir [id | id_low id_high]

You can provide a single incident ID (id) or a range of incidents (id_low to id_high). If no incident ID is given, trace files for all incidents are listed.

Example

This example shows all trace files for all incidents:

show incdir

Output:

ADR Home = /u01/app/oracle/log/diag/rdbms/emdb/emdb:
*************************************************************************
diag/rdbms/emdb/emdb/incident/incdir_3801/emdb_ora_23604_i3801.trc
diag/rdbms/emdb/emdb/incident/incdir_3801/emdb_m000_23649_i3801_a.trc
diag/rdbms/emdb/emdb/incident/incdir_3802/emdb_ora_23604_i3802.trc
diag/rdbms/emdb/emdb/incident/incdir_3803/emdb_ora_23604_i3803.trc
diag/rdbms/emdb/emdb/incident/incdir_3804/emdb_ora_23604_i3804.trc
diag/rdbms/emdb/emdb/incident/incdir_3805/emdb_ora_23716_i3805.trc
diag/rdbms/emdb/emdb/incident/incdir_3805/emdb_m000_23767_i3805_a.trc
diag/rdbms/emdb/emdb/incident/incdir_3806/emdb_ora_23716_i3806.trc
diag/rdbms/emdb/emdb/incident/incdir_3633/emdb_pmon_28970_i3633.trc
diag/rdbms/emdb/emdb/incident/incdir_3633/emdb_m000_23778_i3633_a.trc
diag/rdbms/emdb/emdb/incident/incdir_3713/emdb_smon_28994_i3713.trc
diag/rdbms/emdb/emdb/incident/incdir_3713/emdb_m000_23797_i3713_a.trc
diag/rdbms/emdb/emdb/incident/incdir_3807/emdb_ora_23783_i3807.trc
diag/rdbms/emdb/emdb/incident/incdir_3807/emdb_m000_23803_i3807_a.trc
diag/rdbms/emdb/emdb/incident/incdir_3808/emdb_ora_23783_i3808.trc

This example shows all trace files for incident 3713:

show incdir 3713

Output:

ADR Home = /u01/app/oracle/log/diag/rdbms/emdb/emdb:
*************************************************************************
diag/rdbms/emdb/emdb/incident/incdir_3713/emdb_smon_28994_i3713.trc
diag/rdbms/emdb/emdb/incident/incdir_3713/emdb_m000_23797_i3713_a.trc

This example shows all trace files for incidents between 3801 and 3804:

show incdir 3801 3804

Output:

ADR Home = /u01/app/oracle/log/diag/rdbms/emdb/emdb:
*************************************************************************
diag/rdbms/emdb/emdb/incident/incdir_3801/emdb_ora_23604_i3801.trc
diag/rdbms/emdb/emdb/incident/incdir_3801/emdb_m000_23649_i3801_a.trc
diag/rdbms/emdb/emdb/incident/incdir_3802/emdb_ora_23604_i3802.trc
diag/rdbms/emdb/emdb/incident/incdir_3803/emdb_ora_23604_i3803.trc
diag/rdbms/emdb/emdb/incident/incdir_3804/emdb_ora_23604_i3804.trc

SHOW INCIDENT

Purpose

Lists all of the incidents associated with the current ADR home. Includes both open and closed incidents.

Syntax and Description

show incident [-p "predicate_string"] [-mode {BASIC|BRIEF|DETAIL}]
  [-orderby field1, field2, ... [ASC|DSC]]

Table 16-15 describes the flags for SHOW INCIDENT.

Table 16-15 Flags for SHOW INCIDENT command

Flag                     Description

-p "predicate_string"

Use a predicate string to show only the incidents for which the predicate is true. The predicate string must be enclosed in double quotation marks.

Table 16-16 lists the fields that can be used in the predicate string.

-mode {BASIC|BRIEF|DETAIL}

Choose an output mode for incidents. BASIC is the default.

  • BASIC displays only basic incident information (the INCIDENT_ID, PROBLEM_ID, and CREATE_TIME fields). It does not display flood-controlled incidents.

  • BRIEF displays all information related to the incidents, as given by the fields in Table 16-16. It includes flood-controlled incidents.

  • DETAIL displays all information for the incidents (as with BRIEF mode) as well as information about incident dumps. It includes flood-controlled incidents.

-orderby field1, field2, ... [ASC|DSC]

Shows results sorted by the specified fields, in the order listed, in either ascending (ASC) or descending (DSC) order. By default, results are shown in ascending order.


Table 16-16 Incident Fields for SHOW INCIDENT

Field                     Type       Description
INCIDENT_ID               number     ID of the incident
PROBLEM_ID                number     ID of the problem to which the incident belongs
CREATE_TIME               timestamp  Time when the incident was created
CLOSE_TIME                timestamp  Time when the incident was closed
STATUS                    number     Status of this incident
FLAGS                     number     Flags for internal use
FLOOD_CONTROLLED          number     Encodes the flood control status for the incident (decoded to a text status by ADRCI)
ERROR_FACILITY            text(10)   Error facility for the error that caused the incident
ERROR_NUMBER              number     Error number for the error that caused the incident
ERROR_ARG1                text(64)   First argument for the error that caused the incident. Error arguments provide additional information about the error, such as the code location that issued the error.
ERROR_ARG2                text(64)   Second argument for the error that caused the incident
ERROR_ARG3                text(64)   Third argument for the error that caused the incident
ERROR_ARG4                text(64)   Fourth argument for the error that caused the incident
ERROR_ARG5                text(64)   Fifth argument for the error that caused the incident
ERROR_ARG6                text(64)   Sixth argument for the error that caused the incident
ERROR_ARG7                text(64)   Seventh argument for the error that caused the incident
ERROR_ARG8                text(64)   Eighth argument for the error that caused the incident
SIGNALLING_COMPONENT      text(64)   Component that signaled the error that caused the incident
SIGNALLING_SUBCOMPONENT   text(64)   Subcomponent that signaled the error that caused the incident
SUSPECT_COMPONENT         text(64)   Component that has been automatically identified as possibly causing the incident
SUSPECT_SUBCOMPONENT      text(64)   Subcomponent that has been automatically identified as possibly causing the incident
ECID                      text(64)   Execution Context ID
IMPACT                    number     Encodes the impact of the incident
ERROR_ARG9                text(64)   Ninth argument for the error that caused the incident
ERROR_ARG10               text(64)   Tenth argument for the error that caused the incident
ERROR_ARG11               text(64)   Eleventh argument for the error that caused the incident
ERROR_ARG12               text(64)   Twelfth argument for the error that caused the incident


Examples

This example shows all incidents for this ADR home:

show incident

Output:

ADR Home = /u01/app/oracle/log/diag/rdbms/emdb/emdb:
*************************************************************************
INCIDENT_ID          PROBLEM_KEY                                  CREATE_TIME
-------------------- -------------------------------------------- ----------------------------
3808                 ORA 603                                      2010-06-18 21:35:49.322161 -07:00
3807                 ORA 600 [4137]                               2010-06-18 21:35:47.862114 -07:00
3806                 ORA 603                                      2010-06-18 21:35:26.666485 -07:00
3805                 ORA 600 [4136]                               2010-06-18 21:35:25.012579 -07:00
3804                 ORA 1578                                     2010-06-18 21:35:08.483156 -07:00
3713                 ORA 600 [4136]                               2010-06-18 21:35:44.754442 -07:00
3633                 ORA 600 [4136]                               2010-06-18 21:35:35.776151 -07:00
7 rows fetched

This example shows the detail view for incident 3805:

adrci> show incident -mode DETAIL -p "incident_id=3805"

Output:

ADR Home = /u01/app/oracle/log/diag/rdbms/emdb/emdb:
*************************************************************************
 
**********************************************************
INCIDENT INFO RECORD 1
**********************************************************
   INCIDENT_ID                   3805
   STATUS                        closed
   CREATE_TIME                   2010-06-18 21:35:25.012579 -07:00
   PROBLEM_ID                    2
   CLOSE_TIME                    2010-06-18 22:26:54.143537 -07:00
   FLOOD_CONTROLLED              none
   ERROR_FACILITY                ORA
   ERROR_NUMBER                  600
   ERROR_ARG1                    4136
   ERROR_ARG2                    2
   ERROR_ARG3                    18.0.628
   ERROR_ARG4                    <NULL>
   ERROR_ARG5                    <NULL>
   ERROR_ARG6                    <NULL>
   ERROR_ARG7                    <NULL>
   ERROR_ARG8                    <NULL>
   SIGNALLING_COMPONENT          <NULL>
   SIGNALLING_SUBCOMPONENT       <NULL>
   SUSPECT_COMPONENT             <NULL>
   SUSPECT_SUBCOMPONENT          <NULL>
   ECID                          <NULL>
   IMPACTS                       0
   PROBLEM_KEY                   ORA 600 [4136]
   FIRST_INCIDENT                3805
   FIRSTINC_TIME                 2010-06-18 21:35:25.012579 -07:00
   LAST_INCIDENT                 3713
   LASTINC_TIME                  2010-06-18 21:35:44.754442 -07:00
   IMPACT1                       0
   IMPACT2                       0
   IMPACT3                       0
   IMPACT4                       0
   KEY_NAME                      Client ProcId
   KEY_VALUE                     oracle@dbhost1 (TNS V1-V3).23716_3083142848
   KEY_NAME                      SID
   KEY_VALUE                     127.52237
   KEY_NAME                      ProcId
   KEY_VALUE                     23.90
   KEY_NAME                      PQ
   KEY_VALUE                     (0, 1182227717)
   OWNER_ID                      1
   INCIDENT_FILE                 /.../emdb/emdb/incident/incdir_3805/emdb_ora_23716_i3805.trc
   OWNER_ID                      1
   INCIDENT_FILE                 /.../emdb/emdb/trace/emdb_ora_23716.trc
   OWNER_ID                      1
   INCIDENT_FILE                 /.../emdb/emdb/incident/incdir_3805/emdb_m000_23767_i3805_a.trc
1 rows fetched
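The flags can also be combined. The following sketch (the predicate value is an assumption) shows brief output for the incidents of problem 2, sorted by creation time in descending order:

show incident -mode BRIEF -p "problem_id=2" -orderby create_time DSC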

SHOW PROBLEM

Purpose

Show problem information for the current ADR home.

Syntax and Description

show problem [-p "predicate_string"] [-last num | -all]
    [-orderby field1, field2, ... [ASC|DSC]]

Table 16-17 describes the flags for SHOW PROBLEM.

Table 16-17 Flags for SHOW PROBLEM command

Flag                     Description

-p "predicate_string"

Use a SQL-like predicate string to show only the problems for which the predicate is true. The predicate string must be enclosed in double quotation marks.

Table 16-18 lists the fields that can be used in the predicate string.

-last num | -all

Shows the last num problems, or lists all the problems. By default, SHOW PROBLEM lists the most recent 50 problems.

-orderby field1, field2, ... [ASC|DSC]

Shows results sorted by the specified fields (field1, field2, ...), in the order listed, in either ascending (ASC) or descending (DSC) order. By default, results are shown in ascending order.


Table 16-18 Problem Fields for SHOW PROBLEM

Field              Type       Description
PROBLEM_ID         number     ID of the problem
PROBLEM_KEY        text(550)  Problem key for the problem
FIRST_INCIDENT     number     Incident ID of the first incident for the problem
FIRSTINC_TIME      timestamp  Creation time of the first incident for the problem
LAST_INCIDENT      number     Incident ID of the last incident for the problem
LASTINC_TIME       timestamp  Creation time of the last incident for the problem
IMPACT1            number     Encodes an impact of this problem
IMPACT2            number     Encodes an impact of this problem
IMPACT3            number     Encodes an impact of this problem
IMPACT4            number     Encodes an impact of this problem
SERVICE_REQUEST    text(64)   Service request for the problem (entered through Support Workbench)
BUG_NUMBER         text(64)   Bug number for the problem (entered through Support Workbench)


Example

This example lists all the problems in the current ADR home:

show problem -all

This example shows the problem with ID 4:

show problem -p "problem_id=4"
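As a further illustration, the -last and -orderby flags can be combined; the values here are assumptions:

show problem -last 5 -orderby lastinc_time DSC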

SHOW REPORT

Purpose

Shows a report for the specified report type and run name. Currently, only the hm_run (Health Monitor) report type is supported, and reports are available only in XML format. To view HTML-formatted Health Monitor reports, use Oracle Enterprise Manager or the DBMS_HM PL/SQL package. See Oracle Database Administrator's Guide for more information.

Syntax and Description

SHOW REPORT report_type run_name

report_type must be hm_run. run_name is the Health Monitor run name from which you created the report. You must first create the report using the CREATE REPORT command.

This command does not require an ADR home to be set before you can use it.

Example

show report hm_run hm_run_1421

SHOW TRACEFILE

Purpose

List trace files.

Syntax and Description

show tracefile [file1 file2 ...] [-rt | -t]
  [-i inc1 inc2 ...] [-path path1 path2 ...] 

This command searches for one or more files under the trace directory and all incident directories of the current ADR homes, unless the -i or -path flags are given.

This command does not require an ADR home to be set unless using the -i option.

Table 16-19 describes the arguments of SHOW TRACEFILE.

Table 16-19 Arguments for SHOW TRACEFILE Command

Argument                 Description

file1 file2 ...

Filter results by file name. The % symbol is a wildcard character.


Table 16-20 Flags for SHOW TRACEFILE Command

Flag                     Description

-rt | -t

Order the trace file names by timestamp. -t sorts the file names in ascending order by timestamp, and -rt sorts them in reverse order. Note that file names are only ordered relative to their directory. Listing multiple directories of trace files applies a separate ordering to each directory.

Timestamps are listed next to each file name when using this option.

-i inc1 inc2 ...

Select only the trace files produced for the given incident IDs.

-path path1 path2 ...

Query only the trace files under the given path names.


Example

This example shows all the trace files under the current ADR home:

show tracefile

This example shows all the mmon trace files, sorted by timestamp in reverse order:

show tracefile %mmon% -rt

This example shows all trace files for incidents 1 and 4, under the path /home/steve/temp:

show tracefile -i 1 4 -path /home/steve/temp

SPOOL

Purpose

Directs ADRCI output to a file.

Syntax and Description

SPOOL filename [[APPEND] | [OFF]]

filename is the file name where the output is to be directed. If a full path name is not given, the file is created in the current ADRCI working directory. If no file extension is given, the default extension .ado is used. APPEND causes the output to be appended to the end of the file. Otherwise, the file is overwritten. Use OFF to turn off spooling.

This command does not require an ADR home to be set before you can use it.

Example

spool myfile
spool myfile.ado append
spool off
spool

Troubleshooting ADRCI

The following are some common ADRCI error messages, with their possible causes and remedies:

No ADR base is set

Cause: You may have started ADRCI with a null or invalid value for the ORACLE_HOME environment variable.

Action: Exit ADRCI, set the ORACLE_HOME environment variable, and restart ADRCI. See "ADR Base" for more information.

DIA-48323: Specified pathname string must be inside current ADR home

Cause: A file outside of the ADR home is not allowed as an incident file for this command.

Action: Retry using an incident file inside the ADR home.

DIA-48400: ADRCI initialization failed

Cause: The ADR Base directory does not exist.

Action: Check the value of the DIAGNOSTIC_DEST initialization parameter, and ensure that it points to an ADR base directory that contains at least one ADR home. If DIAGNOSTIC_DEST is missing or null, check for a valid ADR base directory hierarchy in ORACLE_HOME/log.

DIA-48431: Must specify at least one ADR home path

Cause: The command requires at least one ADR home to be current.

Action: Use the SET HOMEPATH command to make one or more ADR homes current.

DIA-48432: The ADR home path string is not valid

Cause: The supplied ADR home is not valid, possibly because the path does not exist.

Action: Check if the supplied ADR home path exists.

DIA-48447: The input path [path] does not contain any ADR homes

Cause: When using SET HOMEPATH to set an ADR home, you must supply a path relative to the current ADR base.

Action: If the new desired ADR home is not within the current ADR base, first set ADR base with SET BASE, and then use SHOW HOMES to check the ADR homes under the new ADR base. Next, use SET HOMEPATH to set a new ADR home if necessary.

DIA-48448: This command does not support multiple ADR homes

Cause: There are multiple current ADR homes in the current ADRCI session.

Action: Use the SET HOMEPATH command to make a single ADR home current.


2 Data Pump Export

This chapter describes the Oracle Data Pump Export utility (expdp). The following topics are discussed:

What Is Data Pump Export?

Data Pump Export (hereinafter referred to as Export for ease of reading) is a utility for unloading data and metadata into a set of operating system files called a dump file set. The dump file set can be imported only by the Data Pump Import utility. The dump file set can be imported on the same system or it can be moved to another system and loaded there.

The dump file set is made up of one or more disk files that contain table data, database object metadata, and control information. The files are written in a proprietary, binary format. During an import operation, the Data Pump Import utility uses these files to locate each database object in the dump file set.

Because the dump files are written by the server, rather than by the client, the database administrator (DBA) must create directory objects that define the server locations to which files are written. See "Default Locations for Dump, Log, and SQL Files" for more information about directory objects.

Data Pump Export enables you to specify that a job should move a subset of the data and metadata, as determined by the export mode. This is done using data filters and metadata filters, which are specified through Export parameters. See "Filtering During Export Operations".

To see some examples of the various ways in which you can use Data Pump Export, refer to "Examples of Using Data Pump Export".

Invoking Data Pump Export

The Data Pump Export utility is invoked using the expdp command. The characteristics of the export operation are determined by the Export parameters you specify. These parameters can be specified either on the command line or in a parameter file.


Note:

Do not invoke Export as SYSDBA, except at the request of Oracle technical support. SYSDBA is used internally and has specialized functions; its behavior is not the same as for general users.

The following sections contain more information about invoking Export:

Data Pump Export Interfaces

You can interact with Data Pump Export by using a command line, a parameter file, or an interactive-command mode.

  • Command-Line Interface: Enables you to specify most of the Export parameters directly on the command line. For a complete description of the parameters available in the command-line interface, see "Parameters Available in Export's Command-Line Mode".

  • Parameter File Interface: Enables you to specify command-line parameters in a parameter file. The only exception is the PARFILE parameter, because parameter files cannot be nested. The use of parameter files is recommended if you are using parameters whose values require quotation marks. See "Use of Quotation Marks On the Data Pump Command Line".

  • Interactive-Command Interface: Stops logging to the terminal and displays the Export prompt, from which you can enter various commands, some of which are specific to interactive-command mode. This mode is enabled by pressing Ctrl+C during an export operation started with the command-line interface or the parameter file interface. Interactive-command mode is also enabled when you attach to an executing or stopped job.

    For a complete description of the commands available in interactive-command mode, see "Commands Available in Export's Interactive-Command Mode".

Data Pump Export Modes

Export provides different modes for unloading different portions of the database. The mode is specified on the command line, using the appropriate parameter. The available modes are described in the following sections:


Note:

Several system schemas cannot be exported because they are not user schemas; they contain Oracle-managed data and metadata. Examples of system schemas that are not exported include SYS, ORDSYS, and MDSYS.

Full Export Mode

A full export is specified using the FULL parameter. In a full database export, the entire database is unloaded. This mode requires that you have the DATAPUMP_EXP_FULL_DATABASE role.
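For illustration, a full export might be invoked as follows; the directory object and dump file name are assumptions:

> expdp hr FULL=YES DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp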


See Also:

"FULL" for a description of the Export FULL parameter

Schema Mode

A schema export is specified using the SCHEMAS parameter. This is the default export mode. If you have the DATAPUMP_EXP_FULL_DATABASE role, then you can specify a list of schemas, optionally including the schema definitions themselves and also system privilege grants to those schemas. If you do not have the DATAPUMP_EXP_FULL_DATABASE role, then you can export only your own schema.

The SYS schema cannot be used as a source schema for export jobs.

Cross-schema references are not exported unless the referenced schema is also specified in the list of schemas to be exported. For example, a trigger defined on a table within one of the specified schemas, but that resides in a schema not explicitly specified, is not exported. This is also true for external type definitions upon which tables in the specified schemas depend. In such a case, it is expected that the type definitions already exist in the target instance at import time.
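For illustration, a schema-mode export of two schemas (which requires the DATAPUMP_EXP_FULL_DATABASE role) might be invoked as follows; the schema names, directory object, and dump file name are assumptions:

> expdp hr SCHEMAS=hr,oe DIRECTORY=dpump_dir1 DUMPFILE=schemas.dmp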


See Also:

"SCHEMAS" for a description of the Export SCHEMAS parameter

Table Mode

A table mode export is specified using the TABLES parameter. In table mode, only a specified set of tables, partitions, and their dependent objects are unloaded.

If you specify the TRANSPORTABLE=ALWAYS parameter with the TABLES parameter, then only object metadata is unloaded. To move the actual data, you copy the data files to the target database. This results in quicker export times. If you are moving data files between releases or platforms, then the data files may need to be processed by Oracle Recovery Manager (RMAN).


See Also:

Oracle Database Backup and Recovery User's Guide for more information on transporting data across platforms

You must have the DATAPUMP_EXP_FULL_DATABASE role to specify tables that are not in your own schema. Note that type definitions for columns are not exported in table mode. It is expected that the type definitions already exist in the target instance at import time. Also, as in schema exports, cross-schema references are not exported.
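For illustration, a table-mode export might be invoked as follows; the table names, directory object, and dump file name are assumptions:

> expdp hr TABLES=employees,jobs DIRECTORY=dpump_dir1 DUMPFILE=tables.dmp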


See Also:

  • "TABLES" for a description of the Export TABLES parameter

  • "TRANSPORTABLE" for a description of the Export TRANSPORTABLE parameter


Tablespace Mode

A tablespace export is specified using the TABLESPACES parameter. In tablespace mode, only the tables contained in a specified set of tablespaces are unloaded. If a table is unloaded, then its dependent objects are also unloaded. Both object metadata and data are unloaded. In tablespace mode, if any part of a table resides in the specified set, then that table and all of its dependent objects are exported. Privileged users get all tables. Unprivileged users get only the tables in their own schemas.
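For illustration, a tablespace-mode export might be invoked as follows; the tablespace name, directory object, and dump file name are assumptions:

> expdp hr TABLESPACES=tbs_4 DIRECTORY=dpump_dir1 DUMPFILE=tbs.dmp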


See Also:

  • "TABLESPACES" for a description of the Export TABLESPACES parameter


Transportable Tablespace Mode

A transportable tablespace export is specified using the TRANSPORT_TABLESPACES parameter. In transportable tablespace mode, only the metadata for the tables (and their dependent objects) within a specified set of tablespaces is exported. The tablespace data files are copied in a separate operation. Then, a transportable tablespace import is performed to import the dump file containing the metadata and to specify the data files to use.

Transportable tablespace mode requires that the specified tables be completely self-contained. That is, all storage segments of all tables (and their indexes) defined within the tablespace set must also be contained within the set. If there are self-containment violations, then Export identifies all of the problems without actually performing the export.

Transportable tablespace exports cannot be restarted once stopped. Also, they cannot have a degree of parallelism greater than 1.

Encrypted columns are not supported in transportable tablespace mode.
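For illustration, a transportable tablespace export might be invoked as follows; the user, tablespace name, directory object, and dump file name are assumptions, and TRANSPORT_FULL_CHECK=YES requests a check that the tablespace set is self-contained:

> expdp system TRANSPORT_TABLESPACES=tbs_1 TRANSPORT_FULL_CHECK=YES DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp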


Note:

You cannot export transportable tablespaces and then import them into a database at a lower release level. The target database must be at the same or higher release level as the source database.

Considerations for Time Zone File Versions in Transportable Tablespace Mode

Jobs performed in transportable tablespace mode have the following requirements concerning time zone file versions:

  • If the source is Oracle Database 11g release 2 (11.2.0.2) or later and there are tables in the transportable set that use TIMESTAMP WITH TIMEZONE (TSTZ) columns, then the time zone file version on the target database must exactly match the time zone file version on the source database.

  • If the source is earlier than Oracle Database 11g release 2 (11.2.0.2), then the time zone file version must be the same on the source and target database for all transportable jobs regardless of whether the transportable set uses TSTZ columns.

If these requirements are not met, then the import job aborts before anything is imported. This is because if the import job were allowed to import the objects, there might be inconsistent results when tables with TSTZ columns were read.

To identify the time zone file version of a database, you can execute the following SQL statement:

SQL> SELECT VERSION FROM V$TIMEZONE_FILE;

See Also:


Network Considerations

You can specify a connect identifier in the connect string when you invoke the Data Pump Export utility. This identifier can specify a database instance that is different from the current instance identified by the current Oracle System ID (SID). The connect identifier can be an Oracle*Net connect descriptor or a net service name (usually defined in the tnsnames.ora file) that maps to a connect descriptor. Use of a connect identifier requires that you have Oracle Net Listener running (to start the default listener, enter lsnrctl start). The following is an example of this type of connection, in which inst1 is the connect identifier:

expdp hr@inst1 DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp TABLES=employees

Export then prompts you for a password:

Password: password

The local Export client connects to the database instance defined by the connect identifier inst1 (a net service name), retrieves data from inst1, and writes it to the dump file hr.dmp on inst1.

Specifying a connect identifier when you invoke the Export utility is different from performing an export operation using the NETWORK_LINK parameter. When you start an export operation and specify a connect identifier, the local Export client connects to the database instance identified by the connect identifier, retrieves data from that database instance, and writes it to a dump file set on that database instance. Whereas, when you perform an export using the NETWORK_LINK parameter, the export is performed using a database link. (A database link is a connection between two physical database servers that allows a client to access them as one logical database.)


See Also:


Filtering During Export Operations

Data Pump Export provides data and metadata filtering capability to help you limit the type of information that is exported.

Data Filters

Data specific filtering is implemented through the QUERY and SAMPLE parameters, which specify restrictions on the table rows that are to be exported.

Data filtering can also occur indirectly because of metadata filtering, which can include or exclude table objects along with any associated row data.

Each data filter can be specified once per table within a job. If different filters using the same name are applied to both a particular table and to the whole job, then the filter parameter supplied for the specific table takes precedence.
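For illustration, the SAMPLE parameter can be used to unload a percentage of the table rows; the values here are assumptions:

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=sample.dmp SAMPLE=70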

Metadata Filters

Metadata filtering is implemented through the EXCLUDE and INCLUDE parameters. The EXCLUDE and INCLUDE parameters are mutually exclusive.

Metadata filters identify a set of objects to be included or excluded from an Export or Import operation. For example, you could request a full export, but without Package Specifications or Package Bodies.

To use filters correctly and to get the results you expect, remember that dependent objects of an identified object are processed along with the identified object. For example, if a filter specifies that an index is to be included in an operation, then statistics from that index will also be included. Likewise, if a table is excluded by a filter, then indexes, constraints, grants, and triggers upon the table will also be excluded by the filter.

If multiple filters are specified for an object type, then an implicit AND operation is applied to them. That is, objects pertaining to the job must pass all of the filters applied to their object types.

The same metadata filter name can be specified multiple times within a job.

To see a list of valid object types, query the following views: DATABASE_EXPORT_OBJECTS for full mode, SCHEMA_EXPORT_OBJECTS for schema mode, and TABLE_EXPORT_OBJECTS for table and tablespace mode. The values listed in the OBJECT_PATH column are the valid object types. For example, you could perform the following query:

SQL> SELECT OBJECT_PATH, COMMENTS FROM SCHEMA_EXPORT_OBJECTS
  2  WHERE OBJECT_PATH LIKE '%GRANT' AND OBJECT_PATH NOT LIKE '%/%';

The output of this query looks similar to the following:

OBJECT_PATH
--------------------------------------------------------------------------------
COMMENTS
--------------------------------------------------------------------------------
GRANT
Object grants on the selected tables
 
OBJECT_GRANT
Object grants on the selected tables
 
PROCDEPOBJ_GRANT
Grants on instance procedural objects
 
PROCOBJ_GRANT
Schema procedural object grants in the selected schemas
 
ROLE_GRANT
Role grants to users associated with the selected schemas
 
SYSTEM_GRANT
System privileges granted to users associated with the selected schemas
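For illustration, one of the object paths returned by this query could then be supplied to a metadata filter; the directory object and dump file name in this sketch are assumptions:

> expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_nogrants.dmp EXCLUDE=GRANT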

See Also:

"EXCLUDE" and "INCLUDE"

Parameters Available in Export's Command-Line Mode

This section describes the parameters available in the command-line mode of Data Pump Export. Be sure to read the following sections before using the Export parameters:

Many of the parameter descriptions include an example of how to use the parameter. For background information on setting up the necessary environment to run the examples, see:

Specifying Export Parameters

For parameters that can have multiple values specified, the values can be separated by commas or by spaces. For example, you could specify TABLES=employees,jobs or TABLES=employees jobs.

For every parameter you enter, you must enter an equal sign (=) and a value. Data Pump has no other way of knowing that the previous parameter specification is complete and a new parameter specification is beginning. For example, in the following command line, even though NOLOGFILE is a valid parameter, it would be interpreted as another dump file name for the DUMPFILE parameter:

expdp DIRECTORY=dpumpdir DUMPFILE=test.dmp NOLOGFILE TABLES=employees

This would result in two dump files being created, test.dmp and nologfile.dmp.

To avoid this, specify either NOLOGFILE=YES or NOLOGFILE=NO.
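For example, the same command with the ambiguity removed would be:

expdp DIRECTORY=dpumpdir DUMPFILE=test.dmp NOLOGFILE=YES TABLES=employees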

Use of Quotation Marks On the Data Pump Command Line

Some operating systems treat quotation marks as special characters and will therefore not pass them to an application unless they are preceded by an escape character, such as the backslash (\). This is true both on the command line and within parameter files. Some operating systems may require an additional set of single or double quotation marks on the command line around the entire parameter value containing the special characters.

The following examples are provided to illustrate these concepts. Be aware that they may not apply to your particular operating system and that this documentation cannot anticipate the operating environments unique to each user.

Suppose you specify the TABLES parameter in a parameter file, as follows:

TABLES = \"MixedCaseTableName\"

If you were to specify that on the command line, some operating systems would require that it be surrounded by single quotation marks, as follows:

TABLES = '\"MixedCaseTableName\"'

To avoid having to supply additional quotation marks on the command line, Oracle recommends the use of parameter files. Also, note that if you use a parameter file and the parameter value being specified does not have quotation marks as the first character in the string (for example, TABLES=scott."EmP"), then the use of escape characters may not be necessary on some systems.


See Also:


Using the Export Parameter Examples

If you try running the examples that are provided for each parameter, be aware of the following:

  • After you enter the username and parameters as shown in the example, Export is started and you are prompted for a password. You must enter the password before a database connection is made.

  • Most of the examples use the sample schemas of the seed database, which is installed by default when you install Oracle Database. In particular, the human resources (hr) schema is often used.

  • The examples assume that the directory objects, dpump_dir1 and dpump_dir2, already exist and that READ and WRITE privileges have been granted to the hr user for these directory objects. See "Default Locations for Dump, Log, and SQL Files" for information about creating directory objects and assigning privileges to them.

  • Some of the examples require the DATAPUMP_EXP_FULL_DATABASE and DATAPUMP_IMP_FULL_DATABASE roles. The examples assume that the hr user has been granted these roles.

If necessary, ask your DBA for help in creating these directory objects and assigning the necessary privileges and roles.

Syntax diagrams of these parameters are provided in "Syntax Diagrams for Data Pump Export".

Unless specifically noted, these parameters can also be specified in a parameter file.

ABORT_STEP

Default: Null

Purpose

Used to stop the job after it is initialized. This allows the master table to be queried before any data is exported.

Syntax and Description

ABORT_STEP=[n | -1]

The possible values correspond to a process order number in the master table. The result of using each number is as follows:

  • n -- If the value is zero or greater, then the export operation is started and the job is aborted at the object that is stored in the master table with the corresponding process order number.

  • -1 -- If the value is negative one (-1) then abort the job after setting it up, but before exporting any objects or data.

Restrictions

  • None

Example

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=hr ABORT_STEP=-1

ACCESS_METHOD

Default: AUTOMATIC

Purpose

Instructs Export to use a particular method to unload data.

Syntax and Description

ACCESS_METHOD=[AUTOMATIC | DIRECT_PATH | EXTERNAL_TABLE]

The ACCESS_METHOD parameter is provided so that you can try an alternative method if the default method does not work for some reason. Oracle recommends that you use the default option (AUTOMATIC) whenever possible because it allows Data Pump to automatically select the most efficient method.

Restrictions

  • If the NETWORK_LINK parameter is also specified, then direct path mode is not supported.

Example

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=hr 
ACCESS_METHOD=EXTERNAL_TABLE 

ATTACH

Default: job currently in the user's schema, if there is only one

Purpose

Attaches the client session to an existing export job and automatically places you in the interactive-command interface. Export displays a description of the job to which you are attached and also displays the Export prompt.

Syntax and Description

ATTACH [=[schema_name.]job_name]

The schema_name is optional. To specify a schema other than your own, you must have the DATAPUMP_EXP_FULL_DATABASE role.

The job_name is optional if only one export job is associated with your schema and the job is active. To attach to a stopped job, you must supply the job name. To see a list of Data Pump job names, you can query the DBA_DATAPUMP_JOBS view or the USER_DATAPUMP_JOBS view.

When you are attached to the job, Export displays a description of the job and then displays the Export prompt.

Restrictions

  • When you specify the ATTACH parameter, the only other Data Pump parameter you can specify on the command line is ENCRYPTION_PASSWORD.

  • If the job you are attaching to was initially started using an encryption password, then when you attach to the job you must again enter the ENCRYPTION_PASSWORD parameter on the command line to re-specify that password. The only exception to this is if the job was initially started with the ENCRYPTION=ENCRYPTED_COLUMNS_ONLY parameter. In that case, the encryption password is not needed when attaching to the job.

  • You cannot attach to a job in another schema unless it is already running.

  • If the dump file set or master table for the job have been deleted, then the attach operation will fail.

  • Altering the master table in any way will lead to unpredictable results.

Example

The following is an example of using the ATTACH parameter. It assumes that the job, hr.export_job, already exists.

> expdp hr ATTACH=hr.export_job

CLUSTER

Default: YES

Purpose

Determines whether Data Pump can use Oracle Real Application Clusters (Oracle RAC) resources and start workers on other Oracle RAC instances.

Syntax and Description

CLUSTER=[YES | NO]

To force Data Pump Export to use only the instance where the job is started and to replicate pre-Oracle Database 11g release 2 (11.2) behavior, specify CLUSTER=NO.

To specify a specific, existing service and constrain worker processes to run only on instances defined for that service, use the SERVICE_NAME parameter with the CLUSTER=YES parameter.

Use of the CLUSTER parameter may affect performance because there is some additional overhead in distributing the export job across Oracle RAC instances. For small jobs, it may be better to specify CLUSTER=NO to constrain the job to run on the instance where it is started. Jobs whose performance benefits the most from using the CLUSTER parameter are those involving large amounts of data.

Example

The following is an example of using the CLUSTER parameter:

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_clus%U.dmp CLUSTER=NO PARALLEL=3

This example starts a schema-mode export (the default) of the hr schema. Because CLUSTER=NO is specified, the job uses only the instance on which it started. (If the CLUSTER parameter had not been specified at all, then the default value of YES would have been used and workers would have been started on other instances in the Oracle RAC configuration, if necessary.) The dump files will be written to the location specified for the dpump_dir1 directory object. The job can have up to 3 parallel processes.
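For illustration, CLUSTER=YES can be combined with the SERVICE_NAME parameter to constrain the worker processes to the instances of a particular service; the service name and file name here are assumptions:

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_svc%U.dmp CLUSTER=YES SERVICE_NAME=my_service PARALLEL=2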

COMPRESSION

Default: METADATA_ONLY

Purpose

Specifies which data to compress before writing to the dump file set.

Syntax and Description

COMPRESSION=[ALL | DATA_ONLY | METADATA_ONLY | NONE]
  • ALL enables compression for the entire export operation. The ALL option requires that the Oracle Advanced Compression option be enabled.

  • DATA_ONLY results in all data being written to the dump file in compressed format. The DATA_ONLY option requires that the Oracle Advanced Compression option be enabled.

  • METADATA_ONLY results in all metadata being written to the dump file in compressed format. This is the default.

  • NONE disables compression for the entire export operation.


See Also:

Oracle Database Licensing Information for information about licensing requirements for the Oracle Advanced Compression option

Restrictions

  • To make full use of all these compression options, the COMPATIBLE initialization parameter must be set to at least 11.0.0.

  • The METADATA_ONLY option can be used even if the COMPATIBLE initialization parameter is set to 10.2.

  • Compression of data using ALL or DATA_ONLY is valid only in the Enterprise Edition of Oracle Database 11g and also requires that the Oracle Advanced Compression option be enabled.

Example

The following is an example of using the COMPRESSION parameter:

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_comp.dmp COMPRESSION=METADATA_ONLY

This command will execute a schema-mode export that will compress all metadata before writing it out to the dump file, hr_comp.dmp. It defaults to a schema-mode export because no export mode is specified.

CONTENT

Default: ALL

Purpose

Enables you to filter what Export unloads: data only, metadata only, or both.

Syntax and Description

CONTENT=[ALL | DATA_ONLY | METADATA_ONLY]
  • ALL unloads both data and metadata. This is the default.

  • DATA_ONLY unloads only table row data; no database object definitions are unloaded.

  • METADATA_ONLY unloads only database object definitions; no table row data is unloaded. Be aware that if you specify CONTENT=METADATA_ONLY, then when the dump file is subsequently imported, any index or table statistics imported from the dump file will be locked after the import.

Restrictions

  • The CONTENT=METADATA_ONLY parameter cannot be used with the TRANSPORT_TABLESPACES (transportable-tablespace mode) parameter or with the QUERY parameter.

Example

The following is an example of using the CONTENT parameter:

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp CONTENT=METADATA_ONLY

This command will execute a schema-mode export that will unload only the metadata associated with the hr schema. It defaults to a schema-mode export of the hr schema because no export mode is specified.

DATA_OPTIONS

Default: There is no default. If this parameter is not used, then the special data handling options it provides simply do not take effect.

Purpose

The DATA_OPTIONS parameter designates how certain types of data should be handled during export operations.

Syntax and Description

DATA_OPTIONS=XML_CLOBS

The XML_CLOBS option specifies that XMLType columns are to be exported in uncompressed CLOB format regardless of the XMLType storage format that was defined for them.

If a table has XMLType columns stored only as CLOBs, then it is not necessary to specify the XML_CLOBS option because Data Pump automatically exports them in CLOB format.

If a table has XMLType columns stored as any combination of object-relational (schema-based), binary, or CLOB formats, then Data Pump exports them in compressed format, by default. This is the preferred method. However, if you need to export the data in uncompressed CLOB format, you can use the XML_CLOBS option to override the default.


See Also:

Oracle XML DB Developer's Guide for information specific to exporting and importing XMLType tables

Restrictions

  • Using the XML_CLOBS option requires that the same XML schema be used at both export and import time.

  • The Export DATA_OPTIONS parameter requires the job version to be set at 11.0.0 or higher. See "VERSION".

Example

This example shows an export operation in which any XMLType columns in the hr.xdb_tab1 table are exported in uncompressed CLOB format regardless of the XMLType storage format that was defined for them.

> expdp hr TABLES=hr.xdb_tab1 DIRECTORY=dpump_dir1 DUMPFILE=hr_xml.dmp
VERSION=11.2 DATA_OPTIONS=XML_CLOBS

DIRECTORY

Default: DATA_PUMP_DIR

Purpose

Specifies the default location to which Export can write the dump file set and the log file.

Syntax and Description

DIRECTORY=directory_object

The directory_object is the name of a database directory object (not the file path of an actual directory). Upon installation, privileged users have access to a default directory object named DATA_PUMP_DIR. Users with access to the default DATA_PUMP_DIR directory object do not need to use the DIRECTORY parameter at all.

A directory object specified on the DUMPFILE or LOGFILE parameter overrides any directory object that you specify for the DIRECTORY parameter.

Example

The following is an example of using the DIRECTORY parameter:

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp CONTENT=METADATA_ONLY

The dump file, employees.dmp, will be written to the path that is associated with the directory object dpump_dir1.


See Also:


DUMPFILE

Default: expdat.dmp

Purpose

Specifies the names, and optionally, the directory objects of dump files for an export job.

Syntax and Description

DUMPFILE=[directory_object:]file_name [, ...]

The directory_object is optional if one has already been established by the DIRECTORY parameter. If you supply a value here, then it must be a directory object that already exists and that you have access to. A database directory object that is specified as part of the DUMPFILE parameter overrides a value specified by the DIRECTORY parameter or by the default directory object.

You can supply multiple file_name specifications as a comma-delimited list or in separate DUMPFILE parameter specifications. If no extension is given for the file name, then Export uses the default file extension of .dmp. The file names can contain a substitution variable (%U), which implies that multiple files may be generated. The substitution variable is expanded in the resulting file names into a 2-digit, fixed-width, incrementing integer starting at 01 and ending at 99. If a file specification contains two substitution variables, both are incremented at the same time. For example, exp%Uaa%U.dmp would resolve to exp01aa01.dmp, exp02aa02.dmp, and so forth.

If the FILESIZE parameter is specified, then each dump file will have a maximum of that size and be nonextensible. If more space is required for the dump file set and a template with a substitution variable (%U) was supplied, then a new dump file is automatically created of the size specified by the FILESIZE parameter, if there is room on the device.

As each file specification or file template containing a substitution variable is defined, it is instantiated into one fully qualified file name and Export attempts to create it. The file specifications are processed in the order in which they are specified. If the job needs extra files because the maximum file size is reached, or to keep parallel workers active, then additional files are created if file templates with substitution variables were specified.

Although it is possible to specify multiple files using the DUMPFILE parameter, the export job may only require a subset of those files to hold the exported data. The dump file set displayed at the end of the export job shows exactly which files were used. It is this list of files that is required to perform an import operation using this dump file set. Any files that were not used can be discarded.

Restrictions

  • Any resulting dump file names that match preexisting dump file names will generate an error and the preexisting dump files will not be overwritten. You can override this behavior by specifying the Export parameter REUSE_DUMPFILES=YES.

Example

The following is an example of using the DUMPFILE parameter:

> expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=dpump_dir2:exp1.dmp,
 exp2%U.dmp PARALLEL=3

The dump file, exp1.dmp, will be written to the path associated with the directory object dpump_dir2 because dpump_dir2 was specified as part of the dump file name, and therefore overrides the directory object specified with the DIRECTORY parameter. Because all three parallel processes will be given work to perform during this job, dump files named exp201.dmp and exp202.dmp will be created based on the specified substitution variable exp2%U.dmp. Because no directory is specified for them, they will be written to the path associated with the directory object, dpump_dir1, that was specified with the DIRECTORY parameter.




ENCRYPTION

Default: The default value depends upon the combination of encryption-related parameters that are used. To enable encryption, either the ENCRYPTION or ENCRYPTION_PASSWORD parameter, or both, must be specified.

If only the ENCRYPTION_PASSWORD parameter is specified, then the ENCRYPTION parameter defaults to ALL.

If only the ENCRYPTION parameter is specified and the Oracle encryption wallet is open, then the default mode is TRANSPARENT. If only the ENCRYPTION parameter is specified and the wallet is closed, then an error is returned.

If neither ENCRYPTION nor ENCRYPTION_PASSWORD is specified, then ENCRYPTION defaults to NONE.

Purpose

Specifies whether to encrypt data before writing it to the dump file set.

Syntax and Description

ENCRYPTION = [ALL | DATA_ONLY | ENCRYPTED_COLUMNS_ONLY | METADATA_ONLY | NONE]

ALL enables encryption for all data and metadata in the export operation.

DATA_ONLY specifies that only data is written to the dump file set in encrypted format.

ENCRYPTED_COLUMNS_ONLY specifies that only encrypted columns are written to the dump file set in encrypted format. To use this option, you must have Oracle Advanced Security transparent data encryption enabled. See Oracle Database Advanced Security Administrator's Guide for more information about transparent data encryption.

METADATA_ONLY specifies that only metadata is written to the dump file set in encrypted format.

NONE specifies that no data is written to the dump file set in encrypted format.


Note:

If the data being exported includes SecureFiles that you want to be encrypted, then you must specify ENCRYPTION=ALL to encrypt the entire dump file set. Encryption of the entire dump file set is the only way to achieve encryption security for SecureFiles during a Data Pump export operation. For more information about SecureFiles, see Oracle Database SecureFiles and Large Objects Developer's Guide.

Restrictions

  • To specify the ALL, DATA_ONLY, or METADATA_ONLY options, the COMPATIBLE initialization parameter must be set to at least 11.0.0.

  • This parameter is valid only in the Enterprise Edition of Oracle Database 11g.

  • Data Pump encryption features require that the Oracle Advanced Security option be enabled. See Oracle Database Advanced Security Administrator's Guide for information about licensing requirements for the Oracle Advanced Security option.

Example

The following example performs an export operation in which only data is encrypted in the dump file:

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_enc.dmp JOB_NAME=enc1
ENCRYPTION=data_only ENCRYPTION_PASSWORD=foobar

ENCRYPTION_ALGORITHM

Default: AES128

Purpose

Specifies which cryptographic algorithm should be used to perform the encryption.

Syntax and Description

ENCRYPTION_ALGORITHM = [AES128 | AES192 | AES256]

See Oracle Database Advanced Security Administrator's Guide for information about encryption algorithms.

Restrictions

  • To use this encryption feature, the COMPATIBLE initialization parameter must be set to at least 11.0.0.

  • The ENCRYPTION_ALGORITHM parameter requires that you also specify either the ENCRYPTION or ENCRYPTION_PASSWORD parameter; otherwise an error is returned.

  • This parameter is valid only in the Enterprise Edition of Oracle Database 11g.

  • Data Pump encryption features require that the Oracle Advanced Security option be enabled. See Oracle Database Advanced Security Administrator's Guide for information about licensing requirements for the Oracle Advanced Security option.

Example

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_enc3.dmp
ENCRYPTION_PASSWORD=foobar ENCRYPTION_ALGORITHM=AES128

ENCRYPTION_MODE

Default: The default mode depends on which other encryption-related parameters are used. If only the ENCRYPTION parameter is specified and the Oracle encryption wallet is open, then the default mode is TRANSPARENT. If only the ENCRYPTION parameter is specified and the wallet is closed, then an error is returned.

If the ENCRYPTION_PASSWORD parameter is specified and the wallet is open, then the default is DUAL. If the ENCRYPTION_PASSWORD parameter is specified and the wallet is closed, then the default is PASSWORD.

Purpose

Specifies the type of security to use when encryption and decryption are performed.

Syntax and Description

ENCRYPTION_MODE = [DUAL | PASSWORD | TRANSPARENT]

DUAL mode creates a dump file set that can later be imported either transparently or by specifying a password that was used when the dual-mode encrypted dump file set was created. When you later import the dump file set created in DUAL mode, you can use either the wallet or the password that was specified with the ENCRYPTION_PASSWORD parameter. DUAL mode is best suited for cases in which the dump file set will be imported on-site using the wallet, but which may also need to be imported offsite where the wallet is not available.

PASSWORD mode requires that you provide a password when creating encrypted dump file sets. You will need to provide the same password when you import the dump file set. PASSWORD mode requires that you also specify the ENCRYPTION_PASSWORD parameter. The PASSWORD mode is best suited for cases in which the dump file set will be imported into a different or remote database, but which must remain secure in transit.

TRANSPARENT mode allows an encrypted dump file set to be created without any intervention from a database administrator (DBA), provided the required wallet is available. Therefore, the ENCRYPTION_PASSWORD parameter is not required, and will in fact, cause an error if it is used in TRANSPARENT mode. This encryption mode is best suited for cases in which the dump file set will be imported into the same database from which it was exported.
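For example, a PASSWORD-mode export might look like the following (this is an illustrative sketch; the dump file name and password are placeholders):

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_pwd.dmp
ENCRYPTION=all ENCRYPTION_MODE=PASSWORD ENCRYPTION_PASSWORD=mypassword

The same password must then be supplied when the dump file set is imported.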

Restrictions

  • To use DUAL or TRANSPARENT mode, the COMPATIBLE initialization parameter must be set to at least 11.0.0.

  • When you use the ENCRYPTION_MODE parameter, you must also use either the ENCRYPTION or ENCRYPTION_PASSWORD parameter. Otherwise, an error is returned.

  • When you use ENCRYPTION=ENCRYPTED_COLUMNS_ONLY, you cannot use the ENCRYPTION_MODE parameter. Otherwise, an error is returned.

  • This parameter is valid only in the Enterprise Edition of Oracle Database 11g.

  • Data Pump encryption features require that the Oracle Advanced Security option be enabled. See Oracle Database Advanced Security Administrator's Guide for information about licensing requirements for the Oracle Advanced Security option.

Example

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_enc4.dmp
ENCRYPTION=all ENCRYPTION_PASSWORD=secretwords
ENCRYPTION_ALGORITHM=AES256 ENCRYPTION_MODE=DUAL

ENCRYPTION_PASSWORD

Default: There is no default; the value is user-provided.

Purpose

Specifies a password for encrypting encrypted column data, metadata, or table data in the export dump file. This prevents unauthorized access to an encrypted dump file set.


Note:

Data Pump encryption functionality changed as of Oracle Database 11g release 1 (11.1). Before release 11.1, the ENCRYPTION_PASSWORD parameter applied only to encrypted columns. However, as of release 11.1, the new ENCRYPTION parameter provides options for encrypting other types of data. This means that if you now specify ENCRYPTION_PASSWORD without also specifying ENCRYPTION and a specific option, then all data written to the dump file will be encrypted (equivalent to specifying ENCRYPTION=ALL). If you want to re-encrypt only encrypted columns, then you must now specify ENCRYPTION=ENCRYPTED_COLUMNS_ONLY in addition to ENCRYPTION_PASSWORD.
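To illustrate the difference (the dump file names and password shown here are placeholders), the first of the following commands encrypts everything written to the dump file set, and the second re-encrypts only the encrypted columns:

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_all_enc.dmp ENCRYPTION_PASSWORD=mypassword

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_cols_enc.dmp
ENCRYPTION=ENCRYPTED_COLUMNS_ONLY ENCRYPTION_PASSWORD=mypassword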

Syntax and Description

ENCRYPTION_PASSWORD = password

The password value that is supplied specifies a key for re-encrypting encrypted table columns, metadata, or table data so that they are not written as clear text in the dump file set. If the export operation involves encrypted table columns, but an encryption password is not supplied, then the encrypted columns will be written to the dump file set as clear text and a warning will be issued.

For export operations, this parameter is required if the ENCRYPTION_MODE parameter is set to either PASSWORD or DUAL.


Note:

There is no connection or dependency between the key specified with the Data Pump ENCRYPTION_PASSWORD parameter and the key specified with the ENCRYPT keyword when the table with encrypted columns was initially created. For example, suppose a table is created as follows, with an encrypted column whose key is xyz:
CREATE TABLE emp (col1 VARCHAR2(256) ENCRYPT IDENTIFIED BY "xyz");

When you export the emp table, you can supply any arbitrary value for ENCRYPTION_PASSWORD. It does not have to be xyz.


Restrictions

  • This parameter is valid only in the Enterprise Edition of Oracle Database 11g.

  • Data Pump encryption features require that the Oracle Advanced Security option be enabled. See Oracle Database Advanced Security Administrator's Guide for information about licensing requirements for the Oracle Advanced Security option.

  • If ENCRYPTION_PASSWORD is specified but ENCRYPTION_MODE is not specified, then it is not necessary to have Oracle Advanced Security transparent data encryption enabled since ENCRYPTION_MODE will default to PASSWORD.

  • The ENCRYPTION_PASSWORD parameter is not valid if the requested encryption mode is TRANSPARENT.

  • To use the ENCRYPTION_PASSWORD parameter if ENCRYPTION_MODE is set to DUAL, you must have Oracle Advanced Security transparent data encryption enabled. See Oracle Database Advanced Security Administrator's Guide for more information about transparent data encryption.

  • For network exports, the ENCRYPTION_PASSWORD parameter in conjunction with ENCRYPTION=ENCRYPTED_COLUMNS_ONLY is not supported with user-defined external tables that have encrypted columns. The table will be skipped and an error message will be displayed, but the job will continue.

  • Encryption attributes for all columns must match between the exported table definition and the target table. For example, suppose you have a table, EMP, and one of its columns is named EMPNO. Both of the following situations would result in an error because the encryption attribute for the EMP column in the source table would not match the encryption attribute for the EMP column in the target table:

    • The EMP table is exported with the EMPNO column being encrypted, but before importing the table you remove the encryption attribute from the EMPNO column.

    • The EMP table is exported without the EMPNO column being encrypted, but before importing the table you enable encryption on the EMPNO column.

Example

In the following example, an encryption password, 123456, is assigned to the dump file, dpcd2be1.dmp.

expdp hr TABLES=employee_s_encrypt DIRECTORY=dpump_dir1
DUMPFILE=dpcd2be1.dmp ENCRYPTION=ENCRYPTED_COLUMNS_ONLY 
ENCRYPTION_PASSWORD=123456

Encrypted columns in the employee_s_encrypt table will not be written as clear text in the dpcd2be1.dmp dump file. Note that to subsequently import the dpcd2be1.dmp file created by this example, you will need to supply the same encryption password. (See "ENCRYPTION_PASSWORD" for an example of an import operation using the ENCRYPTION_PASSWORD parameter.)

ESTIMATE

Default: BLOCKS

Purpose

Specifies the method that Export will use to estimate how much disk space each table in the export job will consume (in bytes). The estimate is printed in the log file and displayed on the client's standard output device. The estimate is for table row data only; it does not include metadata.

Syntax and Description

ESTIMATE=[BLOCKS | STATISTICS]
  • BLOCKS - The estimate is calculated by multiplying the number of database blocks used by the source objects by the appropriate block sizes.

  • STATISTICS - The estimate is calculated using statistics for each table. For this method to be as accurate as possible, all tables should have been analyzed recently. (Table analysis can be done with either the SQL ANALYZE statement or the DBMS_STATS PL/SQL package; see the sketch following this list.)
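As a minimal sketch of gathering statistics before using ESTIMATE=STATISTICS (the schema and table names are examples only):

SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'HR', tabname => 'EMPLOYEES');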

Restrictions

  • If the Data Pump export job involves compressed tables, then the default size estimation given for the compressed table is inaccurate when ESTIMATE=BLOCKS is used. This is because the size estimate does not reflect that the data was stored in a compressed form. To get a more accurate size estimate for compressed tables, use ESTIMATE=STATISTICS.

  • The estimate may also be inaccurate if either the QUERY or REMAP_DATA parameter is used.

Example

The following example shows a use of the ESTIMATE parameter in which the estimate is calculated using statistics for the employees table:

> expdp hr TABLES=employees ESTIMATE=STATISTICS DIRECTORY=dpump_dir1
 DUMPFILE=estimate_stat.dmp

ESTIMATE_ONLY

Default: NO

Purpose

Instructs Export to estimate the space that a job would consume, without actually performing the export operation.

Syntax and Description

ESTIMATE_ONLY=[YES | NO]

If ESTIMATE_ONLY=YES, then Export estimates the space that would be consumed, but quits without actually performing the export operation.

Restrictions

  • The ESTIMATE_ONLY parameter cannot be used in conjunction with the QUERY parameter.

Example

The following shows an example of using the ESTIMATE_ONLY parameter to determine how much space an export of the HR schema will take.

> expdp hr ESTIMATE_ONLY=YES NOLOGFILE=YES SCHEMAS=HR

EXCLUDE

Default: There is no default

Purpose

Enables you to filter the metadata that is exported by specifying objects and object types to be excluded from the export operation.

Syntax and Description

EXCLUDE=object_type[:name_clause] [, ...]

The object_type specifies the type of object to be excluded. To see a list of valid values for object_type, query the following views: DATABASE_EXPORT_OBJECTS for full mode, SCHEMA_EXPORT_OBJECTS for schema mode, and TABLE_EXPORT_OBJECTS for table and tablespace mode. The values listed in the OBJECT_PATH column are the valid object types.

All object types for the given mode of export will be included in the export except those specified in an EXCLUDE statement. If an object is excluded, then all of its dependent objects are also excluded. For example, excluding a table will also exclude all indexes and triggers on the table.

The name_clause is optional. It allows selection of specific objects within an object type. It is a SQL expression used as a filter on the type's object names. It consists of a SQL operator and the values against which the object names of the specified type are to be compared. The name_clause applies only to object types whose instances have names (for example, it is applicable to TABLE, but not to GRANT). It must be separated from the object type with a colon and enclosed in double quotation marks, because single quotation marks are required to delimit the name strings. For example, you could set EXCLUDE=INDEX:"LIKE 'EMP%'" to exclude all indexes whose names start with EMP.

The name that you supply for the name_clause must exactly match, including upper and lower casing, an existing object in the database. For example, if the name_clause you supply is for a table named EMPLOYEES, then there must be an existing table named EMPLOYEES using all upper case. If the name_clause were supplied as Employees or employees or any other variation, then the table would not be found.

If no name_clause is provided, then all objects of the specified type are excluded.

More than one EXCLUDE statement can be specified.

Depending on your operating system, the use of quotation marks when you specify a value for this parameter may also require that you use escape characters. Oracle recommends that you place this parameter in a parameter file, which can reduce the number of escape characters that might otherwise be needed on the command line.
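For example, a parameter file named exclude.par might contain the following (a sketch; the file and object names are examples only), which avoids having to escape the quotation marks in the name_clause on the command line:

SCHEMAS=HR
DUMPFILE=hr_exclude2.dmp
DIRECTORY=dpump_dir1
EXCLUDE=INDEX:"LIKE 'EMP%'"
EXCLUDE=VIEW

You could then start the export with a command such as:

> expdp hr PARFILE=exclude.par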




If the object_type you specify is CONSTRAINT, GRANT, or USER, then you should be aware of the effects this will have, as described in the following paragraphs.

Excluding Constraints

The following constraints cannot be explicitly excluded:

  • NOT NULL constraints

  • Constraints needed for the table to be created and loaded successfully; for example, primary key constraints for index-organized tables, or REF SCOPE and WITH ROWID constraints for tables with REF columns

This means that the following EXCLUDE statements will be interpreted as follows:

  • EXCLUDE=CONSTRAINT will exclude all (nonreferential) constraints, except for NOT NULL constraints and any constraints needed for successful table creation and loading.

  • EXCLUDE=REF_CONSTRAINT will exclude referential integrity (foreign key) constraints.

Excluding Grants and Users

Specifying EXCLUDE=GRANT excludes object grants on all object types and system privilege grants.

Specifying EXCLUDE=USER excludes only the definitions of users, not the objects contained within users' schemas.

To exclude a specific user and all objects of that user, specify a command such as the following, where hr is the schema name of the user you want to exclude.

expdp FULL=YES DUMPFILE=expfull.dmp EXCLUDE=SCHEMA:"='HR'"

Note that in this situation, an export mode of FULL is specified. If no mode were specified, then the default mode, SCHEMAS, would be used. This would cause an error because the command would indicate that the schema should be both exported and excluded at the same time.

If you try to exclude a user by using a statement such as EXCLUDE=USER:"='HR'", then only the information used in CREATE USER hr DDL statements will be excluded, and you may not get the results you expect.

Restrictions

  • The EXCLUDE and INCLUDE parameters are mutually exclusive.

Example

The following is an example of using the EXCLUDE statement.

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_exclude.dmp EXCLUDE=VIEW,
PACKAGE, FUNCTION

This will result in a schema-mode export (the default export mode) in which all of the hr schema will be exported except its views, packages, and functions.




FILESIZE

Default: 0 (equivalent to the maximum size of 16 terabytes)

Purpose

Specifies the maximum size of each dump file. If the size is reached for any member of the dump file set, then that file is closed and an attempt is made to create a new file, if the file specification contains a substitution variable or if additional dump files have been added to the job.

Syntax and Description

FILESIZE=integer[B | KB | MB | GB | TB]

The integer can be immediately followed (do not insert a space) by B, KB, MB, GB, or TB (indicating bytes, kilobytes, megabytes, gigabytes, and terabytes respectively). Bytes is the default. The actual size of the resulting file may be rounded down slightly to match the size of the internal blocks used in dump files.

Restrictions

  • The minimum size for a file is ten times the default Data Pump block size, which is 4 kilobytes.

  • The maximum size for a file is 16 terabytes.

Example

The following shows an example in which the size of the dump file is set to 3 megabytes:

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_3m.dmp FILESIZE=3MB

If 3 megabytes had not been sufficient to hold all the exported data, then the following error would have been displayed and the job would have stopped:

ORA-39095: Dump file space has been exhausted: Unable to allocate 217088 bytes

The actual number of bytes that could not be allocated may vary. Also, this number does not represent the amount of space needed to complete the entire export operation. It indicates only the size of the current object that was being exported when the job ran out of dump file space. This situation can be corrected by first attaching to the stopped job, adding one or more files using the ADD_FILE command, and then restarting the operation.
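A minimal sketch of that recovery sequence, assuming the job was started with the default job name SYS_EXPORT_SCHEMA_01 (your job name and the added file name may differ):

> expdp hr ATTACH=SYS_EXPORT_SCHEMA_01
Export> ADD_FILE=dpump_dir1:hr_3m_extra%U.dmp
Export> START_JOB
Export> CONTINUE_CLIENT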

FLASHBACK_SCN

Default: There is no default

Purpose

Specifies the system change number (SCN) that Export will use to enable the Flashback Query utility.

Syntax and Description

FLASHBACK_SCN=scn_value

The export operation is performed with data that is consistent up to the specified SCN. If the NETWORK_LINK parameter is specified, then the SCN refers to the SCN of the source database.

Restrictions

  • FLASHBACK_SCN and FLASHBACK_TIME are mutually exclusive.

  • The FLASHBACK_SCN parameter pertains only to the Flashback Query capability of Oracle Database. It is not applicable to Flashback Database, Flashback Drop, or Flashback Data Archive.

Example

The following example assumes that an existing SCN value of 384632 exists. It exports the hr schema up to SCN 384632.

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_scn.dmp FLASHBACK_SCN=384632
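If you need to determine a current SCN to use, you can query it on the source database; for example:

SQL> SELECT current_scn FROM v$database;

or

SQL> SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM dual;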

Note:

If you are on a logical standby system and using a network link to access the logical standby primary, then the FLASHBACK_SCN parameter is ignored because SCNs are selected by logical standby. See Oracle Data Guard Concepts and Administration for information about logical standby databases.

FLASHBACK_TIME

Default: There is no default

Purpose

The SCN that most closely matches the specified time is found, and this SCN is used to enable the Flashback utility. The export operation is performed with data that is consistent up to this SCN.

Syntax and Description

FLASHBACK_TIME="TO_TIMESTAMP(time-value)"

Because the TO_TIMESTAMP value is enclosed in quotation marks, it would be best to put this parameter in a parameter file. See "Use of Quotation Marks On the Data Pump Command Line".

Restrictions

  • FLASHBACK_TIME and FLASHBACK_SCN are mutually exclusive.

  • The FLASHBACK_TIME parameter pertains only to the flashback query capability of Oracle Database. It is not applicable to Flashback Database, Flashback Drop, or Flashback Data Archive.

Example

You can specify the time in any format that the DBMS_FLASHBACK.ENABLE_AT_TIME procedure accepts. For example, suppose you have a parameter file, flashback.par, with the following contents:

DIRECTORY=dpump_dir1
DUMPFILE=hr_time.dmp
FLASHBACK_TIME="TO_TIMESTAMP('25-08-2008 14:35:00', 'DD-MM-YYYY HH24:MI:SS')"

You could then issue the following command:

> expdp hr PARFILE=flashback.par

The export operation will be performed with data that is consistent with the SCN that most closely matches the specified time.


Note:

If you are on a logical standby system and using a network link to access the logical standby primary, then the FLASHBACK_TIME parameter is ignored because SCNs are selected by logical standby. See Oracle Data Guard Concepts and Administration for information about logical standby databases.


See Also:

Oracle Database Advanced Application Developer's Guide for information about using Flashback Query

FULL

Default: NO

Purpose

Specifies that you want to perform a full database mode export.

Syntax and Description

FULL=[YES | NO]

FULL=YES indicates that all data and metadata are to be exported. Filtering can restrict what is exported using this export mode. See "Filtering During Export Operations".

To perform a full export, you must have the DATAPUMP_EXP_FULL_DATABASE role.


Note:

Be aware that when you later import a dump file that was created by a full-mode export, the import operation attempts to copy the password for the SYS account from the source database. This sometimes fails (for example, if the password is in a shared password file). If it does fail, then after the import completes, you must set the password for the SYS account at the target database to a password of your choice.

Restrictions

  • A full export does not export system schemas that contain Oracle-managed data and metadata. Examples of system schemas that are not exported include SYS, ORDSYS, and MDSYS.

  • Grants on objects owned by the SYS schema are never exported.

  • If you are exporting data that is protected by a realm, then you must have authorization for that realm.


    See Also:

    Oracle Database Vault Administrator's Guide for information about configuring realms

Example

The following is an example of using the FULL parameter. The dump file, expfull.dmp, is written to the dpump_dir2 directory.

> expdp hr DIRECTORY=dpump_dir2 DUMPFILE=expfull.dmp FULL=YES NOLOGFILE=YES

HELP

Default: NO

Purpose

Displays online help for the Export utility.

Syntax and Description

HELP = [YES | NO]

If HELP=YES is specified, then Export displays a summary of all Export command-line parameters and interactive commands.

Example

> expdp HELP = YES

This example will display a brief description of all Export parameters and commands.

INCLUDE

Default: There is no default

Purpose

Enables you to filter the metadata that is exported by specifying objects and object types for the current export mode. The specified objects and all their dependent objects are exported. Grants on these objects are also exported.

Syntax and Description

INCLUDE = object_type[:name_clause] [, ...]

The object_type specifies the type of object to be included. To see a list of valid values for object_type, query the following views: DATABASE_EXPORT_OBJECTS for full mode, SCHEMA_EXPORT_OBJECTS for schema mode, and TABLE_EXPORT_OBJECTS for table and tablespace mode. The values listed in the OBJECT_PATH column are the valid object types.

Only object types explicitly specified in INCLUDE statements, and their dependent objects, are exported. No other object types, including the schema definition information that is normally part of a schema-mode export when you have the DATAPUMP_EXP_FULL_DATABASE role, are exported.

The name_clause is optional. It allows fine-grained selection of specific objects within an object type. It is a SQL expression used as a filter on the object names of the type. It consists of a SQL operator and the values against which the object names of the specified type are to be compared. The name_clause applies only to object types whose instances have names (for example, it is applicable to TABLE, but not to GRANT). It must be separated from the object type with a colon and enclosed in double quotation marks, because single quotation marks are required to delimit the name strings.

The name that you supply for the name_clause must exactly match, including upper and lower casing, an existing object in the database. For example, if the name_clause you supply is for a table named EMPLOYEES, then there must be an existing table named EMPLOYEES using all upper case. If the name_clause were supplied as Employees or employees or any other variation, then the table would not be found.

Depending on your operating system, the use of quotation marks when you specify a value for this parameter may also require that you use escape characters. Oracle recommends that you place this parameter in a parameter file, which can reduce the number of escape characters that might otherwise be needed on the command line. See "Use of Quotation Marks On the Data Pump Command Line".

For example, suppose you have a parameter file named hr.par with the following content:

SCHEMAS=HR
DUMPFILE=expinclude.dmp
DIRECTORY=dpump_dir1
LOGFILE=expinclude.log
INCLUDE=TABLE:"IN ('EMPLOYEES', 'DEPARTMENTS')"
INCLUDE=PROCEDURE
INCLUDE=INDEX:"LIKE 'EMP%'"

You could then use the hr.par file to start an export operation, without having to enter any other parameters on the command line. The EMPLOYEES and DEPARTMENTS tables, all procedures, and all index names with an EMP prefix will be included in the export.

> expdp hr PARFILE=hr.par

Including Constraints

If the object_type you specify is a CONSTRAINT, then you should be aware of the effects this will have.

The following constraints cannot be explicitly included:

  • NOT NULL constraints

  • Constraints needed for the table to be created and loaded successfully; for example, primary key constraints for index-organized tables, or REF SCOPE and WITH ROWID constraints for tables with REF columns

This means that the following INCLUDE statements will be interpreted as follows:

  • INCLUDE=CONSTRAINT will include all (nonreferential) constraints, except for NOT NULL constraints and any constraints needed for successful table creation and loading

  • INCLUDE=REF_CONSTRAINT will include referential integrity (foreign key) constraints

Restrictions

  • The INCLUDE and EXCLUDE parameters are mutually exclusive.

  • Grants on objects owned by the SYS schema are never exported.

Example

The following example performs an export of all tables (and their dependent objects) in the hr schema:

> expdp hr INCLUDE=TABLE DUMPFILE=dpump_dir1:exp_inc.dmp NOLOGFILE=YES

JOB_NAME

Default: system-generated name of the form SYS_EXPORT_<mode>_NN

Purpose

Used to identify the export job in subsequent actions, such as when the ATTACH parameter is used to attach to a job, or to identify the job using the DBA_DATAPUMP_JOBS or USER_DATAPUMP_JOBS views.

Syntax and Description

JOB_NAME=jobname_string

The jobname_string specifies a name of up to 30 bytes for this export job. The bytes must represent printable characters and spaces. If spaces are included, then the name must be enclosed in single quotation marks (for example, 'Thursday Export'). The job name is implicitly qualified by the schema of the user performing the export operation. The job name is used as the name of the master table, which controls the export job.

The default job name is system-generated in the form SYS_EXPORT_<mode>_NN, where NN expands to a 2-digit incrementing integer starting at 01. An example of a default name is 'SYS_EXPORT_TABLESPACE_02'.
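To find the names and states of existing Data Pump jobs (for example, before attaching to one), you can query the views mentioned above; a minimal sketch:

SQL> SELECT owner_name, job_name, operation, job_mode, state FROM dba_datapump_jobs;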

Example

The following example shows an export operation that is assigned a job name of exp_job:

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=exp_job.dmp JOB_NAME=exp_job
NOLOGFILE=YES

KEEP_MASTER

Default: NO

Purpose

Indicates whether the master table should be deleted or retained at the end of a Data Pump job that completes successfully. The master table is automatically retained for jobs that do not complete successfully.

Syntax and Description

KEEP_MASTER=[YES | NO]

Restrictions

  • None

Example

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=hr KEEP_MASTER=YES

LOGFILE

Default: export.log

Purpose

Specifies the name, and optionally, a directory, for the log file of the export job.

Syntax and Description

LOGFILE=[directory_object:]file_name

You can specify a database directory_object previously established by the DBA, assuming that you have access to it. This overrides the directory object specified with the DIRECTORY parameter.

The file_name specifies a name for the log file. The default behavior is to create a file named export.log in the directory referenced by the directory object specified in the DIRECTORY parameter.

All messages regarding work in progress, work completed, and errors encountered are written to the log file. (For a real-time status of the job, use the STATUS command in interactive mode.)

A log file is always created for an export job unless the NOLOGFILE parameter is specified. As with the dump file set, the log file is relative to the server and not the client.

An existing file matching the file name will be overwritten.

Restrictions

  • To perform a Data Pump Export using Oracle Automatic Storage Management (Oracle ASM), you must specify a LOGFILE parameter that includes a directory object that does not include the Oracle ASM + notation. That is, the log file must be written to a disk file, and not written into the Oracle ASM storage. Alternatively, you can specify NOLOGFILE=YES. However, this prevents the writing of the log file.

Example

The following example shows how to specify a log file name if you do not want to use the default:

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp LOGFILE=hr_export.log

Note:

Data Pump Export writes the log file using the database character set. If your client NLS_LANG environment setting sets up a different client character set from the database character set, then it is possible that table names may be different in the log file than they are when displayed on the client output screen.




METRICS

Default: NO

Purpose

Indicates whether additional information about the job should be reported to the Data Pump log file.

Syntax and Description

METRICS=[YES | NO]

When METRICS=YES is used, the number of objects and the elapsed time are recorded in the Data Pump log file.

Restrictions

  • None

Example

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=hr METRICS=YES

NETWORK_LINK

Default: There is no default

Purpose

Enables an export from a (source) database identified by a valid database link. The data from the source database instance is written to a dump file set on the connected database instance.

Syntax and Description

NETWORK_LINK=source_database_link

The NETWORK_LINK parameter initiates an export using a database link. This means that the system to which the expdp client is connected contacts the source database referenced by the source_database_link, retrieves data from it, and writes the data to a dump file set back on the connected system.

The source_database_link provided must be the name of a database link to an available database. If the database on that instance does not already have a database link, then you or your DBA must create one using the SQL CREATE DATABASE LINK statement.
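A minimal sketch of creating such a link on the connected database (the user, password, and connect string shown are placeholders for your environment):

SQL> CREATE DATABASE LINK source_database_link
  CONNECT TO hr IDENTIFIED BY hr_password
  USING 'source_db_service';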

If the source database is read-only, then the user on the source database must have a locally managed temporary tablespace assigned as the default temporary tablespace. Otherwise, the job will fail.


Caution:

If an export operation is performed over an unencrypted network link, then all data is exported as clear text even if it is encrypted in the database. See Oracle Database Advanced Security Administrator's Guide for information about network security.




Restrictions

  • The only types of database links supported by Data Pump Export are: public, fixed user, and connected user. Current-user database links are not supported.

  • Network exports do not support LONG columns.

  • When operating across a network link, Data Pump requires that the source and target databases differ by no more than one version. For example, if one database is Oracle Database 11g, then the other database must be either 11g or 10g. Note that Data Pump checks only the major version number (for example, 10g and 11g), not specific release numbers (for example, 10.1, 10.2, 11.1, or 11.2).

Example

The following is an example of using the NETWORK_LINK parameter. The source_database_link would be replaced with the name of a valid database link that must already exist.

> expdp hr DIRECTORY=dpump_dir1 NETWORK_LINK=source_database_link
  DUMPFILE=network_export.dmp LOGFILE=network_export.log

NOLOGFILE

Default: NO

Purpose

Specifies whether to suppress creation of a log file.

Syntax and Description

NOLOGFILE=[YES | NO]

Specify NOLOGFILE=YES to suppress the default behavior of creating a log file. Progress and error information is still written to the standard output device of any attached clients, including the client that started the original export operation. If there are no clients attached to a running job and you specify NOLOGFILE=YES, then you run the risk of losing important progress and error information.

Example

The following is an example of using the NOLOGFILE parameter:

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp NOLOGFILE=YES

This command results in a schema-mode export (the default) in which no log file is written.

PARALLEL

Default: 1

Purpose

Specifies the maximum number of processes of active execution operating on behalf of the export job. This execution set consists of a combination of worker processes and parallel I/O server processes. The master control process and worker processes acting as query coordinators in parallel query operations do not count toward this total.

This parameter enables you to make trade-offs between resource consumption and elapsed time.

Syntax and Description

PARALLEL=integer

The value you specify for integer should be less than, or equal to, the number of files in the dump file set (or you should specify substitution variables in the dump file specifications). Because each active worker process or I/O server process writes exclusively to one file at a time, an insufficient number of files can have adverse effects. Some of the worker processes will be idle while waiting for files, thereby degrading the overall performance of the job. More importantly, if any member of a cooperating group of parallel I/O server processes cannot obtain a file for output, then the export operation will be stopped with an ORA-39095 error. Both situations can be corrected by attaching to the job using the Data Pump Export utility, adding more files using the ADD_FILE command while in interactive mode, and in the case of a stopped job, restarting the job.

To increase or decrease the value of PARALLEL during job execution, use interactive-command mode. Decreasing parallelism does not result in fewer worker processes associated with the job; it decreases the number of worker processes that will be executing at any given time. Also, any ongoing work must reach an orderly completion point before the decrease takes effect. Therefore, it may take a while to see any effect from decreasing the value. Idle workers are not deleted until the job exits.

Increasing the parallelism takes effect immediately if there is work that can be performed in parallel.

Using PARALLEL During An Export In An Oracle RAC Environment

In an Oracle Real Application Clusters (Oracle RAC) environment, if an export operation has PARALLEL=1, then all Data Pump processes reside on the instance where the job is started. Therefore, the directory object can point to local storage for that instance.

If the export operation has PARALLEL set to a value greater than 1, then Data Pump processes can reside on instances other than the one where the job was started. Therefore, the directory object must point to shared storage that is accessible by all instances of the Oracle RAC.

Restrictions

  • This parameter is valid only in the Enterprise Edition of Oracle Database 11g.

  • To export a table or table partition in parallel (using PQ slaves), you must have the DATAPUMP_EXP_FULL_DATABASE role.

Example

The following is an example of using the PARALLEL parameter:

> expdp hr DIRECTORY=dpump_dir1 LOGFILE=parallel_export.log 
JOB_NAME=par4_job DUMPFILE=par_exp%u.dmp PARALLEL=4
 

This results in a schema-mode export (the default) of the hr schema in which up to four files could be created in the path pointed to by the directory object, dpump_dir1.
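If you later want to change the degree of parallelism for this job, you can attach to it and adjust the value in interactive-command mode; a minimal sketch (assuming the job is still executing):

> expdp hr ATTACH=par4_job
Export> PARALLEL=8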

PARFILE

Default: There is no default

Purpose

Specifies the name of an export parameter file.

Syntax and Description

PARFILE=[directory_path]file_name

Unlike dump files, log files, and SQL files which are created and written by the server, the parameter file is opened and read by the expdp client. Therefore, a directory object name is neither required nor appropriate. The default is the user's current directory. The use of parameter files is highly recommended if you are using parameters whose values require the use of quotation marks.

Restrictions

  • The PARFILE parameter cannot be specified within a parameter file.

Example

The content of an example parameter file, hr.par, might be as follows:

SCHEMAS=HR
DUMPFILE=exp.dmp
DIRECTORY=dpump_dir1
LOGFILE=exp.log

You could then issue the following Export command to specify the parameter file:

> expdp hr PARFILE=hr.par

QUERY

Default: There is no default

Purpose

Allows you to specify a query clause that is used to filter the data that gets exported.

Syntax and Description

QUERY = [schema.][table_name:] query_clause

The query_clause is typically a SQL WHERE clause for fine-grained row selection, but could be any SQL clause. For example, an ORDER BY clause could be used to speed up a migration from a heap-organized table to an index-organized table. If a schema and table name are not supplied, then the query is applied to (and must be valid for) all tables in the export job. A table-specific query overrides a query applied to all tables.

When the query is to be applied to a specific table, a colon must separate the table name from the query clause. More than one table-specific query can be specified, but only one query can be specified per table.

If the NETWORK_LINK parameter is specified along with the QUERY parameter, then any objects specified in the query_clause that are on the remote (source) node must be explicitly qualified with the NETWORK_LINK value. Otherwise, Data Pump assumes that the object is on the local (target) node; if it is not, then an error is returned and the import of the table from the remote (source) system fails.

For example, if you specify NETWORK_LINK=dblink1, then the query_clause of the QUERY parameter must specify that link, as shown in the following example:

QUERY=(hr.employees:"WHERE last_name IN(SELECT last_name 
FROM hr.employees@dblink1)")

Depending on your operating system, the use of quotation marks when you specify a value for this parameter may also require that you use escape characters. Oracle recommends that you place this parameter in a parameter file, which can reduce the number of escape characters that might otherwise be needed on the command line. See "Use of Quotation Marks On the Data Pump Command Line".

To specify a schema other than your own in a table-specific query, you must be granted access to that specific table.

Restrictions

  • The QUERY parameter cannot be used with the following parameters:

    • CONTENT=METADATA_ONLY

    • ESTIMATE_ONLY

    • TRANSPORT_TABLESPACES

  • When the QUERY parameter is specified for a table, Data Pump uses external tables to unload the target table. The external tables mechanism uses a SQL CREATE TABLE AS SELECT statement. The value of the QUERY parameter is the WHERE clause in the SELECT portion of the CREATE TABLE statement. If the QUERY parameter includes references to another table with columns whose names match the table being unloaded, and if those columns are used in the query, then you will need to use a table alias to distinguish between columns in the table being unloaded and columns in the SELECT statement with the same name. The table alias used by Data Pump for the table being unloaded is KU$.

    For example, suppose you want to export a subset of the sh.sales table based on the credit limit for a customer in the sh.customers table. In the following example, KU$ is used to qualify the cust_id field in the QUERY parameter for unloading sh.sales. As a result, Data Pump exports only rows for customers whose credit limit is greater than $10,000.

    QUERY='sales:"WHERE EXISTS (SELECT cust_id FROM customers c 
       WHERE cust_credit_limit > 10000 AND ku$.cust_id = c.cust_id)"'
    

    If, as in the following query, KU$ is not used for a table alias, then the result will be that all rows are unloaded:

    QUERY='sales:"WHERE EXISTS (SELECT cust_id FROM customers c 
       WHERE cust_credit_limit > 10000 AND cust_id = c.cust_id)"'
    
  • The maximum length allowed for a QUERY string is 4000 bytes including quotation marks, which means that the actual maximum length allowed is 3998 bytes.

Example

The following is an example of using the QUERY parameter:

> expdp hr PARFILE=emp_query.par

The contents of the emp_query.par file are as follows:

QUERY=employees:"WHERE department_id > 10 AND salary > 10000"
NOLOGFILE=YES 
DIRECTORY=dpump_dir1 
DUMPFILE=exp1.dmp 

This example unloads all tables in the hr schema, but only the rows that fit the query expression. In this case, all rows in all tables (except employees) in the hr schema will be unloaded. For the employees table, only rows that meet the query criteria are unloaded.

REMAP_DATA

Default: There is no default

Purpose

The REMAP_DATA parameter allows you to specify a remap function that takes as a source the original value of the designated column and returns a remapped value that will replace the original value in the dump file. A common use for this option is to mask data when moving from a production system to a test system. For example, a column of sensitive customer data such as credit card numbers could be replaced with numbers generated by a REMAP_DATA function. This would allow the data to retain its essential formatting and processing characteristics without exposing private data to unauthorized personnel.

The same function can be applied to multiple columns being dumped. This is useful when you want to guarantee consistency in remapping both the child and parent column in a referential constraint.

Syntax and Description

REMAP_DATA=[schema.]tablename.column_name:[schema.]pkg.function

The description of each syntax element, in the order in which they appear in the syntax, is as follows:

schema -- the schema containing the table to be remapped. By default, this is the schema of the user doing the export.

tablename -- the table whose column will be remapped.

column_name -- the column whose data is to be remapped. The maximum number of columns that can be remapped for a single table is 10.

schema -- the schema containing the PL/SQL package you have created that contains the remapping function. As a default, this is the schema of the user doing the export.

pkg -- the name of the PL/SQL package you have created that contains the remapping function.

function -- the name of the function within the PL/SQL that will be called to remap the column table in each row of the specified table.

Restrictions

  • The datatypes of the source argument and the returned value should both match the data type of the designated column in the table.

  • Remapping functions should not perform commits or rollbacks except in autonomous transactions.

  • The maximum number of columns you can remap on a single table is 10. You can remap 9 columns on table a and 8 columns on table b, and so on, but the maximum for each table is 10.

Example

The following example assumes a package named remap has been created that contains functions named minus10 and plusx which change the values for employee_id and first_name in the employees table.

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=remap1.dmp TABLES=employees
REMAP_DATA=hr.employees.employee_id:hr.remap.minus10 
REMAP_DATA=hr.employees.first_name:hr.remap.plusx 
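The following is a minimal sketch of what such a package might look like; the remapping logic shown here is illustrative only, and the argument and return datatypes must match the columns being remapped:

CREATE OR REPLACE PACKAGE remap AS
  FUNCTION minus10 (n NUMBER) RETURN NUMBER;
  FUNCTION plusx (s VARCHAR2) RETURN VARCHAR2;
END remap;
/
CREATE OR REPLACE PACKAGE BODY remap AS
  -- Shift each employee_id down by 10
  FUNCTION minus10 (n NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN n - 10;
  END;
  -- Mask first_name by appending a character
  FUNCTION plusx (s VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    RETURN s || 'x';
  END;
END remap;
/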

REUSE_DUMPFILES

Default: NO

Purpose

Specifies whether to overwrite a preexisting dump file.

Syntax and Description

REUSE_DUMPFILES=[YES | NO]

Normally, Data Pump Export will return an error if you specify a dump file name that already exists. The REUSE_DUMPFILES parameter allows you to override that behavior and reuse a dump file name. For example, if you performed an export and specified DUMPFILE=hr.dmp and REUSE_DUMPFILES=YES, then hr.dmp would be overwritten if it already existed. Its previous contents would be lost and it would contain data for the current export instead.

Example

The following export operation creates a dump file named enc1.dmp, even if a dump file with that name already exists.

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=enc1.dmp
TABLES=employees REUSE_DUMPFILES=YES

SAMPLE

Default: There is no default

Purpose

Allows you to specify a percentage of the data rows to be sampled and unloaded from the source database.

Syntax and Description

SAMPLE=[[schema_name.]table_name:]sample_percent

This parameter allows you to export subsets of data by specifying the percentage of data to be sampled and exported. The sample_percent indicates the probability that a row will be selected as part of the sample. It does not mean that the database will retrieve exactly that percentage of rows from the table. The value you supply for sample_percent can be anywhere from .000001 up to, but not including, 100.

The sample_percent can be applied to specific tables. In the following example, 50% of the HR.EMPLOYEES table will be exported:

SAMPLE="HR"."EMPLOYEES":50

If you specify a schema, then you must also specify a table. However, you can specify a table without specifying a schema; the current user will be assumed. If no table is specified, then the sample_percent value applies to the entire export job.

Note that you can use this parameter with the Data Pump Import PCTSPACE transform, so that the size of storage allocations matches the sampled data subset. (See "TRANSFORM".)
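For example, a dump file created with SAMPLE=70 (as in the Example below) could later be imported with storage allocations reduced to match the sampled subset (an illustrative command):

> impdp hr DIRECTORY=dpump_dir1 DUMPFILE=sample.dmp TRANSFORM=PCTSPACE:70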

Restrictions

  • The SAMPLE parameter is not valid for network exports.

Example

In the following example, the value 70 for SAMPLE is applied to the entire export job because no table name is specified.

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=sample.dmp SAMPLE=70

SCHEMAS

Default: current user's schema

Purpose

Specifies that you want to perform a schema-mode export. This is the default mode for Export.

Syntax and Description

SCHEMAS=schema_name [, ...]

If you have the DATAPUMP_EXP_FULL_DATABASE role, then you can specify a single schema other than your own or a list of schema names. The DATAPUMP_EXP_FULL_DATABASE role also allows you to export additional nonschema object information for each specified schema so that the schemas can be re-created at import time. This additional information includes the user definitions themselves and all associated system and role grants, user password history, and so on. Filtering can further restrict what is exported using schema mode (see "Filtering During Export Operations").

Restrictions

  • If you do not have the DATAPUMP_EXP_FULL_DATABASE role, then you can specify only your own schema.

  • The SYS schema cannot be used as a source schema for export jobs.

Example

The following is an example of using the SCHEMAS parameter. Note that user hr is allowed to specify more than one schema because the DATAPUMP_EXP_FULL_DATABASE role was previously assigned to it for the purpose of these examples.

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=hr,sh,oe 
 

This results in a schema-mode export in which the schemas, hr, sh, and oe will be written to the expdat.dmp dump file located in the dpump_dir1 directory.

SERVICE_NAME

Default: There is no default

Purpose

Used to specify a service name to be used in conjunction with the CLUSTER parameter.

Syntax and Description

SERVICE_NAME=name

The SERVICE_NAME parameter can be used with the CLUSTER=YES parameter to specify an existing service associated with a resource group that defines a set of Oracle Real Application Clusters (Oracle RAC) instances belonging to that resource group, typically a subset of all the Oracle RAC instances.

The service name is only used to determine the resource group and instances defined for that resource group. The instance where the job is started is always used, regardless of whether it is part of the resource group.

The SERVICE_NAME parameter is ignored if CLUSTER=NO is also specified.

Suppose you have an Oracle RAC configuration containing instances A, B, C, and D. Also suppose that a service named my_service exists with a resource group consisting of instances A, B, and C only. In such a scenario, the following would be true:

  • If you start a Data Pump job on instance A and specify CLUSTER=YES (or accept the default, which is YES) and you do not specify the SERVICE_NAME parameter, then Data Pump creates workers on all instances: A, B, C, and D, depending on the degree of parallelism specified.

  • If you start a Data Pump job on instance A and specify CLUSTER=YES and SERVICE_NAME=my_service, then workers can be started on instances A, B, and C only.

  • If you start a Data Pump job on instance D and specify CLUSTER=YES and SERVICE_NAME=my_service, then workers can be started on instances A, B, C, and D. Even though instance D is not in my_service it is included because it is the instance on which the job was started.

  • If you start a Data Pump job on instance A and specify CLUSTER=NO, then any SERVICE_NAME parameter you specify is ignored and all processes will start on instance A.


See Also:

"CLUSTER"

Example

The following is an example of using the SERVICE_NAME parameter:

expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_svname2.dmp SERVICE_NAME=sales

This example starts a schema-mode export (the default mode) of the hr schema. Even though CLUSTER=YES is not specified on the command line, it is the default behavior, so the job will use all instances in the resource group associated with the service name sales. A dump file named hr_svname2.dmp will be written to the location specified by the dpump_dir1 directory object.

SOURCE_EDITION

Default: the default database edition on the system

Purpose

Specifies the database edition from which objects will be exported.

Syntax and Description

SOURCE_EDITION=edition_name

If SOURCE_EDITION=edition_name is specified, then the objects from that edition are exported. Data Pump selects all inherited objects that have not changed and all actual objects that have changed.

If this parameter is not specified, then the default edition is used. If the specified edition does not exist or is not usable, then an error message is returned.




Restrictions

  • This parameter is only useful if there are two or more versions of the same versionable objects in the database.

  • The job version must be 11.2 or higher. See "VERSION".

Example

The following is an example of using the SOURCE_EDITION parameter:

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=exp_dat.dmp SOURCE_EDITION=exp_edition EXCLUDE=USER

This example assumes the existence of an edition named exp_edition on the system from which objects are being exported. Because no export mode is specified, the default of schema mode will be used. The EXCLUDE=user parameter excludes only the definitions of users, not the objects contained within users' schemas.

STATUS

Default: 0

Purpose

Specifies the frequency at which the job status display is updated.

Syntax and Description

STATUS=[integer]

If you supply a value for integer, it specifies how frequently, in seconds, job status should be displayed in logging mode. If no value is entered or if the default value of 0 is used, then no additional information is displayed beyond information about the completion of each object type, table, or partition.

This status information is written only to your standard output device, not to the log file (if one is in effect).

Example

The following is an example of using the STATUS parameter.

> expdp hr DIRECTORY=dpump_dir1 SCHEMAS=hr,sh STATUS=300

This example will export the hr and sh schemas and display the status of the export every 5 minutes (60 seconds x 5 = 300 seconds).

TABLES

Default: There is no default

Purpose

Specifies that you want to perform a table-mode export.

Syntax and Description

TABLES=[schema_name.]table_name[:partition_name] [, ...]

Filtering can restrict what is exported using this mode (see "Filtering During Export Operations"). You can filter the data and metadata that is exported, by specifying a comma-delimited list of tables and partitions or subpartitions. If a partition name is specified, then it must be the name of a partition or subpartition in the associated table. Only the specified set of tables, partitions, and their dependent objects are unloaded.

If an entire partitioned table is exported, then it will be imported in its entirety, as a partitioned table. The only case in which this is not true is if PARTITION_OPTIONS=DEPARTITION is specified during import.

The table name that you specify can be preceded by a qualifying schema name. The schema defaults to that of the current user. To specify a schema other than your own, you must have the DATAPUMP_EXP_FULL_DATABASE role.

Use of the wildcard character, %, to specify table names and partition names is supported.

The following restrictions apply to table names:

  • By default, table names in a database are stored as uppercase. If you have a table name in mixed-case or lowercase, and you want to preserve case-sensitivity for the table name, then you must enclose the name in quotation marks. The name must exactly match the table name stored in the database.

    Some operating systems require that quotation marks on the command line be preceded by an escape character. The following are examples of how case-sensitivity can be preserved in the different Export modes.

    • In command-line mode:

      TABLES='\"Emp\"'
      
    • In parameter file mode:

      TABLES='"Emp"'
      
  • Table names specified on the command line cannot include a pound sign (#), unless the table name is enclosed in quotation marks. Similarly, in the parameter file, if a table name includes a pound sign (#), then the Export utility interprets the rest of the line as a comment, unless the table name is enclosed in quotation marks.

    For example, if the parameter file contains the following line, then Export interprets everything on the line after emp# as a comment and does not export the tables dept and mydata:

    TABLES=(emp#, dept, mydata)
    

    However, if the parameter file contains the following line, then the Export utility exports all three tables because emp# is enclosed in quotation marks:

    TABLES=('"emp#"', dept, mydata)
    

    Note:

    Some operating systems require single quotation marks rather than double quotation marks, or the reverse. See your Oracle operating system-specific documentation. Different operating systems also have other restrictions on table naming.

    For example, the UNIX C shell attaches a special meaning to a dollar sign ($) or pound sign (#) (or certain other special characters). You must use escape characters to get such characters in the name past the shell and into Export.


Using the Transportable Option During Table-Mode Export

To use the transportable option during a table-mode export, specify the TRANSPORTABLE=ALWAYS parameter with the TABLES parameter. Metadata for the specified tables, partitions, or subpartitions is exported to the dump file. To move the actual data, you copy the data files to the target database.

If only a subset of a table's partitions are exported and the TRANSPORTABLE=ALWAYS parameter is used, then on import each partition becomes a non-partitioned table.
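For instance, a table-mode transportable export of a single partition might look like the following. This is a minimal sketch: the dump file name is illustrative, user sh is assumed to have the DATAPUMP_EXP_FULL_DATABASE role, and the tablespace containing the partition is assumed to be read-only.

> expdp sh DIRECTORY=dpump_dir1 DUMPFILE=sales_q1.dmp
TABLES=sh.sales:sales_Q1_2008 TRANSPORTABLE=ALWAYS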

Restrictions

  • Cross-schema references are not exported. For example, a trigger defined on a table within one of the specified schemas, but that resides in a schema not explicitly specified, is not exported.

  • Types used by the table are not exported in table mode. This means that if you subsequently import the dump file and the type does not already exist in the destination database, then the table creation will fail.

  • The use of synonyms as values for the TABLES parameter is not supported. For example, if the regions table in the hr schema had a synonym of regn, then it would not be valid to use TABLES=regn. An error would be returned.

  • The export of tables that include a wildcard character, %, in the table name is not supported if the table has partitions.

  • The length of the table name list specified for the TABLES parameter is limited to a maximum of 4 MB, unless you are using the NETWORK_LINK parameter to an Oracle Database release 10.2.0.3 or earlier or to a read-only database. In such cases, the limit is 4 KB.

  • You can only specify partitions from one table if TRANSPORTABLE=ALWAYS is also set on the export.

Examples

The following example shows a simple use of the TABLES parameter to export three tables found in the hr schema: employees, jobs, and departments. Because user hr is exporting tables found in the hr schema, the schema name is not needed before the table names.

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=tables.dmp
TABLES=employees,jobs,departments

The following example assumes that user hr has the DATAPUMP_EXP_FULL_DATABASE role. It shows the use of the TABLES parameter to export partitions.

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=tables_part.dmp
TABLES=sh.sales:sales_Q1_2008,sh.sales:sales_Q2_2008

This example exports the partitions, sales_Q1_2008 and sales_Q2_2008, from the table sales in the schema sh.
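Because the wildcard character, %, is supported for table names, a similar export could select tables by name pattern. The following is a sketch only; it assumes the hr schema contains non-partitioned tables whose names begin with EMP, and the dump file name is illustrative.

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=tables_emp.dmp TABLES=emp%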

TABLESPACES

Default: There is no default

Purpose

Specifies a list of tablespace names to be exported in tablespace mode.

Syntax and Description

TABLESPACES=tablespace_name [, ...]

In tablespace mode, only the tables contained in a specified set of tablespaces are unloaded. If a table is unloaded, then its dependent objects are also unloaded. Both object metadata and data are unloaded. If any part of a table resides in the specified set, then that table and all of its dependent objects are exported. Privileged users get all tables. Unprivileged users get only the tables in their own schemas.

Filtering can restrict what is exported using this mode (see "Filtering During Export Operations").

Restrictions

  • The length of the tablespace name list specified for the TABLESPACES parameter is limited to a maximum of 4 MB, unless you are using the NETWORK_LINK parameter to an Oracle Database release 10.2.0.3 or earlier or to a read-only database. In such cases, the limit is 4 KB.

Example

The following is an example of using the TABLESPACES parameter. The example assumes that tablespaces tbs_4, tbs_5, and tbs_6 already exist.

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=tbs.dmp 
TABLESPACES=tbs_4, tbs_5, tbs_6

This results in a tablespace export in which tables (and their dependent objects) from the specified tablespaces (tbs_4, tbs_5, and tbs_6) will be unloaded.

TRANSPORT_FULL_CHECK

Default: NO

Purpose

Specifies whether to check for dependencies between those objects inside the transportable set and those outside the transportable set. This parameter is applicable only to a transportable-tablespace mode export.

Syntax and Description

TRANSPORT_FULL_CHECK=[YES | NO]

If TRANSPORT_FULL_CHECK=YES, then Export verifies that there are no dependencies between those objects inside the transportable set and those outside the transportable set. The check addresses two-way dependencies. For example, if a table is inside the transportable set but its index is not, then a failure is returned and the export operation is terminated. Similarly, a failure is also returned if an index is in the transportable set but the table is not.

If TRANSPORT_FULL_CHECK=NO, then Export verifies only that there are no objects within the transportable set that are dependent on objects outside the transportable set. This check addresses a one-way dependency. For example, a table is not dependent on an index, but an index is dependent on a table, because an index without a table has no meaning. Therefore, if the transportable set contains a table, but not its index, then this check succeeds. However, if the transportable set contains an index, but not the table, then the export operation is terminated.

There are other checks performed as well. For instance, export always verifies that all storage segments of all tables (and their indexes) defined within the tablespace set specified by TRANSPORT_TABLESPACES are actually contained within the tablespace set.

Example

The following is an example of using the TRANSPORT_FULL_CHECK parameter. It assumes that tablespace tbs_1 exists.

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp 
TRANSPORT_TABLESPACES=tbs_1 TRANSPORT_FULL_CHECK=YES LOGFILE=tts.log 
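To run the same job with only the one-way dependency check (the default behavior), the parameter could be set to NO. This is a sketch built directly on the example above, with no new objects assumed:

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp 
TRANSPORT_TABLESPACES=tbs_1 TRANSPORT_FULL_CHECK=NO LOGFILE=tts.log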

TRANSPORT_TABLESPACES

Default: There is no default

Purpose

Specifies that you want to perform an export in transportable-tablespace mode.

Syntax and Description

TRANSPORT_TABLESPACES=tablespace_name [, ...]

Use the TRANSPORT_TABLESPACES parameter to specify a list of tablespace names for which object metadata will be exported from the source database into the target database.

The log file for the export lists the data files that are used in the transportable set, the dump files, and any containment violations.

The TRANSPORT_TABLESPACES parameter exports metadata for all objects within the specified tablespaces. If you want to perform a transportable export of only certain tables, partitions, or subpartitions, then you must use the TABLES parameter with the TRANSPORTABLE=ALWAYS parameter.


Note:

You cannot export transportable tablespaces and then import them into a database at a lower release level. The target database must be at the same or higher release level as the source database.

Restrictions

  • Transportable jobs are not restartable.

  • Transportable jobs are restricted to a degree of parallelism of 1.

  • Transportable tablespace mode requires that you have the DATAPUMP_EXP_FULL_DATABASE role.

  • Transportable mode does not support encrypted columns.

  • The default tablespace of the user performing the export must not be set to one of the tablespaces being transported.

  • The SYS and SYSAUX tablespaces are not transportable.

  • All tablespaces in the transportable set must be set to read-only.

  • If the Data Pump Export VERSION parameter is specified along with the TRANSPORT_TABLESPACES parameter, then the version must be equal to or greater than the Oracle Database COMPATIBLE initialization parameter.

  • The TRANSPORT_TABLESPACES parameter cannot be used in conjunction with the QUERY parameter.

Example 1

The following is an example of using the TRANSPORT_TABLESPACES parameter in a file-based job (rather than network-based). The tablespace tbs_1 is the tablespace being moved. This example assumes that tablespace tbs_1 exists and that it has been set to read-only. This example also assumes that the default tablespace was changed before this export command was issued.

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp
TRANSPORT_TABLESPACES=tbs_1 TRANSPORT_FULL_CHECK=YES LOGFILE=tts.log
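The preparation that this example assumes could be performed beforehand in SQL*Plus with statements along the following lines. This is a sketch; the users tablespace named in the second statement is only an illustration of a default tablespace that is not part of the transportable set.

-- Make the tablespace in the transportable set read-only
ALTER TABLESPACE tbs_1 READ ONLY;

-- Point the exporting user's default tablespace outside the transportable set
ALTER USER hr DEFAULT TABLESPACE users;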

TRANSPORTABLE

Default: NEVER

Purpose

Specifies whether the transportable option should be used during a table mode export (specified with the TABLES parameter) to export metadata for specific tables, partitions, and subpartitions.

Syntax and Description

TRANSPORTABLE = [ALWAYS | NEVER]

The definitions of the allowed values are as follows:

ALWAYS - Instructs the export job to use the transportable option. If transportable is not possible, then the job will fail. The transportable option exports only the metadata for the tables, partitions, or subpartitions specified by the TABLES parameter. You must copy the actual data files to the target database. See "Using Data File Copying to Move Data".

NEVER - Instructs the export job to use either the direct path or external table method to unload data rather than the transportable option. This is the default.


Note:

If you want to export an entire tablespace in transportable mode, then use the TRANSPORT_TABLESPACES parameter.

  • If only a subset of a table's partitions are exported and the TRANSPORTABLE=ALWAYS parameter is used, then on import each partition becomes a non-partitioned table.

  • If only a subset of a table's partitions are exported and the TRANSPORTABLE parameter is not used at all or is set to NEVER (the default), then on import:

    • If PARTITION_OPTIONS=DEPARTITION is used, then each partition included in the dump file set is created as a non-partitioned table.

    • If PARTITION_OPTIONS is not used, then the complete table is created. That is, all the metadata for the complete table is present so that the table definition looks the same on the target system as it did on the source. But only the data that was exported for the specified partitions is inserted into the table.

Restrictions

  • The TRANSPORTABLE parameter is only valid in table mode exports.

  • The user performing a transportable export requires the DATAPUMP_EXP_FULL_DATABASE privilege.

  • Tablespaces associated with tables, partitions, and subpartitions must be read-only.

  • Transportable mode does not export any data. Data is copied when the tablespace data files are copied from the source system to the target system. The tablespaces that must be copied are listed at the end of the log file for the export operation.

  • To make use of the TRANSPORTABLE parameter, the COMPATIBLE initialization parameter must be set to at least 11.0.0.

  • The default tablespace of the user performing the export must not be set to one of the tablespaces being transported.

Example

The following example assumes that the sh user has the DATAPUMP_EXP_FULL_DATABASE role and that table sales2 is partitioned and contained within tablespace tbs2. (The tbs2 tablespace must be set to read-only in the source database.)

> expdp sh DIRECTORY=dpump_dir1 DUMPFILE=tto1.dmp
TABLES=sh.sales2 TRANSPORTABLE=ALWAYS 

After the export completes successfully, you must copy the data files to the target database area. You could then perform an import operation using the PARTITION_OPTIONS and REMAP_SCHEMA parameters to make each of the partitions in sales2 its own table.

> impdp system PARTITION_OPTIONS=DEPARTITION 
TRANSPORT_DATAFILES=oracle/dbs/tbs2 DIRECTORY=dpump_dir1 
DUMPFILE=tto1.dmp REMAP_SCHEMA=sh:dp

VERSION

Default: COMPATIBLE

Purpose

Specifies the version of database objects to be exported (that is, only database objects and attributes that are compatible with the specified release will be exported). This can be used to create a dump file set that is compatible with a previous release of Oracle Database. Note that this does not mean that Data Pump Export can be used with releases of Oracle Database prior to Oracle Database 10g release 1 (10.1). Data Pump Export only works with Oracle Database 10g release 1 (10.1) or later. The VERSION parameter simply allows you to identify the version of the objects being exported.

Syntax and Description

VERSION=[COMPATIBLE | LATEST | version_string]

The legal values for the VERSION parameter are as follows:

  • COMPATIBLE - This is the default value. The version of the metadata corresponds to the database compatibility level. Database compatibility must be set to 9.2 or higher.

  • LATEST - The version of the metadata corresponds to the database release.

  • version_string - A specific database release (for example, 11.2.0). In Oracle Database 11g, this value cannot be lower than 9.2.

Database objects or attributes that are incompatible with the specified release will not be exported. For example, tables containing new datatypes that are not supported in the specified release will not be exported.
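For instance, to produce a dump file set that a release 10.2 target database could import, you might specify a version string as follows. This is a sketch: the dump file name is illustrative and the default schema mode is assumed.

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_v102.dmp VERSION=10.2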

Restrictions

  • Exporting a table with archived LOBs to a database release earlier than 11.2 is not allowed.

  • If the Data Pump Export VERSION parameter is specified along with the TRANSPORT_TABLESPACES parameter, then the value must be equal to or greater than the Oracle Database COMPATIBLE initialization parameter.

Example

The following example shows an export for which the version of the metadata will correspond to the database release:

> expdp hr TABLES=hr.employees VERSION=LATEST DIRECTORY=dpump_dir1
DUMPFILE=emp.dmp NOLOGFILE=YES

Commands Available in Export's Interactive-Command Mode

In interactive-command mode, the current job continues running, but logging to the terminal is suspended and the Export prompt (Export>) is displayed.

To start interactive-command mode, do one of the following:

  • From an attached client, press Ctrl+C.

  • From a terminal other than the one on which the job is running, specify the ATTACH parameter in an expdp command to attach to the job. This is a useful feature in situations in which you start a job at one location and need to check on it at a later time from a different location.

Table 2-1 lists the activities you can perform for the current job from the Data Pump Export prompt in interactive-command mode.

Table 2-1 Supported Activities in Data Pump Export's Interactive-Command Mode

Each entry lists the command followed by the activity it performs:

  • ADD_FILE: Add additional dump files.

  • CONTINUE_CLIENT: Exit interactive mode and enter logging mode.

  • EXIT_CLIENT: Stop the export client session, but leave the job running.

  • FILESIZE: Redefine the default size to be used for any subsequent dump files.

  • HELP: Display a summary of available commands.

  • KILL_JOB: Detach all currently attached client sessions and terminate the current job.

  • PARALLEL: Increase or decrease the number of active worker processes for the current job. This command is valid only in the Enterprise Edition of Oracle Database 11g.

  • START_JOB: Restart a stopped job to which you are attached.

  • STATUS: Display detailed status for the current job and/or set status interval.

  • STOP_JOB: Stop the current job for later restart.



The following are descriptions of the commands available in the interactive-command mode of Data Pump Export.

ADD_FILE

Purpose

Adds additional files or substitution variables to the export dump file set.

Syntax and Description

ADD_FILE=[directory_object:]file_name [,...]

Each file name can have a different directory object. If no directory object is specified, then the default is assumed.

The file_name must not contain any directory path information. However, it can include a substitution variable, %U, which indicates that multiple files may be generated using the specified file name as a template.

The size of the file being added is determined by the setting of the FILESIZE parameter.


See Also:

"File Allocation" for information about the effects of using substitution variables

Example

The following example adds two dump files to the dump file set. A directory object is not specified for the dump file named hr2.dmp, so the default directory object for the job is assumed. A different directory object, dpump_dir2, is specified for the dump file named hr3.dmp.

Export> ADD_FILE=hr2.dmp, dpump_dir2:hr3.dmp
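A file added with a substitution variable acts as a template for multiple files. The following sketch (the directory object and file name template are illustrative) adds a template from which files such as hr_01.dmp, hr_02.dmp, and so on could be generated:

Export> ADD_FILE=dpump_dir2:hr_%U.dmp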

CONTINUE_CLIENT

Purpose

Changes the Export mode from interactive-command mode to logging mode.

Syntax and Description

CONTINUE_CLIENT

In logging mode, status is continually output to the terminal. If the job is currently stopped, then CONTINUE_CLIENT will also cause the client to attempt to start the job.

Example

Export> CONTINUE_CLIENT

EXIT_CLIENT

Purpose

Stops the export client session, exits Export, and discontinues logging to the terminal, but leaves the current job running.

Syntax and Description

EXIT_CLIENT

Because EXIT_CLIENT leaves the job running, you can attach to the job at a later time. To see the status of the job, you can monitor the log file for the job or you can query the USER_DATAPUMP_JOBS view or the V$SESSION_LONGOPS view.
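For example, after exiting the client you could check on the job from SQL*Plus with a query such as the following (a sketch; only columns that exist in the USER_DATAPUMP_JOBS view are selected):

SELECT job_name, operation, state FROM user_datapump_jobs;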

Example

Export> EXIT_CLIENT

FILESIZE

Purpose

Redefines the maximum size of subsequent dump files. If the size is reached for any member of the dump file set, then that file is closed and an attempt is made to create a new file, if the file specification contains a substitution variable or if additional dump files have been added to the job.

Syntax and Description

FILESIZE=integer[B | KB | MB | GB | TB]

The integer can be immediately followed (do not insert a space) by B, KB, MB, GB, or TB (indicating bytes, kilobytes, megabytes, gigabytes, and terabytes respectively). Bytes is the default. The actual size of the resulting file may be rounded down slightly to match the size of the internal blocks used in dump files.

A file size of 0 is equivalent to the maximum file size of 16 TB.

Restrictions

  • The minimum size for a file is ten times the default Data Pump block size, which is 4 kilobytes.

  • The maximum size for a file is 16 terabytes.

Example

Export> FILESIZE=100MB

HELP

Purpose

Provides information about Data Pump Export commands available in interactive-command mode.

Syntax and Description

HELP

Displays information about the commands available in interactive-command mode.

Example

Export> HELP

KILL_JOB

Purpose

Detaches all currently attached client sessions and then terminates the current job. It exits Export and returns to the terminal prompt.

Syntax and Description

KILL_JOB

A job that is terminated using KILL_JOB cannot be restarted. All attached clients, including the one issuing the KILL_JOB command, receive a warning that the job is being terminated by the current user and are then detached. After all clients are detached, the job's process structure is immediately run down and the master table and dump files are deleted. Log files are not deleted.

Example

Export> KILL_JOB

PARALLEL

Purpose

Enables you to increase or decrease the number of active processes (worker and parallel slaves) for the current job.

Syntax and Description

PARALLEL=integer

PARALLEL is available as both a command-line parameter and as an interactive-command mode parameter. You set it to the desired number of parallel processes (worker and parallel slaves). An increase takes effect immediately if there are sufficient files and resources. A decrease does not take effect until an existing process finishes its current task. If the value is decreased, then workers are idled but not deleted until the job exits.


See Also:

"PARALLEL" for more information about parallelism

Restrictions

  • This parameter is valid only in the Enterprise Edition of Oracle Database 11g.

Example

Export> PARALLEL=10

START_JOB

Purpose

Starts the current job to which you are attached.

Syntax and Description

START_JOB

The START_JOB command restarts the current job to which you are attached (the job cannot be currently executing). The job is restarted with no data loss or corruption after an unexpected failure or after you issued a STOP_JOB command, provided the dump file set and master table have not been altered in any way.

Exports done in transportable-tablespace mode are not restartable.

Example

Export> START_JOB

STATUS

Purpose

Displays cumulative status of the job, a description of the current operation, and an estimated completion percentage. It also allows you to reset the display interval for logging mode status.

Syntax and Description

STATUS[=integer]

You have the option of specifying how frequently, in seconds, this status should be displayed in logging mode. If no value is entered or if the default value of 0 is used, then the periodic status display is turned off and status is displayed only once.

This status information is written only to your standard output device, not to the log file (even if one is in effect).

Example

The following example will display the current job status and change the logging mode display interval to five minutes (300 seconds):

Export> STATUS=300

STOP_JOB

Purpose

Stops the current job either immediately or after an orderly shutdown, and exits Export.

Syntax and Description

STOP_JOB[=IMMEDIATE]

If the master table and dump file set are not disturbed when or after the STOP_JOB command is issued, then the job can be attached to and restarted at a later time with the START_JOB command.

To perform an orderly shutdown, use STOP_JOB (without any associated value). A warning requiring confirmation will be issued. An orderly shutdown stops the job after worker processes have finished their current tasks.

To perform an immediate shutdown, specify STOP_JOB=IMMEDIATE. A warning requiring confirmation will be issued. All attached clients, including the one issuing the STOP_JOB command, receive a warning that the job is being stopped by the current user and they will be detached. After all clients are detached, the process structure of the job is immediately run down. That is, the master process will not wait for the worker processes to finish their current tasks. There is no risk of corruption or data loss when you specify STOP_JOB=IMMEDIATE. However, some tasks that were incomplete at the time of shutdown may have to be redone at restart time.

Example

Export> STOP_JOB=IMMEDIATE
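For an orderly shutdown, the value is omitted and the confirmation prompt is answered. The prompt text below mirrors the one shown in Example 2-6 and may vary slightly by release:

Export> STOP_JOB
Are you sure you wish to stop this job ([y]/n): y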

Examples of Using Data Pump Export

This section provides the following examples of using Data Pump Export:

For information that will help you to successfully use these examples, see "Using the Export Parameter Examples".

Performing a Table-Mode Export

Example 2-1 shows a table-mode export, specified using the TABLES parameter. Issue the following Data Pump export command to perform a table export of the tables employees and jobs from the human resources (hr) schema:

Example 2-1 Performing a Table-Mode Export

expdp hr TABLES=employees,jobs DUMPFILE=dpump_dir1:table.dmp NOLOGFILE=YES

Because user hr is exporting tables in his own schema, it is not necessary to specify the schema name for the tables. The NOLOGFILE=YES parameter indicates that an Export log file of the operation will not be generated.

Data-Only Unload of Selected Tables and Rows

Example 2-2 shows the contents of a parameter file (exp.par) that you could use to perform a data-only unload of all tables in the human resources (hr) schema except for the tables countries and regions. Rows in the employees table are unloaded that have a department_id other than 50. The rows are ordered by employee_id.

Example 2-2 Data-Only Unload of Selected Tables and Rows

DIRECTORY=dpump_dir1
DUMPFILE=dataonly.dmp
CONTENT=DATA_ONLY
EXCLUDE=TABLE:"IN ('COUNTRIES', 'REGIONS')"
QUERY=employees:"WHERE department_id !=50 ORDER BY employee_id"

You can issue the following command to execute the exp.par parameter file:

> expdp hr PARFILE=exp.par

A schema-mode export (the default mode) is performed, but the CONTENT parameter effectively limits the export to an unload of just the table data. The DBA previously created the directory object dpump_dir1, which points to the directory on the server where user hr is authorized to read and write export dump files. The dump file dataonly.dmp is created in dpump_dir1.

Estimating Disk Space Needed in a Table-Mode Export

Example 2-3 shows the use of the ESTIMATE_ONLY parameter to estimate the space that would be consumed in a table-mode export, without actually performing the export operation. Issue the following command to use the BLOCKS method to estimate the number of bytes required to export the data in the following three tables located in the human resource (hr) schema: employees, departments, and locations.

Example 2-3 Estimating Disk Space Needed in a Table-Mode Export

> expdp hr DIRECTORY=dpump_dir1 ESTIMATE_ONLY=YES TABLES=employees, 
departments, locations LOGFILE=estimate.log

The estimate is printed in the log file and displayed on the client's standard output device. The estimate is for table row data only; it does not include metadata.

Performing a Schema-Mode Export

Example 2-4 shows a schema-mode export of the hr schema. In a schema-mode export, only objects belonging to the corresponding schemas are unloaded. Because schema mode is the default mode, it is not necessary to specify the SCHEMAS parameter on the command line, unless you are specifying more than one schema or a schema other than your own.

Example 2-4 Performing a Schema Mode Export

> expdp hr DUMPFILE=dpump_dir1:expschema.dmp LOGFILE=dpump_dir1:expschema.log

Performing a Parallel Full Database Export

Example 2-5 shows a full database Export that will have up to 3 parallel processes (worker or PQ slaves).

Example 2-5 Parallel Full Export

> expdp hr FULL=YES DUMPFILE=dpump_dir1:full1%U.dmp, dpump_dir2:full2%U.dmp
FILESIZE=2G PARALLEL=3 LOGFILE=dpump_dir1:expfull.log JOB_NAME=expfull

Because this is a full database export, all data and metadata in the database will be exported. Dump files full101.dmp, full201.dmp, full102.dmp, and so on will be created in a round-robin fashion in the directories pointed to by the dpump_dir1 and dpump_dir2 directory objects. For best performance, these should be on separate I/O channels. Each file will be up to 2 gigabytes in size, as necessary. Initially, up to three files will be created. More files will be created, if needed. The job and master table will have a name of expfull. The log file will be written to expfull.log in the dpump_dir1 directory.

Using Interactive Mode to Stop and Reattach to a Job

To start this example, reexecute the parallel full export in Example 2-5. While the export is running, press Ctrl+C. This will start the interactive-command interface of Data Pump Export. In the interactive interface, logging to the terminal stops and the Export prompt is displayed.

Example 2-6 Stopping and Reattaching to a Job

At the Export prompt, issue the following command to stop the job:

Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([y]/n): y

The job is placed in a stopped state, and the client exits.

Enter the following command to reattach to the job you just stopped:

> expdp hr ATTACH=EXPFULL

After the job status is displayed, you can issue the CONTINUE_CLIENT command to resume logging mode and restart the expfull job.

Export> CONTINUE_CLIENT

A message is displayed that the job has been reopened, and processing status is output to the client.


13 External Tables Concepts

The external tables feature is a complement to existing SQL*Loader functionality. It enables you to access data in external sources as if it were in a table in the database.

Note that SQL*Loader may be the better choice in data loading situations that require additional indexing of the staging table. See "Behavior Differences Between SQL*Loader and External Tables" for more information about how load behavior differs between SQL*Loader and external tables.

This chapter discusses the following topics:


See Also:

Oracle Database Administrator's Guide for additional information about creating and managing external tables

How Are External Tables Created?

External tables are created using the SQL CREATE TABLE...ORGANIZATION EXTERNAL statement. When you create an external table, you specify the following attributes:

  • TYPE - specifies the type of external table. The two available types are the ORACLE_LOADER type and the ORACLE_DATAPUMP type. Each type of external table is supported by its own access driver.

    • The ORACLE_LOADER access driver is the default. It loads data from external tables to internal tables. The data must come from text data files. (The ORACLE_LOADER access driver cannot perform unloads; that is, it cannot move data from an internal table to an external table.)

    • The ORACLE_DATAPUMP access driver can perform both loads and unloads. The data must come from binary dump files. Loads to internal tables from external tables are done by fetching from the binary dump files. Unloads from internal tables to external tables are done by populating the binary dump files of the external table. The ORACLE_DATAPUMP access driver can write dump files only as part of creating an external table with the SQL CREATE TABLE AS SELECT statement. Once the dump file is created, it can be read any number of times, but it cannot be modified (that is, no DML operations can be performed). An unload of this kind is sketched after the CREATE TABLE example below.

  • DEFAULT DIRECTORY - specifies the default directory to use for all input and output files that do not explicitly name a directory object. The location is specified with a directory object, not a directory path. You must create the directory object before you create the external table; otherwise, an error is generated. See "Location of Data Files and Output Files" for more information.

  • ACCESS PARAMETERS - describe the external data source and implement the type of external table that was specified. Each type of external table has its own access driver that provides access parameters unique to that type of external table. Access parameters are optional. See "Access Parameters".

  • LOCATION - specifies the data files for the external table. The files are named in the form directory:file. The directory portion is optional. If it is missing, then the default directory is used as the directory for the file.

The following example shows the use of each of these attributes (it assumes that the default directory def_dir1 already exists):

SQL> CREATE TABLE emp_load
  2    (employee_number      CHAR(5),
  3     employee_dob         CHAR(20),
  4     employee_last_name   CHAR(20),
  5     employee_first_name  CHAR(15),
  6     employee_middle_name CHAR(15),
  7     employee_hire_date   DATE)
  8  ORGANIZATION EXTERNAL
  9    (TYPE ORACLE_LOADER
 10     DEFAULT DIRECTORY def_dir1
 11     ACCESS PARAMETERS
 12       (RECORDS DELIMITED BY NEWLINE
 13        FIELDS (employee_number      CHAR(2),
 14                employee_dob         CHAR(20),
 15                employee_last_name   CHAR(18),
 16                employee_first_name  CHAR(11),
 17                employee_middle_name CHAR(11),
 18                employee_hire_date   CHAR(10) date_format DATE mask "mm/dd/yyyy"
 19               )
 20       )
 21     LOCATION ('info.dat')
 22    );
 
Table created.

The information you provide through the access driver ensures that data from the data source is processed so that it matches the definition of the external table. The fields listed after CREATE TABLE emp_load are actually defining the metadata for the data in the info.dat source file.
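The example above uses the ORACLE_LOADER access driver to read a text data file. For the ORACLE_DATAPUMP access driver, the dump file is written as part of a CREATE TABLE AS SELECT statement. The following is a minimal sketch of such an unload; it assumes the same def_dir1 directory object, and the external table name, dump file name, and source table (hr.employees from the sample schemas) are illustrative only:

CREATE TABLE emp_unload_xt
  ORGANIZATION EXTERNAL
    (TYPE ORACLE_DATAPUMP
     DEFAULT DIRECTORY def_dir1
     LOCATION ('emp_unload.dmp'))
  AS SELECT employee_id, last_name, hire_date
     FROM hr.employees;

Once written, emp_unload.dmp can be read by any external table that names it in a LOCATION clause, but it cannot be modified.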

Location of Data Files and Output Files

The access driver runs inside the database server. This is different from SQL*Loader, which is a client program that sends the data to be loaded over to the server. This difference has the following implications:

  • The server must have access to any files to be loaded by the access driver.

  • The server must create and write the output files created by the access driver: the log file, bad file, discard file, and also any dump files created by the ORACLE_DATAPUMP access driver.

The access driver requires that a directory object be used to specify the location from which to read and write files. A directory object maps a name to a directory name on the file system. For example, the following statement creates a directory object named ext_tab_dir that is mapped to a directory located at /usr/apps/datafiles.

CREATE DIRECTORY ext_tab_dir AS '/usr/apps/datafiles';

Directory objects can be created by DBAs or by any user with the CREATE ANY DIRECTORY privilege.


Note:

To use external tables in an Oracle Real Application Clusters (Oracle RAC) configuration, you must ensure that the directory object path is on a cluster-wide file system.

After a directory is created, the user creating the directory object needs to grant READ and WRITE privileges on the directory to other users. These privileges must be explicitly granted, rather than assigned through the use of roles. For example, to allow the server to read files on behalf of user scott in the directory named by ext_tab_dir, the user who created the directory object must execute the following command:

GRANT READ ON DIRECTORY ext_tab_dir TO scott;

The SYS user is the only user that can own directory objects, but the SYS user can grant other users the privilege to create directory objects. Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories.
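Because the access driver also creates log, bad, and discard files on the user's behalf, READ and WRITE are typically granted together. A minimal sketch, reusing the ext_tab_dir directory object and the user scott from the examples above:

GRANT READ, WRITE ON DIRECTORY ext_tab_dir TO scott;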

Access Parameters

When you create an external table of a particular type, you can specify access parameters to modify the default behavior of the access driver. Each access driver has its own syntax for access parameters. Oracle provides two access drivers for use with external tables: ORACLE_LOADER and ORACLE_DATAPUMP.


Note:

These access parameters are collectively referred to as the opaque_format_spec in the SQL CREATE TABLE...ORGANIZATION EXTERNAL statement.


See Also:


Datatype Conversion During External Table Use

When data is moved into or out of an external table, it is possible that the same column will have a different datatype in each of the following three places:

  • The database: This is the source when data is unloaded into an external table and it is the destination when data is loaded from an external table.

  • The external table: When data is unloaded into an external table, the data from the database is converted, if necessary, to match the datatype of the column in the external table. Also, you can apply SQL operators to the source data to change its datatype before the data gets moved to the external table. Similarly, when loading from the external table into a database, the data from the external table is automatically converted to match the datatype of the column in the database. Again, you can perform other conversions by using SQL operators in the SQL statement that is selecting from the external table. For better performance, the datatypes in the external table should match those in the database.

  • The data file: When you unload data into an external table, the datatypes for fields in the data file exactly match the datatypes of fields in the external table. However, when you load data from the external table, the datatypes in the data file may not match the datatypes in the external table. In this case, the data from the data file is converted to match the datatypes of the external table. If there is an error converting a column, then the record containing that column is not loaded. For better performance, the datatypes in the data file should match the datatypes in the external table.

Any conversion errors that occur between the data file and the external table cause the row with the error to be ignored. Any errors between the external table and the column in the database (including conversion errors and constraint violations) cause the entire operation to terminate unsuccessfully.
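As an illustration of handling a datatype difference in the SELECT statement rather than in the access parameters, the following sketch loads rows from the emp_load external table created earlier into an internal table, converting the character employee number with TO_NUMBER. The internal table employees_int is hypothetical and used only for illustration:

-- employees_int is a hypothetical internal table used only for illustration.
INSERT INTO employees_int (employee_id, last_name, hire_date)
  SELECT TO_NUMBER(employee_number), employee_last_name, employee_hire_date
  FROM emp_load;

If TO_NUMBER fails for a row, the INSERT statement as a whole fails, consistent with the error behavior described above.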

When data is unloaded into an external table, data conversion occurs if the datatype of a column in the source table does not match the datatype of the column in the external table. If a conversion error occurs, then the data file may not contain all the rows that were processed up to that point and the data file will not be readable. To avoid problems with conversion errors causing the operation to fail, the datatype of the column in the external table should match the datatype of the column in the database. This is not always possible, because external tables do not support all datatypes. In these cases, the unsupported datatypes in the source table must be converted into a datatype that the external table can support. For example, if a source table has a LONG column, then the corresponding column in the external table must be a CLOB and the SELECT subquery that is used to populate the external table must use the TO_LOB operator to load the column. For example:

CREATE TABLE LONG_TAB_XT (LONG_COL CLOB) ORGANIZATION EXTERNAL...SELECT TO_LOB(LONG_COL) FROM LONG_TAB;
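The statement above is abbreviated. A fuller sketch of the same pattern, assuming the ORACLE_DATAPUMP access driver (the only access driver that performs unloads), the def_dir1 directory object used earlier, and an illustrative dump file name:

CREATE TABLE LONG_TAB_XT
  ORGANIZATION EXTERNAL
    (TYPE ORACLE_DATAPUMP
     DEFAULT DIRECTORY def_dir1
     LOCATION ('long_tab_xt.dmp'))   -- dump file name is illustrative
  AS SELECT TO_LOB(LONG_COL) LONG_COL FROM LONG_TAB;

The TO_LOB operator converts the LONG column to a CLOB, so the resulting external table column is a CLOB as in the abbreviated statement.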

External Table Restrictions

This section lists what the external tables feature does not do and also describes some processing restrictions.

  • Exporting and importing of external tables with encrypted columns is not supported.

  • An external table does not describe any data that is stored in the database.

  • An external table does not describe how data is stored in the external source. This is the function of the access parameters.

  • Column processing: By default, the external tables feature fetches all columns defined for an external table. This guarantees a consistent result set for all queries. However, for performance reasons you can decide to process only the referenced columns of an external table, thus minimizing the amount of data conversion and data handling required to execute a query. In this case, a row that is rejected because a column in the row causes a datatype conversion error will not get rejected in a different query if the query does not reference that column. You can change this column-processing behavior with the ALTER TABLE command.

  • An external table cannot load data into a LONG column.

  • SQL strings cannot be specified in access parameters for the ORACLE_LOADER access driver. As a workaround, you can use the DECODE clause in the SELECT clause of the statement that is reading the external table. Alternatively, you can create a view of the external table that uses the DECODE clause and select from that view rather than the external table. A sketch of the view-based workaround follows this list.

  • When identifiers (for example, column or table names) are specified in the external table access parameters, certain values are considered to be reserved words by the access parameter parser. If a reserved word is used as an identifier, then it must be enclosed in double quotation marks.
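The following is a sketch of the view-based DECODE workaround mentioned in the list above. The external table sales_xt, its order_id and status_code columns, and the code values are purely illustrative:

-- sales_xt, order_id, status_code, and the code values are hypothetical.
CREATE VIEW sales_xt_v AS
  SELECT order_id,
         DECODE(status_code, 'A', 'ACTIVE',
                             'C', 'CLOSED',
                                  'UNKNOWN') AS status_text
  FROM sales_xt;

Queries then select from sales_xt_v instead of sales_xt, keeping the transformation out of the ORACLE_LOADER access parameters.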

Oracle Database Utilities, 11g Release 2 (11.2)

Contents

List of Examples

List of Figures

List of Tables

Title and Copyright Information

Preface

What's New in Database Utilities?

Part I Oracle Data Pump

1 Overview of Oracle Data Pump

2 Data Pump Export

3 Data Pump Import

4 Data Pump Legacy Mode

5 Data Pump Performance

6 The Data Pump API

Part II SQL*Loader

7 SQL*Loader Concepts

8 SQL*Loader Command-Line Reference

9 SQL*Loader Control File Reference

10 SQL*Loader Field List Reference

11 Loading Objects, LOBs, and Collections

12 Conventional and Direct Path Loads

Part III External Tables

13 External Tables Concepts

14 The ORACLE_LOADER Access Driver

15 The ORACLE_DATAPUMP Access Driver

Part IV Other Utilities

16 ADRCI: ADR Command Interpreter

17 DBVERIFY: Offline Database Verification Utility

18 DBNEWID Utility

19 Using LogMiner to Analyze Redo Log Files

20 Using the Metadata APIs

21 Original Export

22 Original Import

Part V Appendixes

A SQL*Loader Syntax Diagrams

Index


Part III

External Tables

The chapters in this part describe the use of external tables.

Chapter 13, "External Tables Concepts"

This chapter describes basic concepts about external tables.

Chapter 14, "The ORACLE_LOADER Access Driver"

This chapter describes the ORACLE_LOADER access driver.

Chapter 15, "The ORACLE_DATAPUMP Access Driver"

This chapter describes the ORACLE_DATAPUMP access driver, including its parameters, and information about loading and unloading supported data types.


A SQL*Loader Syntax Diagrams

This appendix describes SQL*Loader syntax in graphic form (sometimes called railroad diagrams or DDL diagrams). For information about the syntax notation used, see the Oracle Database SQL Language Reference.

The following diagrams are shown with certain clauses collapsed (such as pos_spec). These diagrams are expanded and explained further along in the appendix.

Options Clause

Description of options.gif follows
Description of the illustration options.gif

Load Statement

Description of load_statement.gif follows
Description of the illustration load_statement.gif

infile_clause

Description of infile_clause.gif follows
Description of the illustration infile_clause.gif

concatenate_clause

Description of concatenate.gif follows
Description of the illustration concatenate.gif

into_table_clause

Description of intotab_clause.gif follows
Description of the illustration intotab_clause.gif

field_condition

Description of fld_cond.gif follows
Description of the illustration fld_cond.gif

delim_spec

Description of delim_spec.gif follows
Description of the illustration delim_spec.gif

full_fieldname

Description of fieldname.gif follows
Description of the illustration fieldname.gif

termination_spec

Description of terminat.gif follows
Description of the illustration terminat.gif

enclosure_spec

Description of enclose.gif follows
Description of the illustration enclose.gif

oid_spec

Description of oid_spec.gif follows
Description of the illustration oid_spec.gif

sid_spec

Description of sid_spec.gif follows
Description of the illustration sid_spec.gif

xmltype_spec

Description of xmltype_spec.gif follows
Description of the illustration xmltype_spec.gif

field_list

Description of field_list.gif follows
Description of the illustration field_list.gif

dgen_fld_spec

Description of dgen_fld.gif follows
Description of the illustration dgen_fld.gif

ref_spec

Description of ref.gif follows
Description of the illustration ref.gif

init_spec

Description of init.gif follows
Description of the illustration init.gif

bfile_spec

Description of bfile.gif follows
Description of the illustration bfile.gif

filler_fld_spec

Description of filler_fld.gif follows
Description of the illustration filler_fld.gif

scalar_fld_spec

Description of scalar.gif follows
Description of the illustration scalar.gif

lobfile_spec

Description of lobfile_spec.gif follows
Description of the illustration lobfile_spec.gif

pos_spec

Description of pos_spec.gif follows
Description of the illustration pos_spec.gif

datatype_spec

Description of datatype_spec.gif follows
Description of the illustration datatype_spec.gif

datatype_spec_cont

Description of datatype_spec_cont.gif follows
Description of the illustration datatype_spec_cont.gif

col_obj_fld_spec

Description of col_obj_fld_spec.gif follows
Description of the illustration col_obj_fld_spec.gif

collection_fld_spec

Description of coll_fld.gif follows
Description of the illustration coll_fld.gif

nested_table_spec

Description of nested_table.gif follows
Description of the illustration nested_table.gif

varray_spec

Description of varray.gif follows
Description of the illustration varray.gif

sdf_spec

Description of sdf.gif follows
Description of the illustration sdf.gif

count_spec

Description of count.gif follows
Description of the illustration count.gif


List of Tables


17 DBVERIFY: Offline Database Verification Utility

DBVERIFY is an external command-line utility that performs a physical data structure integrity check.

DBVERIFY can be used on offline or online databases, as well as on backup files. You use DBVERIFY primarily when you need to ensure that a backup database (or data file) is valid before it is restored, or as a diagnostic aid when you have encountered data corruption problems. Because DBVERIFY can be run against an offline database, integrity checks are significantly faster.

DBVERIFY checks are limited to cache-managed blocks (that is, data blocks). Because DBVERIFY is only for use with data files, it does not work against control files or redo logs.

There are two command-line interfaces to DBVERIFY. With the first interface, you specify disk blocks of a single data file for checking. With the second interface, you specify a segment for checking. Both interfaces are started with the dbv command. The following sections provide descriptions of these interfaces:

Using DBVERIFY to Validate Disk Blocks of a Single Data File

In this mode, DBVERIFY scans one or more disk blocks of a single data file and performs page checks.


Note:

If the file you are verifying is an Oracle Automatic Storage Management (Oracle ASM) file, then you must supply a USERID. This is because DBVERIFY needs to connect to an Oracle instance to access Oracle ASM files.

Syntax

The syntax for DBVERIFY when you want to validate disk blocks of a single data file is as follows:

Description of dbverify.gif follows
Description of the illustration dbverify.gif

Parameters

Descriptions of the parameters are as follows:

USERID - Specifies your username and password. This parameter is only necessary when the files being verified are Oracle ASM files.

FILE - The name of the database file to verify.

START - The starting block address to verify. Specify block addresses in Oracle blocks (as opposed to operating system blocks). If you do not specify START, then DBVERIFY defaults to the first block in the file.

END - The ending block address to verify. If you do not specify END, then DBVERIFY defaults to the last block in the file.

BLOCKSIZE - Required only if the file to be verified does not have a block size of 2 KB. If the file does not have a block size of 2 KB and you do not specify BLOCKSIZE, then you will receive the error DBV-00103.

HIGH_SCN - When a value is specified for HIGH_SCN, DBVERIFY writes diagnostic messages for each block whose block-level SCN exceeds the value specified. This parameter is optional. There is no default.

LOGFILE - Specifies the file to which logging information should be written. The default sends output to the terminal display.

FEEDBACK - Causes DBVERIFY to send a progress display to the terminal in the form of a single period (.) for n number of pages verified during the DBVERIFY run. If n = 0, then there is no progress display.

HELP - Provides online help.

PARFILE - Specifies the name of the parameter file to use. You can store various values for DBVERIFY parameters in flat files. This enables you to customize parameter files to handle different types of data files and to perform specific types of integrity checks on data files.
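For example, a parameter file might contain entries such as the following; the file name dbv_users01.par, the data file name, and the values shown are illustrative only:

FILE=users01.dbf
BLOCKSIZE=8192
FEEDBACK=100
LOGFILE=dbv_users01.log

DBVERIFY is then invoked with the parameter file:

% dbv PARFILE=dbv_users01.par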

Sample DBVERIFY Output For a Single Data File

The following is a sample verification of the file t_db1.dbf. The FEEDBACK parameter has been given the value 100 to display one period (.) for every 100 pages processed. A portion of the resulting output is also shown.

% dbv FILE=t_db1.dbf FEEDBACK=100
.
.
.
DBVERIFY - Verification starting : FILE = t_db1.dbf 

................................................................................
 

DBVERIFY - Verification complete 
 
Total Pages Examined         : 9216 
Total Pages Processed (Data) : 2044 
Total Pages Failing   (Data) : 0 
Total Pages Processed (Index): 733 
Total Pages Failing   (Index): 0 
Total Pages Empty            : 5686 
Total Pages Marked Corrupt   : 0 

Total Pages Influx           : 0 

Notes:

  • Pages = Blocks

  • Total Pages Examined = number of blocks in the file

  • Total Pages Processed = number of blocks that were verified (formatted blocks)

  • Total Pages Failing (Data) = number of blocks that failed the data block checking routine

  • Total Pages Failing (Index) = number of blocks that failed the index block checking routine

  • Total Pages Marked Corrupt = number of blocks for which the cache header is invalid, thereby making it impossible for DBVERIFY to identify the block type

  • Total Pages Influx = number of blocks that are being read and written to at the same time. If the database is open when DBVERIFY is run, then DBVERIFY reads blocks multiple times to get a consistent image. But because the database is open, there may be blocks that are being read and written to at the same time (INFLUX). DBVERIFY cannot get a consistent image of pages that are in flux.

Using DBVERIFY to Validate a Segment

In this mode, DBVERIFY enables you to specify a table segment or index segment for verification. It checks to ensure that a row chain pointer is within the segment being verified.

This mode requires that you specify a segment (data or index) to be validated. It also requires that you log on to the database with SYSDBA privileges, because information about the segment must be retrieved from the database.

During this mode, the segment is locked. If the specified segment is an index, then the parent table is locked. Note that some indexes, such as IOTs, do not have parent tables.

Syntax

The syntax for DBVERIFY when you want to validate a segment is as follows:

Description of dbverify_seg.gif follows
Description of the illustration dbverify_seg.gif

Parameters

Descriptions of the parameters are as follows:

USERID - Specifies your username and password.

SEGMENT_ID - Specifies the segment to verify. It is composed of the tablespace ID number (tsn), segment header file number (segfile), and segment header block number (segblock). You can get this information from SYS_USER_SEGS. The relevant columns are TABLESPACE_ID, HEADER_FILE, and HEADER_BLOCK. You must have SYSDBA privileges to query SYS_USER_SEGS.

HIGH_SCN - When a value is specified for HIGH_SCN, DBVERIFY writes diagnostic messages for each block whose block-level SCN exceeds the value specified. This parameter is optional. There is no default.

LOGFILE - Specifies the file to which logging information should be written. The default sends output to the terminal display.

FEEDBACK - Causes DBVERIFY to send a progress display to the terminal in the form of a single period (.) for n number of pages verified during the DBVERIFY run. If n = 0, then there is no progress display.

HELP - Provides online help.

PARFILE - Specifies the name of the parameter file to use. You can store various values for DBVERIFY parameters in flat files. This enables you to customize parameter files to handle different types of data files and to perform specific types of integrity checks on data files.
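As a sketch of how these parameters fit together, the three SEGMENT_ID components can be retrieved from SYS_USER_SEGS and then passed to dbv. The predicate used to identify the segment, the credentials, and the resulting value 1.2.67 are illustrative only:

-- Run as a user with SYSDBA privileges.
-- Filtering by SEGMENT_NAME is an assumption; adjust the predicate to identify your segment.
SELECT tablespace_id, header_file, header_block
FROM   sys_user_segs
WHERE  segment_name = 'EMP';

If the query returns 1, 2, and 67, the segment is verified with the following command, where the username and password are placeholders for an account with SYSDBA privileges:

% dbv USERID=username/password SEGMENT_ID=1.2.67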

Sample DBVERIFY Output For a Validated Segment

The following is a sample of the output that would be shown for a DBVERIFY operation to validate SEGMENT_ID 1.2.67.

DBVERIFY - Verification starting : SEGMENT_ID = 1.2.67
 
DBVERIFY - Verification complete
 
Total Pages Examined         : 8
Total Pages Processed (Data) : 0
Total Pages Failing   (Data) : 0
Total Pages Processed (Index): 1
Total Pages Failing   (Index): 0
Total Pages Processed (Other): 2
Total Pages Processed (Seg)  : 1
Total Pages Failing   (Seg)  : 0
Total Pages Empty            : 4
Total Pages Marked Corrupt   : 0
Total Pages Influx           : 0
Highest block SCN            : 7358 (0.7358)