Oracle® Streams Concepts and Administration, 11g Release 2 (11.2)

22 Monitoring an Oracle Streams Environment

This chapter lists the static data dictionary views and dynamic performance views related to Oracle Streams. You can use these views to monitor your Oracle Streams environment.

The following sections list the data dictionary views for monitoring an Oracle Streams environment:


Note:

The Oracle Streams tool in Oracle Enterprise Manager is also an excellent way to monitor an Oracle Streams environment. See Oracle Database 2 Day + Data Replication and Integration Guide and the online Help for the Oracle Streams tool for more information.


See Also:

Oracle Database Reference for information about the data dictionary views described in this chapter

Summary of Oracle Streams Static Data Dictionary Views

Table 22-1 lists the Oracle Streams static data dictionary views.

Table 22-1 Oracle Streams Static Data Dictionary Views

ALL_ Views | DBA_ Views | USER_ Views
---------- | ---------- | -----------
ALL_APPLY | DBA_APPLY | N/A
ALL_APPLY_CHANGE_HANDLERS | DBA_APPLY_CHANGE_HANDLERS | N/A
ALL_APPLY_CONFLICT_COLUMNS | DBA_APPLY_CONFLICT_COLUMNS | N/A
ALL_APPLY_DML_HANDLERS | DBA_APPLY_DML_HANDLERS | N/A
ALL_APPLY_ENQUEUE | DBA_APPLY_ENQUEUE | N/A
ALL_APPLY_ERROR | DBA_APPLY_ERROR | N/A
ALL_APPLY_EXECUTE | DBA_APPLY_EXECUTE | N/A
N/A | DBA_APPLY_INSTANTIATED_GLOBAL | N/A
N/A | DBA_APPLY_INSTANTIATED_OBJECTS | N/A
N/A | DBA_APPLY_INSTANTIATED_SCHEMAS | N/A
ALL_APPLY_KEY_COLUMNS | DBA_APPLY_KEY_COLUMNS | N/A
N/A | DBA_APPLY_OBJECT_DEPENDENCIES | N/A
ALL_APPLY_PARAMETERS | DBA_APPLY_PARAMETERS | N/A
ALL_APPLY_PROGRESS | DBA_APPLY_PROGRESS | N/A
N/A | DBA_APPLY_SPILL_TXN | N/A
ALL_APPLY_TABLE_COLUMNS | DBA_APPLY_TABLE_COLUMNS | N/A
N/A | DBA_APPLY_VALUE_DEPENDENCIES | N/A
ALL_CAPTURE | DBA_CAPTURE | N/A
ALL_CAPTURE_EXTRA_ATTRIBUTES | DBA_CAPTURE_EXTRA_ATTRIBUTES | N/A
ALL_CAPTURE_PARAMETERS | DBA_CAPTURE_PARAMETERS | N/A
ALL_CAPTURE_PREPARED_DATABASE | DBA_CAPTURE_PREPARED_DATABASE | N/A
ALL_CAPTURE_PREPARED_SCHEMAS | DBA_CAPTURE_PREPARED_SCHEMAS | N/A
ALL_CAPTURE_PREPARED_TABLES | DBA_CAPTURE_PREPARED_TABLES | N/A
N/A | DBA_COMPARISON | USER_COMPARISON
N/A | DBA_COMPARISON_COLUMNS | USER_COMPARISON_COLUMNS
N/A | DBA_COMPARISON_ROW_DIF | USER_COMPARISON_ROW_DIF
N/A | DBA_COMPARISON_SCAN | USER_COMPARISON_SCAN
N/A | DBA_COMPARISON_SCAN_VALUES | USER_COMPARISON_SCAN_VALUES
ALL_EVALUATION_CONTEXT_TABLES | DBA_EVALUATION_CONTEXT_TABLES | USER_EVALUATION_CONTEXT_TABLES
ALL_EVALUATION_CONTEXT_VARS | DBA_EVALUATION_CONTEXT_VARS | USER_EVALUATION_CONTEXT_VARS
ALL_EVALUATION_CONTEXTS | DBA_EVALUATION_CONTEXTS | USER_EVALUATION_CONTEXTS
ALL_FILE_GROUP_EXPORT_INFO | DBA_FILE_GROUP_EXPORT_INFO | USER_FILE_GROUP_EXPORT_INFO
ALL_FILE_GROUP_FILES | DBA_FILE_GROUP_FILES | USER_FILE_GROUP_FILES
ALL_FILE_GROUP_TABLES | DBA_FILE_GROUP_TABLES | USER_FILE_GROUP_TABLES
ALL_FILE_GROUP_TABLESPACES | DBA_FILE_GROUP_TABLESPACES | USER_FILE_GROUP_TABLESPACES
ALL_FILE_GROUP_VERSIONS | DBA_FILE_GROUP_VERSIONS | USER_FILE_GROUP_VERSIONS
ALL_FILE_GROUPS | DBA_FILE_GROUPS | USER_FILE_GROUPS
N/A | DBA_HIST_STREAMS_APPLY_SUM | N/A
N/A | DBA_HIST_STREAMS_CAPTURE | N/A
N/A | DBA_HIST_STREAMS_POOL_ADVICE | N/A
ALL_PROPAGATION | DBA_PROPAGATION | N/A
N/A | DBA_RECOVERABLE_SCRIPT | N/A
N/A | DBA_RECOVERABLE_SCRIPT_BLOCKS | N/A
N/A | DBA_RECOVERABLE_SCRIPT_ERRORS | N/A
N/A | DBA_RECOVERABLE_SCRIPT_HIST | N/A
N/A | DBA_RECOVERABLE_SCRIPT_PARAM | N/A
N/A | DBA_REGISTERED_ARCHIVED_LOG | N/A
ALL_RULE_SET_RULES | DBA_RULE_SET_RULES | USER_RULE_SET_RULES
ALL_RULE_SETS | DBA_RULE_SETS | USER_RULE_SETS
ALL_RULES | DBA_RULES | USER_RULES
N/A | DBA_STREAMS_ADD_COLUMN | N/A
N/A | DBA_STREAMS_ADMINISTRATOR | N/A
ALL_STREAMS_COLUMNS | DBA_STREAMS_COLUMNS | N/A
N/A | DBA_STREAMS_DELETE_COLUMN | N/A
ALL_STREAMS_GLOBAL_RULES | DBA_STREAMS_GLOBAL_RULES | N/A
N/A | DBA_STREAMS_KEEP_COLUMNS | N/A
ALL_STREAMS_MESSAGE_CONSUMERS | DBA_STREAMS_MESSAGE_CONSUMERS | N/A
ALL_STREAMS_MESSAGE_RULES | DBA_STREAMS_MESSAGE_RULES | N/A
ALL_STREAMS_NEWLY_SUPPORTED | DBA_STREAMS_NEWLY_SUPPORTED | N/A
N/A | DBA_STREAMS_RENAME_COLUMN | N/A
N/A | DBA_STREAMS_RENAME_SCHEMA | N/A
N/A | DBA_STREAMS_RENAME_TABLE | N/A
ALL_STREAMS_RULES | DBA_STREAMS_RULES | N/A
ALL_STREAMS_SCHEMA_RULES | DBA_STREAMS_SCHEMA_RULES | N/A
N/A | DBA_STREAMS_SPLIT_MERGE | N/A
N/A | DBA_STREAMS_SPLIT_MERGE_HIST | N/A
N/A | DBA_STREAMS_STMT_HANDLERS | N/A
N/A | DBA_STREAMS_STMTS | N/A
ALL_STREAMS_TABLE_RULES | DBA_STREAMS_TABLE_RULES | N/A
N/A | DBA_STREAMS_TRANSFORMATIONS | N/A
ALL_STREAMS_TRANSFORM_FUNCTION | DBA_STREAMS_TRANSFORM_FUNCTION | N/A
N/A | DBA_STREAMS_TP_COMPONENT | N/A
N/A | DBA_STREAMS_TP_COMPONENT_LINK | N/A
N/A | DBA_STREAMS_TP_COMPONENT_STAT | N/A
N/A | DBA_STREAMS_TP_DATABASE | N/A
N/A | DBA_STREAMS_TP_PATH_BOTTLENECK | N/A
N/A | DBA_STREAMS_TP_PATH_STAT | N/A
ALL_STREAMS_UNSUPPORTED | DBA_STREAMS_UNSUPPORTED | N/A
ALL_SYNC_CAPTURE | DBA_SYNC_CAPTURE | N/A
ALL_SYNC_CAPTURE_PREPARED_TABS | DBA_SYNC_CAPTURE_PREPARED_TABS | N/A
N/A | DBA_SYNC_CAPTURE_TABLES | N/A


Summary of Oracle Streams Dynamic Performance Views

The Oracle Streams dynamic performance views are:


Note:

  • When monitoring an Oracle Real Application Clusters (Oracle RAC) database, use the GV$ versions of the dynamic performance views.

  • To collect elapsed time statistics in these dynamic performance views, set the TIMED_STATISTICS initialization parameter to TRUE.
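For example, on an Oracle RAC database you might enable timed statistics and then query the GV$ version of a view, as in the following sketch (GV$STREAMS_APPLY_COORDINATOR is the Oracle RAC counterpart of the V$STREAMS_APPLY_COORDINATOR view):

ALTER SYSTEM SET TIMED_STATISTICS = TRUE;

SELECT INST_ID, APPLY_NAME, STATE
  FROM GV$STREAMS_APPLY_COORDINATOR;

The INST_ID column identifies the Oracle RAC instance on which each apply coordinator is running.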



33 Troubleshooting Apply

The following topics describe how to identify and resolve common apply process problems in an Oracle Streams environment:

Is the Apply Process Enabled?

An apply process applies changes only when it is enabled.

You can check whether an apply process is enabled, disabled, or aborted by querying the DBA_APPLY data dictionary view. For example, to check whether an apply process named apply is enabled, run the following query:

SELECT STATUS FROM DBA_APPLY WHERE APPLY_NAME = 'APPLY';

If the apply process is disabled, then your output looks similar to the following:

STATUS
--------
DISABLED

If the apply process is disabled, then try restarting it. If the apply process is aborted, then you might need to correct an error before you can restart it successfully. If the apply process did not shut down cleanly, then it might not restart. In this case, it returns the following error:

ORA-26666: cannot alter STREAMS process

If this happens, then run the STOP_APPLY procedure in the DBMS_APPLY_ADM package with the force parameter set to TRUE. Then restart the apply process.
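For example, assuming the apply process is named apply, the following PL/SQL block stops it forcefully and then restarts it (a sketch):

BEGIN
  DBMS_APPLY_ADM.STOP_APPLY(
    apply_name => 'apply',
    force      => TRUE);
  DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'apply');
END;
/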

To determine why an apply process aborted, query the DBA_APPLY data dictionary view or check the trace files for the apply process. The following query shows when the apply process aborted and the error that caused it to abort:

COLUMN APPLY_NAME HEADING 'APPLY|Process|Name' FORMAT A10
COLUMN STATUS_CHANGE_TIME HEADING 'Abort Time'
COLUMN ERROR_NUMBER HEADING 'Error Number' FORMAT 99999999
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A40

SELECT APPLY_NAME, STATUS_CHANGE_TIME, ERROR_NUMBER, ERROR_MESSAGE
  FROM DBA_APPLY WHERE STATUS='ABORTED';

See Also:


Is the Apply Process Current?

If an apply process has not applied recent changes, then the problem might be that the apply process has fallen behind. If apply process latency is high, then you might be able to improve performance by adjusting the setting of the parallelism apply process parameter.
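For example, the following call raises the parallelism of an apply process named apply to four apply servers (a sketch; choose a value appropriate for your workload):

BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'apply',
    parameter  => 'parallelism',
    value      => '4');
END;
/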

You can check apply process latency by querying the V$STREAMS_APPLY_COORDINATOR dynamic performance view.
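For example, the following query is one way to estimate latency in seconds for each apply process (a sketch based on the HWM_TIME and HWM_MESSAGE_CREATE_TIME columns of the view):

COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A20
COLUMN LATENCY_SECONDS HEADING 'Latency|(Seconds)' FORMAT 9999999

SELECT APPLY_NAME,
       (HWM_TIME - HWM_MESSAGE_CREATE_TIME) * 86400 LATENCY_SECONDS
  FROM V$STREAMS_APPLY_COORDINATOR;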

Does the Apply Process Apply Captured LCRs?

An apply process can apply either captured LCRs from its buffered queue, or it can apply messages from its persistent queue, but not both types of messages. Messages in a persistent queue can be persistent LCRs and persistent user messages. An apply process might not be applying messages of one type because it was configured to apply the other type.

You can check the type of messages applied by an apply process by querying the DBA_APPLY data dictionary view. For example, to check whether an apply process named apply applies captured LCRs or not, run the following query:

COLUMN APPLY_CAPTURED HEADING 'Type of Messages Applied' FORMAT A25

SELECT DECODE(APPLY_CAPTURED,
                'YES', 'Captured',
                'NO',  'Messages from Persistent Queue') APPLY_CAPTURED
  FROM DBA_APPLY
  WHERE APPLY_NAME = 'APPLY';

If the apply process applies captured LCRs, then your output looks similar to the following:

Type of Messages Applied
-------------------------
Captured

If an apply process is not applying the expected type of messages, then you might need to create an apply process to apply these messages.


See Also:


Is the Apply Process's Queue Receiving the Messages to be Applied?

An apply process must receive messages in its queue before it can apply these messages. Therefore, if an apply process is applying messages captured by a capture process or a synchronous capture, then the capture process or synchronous capture that captures these messages must be configured properly. If it is a capture process, then it must also be enabled. Similarly, if messages are propagated from one or more databases before reaching the apply process, then each propagation must be enabled and must be configured properly. If a capture process, a synchronous capture, or a propagation on which the apply process depends is not enabled or is not configured properly, then the messages might never reach the apply process's queue.

The rule sets used by all Oracle Streams clients, including capture processes, synchronous captures, and propagations, determine the behavior of these Oracle Streams clients. Therefore, ensure that the rule sets for any capture processes, synchronous capture, or propagations on which an apply process depends contain the correct rules. If the rules for these Oracle Streams clients are not configured properly, then the apply process's queue might never receive the appropriate messages. Also, a message traveling through a stream is the composition of all of the transformations done along the path. For example, if a capture process uses subset rules and performs row migration during capture of a message, and a propagation uses a rule-based transformation on the message to change the table name, then, when the message reaches an apply process, the apply process rules must account for these transformations.

In an environment where a capture process or synchronous capture captures changes that are propagated and applied at multiple databases, you can use the following guidelines to determine whether a problem is caused by a capture process, a synchronous capture, or a propagation on which an apply process depends or by the apply process itself:

Is a Custom Apply Handler Specified?

You can use apply handlers to handle messages dequeued by an apply process in a customized way. These handlers include statement DML handlers, procedure DML handlers, DDL handlers, precommit handlers, and message handlers. If an apply process is not behaving as expected, then check the handlers used by the apply process, and correct any flaws. You might need to modify a SQL statement in a statement DML handler to correct an apply problem. You also might need to modify a PL/SQL procedure or remove it to correct an apply problem.

You can find the names of these procedures by querying the DBA_APPLY_DML_HANDLERS and DBA_APPLY data dictionary views.

Is the AQ_TM_PROCESSES Initialization Parameter Set to Zero?

The AQ_TM_PROCESSES initialization parameter controls time monitoring on queue messages and controls processing of messages with delay and expiration properties specified. In Oracle Database 10g or later, the database automatically controls these activities when the AQ_TM_PROCESSES initialization parameter is not set.

If an apply process is not applying messages, but there are messages that satisfy the apply process rule sets in the apply process's queue, then ensure that the AQ_TM_PROCESSES initialization parameter is not set to zero at the destination database. If this parameter is set to zero, then unset this parameter or set it to a nonzero value and monitor the apply process to see if it begins to apply messages.
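For example, you can check whether the parameter was set explicitly, and remove an explicit zero setting, as follows (a sketch; the RESET takes effect after the instance is restarted with the server parameter file):

SELECT VALUE, ISDEFAULT
  FROM V$PARAMETER
  WHERE NAME = 'aq_tm_processes';

ALTER SYSTEM RESET AQ_TM_PROCESSES SCOPE=SPFILE SID='*';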

To determine whether there are messages in a buffered queue, you can query the V$BUFFERED_QUEUES and V$BUFFERED_SUBSCRIBERS dynamic performance views. To determine whether there are messages in a persistent queue, you can query the queue table for the queue.
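For example, the following query shows the number of messages in each buffered queue, including the messages that have spilled to disk (a sketch):

SELECT QUEUE_SCHEMA, QUEUE_NAME, NUM_MSGS, SPILL_MSGS
  FROM V$BUFFERED_QUEUES;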

Does the Apply User Have the Required Privileges?

If the apply user does not have explicit EXECUTE privilege on an apply handler procedure or custom rule-based transformation function, then an ORA-26808 error might result when the apply user tries to run the procedure or function. Typically, this error causes the apply process to abort without adding errors to the DBA_APPLY_ERROR view. However, the trace file for the apply coordinator reports the error. Specifically, an error similar to the following appears in the trace file:

ORA-26808: Apply process AP01 died unexpectedly

Typically, error messages surround this message, and one or more of these messages contain the name of the procedure or function. To correct the problem, grant the required EXECUTE privilege to the apply user.
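For example, if the apply user is hr_app and the handler procedure is strmadmin.dml_handler (both hypothetical names), then an administrative user would grant the privilege as follows:

GRANT EXECUTE ON strmadmin.dml_handler TO hr_app;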

Is the Apply Process Encountering Contention?

An apply server is a component of an apply process. Apply servers apply DML and DDL changes to database objects at a destination database. An apply process can use one or more apply servers, and the parallelism apply process parameter specifies the number of apply servers that can concurrently apply transactions. For example, if parallelism is set to 5, then an apply process uses a total of five apply servers.

An apply server encounters contention when the apply server must wait for a resource that is being used by another session. Contention can result from logical dependencies. For example, when an apply server tries to apply a change to a row that a user has locked, then the apply server must wait for the user. Contention can also result from physical dependencies. For example, interested transaction list (ITL) contention results when two transactions that are being applied, which might not be logically dependent, are trying to lock the same block on disk. In this case, one apply server locks rows in the block, and the other apply server must wait for access to the block, even though the second apply server is trying to lock different rows. See "Is the Apply Process Waiting for a Dependent Transaction?" for detailed information about ITL contention.

When an apply server encounters contention that does not involve another apply server in the same apply process, it waits until the contention clears. When an apply server encounters contention that involves another apply server in the same apply process, one of the two apply servers is rolled back. An apply process that is using multiple apply servers might be applying multiple transactions at the same time. The apply process tracks the state of the apply server that is applying the transaction with the lowest commit SCN. If there is a dependency between two transactions, then an apply process always applies the transaction with the lowest commit SCN first. The transaction with the higher commit SCN waits for the other transaction to commit. Therefore, if the apply server with the lowest commit SCN transaction is encountering contention, then the contention results from something other than a dependent transaction. In this case, you can monitor the apply server with the lowest commit SCN transaction to determine the cause of the contention.

The following four wait states are possible for an apply server:


See Also:


Is the Apply Process Waiting for a Dependent Transaction?

If you set the parallelism parameter for an apply process to a value greater than 1, and you set the commit_serialization parameter of the apply process to FULL, then the apply process can detect interested transaction list (ITL) contention if there is a transaction that is dependent on another transaction with a higher SCN. ITL contention occurs if the session that created the transaction waited for an ITL slot in a block. This happens when the session wants to lock a row in the block, but one or more other sessions have rows locked in the same block, and there is no free ITL slot in the block.

ITL contention also is possible if the session is waiting due to a shared bitmap index fragment. Bitmap indexes index key values and a range of rowids. Each entry in a bitmap index can cover many rows in the actual table. If two sessions want to update rows covered by the same bitmap index fragment, then the second session waits for the first transaction to either COMMIT or ROLLBACK.

When an apply process detects such a dependency, it resolves the ITL contention automatically and records information about it in the alert log and apply process trace file for the database. ITL contention can negatively affect the performance of an apply process because there might not be any progress while it is detecting the deadlock.

To avoid the problem in the future, perform one of the following actions:

Is an Apply Server Performing Poorly for Certain Transactions?

If an apply process is not performing well, then the reason might be that one or more apply servers used by the apply process are taking an inordinate amount of time to apply certain transactions. The following query displays information about the transactions being applied by each apply server used by an apply process named strm01_apply:

COLUMN SERVER_ID HEADING 'Apply Server ID' FORMAT 99999999
COLUMN STATE HEADING 'Apply Server State' FORMAT A20
COLUMN APPLIED_MESSAGE_NUMBER HEADING 'Applied Message|Number' FORMAT 99999999
COLUMN MESSAGE_SEQUENCE HEADING 'Message Sequence|Number' FORMAT 99999999

SELECT SERVER_ID, STATE, APPLIED_MESSAGE_NUMBER, MESSAGE_SEQUENCE 
  FROM V$STREAMS_APPLY_SERVER
  WHERE APPLY_NAME = 'STRM01_APPLY'
  ORDER BY SERVER_ID;

If you run this query repeatedly, then over time the apply server state, applied message number, and message sequence number should continue to change for each apply server as it applies transactions. If these values do not change for one or more apply servers, then the apply server might not be performing well. In this case, you should ensure that, for each table to which the apply process applies changes, every key column has an index.

If you have many such tables, then you might need to determine the specific table and DML or DDL operation that is causing an apply server to perform poorly. To do so, run the following query when an apply server is taking an inordinately long time to apply a transaction. In this example, assume that the name of the apply process is strm01_apply and that apply server number two is performing poorly:

COLUMN OPERATION HEADING 'Operation' FORMAT A20
COLUMN OPTIONS HEADING 'Options' FORMAT A20
COLUMN OBJECT_OWNER HEADING 'Object|Owner' FORMAT A10
COLUMN OBJECT_NAME HEADING 'Object|Name' FORMAT A10
COLUMN COST HEADING 'Cost' FORMAT 99999999

SELECT p.OPERATION, p.OPTIONS, p.OBJECT_OWNER, p.OBJECT_NAME, p.COST
  FROM V$SQL_PLAN p, V$SESSION s, V$STREAMS_APPLY_SERVER a
  WHERE a.APPLY_NAME = 'STRM01_APPLY' AND a.SERVER_ID = 2
    AND s.SID = a.SID
    AND p.HASH_VALUE = s.SQL_HASH_VALUE;

This query returns the operation being performed currently by the specified apply server. The query also returns the owner and name of the table on which the operation is being performed and the cost of the operation. Ensure that each key column in this table has an index. If the results show FULL for the COST column, then the operation is causing full table scans, and indexing the table's key columns might solve the problem.

In addition, you can run the following query to determine the specific DML or DDL SQL statement that is causing an apply server to perform poorly, assuming that the name of the apply process is strm01_apply and that apply server number two is performing poorly:

SELECT t.SQL_TEXT
  FROM V$SESSION s, V$SQLTEXT t, V$STREAMS_APPLY_SERVER a
  WHERE a.APPLY_NAME = 'STRM01_APPLY' AND a.SERVER_ID = 2
    AND s.SID = a.SID
    AND s.SQL_ADDRESS = t.ADDRESS
    AND s.SQL_HASH_VALUE = t.HASH_VALUE
    ORDER BY PIECE;

This query returns the SQL statement being run currently by the specified apply server. The statement includes the name of the table to which the transaction is being applied. Ensure that each key column in this table has an index.

If the SQL statement returned by the previous query is less than one thousand characters long, then you can run the following simplified query instead:

SELECT t.SQL_TEXT
  FROM V$SESSION s, V$SQLAREA t, V$STREAMS_APPLY_SERVER a
  WHERE a.APPLY_NAME = 'STRM01_APPLY' AND a.SERVER_ID = 2
    AND s.SID = a.SID
    AND s.SQL_ADDRESS = t.ADDRESS
    AND s.SQL_HASH_VALUE = t.HASH_VALUE;

See Also:

Oracle Database Performance Tuning Guide and Oracle Database Reference for more information about the V$SQL_PLAN dynamic performance view

Are There Any Apply Errors in the Error Queue?

When an apply process cannot apply a message, it moves the message and all of the other messages in the same transaction into the error queue. You should check for apply errors periodically to see if there are any transactions that could not be applied.
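For example, the following query lists the transactions currently in the error queue (a sketch):

COLUMN APPLY_NAME HEADING 'Apply|Process|Name' FORMAT A10
COLUMN LOCAL_TRANSACTION_ID HEADING 'Local|Transaction|ID' FORMAT A11
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A35

SELECT APPLY_NAME, LOCAL_TRANSACTION_ID, ERROR_MESSAGE, MESSAGE_COUNT
  FROM DBA_APPLY_ERROR;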

Using a DML Handler to Correct Error Transactions

When an apply process moves a transaction to the error queue, you can examine the transaction to analyze the feasibility of reexecuting the transaction successfully. If an abnormality is found in the transaction, then you might be able to configure a statement DML handler or a procedure DML handler to correct the problem. In this case, configure the DML handler to run when you reexecute the error transaction.

When a DML handler is used to correct a problem in an error transaction, the apply process that uses the DML handler should be stopped to prevent the DML handler from acting on LCRs that are not involved with the error transaction. After successful reexecution, if the DML handler is no longer needed, then remove it. Also, correct the problem that caused the transaction to be moved to the error queue to prevent future error transactions.
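For example, after stopping the apply process and setting the DML handler, you can reexecute an error transaction as follows (a sketch; '1.17.2485' is a hypothetical local transaction ID taken from the DBA_APPLY_ERROR view):

BEGIN
  DBMS_APPLY_ADM.EXECUTE_ERROR(
    local_transaction_id => '1.17.2485');
END;
/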

Troubleshooting Specific Apply Errors

You might encounter the following types of apply process errors for LCRs:

The errors marked with an asterisk (*) in the previous list often result from a problem with an apply handler or a rule-based transformation.

ORA-01031 Insufficient Privileges

An ORA-01031 error occurs when the user designated as the apply user does not have the necessary privileges to perform SQL operations on the replicated objects. The apply user privileges can be granted directly or through a role.

Specifically, the following privileges are required:

  • For table level DML changes, the INSERT, UPDATE, DELETE, and SELECT privileges must be granted.

  • For table level DDL changes, the ALTER TABLE privilege must be granted.

  • For schema level changes, the CREATE ANY TABLE, CREATE ANY INDEX, CREATE ANY PROCEDURE, ALTER ANY TABLE, and ALTER ANY PROCEDURE privileges must be granted.

  • For global level changes, ALL PRIVILEGES must be granted to the apply user.

To correct this error, complete the following steps:

  1. Connect as the apply user on the destination database.

  2. Query the SESSION_PRIVS data dictionary view to determine which required privileges are not granted to the apply user.

  3. Connect as an administrative user who can grant privileges.

  4. Grant the necessary privileges to the apply user.

  5. Reexecute the error transactions in the error queue for the apply process.

ORA-01403 No Data Found

Typically, an ORA-01403 error occurs when an apply process tries to update an existing row and the OLD_VALUES in the row LCR do not match the current values at the destination database.

Typically, one of the following conditions causes this error:

  • Supplemental logging is not specified for columns that require supplemental logging at the source database. In this case, LCRs from the source database might not contain values for key columns. You can use a procedure DML handler to modify the LCR so that it contains the necessary supplemental data. See "Using a DML Handler to Correct Error Transactions". Also, specify the necessary supplemental logging at the source database to prevent future errors.

  • There is a problem with the primary key in the table for which an LCR is applying a change. In this case, ensure that the primary key is enabled by querying the DBA_CONSTRAINTS data dictionary view. If no primary key exists for the table, or if the target table has a different primary key than the source table, then specify substitute key columns using the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package. You also might encounter error ORA-23416 if a table being applied does not have a primary key. After you make these changes, you can reexecute the error transaction.

  • The transaction being applied depends on another transaction which has not yet executed. For example, if a transaction tries to update an employee with an employee_id of 300, but the row for this employee has not yet been inserted into the employees table, then the update fails. In this case, execute the transaction on which the error transaction depends. Then, reexecute the error transaction.
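For the primary key condition above, substitute key columns can be specified as in the following sketch (hr.employees and the column list are hypothetical):

BEGIN
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name => 'hr.employees',
    column_list => 'employee_id');
END;
/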

ORA-23605 Invalid Value for Oracle Streams Parameter

When calling row LCR (SYS.LCR$_ROW_RECORD type) member subprograms, an ORA-23605 error might be raised if the values of the parameters passed by the member subprogram do not match the row LCR. For example, an error results if a member subprogram tries to add an old column value to an insert row LCR, or if a member subprogram tries to set the value of a LOB column to a number.

Row LCRs should contain the following old and new values, depending on the operation:

  • A row LCR for an INSERT operation should contain new values but no old values.

  • A row LCR for an UPDATE operation can contain both new values and old values.

  • A row LCR for a DELETE operation should contain old values but no new values.

Verify that the correct parameter type (OLD, NEW, or both) is specified for the row LCR operation (INSERT, UPDATE, or DELETE). For example, if a procedure DML handler or custom rule-based transformation changes an UPDATE row LCR into an INSERT row LCR, then the handler or transformation should remove the old values in the row LCR.

If an apply handler caused the error, then correct the apply handler and reexecute the error transaction. If a custom rule-based transformation caused the error, then you might be able to create a DML handler to correct the problem. See "Using a DML Handler to Correct Error Transactions". Also, correct the rule-based transformation to avoid future errors.

ORA-23607 Invalid Column

An ORA-23607 error is raised by a row LCR (SYS.LCR$_ROW_RECORD type) member subprogram, when the value of the column_name parameter in the member subprogram does not match the name of any of the columns in the row LCR. Check the column names in the row LCR.

If an apply handler caused the error, then correct the apply handler and reexecute the error transaction. If a custom rule-based transformation caused the error, then you might be able to create a DML handler to correct the problem. See "Using a DML Handler to Correct Error Transactions". Also, correct the rule-based transformation to avoid future errors.

An apply handler or custom rule-based transformation can cause this error by using one of the following row LCR member procedures:

  • DELETE_COLUMN, if this procedure tries to delete a column from a row LCR that does not exist in the row LCR

  • RENAME_COLUMN, if this procedure tries to rename a column that does not exist in the row LCR

In this case, to avoid similar errors in the future, perform one of the following actions:

  • Instead of using an apply handler or custom rule-based transformation to delete or rename a column in row LCRs, use a declarative rule-based transformation. If a declarative rule-based transformation tries to delete or rename a column that does not exist, then the declarative rule-based transformation does not raise an error. You can specify a declarative rule-based transformation that deletes a column using the DBMS_STREAMS_ADM.DELETE_COLUMN procedure and a declarative rule-based transformation that renames a column using the DBMS_STREAMS_ADM.RENAME_COLUMN procedure. You can use a declarative rule-based transformation in combination with apply handlers and custom rule-based transformations.

  • If you want to continue to use an apply handler or custom rule-based transformation to delete or rename a column in row LCRs, then modify the handler or transformation to prevent future errors. For example, modify the handler or transformation to verify that a column exists before trying to rename or delete the column.


See Also:


ORA-24031 Invalid Value, parameter_name Should Be Non-NULL

An ORA-24031 error can occur when an apply handler or a custom rule-based transformation passes a NULL value to an LCR member subprogram instead of an ANYDATA value that contains a NULL.

For example, the following call to the ADD_COLUMN member procedure for row LCRs can result in this error:

new_lcr.ADD_COLUMN('OLD','LANGUAGE',NULL);

The following example shows the correct way to call the ADD_COLUMN member procedure for row LCRs:

new_lcr.ADD_COLUMN('OLD','LANGUAGE',ANYDATA.ConvertVarchar2(NULL));

If an apply handler caused the error, then correct the apply handler and reexecute the error transaction. If a custom rule-based transformation caused the error, then you might be able to create a DML handler to correct the problem. See "Using a DML Handler to Correct Error Transactions". Also, correct the rule-based transformation to avoid future errors.

ORA-26687 Instantiation SCN Not Set

Typically, an ORA-26687 error occurs because the instantiation SCN is not set on an object for which an apply process is attempting to apply changes. You can query the DBA_APPLY_INSTANTIATED_OBJECTS data dictionary view to list the objects that have an instantiation SCN.

You can set an instantiation SCN for one or more objects by exporting the objects at the source database, and then importing them at the destination database. You can use Data Pump export/import. If you do not want to use export/import, then you can run one or more of the following procedures in the DBMS_APPLY_ADM package:

  • SET_TABLE_INSTANTIATION_SCN

  • SET_SCHEMA_INSTANTIATION_SCN

  • SET_GLOBAL_INSTANTIATION_SCN
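For example, the following block, run at the destination database, sets the instantiation SCN for a single table (a sketch; hr.employees and the source global name src.example.com are hypothetical, and the SCN is obtained at the source database over a database link):

DECLARE
  iscn NUMBER;
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@src.example.com;
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'hr.employees',
    source_database_name => 'src.example.com',
    instantiation_scn    => iscn);
END;
/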

Some of the common reasons why an instantiation SCN is not set for an object at a destination database include the following:

  • You used export/import for instantiation, and you exported the objects from the source database before preparing the objects for instantiation. You can prepare objects for instantiation either by creating Oracle Streams rules for the objects with the DBMS_STREAMS_ADM package or by running a procedure or function in the DBMS_CAPTURE_ADM package. If the objects were not prepared for instantiation before the export, then the instantiation SCN information will not be available in the export file, and the instantiation SCNs will not be set.

    In this case, prepare the database objects for instantiation at the source database. Next, set the instantiation SCN for the database objects at the destination database.

  • Instead of using export/import for instantiation, you set the instantiation SCN explicitly with the appropriate procedure in the DBMS_APPLY_ADM package. When the database administrator sets the instantiation SCN explicitly, the administrator assumes responsibility for the correctness of the data.

    In this case, set the instantiation SCN for the database objects explicitly. Alternatively, you can choose to perform a metadata-only export/import to set the instantiation SCNs.

  • You want to apply DDL changes, but you did not set the instantiation SCN at the schema or global level.

    In this case, set the instantiation SCN for the appropriate schemas by running the SET_SCHEMA_INSTANTIATION_SCN procedure, or set the instantiation SCN for the source database by running the SET_GLOBAL_INSTANTIATION_SCN procedure. Both of these procedures are in the DBMS_APPLY_ADM package.

After you correct the condition that caused the error, whether you should reexecute the error transaction or delete it depends on whether the changes included in the transaction were executed at the destination database when you corrected the error condition. Follow these guidelines when you decide whether you should reexecute the transaction in the error queue or delete it:

  • If you performed a new export/import, and the new export includes the transaction in the error queue, then delete the transaction in the error queue.

  • If you set instantiation SCNs explicitly or reimported an existing export dump file, then reexecute the transaction in the error queue.
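If you decide to delete a transaction, you can use the DELETE_ERROR procedure in the DBMS_APPLY_ADM package. In this sketch, the transaction identifier and apply process name are placeholders:

```sql
-- Delete one error transaction (transaction ID is a placeholder)
EXEC DBMS_APPLY_ADM.DELETE_ERROR(local_transaction_id => '5.4.312');

-- Or delete every error transaction for a hypothetical apply process
EXEC DBMS_APPLY_ADM.DELETE_ALL_ERRORS(apply_name => 'APPLY');
```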

ORA-26688 Missing Key in LCR

Typically, an ORA-26688 error occurs because of one of the following conditions:

  • At least one LCR in a transaction does not contain enough information for the apply process to apply it. For dependency computation, an apply process always needs values for the defined primary key column(s) at the destination database. Also, if the parallelism of any apply process that will apply the changes is greater than 1, then the apply process needs values for any indexed column at a destination database, which includes unique or nonunique index columns, foreign key columns, and bitmap index columns.

    If an apply process needs values for a column, and the column exists at the source database, then this error results when supplemental logging is not specified for one or more of these columns at the source database. In this case, specify the necessary supplemental logging at the source database to prevent apply errors.

    However, the definition of the source database table might differ from the definition of the corresponding destination database table. If an apply process needs values for a column, and the column exists at the destination database but does not exist at the source database, then you can configure a rule-based transformation to add the required values to the LCRs from the source database to prevent apply errors.

    To correct a transaction placed in the error queue because of this error, you can use a procedure DML handler to modify the LCRs so that they contain the necessary supplemental data. See "Using a DML Handler to Correct Error Transactions".

  • There is a problem with the primary key in the table for which an LCR is applying a change. In this case, ensure that the primary key is enabled by querying the DBA_CONSTRAINTS data dictionary view. If no primary key exists for the table, or if the destination table has a different primary key than the source table, then specify substitute key columns using the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package. You can also encounter error ORA-23416 if a table does not have a primary key. After you make these changes, you can reexecute the error transaction.
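For example, the following sketch designates substitute key columns for a destination table; the table and column names are placeholders:

```sql
BEGIN
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name => 'hr.employees',             -- placeholder table name
    column_list => 'employee_id,last_name');   -- placeholder key columns
END;
/
```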

ORA-26689 Column Type Mismatch

Typically, an ORA-26689 error occurs because one or more columns at a table in the source database do not match the corresponding columns at the destination database. The LCRs from the source database might contain more columns than the table at the destination database, or there might be a column name or column type mismatch for one or more columns. If the columns differ at the databases, then you can use rule-based transformations to avoid future errors.

If you use an apply handler or a custom rule-based transformation, then ensure that any ANYDATA conversion functions match the data type in the LCR that is being converted. For example, if the column is specified as VARCHAR2, then use the ANYDATA.CONVERTVARCHAR2 function to convert the data from type ANY to VARCHAR2.

Also, ensure that you use the correct character case in rule conditions, apply handlers, and rule-based transformations. For example, if a column name has all uppercase characters in the data dictionary, then you should specify the column name with all uppercase characters in rule conditions, apply handlers, and rule-based transformations.

This error can also occur because supplemental logging is not specified where it is required for nonkey columns at the source database. In this case, LCRs from the source database might not contain needed values for these nonkey columns.
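As a sketch, supplemental logging for specific nonkey columns can be specified at the source database with an unconditional log group. The table, log group, and column names here are placeholders:

```sql
ALTER TABLE hr.employees
  ADD SUPPLEMENTAL LOG GROUP log_group_emp (department_id, manager_id) ALWAYS;
```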

You might be able to configure a DML handler to apply the error transaction. See "Using a DML Handler to Correct Error Transactions".

ORA-26786 A row with key exists but has conflicting column(s) in table

An ORA-26786 error occurs when the values of some columns in the destination table row do not match the old values of the corresponding columns in the row LCR.

To avoid future apply errors, you can configure a conflict handler, a DML handler, or an error handler. The handler should resolve the mismatched column in a way that is appropriate for your replication environment.

In addition, you might be able to configure a DML handler to apply existing error transactions that resulted from this error. See "Using a DML Handler to Correct Error Transactions".

Alternatively, you can update the current values in the row so that the row LCR can be applied successfully. If changes to the row are captured by a capture process or synchronous capture at the destination database, then you probably do not want to replicate this manual change to other destination databases. In this case, complete the following steps:

  1. Set a tag in the session that corrects the row. Ensure that you set the tag to a value that prevents the manual change from being replicated. For example, the tag can prevent the change from being captured by a capture process or synchronous capture.

    EXEC DBMS_STREAMS.SET_TAG(tag => HEXTORAW('17'));
    

    In some environments, you might need to set the tag to a different value.

  2. Update the row in the table so that the data matches the old values in the LCR.

  3. Reexecute the error or reexecute all errors. To reexecute an error, run the EXECUTE_ERROR procedure in the DBMS_APPLY_ADM package, and specify the transaction identifier for the transaction that caused the error. For example:

    EXEC DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '5.4.312');
    

    Or, execute all errors for the apply process by running the EXECUTE_ALL_ERRORS procedure:

    EXEC DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS(apply_name => 'APPLY');
    
  4. If you are going to make other changes in the current session that you want to replicate to destination databases, then reset the tag for the session to an appropriate value, as in the following example:

    EXEC DBMS_STREAMS.SET_TAG(tag => NULL);
    

    In some environments, you might need to set the tag to a value other than NULL.




ORA-26787 The row with key column_value does not exist in table table_name

An ORA-26787 error occurs when the row that a row LCR is trying to update or delete does not exist in the destination table.

To avoid future apply errors, you can configure a conflict handler, a DML handler, or an error handler. The handler should resolve row LCRs that do not have corresponding table rows in a way that is appropriate for your replication environment.

In addition, you might be able to configure a DML handler to apply existing error transactions that resulted from this error. See "Using a DML Handler to Correct Error Transactions".

Alternatively, you can update the current values in the row so that the row LCR can be applied successfully. See "ORA-26786 A row with key exists but has conflicting column(s) in table" for instructions.





35 Information Provisioning Concepts

Information provisioning makes information available when and where it is needed. Information provisioning is part of Oracle grid computing, which pools large numbers of servers, storage areas, and networks into a flexible, on-demand computing resource for enterprise computing needs. Information provisioning uses many of the features that also are used for information integration.

The following topics contain information about information provisioning:




Overview of Information Provisioning

Oracle grid computing enables resource provisioning with features such as Oracle Real Application Clusters (Oracle RAC), Oracle Scheduler, and Database Resource Manager. Oracle RAC enables you to provision hardware resources by running a single Oracle database server on a cluster of physical servers. Oracle Scheduler enables you to provision database workload over time for more efficient use of resources. Database Resource Manager provisions resources to database users, applications, or services within an Oracle database.

In addition to resource provisioning, Oracle grid computing also enables information provisioning. Information provisioning delivers information when and where it is needed, regardless of where the information currently resides on the grid. In a grid environment with distributed systems, the grid must move or copy information efficiently to make it available where it is needed.

Information provisioning can take the following forms:

These information provisioning capabilities can be used individually or in combination to provide a full information provisioning solution in your environment. The remaining sections in this chapter discuss the ways to provision information in more detail.

Bulk Provisioning of Large Amounts of Information

Oracle provides several ways to move or copy large amounts of information from database to database efficiently. Data Pump can export and import at the database, tablespace, schema, or table level. There are several ways to move or copy a tablespace set from one Oracle database to another. Transportable tablespaces can move or copy a subset of an Oracle database and "plug" it in to another Oracle database. Transportable tablespace from backup with RMAN enables you to move or copy a tablespace set while the tablespaces remain online. The procedures in the DBMS_STREAMS_TABLESPACE_ADM package combine several steps that are required to move or copy a tablespace set into one procedure call.

Each method for moving or copying a tablespace set requires that the tablespace set is self-contained. A self-contained tablespace has no references from the tablespace pointing outside of the tablespace. For example, if an index in the tablespace is for a table in a different tablespace, then the tablespace is not self-contained. A self-contained tablespace set has no references from inside the set of tablespaces pointing outside of the set of tablespaces. For example, if a partitioned table is partially contained in the set of tablespaces, then the set of tablespaces is not self-contained. To determine whether a set of tablespaces is self-contained, use the TRANSPORT_SET_CHECK procedure in the Oracle supplied package DBMS_TTS.
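For example, the following sketch checks a hypothetical tablespace set for self-containment and then lists any violations; the tablespace names are placeholders:

```sql
BEGIN
  DBMS_TTS.TRANSPORT_SET_CHECK(
    ts_list          => 'tbs1,tbs2',  -- placeholder tablespace names
    incl_constraints => TRUE);
END;
/
-- Any rows returned indicate that the set is not self-contained
SELECT * FROM TRANSPORT_SET_VIOLATIONS;
```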

The following sections describe the options for moving or copying large amounts of information and when to use each option:

Data Pump Export/Import

Data Pump export/import can move or copy data efficiently between databases. Data Pump can export/import a full database, tablespaces, schemas, or tables to provision large or small amounts of data for a particular requirement. Data Pump exports and imports can be performed using command line clients (expdp and impdp) or the DBMS_DATAPUMP package.

A transportable tablespaces export/import is specified using the TRANSPORT_TABLESPACES parameter. Transportable tablespaces enables you to unplug a set of tablespaces from a database, move or copy them to another location, and then plug them into another database. The transport is quick because the process transfers metadata and files. It does not unload and load the data. In transportable tablespaces mode, only the metadata for the tables (and their dependent objects) within a specified set of tablespaces is unloaded at the source and loaded at the target. This allows the tablespace data files to be copied to the target Oracle database and incorporated efficiently.

The tablespaces being transported can be either dictionary managed or locally managed. Moving or copying tablespaces using transportable tablespaces is faster than performing either an export/import or unload/load of the same data. To use transportable tablespaces, you must have the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles. The tablespaces being transported must be read-only during export, and the export cannot have a degree of parallelism greater than 1.
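A transportable tablespaces export can be sketched with the expdp client as follows; the connecting user, directory object, dump file name, and tablespace names are placeholders:

```shell
expdp system DIRECTORY=dpump_dir DUMPFILE=tbs.dmp \
      TRANSPORT_TABLESPACES=tbs1,tbs2 TRANSPORT_FULL_CHECK=YES
```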




Transportable Tablespace from Backup with RMAN

The Recovery Manager (RMAN) TRANSPORT TABLESPACE command copies tablespaces without requiring that the tablespaces be in read-only mode during the transport process. Appropriate database backups must be available to perform RMAN transportable tablespace from backup.
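A minimal sketch of the command, with placeholder tablespace names, destinations, and recovery point, might look like this:

```
RMAN> TRANSPORT TABLESPACE tbs1, tbs2
        TABLESPACE DESTINATION '/disk1/trans'
        AUXILIARY DESTINATION '/disk1/aux'
        UNTIL TIME 'SYSDATE-1';
```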

DBMS_STREAMS_TABLESPACE_ADM Procedures

The following procedures in the DBMS_STREAMS_TABLESPACE_ADM package can move or copy tablespaces:

  • ATTACH_TABLESPACES: Uses Data Pump to import a self-contained tablespace set previously exported using the DBMS_STREAMS_TABLESPACE_ADM package, Data Pump export, or the RMAN TRANSPORT TABLESPACE command.

  • CLONE_TABLESPACES: Uses Data Pump export to clone a set of self-contained tablespaces. The tablespace set can be attached to a database after it is cloned. The tablespace set remains in the database from which it was cloned.

  • DETACH_TABLESPACES: Uses Data Pump export to detach a set of self-contained tablespaces. The tablespace set can be attached to a database after it is detached. The tablespace set is dropped from the database from which it was detached.

  • PULL_TABLESPACES: Uses Data Pump export/import to copy a set of self-contained tablespaces from a remote database and attach the tablespace set to the current database.

In addition, the DBMS_STREAMS_TABLESPACE_ADM package also contains the following procedures: ATTACH_SIMPLE_TABLESPACE, CLONE_SIMPLE_TABLESPACE, DETACH_SIMPLE_TABLESPACE, and PULL_SIMPLE_TABLESPACE. These procedures operate on a single tablespace that uses only one data file instead of a tablespace set.
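As an illustration, a CLONE_TABLESPACES call might be sketched as follows; the tablespace names, directory object, file group name, and version name are placeholders:

```sql
DECLARE
  tbs_set DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
BEGIN
  tbs_set(1) := 'tbs1';
  tbs_set(2) := 'tbs2';
  DBMS_STREAMS_TABLESPACE_ADM.CLONE_TABLESPACES(
    tablespace_names            => tbs_set,
    tablespace_directory_object => 'tbs_dir',
    file_group_name             => 'strmadmin.sales',
    version_name                => 'v1');
END;
/
```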

File Group Repository

In the context of a file group, a file is a reference to a file stored on hard disk. A file is composed of a file name, a directory object, and a file type. The directory object references the directory in which the file is stored on hard disk. A version is a collection of related files, and a file group is a collection of versions.

A file group repository is a collection of all of the file groups in a database. A file group repository can contain multiple file groups and multiple versions of a particular file group.

For example, a file group named reports can store versions of sales reports. The reports can be generated on a regular schedule, and each version can contain the report files. The file group repository can version the file group under names such as sales_reports_v1, sales_reports_v2, and so on.

File group repositories can contain all types of files. You can create and manage file group repositories using the DBMS_FILE_GROUP package.
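For example, a file group and an initial version might be created as follows; the file group and version names are placeholders for illustration:

```sql
BEGIN
  DBMS_FILE_GROUP.CREATE_FILE_GROUP(
    file_group_name => 'strmadmin.reports');
  DBMS_FILE_GROUP.CREATE_VERSION(
    file_group_name => 'strmadmin.reports',
    version_name    => 'sales_reports_v1');
END;
/
```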




Tablespace Repository

A tablespace repository is a collection of tablespace sets in a file group repository. Tablespace repositories are built on file group repositories, but tablespace repositories only contain the files required to move or copy tablespaces between databases. A file group repository can store versioned sets of files, including, but not restricted to, tablespace sets.

Different tablespace sets can be stored in a tablespace repository, and different versions of a particular tablespace set can also be stored. A version of a tablespace set in a tablespace repository consists of the following files:

  • The Data Pump export dump file for the tablespace set

  • The Data Pump log file for the export

  • The data files that comprise the tablespace set

All of the files in a version can reside in a single directory, or they can reside in different directories. The following procedures can move or copy tablespaces with or without using a tablespace repository:

  • ATTACH_TABLESPACES

  • CLONE_TABLESPACES

  • DETACH_TABLESPACES

If one of these procedures is run without using a tablespace repository, then a tablespace set is moved or copied, but it is not placed in or copied from a tablespace repository. If the CLONE_TABLESPACES or DETACH_TABLESPACES procedure is run using a tablespace repository, then the procedure places a tablespace set in the repository as a version of the tablespace set. If the ATTACH_TABLESPACES procedure is run using a tablespace repository, then the procedure copies a particular version of a tablespace set from the repository and attaches it to a database.

When to Use a Tablespace Repository

A tablespace repository is useful when you must store different versions of one or more tablespace sets. For example, a tablespace repository can accomplish the following goals:

  • You want to run quarterly reports on a tablespace set. You can clone the tablespace set quarterly for storage in a versioned tablespace repository, and a specific version of the tablespace set can be requested from the repository and attached to another database to run the reports.

  • You want applications to be able to attach required tablespace sets on demand in a grid environment. You can store multiple versions of several different tablespace sets in the tablespace repository. Each tablespace set can be used for a different purpose by the application. When the application needs a particular version of a particular tablespace set, the application can scan the tablespace repository and attach the correct tablespace set to a database.

Differences Between the Tablespace Repository Procedures

The procedures that include the file_group_name parameter in the DBMS_STREAMS_TABLESPACE_ADM package behave differently for the tablespace set, the data files in the tablespace set, and the export dump file. Table 35-1 describes these differences.

Table 35-1 Tablespace Repository Procedures

Procedure | Tablespace Set | Data Files | Export Dump File

ATTACH_TABLESPACES

The tablespace set is added to the local database.

If the datafiles_directory_object parameter is non-NULL, then the data files are copied from their current location(s) for the version in the tablespace repository to the directory object specified in the datafiles_directory_object parameter. The attached tablespace set uses the data files that were copied.

If the datafiles_directory_object parameter is NULL, then the data files are not moved or copied. The data files remain in the directory object(s) for the version in the tablespace repository, and the attached tablespace set uses these data files.

If the datafiles_directory_object parameter is non-NULL, then the export dump file is copied from its directory object for the version in the tablespace repository to the directory object specified in the datafiles_directory_object parameter.

If the datafiles_directory_object parameter is NULL, then the export dump file is not moved or copied.

CLONE_TABLESPACES

The tablespace set is retained in the local database.

The data files are copied from their current location(s) to the directory object specified in the tablespace_directory_object parameter or in the default directory for the version or file group. This parameter specifies where the version of the tablespace set is stored in the tablespace repository. The current location of the data files can be determined by querying the DBA_DATA_FILES data dictionary view. A directory object must exist, and must be accessible to the user who runs the procedure, for each data file location.

The export dump file is placed in the directory object specified in the tablespace_directory_object parameter or in the default directory for the version or file group.

DETACH_TABLESPACES

The tablespace set is dropped from the local database.

The data files are not moved or copied. The data files remain in their current location(s). A directory object must exist, and must be accessible to the user who runs the procedure, for each data file location. These data files are included in the version of the tablespace set stored in the tablespace repository.

The export dump file is placed in the directory object specified in the export_directory_object parameter or in the default directory for the version or file group.


Remote Access to a Tablespace Repository

A tablespace repository can reside in the database that uses the tablespaces, or it can reside in a remote database. If it resides in a remote database, then a database link must be specified in the repository_db_link parameter when you run one of the procedures, and the database link must be accessible to the user who runs the procedure.

Only One Tablespace Version Can Be Online in a Database

A version of a tablespace set in a tablespace repository can be either online or offline in a database. A tablespace set version is online in a database when it is attached to the database using the ATTACH_TABLESPACES procedure. Only a single version of a tablespace set can be online in a database at a particular time. However, the same version or different versions of a tablespace set can be online in different databases at the same time. In this case, it might be necessary to ensure that only one database can make changes to the tablespace set.

Tablespace Repository Procedures Use the DBMS_FILE_GROUP Package Automatically

Although tablespace repositories are built on file group repositories, it is not necessary to use the DBMS_FILE_GROUP package to create a file group repository before using one of the procedures in the DBMS_STREAMS_TABLESPACE_ADM package. If you run the CLONE_TABLESPACES or DETACH_TABLESPACES procedure and specify a file group that does not exist, then the procedure creates the file group automatically.

A Tablespace Repository Provides Versioning but Not Source Control

A tablespace repository provides versioning of tablespace sets, but it does not provide source control. If two or more versions of a tablespace set are changed at the same time and placed in a tablespace repository, then these changes are not merged.

Read-Only Tablespaces Requirement During Export

The procedures in the DBMS_STREAMS_TABLESPACE_ADM package that perform a Data Pump export make any read/write tablespace being exported read-only. After the export is complete, if a procedure in the DBMS_STREAMS_TABLESPACE_ADM package made a tablespace read-only, then the procedure makes the tablespace read/write.

Automatic Platform Conversion for Tablespaces

When one of the procedures in the DBMS_STREAMS_TABLESPACE_ADM package moves or copies tablespaces to a database that is running on a different platform, the procedure can convert the data files to the appropriate platform if the conversion is supported. The V$TRANSPORTABLE_PLATFORM dynamic performance view lists all platforms that support cross-platform transportable tablespaces.

When a tablespace repository is used, the platform conversion is automatic if it is supported. When a tablespace repository is not used, you must specify the platform to which or from which the tablespace is being converted.
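You can list the supported platforms and their endian formats with a query such as:

```sql
SELECT PLATFORM_ID, PLATFORM_NAME, ENDIAN_FORMAT
  FROM V$TRANSPORTABLE_PLATFORM
 ORDER BY PLATFORM_ID;
```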




Options for Bulk Information Provisioning

Table 35-2 describes when to use each option for bulk information provisioning.

Table 35-2 Options for Moving or Copying Tablespaces

Option | Use this Option Under these Conditions

Data Pump export/import

  • You want to move or copy data at the database, tablespace, schema, or table level.

  • You want to perform each step required to complete the Data Pump export/import.

Data Pump export/import with the TRANSPORT_TABLESPACES option

  • The tablespaces being moved or copied can be read-only during the operation.

  • You want to perform each step required to complete the Data Pump export/import.

Transportable tablespace from backup with the RMAN TRANSPORT TABLESPACE command

The tablespaces being moved or copied must remain online (writeable) during the operation.

DBMS_STREAMS_TABLESPACE_ADM procedures without a tablespace repository

  • The tablespaces being moved or copied can be read-only during the operation.

  • You want to combine multiple steps in the Data Pump export/import into one procedure call.

  • You do not want to use a tablespace repository for the tablespaces being moved or copied.

DBMS_STREAMS_TABLESPACE_ADM procedures with a tablespace repository

  • The tablespaces being moved or copied can be read-only during the operation.

  • You want to combine multiple steps in the Data Pump export/import into one procedure call.

  • You want to use a tablespace repository for the tablespaces being moved or copied.

  • You want platform conversion to be automatic.


Incremental Information Provisioning with Oracle Streams

Oracle Streams can share and maintain database objects in different databases at each of the following levels:

Oracle Streams can keep shared database objects synchronized at two or more databases. Specifically, an Oracle Streams capture process or synchronous capture captures changes to a shared database object in a source database, one or more propagations propagate the changes to another database, and an Oracle Streams apply process applies the changes to the shared database object. If database objects are not identical at different databases, then Oracle Streams can transform them at any point in the process. That is, a change can be transformed during capture, propagation, or apply. In addition, Oracle Streams provides custom processing of changes during apply with apply handlers. Database objects can be shared between Oracle databases, or they can be shared between Oracle and non-Oracle databases with an Oracle Database Gateway. In addition to data replication, Oracle Streams provides messaging, event management and notification, and data warehouse loading.

A combination of Oracle Streams and bulk provisioning enables you to copy and maintain a large amount of data by running a single procedure. The following procedures in the DBMS_STREAMS_ADM package use Data Pump to copy data between databases and configure Oracle Streams to maintain the copied data incrementally:

In addition, the PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures configure an Oracle Streams environment that replicates changes either at the database level or to specified tablespaces between two databases. These procedures must be used together, and instantiation actions must be performed manually, to complete the Oracle Streams replication configuration.

Using these procedures, you can export data from one database, ship it to another database, reformat the data if the second database is on a different platform, import the data into the second database, and start syncing the data with the changes happening in the first database. If the second database is on a grid, then you have just migrated your application to a grid with one command.

These procedures can configure Oracle Streams clients to maintain changes originating at the source database in a single-source replication environment, or they can configure Oracle Streams clients to maintain changes originating at both databases in a bidirectional replication environment. By maintaining changes to the data, it can be kept synchronized at both databases. These procedures can either perform these actions directly, or they can generate one or more scripts that perform these actions.
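One such procedure is MAINTAIN_TABLES. As a sketch, with placeholder table, directory object, and database names, a call might look like this:

```sql
BEGIN
  DBMS_STREAMS_ADM.MAINTAIN_TABLES(
    table_names                  => 'hr.employees',
    source_directory_object      => 'source_dir',
    destination_directory_object => 'dest_dir',
    source_database              => 'dbs1.example.com',
    destination_database         => 'dbs2.example.com');
END;
/
```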




On-Demand Information Access

Users and applications can access information without moving or copying it to a new location. Distributed SQL allows grid users to access and integrate data stored in multiple Oracle and, through Oracle Database Gateway, non-Oracle databases. Transparent remote data access with distributed SQL allows grid users to run their applications against any other database without making any code change to the applications. While integrating data and managing transactions across multiple data stores, the Oracle database optimizes the execution plans to access data in the most efficient manner.
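For example, a single distributed query can join a local table with a remote one over a database link; the table and link names below are placeholders:

```sql
SELECT l.order_id, r.cust_last_name
  FROM oe.orders l, oe.customers@remote_db.example.com r
 WHERE l.customer_id = r.customer_id;
```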





E Online Upgrade of a 10.1 or Earlier Database with Oracle Streams

This appendix describes how to perform a database upgrade from one of the following Oracle Database releases with Oracle Streams:

The database upgrade operation described in this appendix uses the features of Oracle Streams to achieve little or no database down time.

The following topics describe performing an online database upgrade with Oracle Streams:


See Also:

Appendix D, "Online Database Upgrade and Maintenance with Oracle Streams" for information about upgrading from Oracle Database 10g Release 2 (10.2) or later and for information about performing other database maintenance operations with Oracle Streams

Overview of Using Oracle Streams in the Database Upgrade Process

An Oracle database upgrade is the process of transforming an existing, prior release of an Oracle database into the current release. A database upgrade typically requires substantial database down time, but you can perform a database upgrade with little or no down time by using the features of Oracle Streams. To do so, you use Oracle Streams to configure a replication environment with the following databases:

Specifically, you can use the following general steps to perform a database upgrade while the database is online:

  1. Create an empty destination database.

  2. Configure an Oracle Streams replication environment where the original database is the source database and a copy of the database is the destination database for the changes made at the source.

  3. Perform the database upgrade on the destination database. During this time the original source database is available online.

  4. Use Oracle Streams to apply the changes made at the source database to the destination database.

  5. When the destination database has caught up with the changes made at the source database, take the source database offline and make the destination database available for applications and users.

Figure E-1 provides an overview of this process.

Figure E-1 Online Database Upgrade with Oracle Streams

Description of Figure E-1 follows
Description of "Figure E-1 Online Database Upgrade with Oracle Streams"

The Capture Database During the Upgrade Process

During the upgrade process, the capture database is the database where the capture process is created. Downstream capture was introduced in Oracle Database 10g Release 1 (10.1). If you are upgrading a database from Oracle Database 10g Release 1, then you have the following options:

  • A local capture process can be created at the source database during the upgrade process.

  • A downstream capture process can be created at the destination database. If the destination database is the capture database, then a propagation from the capture database to the destination database is not needed.

  • A third database can be the capture database. In this case, the third database can be an Oracle Database 10g Release 1 or later database.

However, if you are upgrading a database from Oracle9i Database Release 2 (9.2) to Oracle Database 11g Release 2, then downstream capture is not supported, and a local capture process must be created at the source database.

A downstream capture process reduces the resources required at the source database during the upgrade process, but a local capture process is easier to configure. Table E-1 describes which database can be the capture database during the upgrade process.

Table E-1 Supported Capture Database During Upgrade

Existing Database   Capture Database Can    Capture Database Can Be   Capture Database Can
Release             Be Source Database?     Destination Database?     Be Third Database?

9.2                 Yes                     No                        No

10.1                Yes                     Yes                       Yes



Note:

If you are upgrading from Oracle Database 10g Release 1 (10.1), then, before you begin the upgrade, decide which database will be the capture database.

Assumptions for the Database Being Upgraded

The instructions in this appendix assume that all of the following statements are true for the database being upgraded:

  • The database is not part of an existing Oracle Streams environment.

  • The database is not part of an existing logical standby environment.

  • The database is not part of an existing Advanced Replication environment.

  • No tables at the database are master tables for materialized views in other databases.

  • No messages are enqueued into user-created queues during the upgrade process.
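
As an illustrative (not exhaustive) check of the materialized view assumption, you can query the standard data dictionary views for materialized view logs and registered materialized views; an empty result from both suggests that no local tables act as master tables:

  SELECT LOG_OWNER, MASTER, LOG_TABLE
    FROM DBA_MVIEW_LOGS;

  SELECT OWNER, NAME, MVIEW_SITE
    FROM DBA_REGISTERED_MVIEWS;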

Considerations for Job Queue Processes and PL/SQL Package Subprograms

If possible, ensure that no job queue processes are created, modified, or deleted during the upgrade process, and that no Oracle-supplied PL/SQL package subprograms are invoked during the upgrade process that modify both user data and dictionary metadata at the same time. The following packages contain subprograms that modify both user data and dictionary metadata at the same time: DBMS_RLS, DBMS_STATS, and DBMS_JOB.

It might be possible to perform such actions on the database if you ensure that the same actions are performed on the source database and destination database in Steps 13 and 14 in "Task 5: Finishing the Upgrade and Removing Oracle Streams". For example, if a PL/SQL procedure gathers statistics on the source database during the upgrade process, then the same PL/SQL procedure should be invoked at the destination database in Step 14.
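
For example, if schema statistics are gathered at the source database during the upgrade, then the same call can be repeated at the destination database in Step 14. The HR schema name here is only an illustration:

  BEGIN
    DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'HR');
  END;
  /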

Preparing for a Database Upgrade Using Oracle Streams

The following sections describe tasks to complete before starting the database upgrade with Oracle Streams:

Preparing to Upgrade a Database with User-Defined Types

User-defined types include object types, REF values, varrays, and nested tables. Currently, Oracle Streams capture processes and apply processes do not support user-defined types. This section discusses using Oracle Streams to perform a database upgrade on a database that has user-defined types.

One option is to ensure that no data manipulation language (DML) or data definition language (DDL) changes are made to the tables that contain user-defined types during the database upgrade. In this case, these tables are instantiated at the destination database, and no changes are made to these tables during the entire operation. After the upgrade is complete, make the tables that contain user-defined types read/write at the destination database.

If tables that contain user-defined types must remain open during the upgrade, then use the following general steps to retain changes to these tables during the upgrade:

  1. Before you begin the upgrade process described in "Performing a Database Upgrade Using Oracle Streams", create one or more logging tables to store row changes to tables at the source database that include user-defined types. Each column in the logging table must use a data type that is supported by Oracle Streams in the source database release.

  2. Before you begin the upgrade process described in "Performing a Database Upgrade Using Oracle Streams", create a DML trigger at the source database that fires on the tables that contain the user-defined data types. The trigger converts each row change into relational equivalents and logs the modified row in a logging table created in Step 1.

  3. When the instructions in "Performing a Database Upgrade Using Oracle Streams" say to configure a capture process and propagation, configure the capture process and propagation to capture changes to the logging table and propagate these changes to the destination database. Changes to tables that contain user-defined types must not be captured or propagated.

  4. When the instructions in "Performing a Database Upgrade Using Oracle Streams" say to configure an apply process on the destination database, configure the apply process to use a procedure DML handler that processes the changes to the logging tables. The procedure DML handler reconstructs the user-defined types from the relational equivalents and applies the changes to the tables that contain user-defined types.
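
The logging approach in Steps 1 and 2 can be sketched as follows. All table, type, and column names in this sketch are hypothetical; the point is that the trigger flattens each row change into columns whose data types are supported by Oracle Streams:

  -- Step 1: a logging table that uses only Streams-supported data types
  CREATE TABLE strmadmin.cust_log (
    operation    VARCHAR2(10),
    cust_id      NUMBER,
    addr_street  VARCHAR2(100),
    addr_city    VARCHAR2(50),
    change_time  TIMESTAMP DEFAULT SYSTIMESTAMP);

  -- Step 2: a DML trigger that records the relational equivalent of each
  -- change to a table with a user-defined type column (oe.cust.address)
  CREATE OR REPLACE TRIGGER oe.log_cust_changes
    AFTER INSERT OR UPDATE OR DELETE ON oe.cust
    FOR EACH ROW
  BEGIN
    IF DELETING THEN
      INSERT INTO strmadmin.cust_log (operation, cust_id, addr_street, addr_city)
        VALUES ('DELETE', :OLD.cust_id, :OLD.address.street, :OLD.address.city);
    ELSE
      INSERT INTO strmadmin.cust_log (operation, cust_id, addr_street, addr_city)
        VALUES (CASE WHEN INSERTING THEN 'INSERT' ELSE 'UPDATE' END,
                :NEW.cust_id, :NEW.address.street, :NEW.address.city);
    END IF;
  END;
  /

The capture process is then configured to capture changes to strmadmin.cust_log rather than to oe.cust, and the procedure DML handler at the destination reverses the flattening.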

For instructions, go to the My Oracle Support (formerly OracleMetaLink) Web site using a Web browser:

http://support.oracle.com/

Database bulletin 556742.1 describes extended data type support for Oracle Streams.



Deciding Which Utility to Use for Instantiation

Before you begin the database upgrade, decide whether you want to use the Export/Import utilities (Data Pump or original) or the Recovery Manager (RMAN) utility to instantiate the destination database during the operation. The destination database will replace the existing database that is being upgraded.

Consider the following factors when you make this decision:

  • If you use original Export/Import or Data Pump Export/Import, then you can make the destination database an Oracle Database 11g Release 2 (11.2) database at the beginning of the operation. Therefore, you do not need to upgrade the destination database after the instantiation.

    If you use Export/Import for instantiation, and Data Pump is supported, then Oracle recommends using Data Pump. Data Pump can perform the instantiation faster than original Export/Import.

  • If you use the RMAN DUPLICATE command, then the instantiation might be faster than with Export/Import, especially if the database is large, but the database release must be the same for RMAN instantiation. Therefore, the following conditions must be met:

    • If the database is an Oracle9i Database Release 2 (9.2) database, then the destination database is an Oracle9i Database Release 2 database when it is instantiated.

    • If the database is an Oracle Database 10g Release 1 (10.1) database, then the destination database is an Oracle Database 10g Release 1 database when it is instantiated.

    After the instantiation, you must upgrade the destination database.

    Also, Oracle recommends that you do not use RMAN for instantiation in an environment where distributed transactions are possible. Doing so might cause in-doubt transactions that must be corrected manually.

Table E-2 describes whether each instantiation method is supported based on the release being upgraded, whether the platform at the source and destination databases are different, and whether the character set at the source and destination databases are different. Each instantiation method is supported when the platform and character set are the same at the source and destination databases.

Table E-2 Instantiation Methods for Database Upgrade with Oracle Streams

Instantiation Method      Supported When    Different Platforms   Different Character
                          Upgrading From    Supported?            Sets Supported?

Original Export/Import    9.2 or 10.1       Yes                   Yes

Data Pump Export/Import   10.1              Yes                   Yes

RMAN DUPLICATE            9.2 or 10.1       No                    No


Performing a Database Upgrade Using Oracle Streams

This section contains instructions for performing a database upgrade using Oracle Streams. These instructions describe using Oracle Streams to upgrade one of the following Oracle Database releases: Oracle9i Database Release 2 (9.2) or Oracle Database 10g Release 1 (10.1).

Complete the following tasks to upgrade a database using Oracle Streams:

Task 1: Beginning the Upgrade

Complete the following steps to begin the upgrade using Oracle Streams:

  1. Create an empty destination database. Ensure that this database has a different global name than the source database. This example assumes that the global name of the source database is orcl.example.com and the global name of the destination database during the upgrade is updb.example.com. The global name of the destination database is changed when the destination database replaces the source database at the end of the upgrade process.

    The release of the empty database you create depends on the instantiation method you decided to use in "Deciding Which Utility to Use for Instantiation":

    • If you decided to use export/import for instantiation, then create an empty Oracle Database 11g Release 2 database. This database will be the destination database during the upgrade process.

      See the Oracle Database installation guide for your operating system if you must install Oracle Database, and see the Oracle Database Administrator's Guide for information about creating a database.

    • If you decided to use RMAN for instantiation, then create an empty Oracle database that is the same release as the database you are upgrading.

      Specifically, if you are upgrading an Oracle9i Database Release 2 (9.2) database, then create an Oracle9i Release 2 database. Alternatively, if you are upgrading an Oracle Database 10g Release 1 (10.1) database, then create an Oracle Database 10g Release 1 database.

      This database will be the destination database during the upgrade process. Both the source database that is being upgraded and the destination database must be the same release of Oracle when you start the upgrade process.

      See the Oracle installation guide for your operating system if you must install Oracle, and see the Oracle Database Administrator's Guide for the release for information about creating a database.

  2. Ensure that the source database is running in ARCHIVELOG mode. See the Oracle Database Administrator's Guide for the source database release for information about running a database in ARCHIVELOG mode.
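
    For example, you can confirm the log mode by querying V$DATABASE; the LOG_MODE column should show ARCHIVELOG:

    SELECT LOG_MODE FROM V$DATABASE;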

  3. Ensure that the initialization parameters are set properly at each database to support an Oracle Streams environment. For the source database, see the Oracle Streams documentation for the source database release. For the destination database, see Oracle Streams Replication Administrator's Guide for information about setting initialization parameters that are relevant to Oracle Streams. If the capture database is a third database, then see the Oracle Streams documentation for the capture database release.

  4. At the source database, ensure that no changes are made during the upgrade process to any database objects that were not supported by Oracle Streams in the release you are upgrading:

    • If you are upgrading an Oracle9i Database Release 2 (9.2) database, then tables with columns of the following data types are not supported: NCLOB, LONG, LONG RAW, BFILE, ROWID, and UROWID, and user-defined types (including object types, REFs, varrays, and nested tables). In addition, the following types of tables are not supported: temporary tables, index-organized tables, and object tables. See Oracle9i Streams for complete information about unsupported database objects.

    • If you are upgrading an Oracle Database 10g Release 1 (10.1) database, then query the DBA_STREAMS_UNSUPPORTED data dictionary view to list the database objects that are not supported by Oracle Streams. Ensure that no changes are made to these database objects during the upgrade process.

    "Preparing to Upgrade a Database with User-Defined Types" discusses a method for retaining changes to tables that contain user-defined types during the upgrade. If you are using this method, then tables that contain user-defined types can remain open during the upgrade.
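
    For example, at an Oracle Database 10g Release 1 (10.1) source database, the following query lists the database objects that are not supported and the reason each one is unsupported:

    SELECT OWNER, TABLE_NAME, REASON
      FROM DBA_STREAMS_UNSUPPORTED
      ORDER BY OWNER, TABLE_NAME;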

  5. At the source database, configure an Oracle Streams administrator:

    • If you are upgrading an Oracle9i Database Release 2 (9.2) database, then see Oracle9i Streams for instructions.

    • If you are upgrading an Oracle Database 10g Release 1 database, then see Oracle Streams Concepts and Administration for that release for instructions.

    These instructions assume that the name of the Oracle Streams administrator at the source database is strmadmin. This Oracle Streams administrator will be copied automatically to the destination database during instantiation.

  6. In SQL*Plus, connect to the source database orcl.example.com as an administrative user.

    See the Oracle Database Administrator's Guide for the source database release for information about connecting to a database in SQL*Plus.

  7. Specify database supplemental logging of primary keys, unique keys, and foreign keys for all updates. For example:

    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA 
       (PRIMARY KEY, UNIQUE, FOREIGN KEY) COLUMNS; 
    

Task 2: Setting Up Oracle Streams Before Instantiation

The specific instructions for setting up Oracle Streams before instantiation depend on which database is the capture database. Follow the instructions in the appropriate section:


See Also:

"Overview of Using Oracle Streams in the Database Upgrade Process" for information about the capture database

The Source Database Is the Capture Database

Complete the following steps to set up Oracle Streams before instantiation when the source database is the capture database:

  1. Configure your network and Oracle Net so that the source database can communicate with the destination database. See Oracle Database Net Services Administrator's Guide for instructions.

  2. In SQL*Plus, connect to the source database orcl.example.com as the Oracle Streams administrator.

    See the Oracle Database Administrator's Guide for the source database release for information about connecting to a database in SQL*Plus.

  3. Create an ANYDATA queue that will stage changes made to the source database during the upgrade process. For example:

    BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'strmadmin.capture_queue_table',
        queue_name  => 'strmadmin.capture_queue');
    END;
    /
    
  4. Configure a capture process that will capture all supported changes made to the source database and stage these changes in the queue created in Step 3. Do not start the capture process. For example:

    BEGIN
      DBMS_STREAMS_ADM.ADD_GLOBAL_RULES(
        streams_type       => 'capture',
        streams_name       => 'capture_upgrade',
        queue_name         => 'strmadmin.capture_queue',
        include_dml        => TRUE,
        include_ddl        => TRUE,
        include_tagged_lcr => FALSE,
        source_database    => 'orcl.example.com',
        inclusion_rule     => TRUE);
    END;
    /
    

    "Preparing to Upgrade a Database with User-Defined Types" discusses a method for retaining changes to tables that contain user-defined types during the upgrade. If you are using this method, then ensure that the capture process does not attempt to capture changes to tables with user-defined types. See the Oracle Streams documentation for the source database release for information about excluding database objects from an Oracle Streams configuration with rules.

  5. Proceed to "Task 3: Instantiating the Database".

The Destination Database Is the Capture Database

The database being upgraded must be an Oracle Database 10g Release 1 (10.1) database to use this option. Complete the following steps to set up Oracle Streams before instantiation when the destination database is the capture database:

  1. Configure your network and Oracle Net so that the source database and destination database can communicate with each other. See Oracle Database Net Services Administrator's Guide for instructions.

  2. Follow the instructions in the appropriate section based on the method you are using for instantiation:

    Export/Import

    Complete the following steps if you are using export/import for instantiation:

    1. In SQL*Plus, connect to the destination database updb.example.com as the Oracle Streams administrator.

      See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

    2. Create an ANYDATA queue that will stage changes made to the source database during the upgrade process. For example:

      BEGIN
        DBMS_STREAMS_ADM.SET_UP_QUEUE(
          queue_table => 'strmadmin.destination_queue_table',
          queue_name  => 'strmadmin.destination_queue');
      END;
      /
      
    3. Configure a downstream capture process that will capture all supported changes made to the source database and stage these changes in the queue created in Step 2. Ensure that the capture process uses a database link to the source database. The capture process can be a real-time downstream capture process or an archived-log downstream capture process. See Oracle Streams Replication Administrator's Guide for instructions. Do not start the capture process.

      "Preparing to Upgrade a Database with User-Defined Types" discusses a method for retaining changes to tables that contain user-defined types during the upgrade. If you are using this method, then ensure that the capture process does not attempt to capture changes to tables with user-defined types. See the Oracle Streams documentation for the source database for information about excluding database objects from an Oracle Streams configuration with rules.

    RMAN

    Complete the following steps if you are using RMAN for instantiation:

    1. In SQL*Plus, connect to the source database orcl.example.com as the Oracle Streams administrator.

      See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

    2. Perform a build of the data dictionary in the redo log:

      SET SERVEROUTPUT ON
      DECLARE
        scn  NUMBER;
      BEGIN
        DBMS_CAPTURE_ADM.BUILD(
          first_scn => scn);
        DBMS_OUTPUT.PUT_LINE('First SCN Value = ' || scn);
      END;
      /
      First SCN Value = 1122610
      

      This procedure displays the valid first SCN value for the capture process that will be created at the destination database. Make a note of the SCN value returned because you will use it when you create the capture process at the destination database.

    3. Prepare the source database for instantiation:

      exec DBMS_CAPTURE_ADM.PREPARE_GLOBAL_INSTANTIATION();
      
  3. Proceed to "Task 3: Instantiating the Database".

A Third Database Is the Capture Database

To use this option, the following requirements must be met:

  • The database being upgraded must be an Oracle Database 10g Release 1 (10.1) database.

  • The third database must be an Oracle Database 10g Release 1 or later database.

This example assumes that the global name of the third database is thrd.example.com. Complete the following steps to set up Oracle Streams before instantiation when a third database is the capture database:

  1. Configure your network and Oracle Net so that the source database, destination database, and third database can communicate with each other. See Oracle Database Net Services Administrator's Guide for instructions.

  2. In SQL*Plus, connect to the third database thrd.example.com as an administrative user.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  3. Create an Oracle Streams administrator:

    • If the third database is an Oracle Database 10g database or an Oracle Database 11g Release 1 database, then see the Oracle Streams Concepts and Administration book for that release for instructions.

    • If the third database is an Oracle Database 11g Release 2 database, then see Oracle Streams Replication Administrator's Guide for instructions.

    These instructions assume that the name of the Oracle Streams administrator at the third database is strmadmin.

  4. While still connected to the third database as the Oracle Streams administrator, create an ANYDATA queue that will stage changes made to the source database during the upgrade process. For example:

    BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'strmadmin.capture_queue_table',
        queue_name  => 'strmadmin.capture_queue');
    END;
    /
    
  5. Configure a downstream capture process that will capture all supported changes made to the source database and stage these changes in the queue created in Step 4. Ensure that the capture process uses a database link to the source database. Do not start the capture process.

    See the following documentation for instructions:

    • If the capture database is an Oracle Database 10g database or an Oracle Database 11g Release 1 database, then see the Oracle Streams Concepts and Administration book for that release for instructions.

    • If the capture database is an Oracle Database 11g Release 2 database, then see Oracle Streams Replication Administrator's Guide.

    The capture process can be a real-time downstream capture process or an archived-log downstream capture process.

    "Preparing to Upgrade a Database with User-Defined Types" discusses a method for retaining changes to tables that contain user-defined types during the upgrade operation. If you are using this method, then ensure that the capture process does not attempt to capture changes to tables with user-defined types. See the Oracle Streams documentation for the source database for information about excluding database objects from an Oracle Streams configuration with rules.

  6. Proceed to "Task 3: Instantiating the Database".

Task 3: Instantiating the Database

"Deciding Which Utility to Use for Instantiation" discusses different options for instantiating an entire database. Complete the steps in the appropriate section based on the instantiation option you are using:

Instantiating the Database Using Export/Import

Complete the following steps to instantiate the destination database using export/import:

  1. Instantiate the destination database using Export/Import. See Oracle Streams Replication Administrator's Guide for more information about performing instantiations, and see Oracle Database Utilities for information about performing an export/import using the Export and Import utilities.

    If you use Oracle Data Pump or original Export/Import to instantiate the destination database, then ensure that the following parameters are set to the appropriate values:

    • Set the STREAMS_CONFIGURATION import parameter to n.

    • If you use original Export/Import, then set the CONSISTENT export parameter to y. This parameter does not apply to Data Pump exports.

    • If you use original Export/Import, then set the STREAMS_INSTANTIATION import parameter to y. This parameter does not apply to Data Pump imports.

    If you are upgrading an Oracle9i Database Release 2 (9.2) database, then you must use original Export/Import.
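
    As an illustration only, an instantiation with original Export/Import might use commands such as the following, where the system user and the dump file name are placeholders:

    exp system FULL=y CONSISTENT=y FILE=orcl_full.dmp

    imp system FULL=y FILE=orcl_full.dmp STREAMS_CONFIGURATION=n STREAMS_INSTANTIATION=y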

  2. At the destination database, disable any imported jobs that modify data that will be replicated from the source database. Query the DBA_JOBS data dictionary view to list the jobs.
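
    For example, the following statements list the jobs and mark one of them broken so that it does not run. The job number 21 is only an illustration:

    SELECT JOB, LOG_USER, WHAT FROM DBA_JOBS;

    BEGIN
      DBMS_JOB.BROKEN(job => 21, broken => TRUE);
    END;
    /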

  3. Proceed to "Task 4: Setting Up Oracle Streams After Instantiation".

Instantiating the Database Using RMAN

Complete the following steps to instantiate the destination database using the RMAN DUPLICATE command:


Note:

These steps provide a general outline for using RMAN to duplicate a database. If you are upgrading an Oracle9i Release 2 database, then see the Oracle9i Recovery Manager User's Guide for detailed information about using RMAN in that release. If you are upgrading an Oracle Database 10g Release 1 (10.1) database, then see the Oracle Database Backup and Recovery Advanced User's Guide for that release.

  1. Create a backup of the source database if one does not exist. RMAN requires a valid backup for duplication. In this example, create a backup of orcl.example.com if one does not exist.

  2. In SQL*Plus, connect to the source database orcl.example.com as an administrative user.

    See the Oracle Database Administrator's Guide for the source database release for information about connecting to a database in SQL*Plus.

  3. Determine the until SCN for the RMAN DUPLICATE command. For example:

    SET SERVEROUTPUT ON SIZE 1000000
    DECLARE
      until_scn NUMBER;
    BEGIN
      until_scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER;
      DBMS_OUTPUT.PUT_LINE('Until SCN: ' || until_scn);
    END;
    /
    

    Make a note of the until SCN value. This example assumes that the until SCN value is 439882. You will set the UNTIL SCN option to this value when you use RMAN to duplicate the database in Step 7.

  4. While still connected as an administrative user in SQL*Plus to the source database, archive the current online redo log. For example:

    ALTER SYSTEM ARCHIVE LOG CURRENT;
    
  5. Prepare your environment for database duplication, which includes preparing the destination database as an auxiliary instance for duplication. See the documentation for the release from which you are upgrading for instructions. Specifically, see the "Duplicating a Database with Recovery Manager" chapter in the Oracle9i Recovery Manager User's Guide or Oracle Database Backup and Recovery Advanced User's Guide (10g) for instructions.

  6. Start the RMAN client, and connect to the database orcl.example.com as TARGET and to the updb.example.com database as AUXILIARY. Connect to each database as an administrative user.

    See the RMAN documentation for your Oracle Database release for more information about the RMAN CONNECT command.

  7. Use the RMAN DUPLICATE command with the OPEN RESTRICTED option to instantiate the source database at the destination database. The OPEN RESTRICTED option is required. This option enables a restricted session in the duplicate database by issuing the following SQL statement: ALTER SYSTEM ENABLE RESTRICTED SESSION. RMAN issues this statement immediately before the duplicate database is opened.

    You can use the UNTIL SCN clause to specify an SCN for the duplication. Use the until SCN determined in Step 3 for this clause. Archived redo logs must be available for the until SCN specified and for higher SCN values. Therefore, Step 4 archived the redo log containing the until SCN.

    Ensure that you use TO database_name in the DUPLICATE command to specify the database name of the duplicate database. In this example, the database name of the duplicate database is updb. Therefore, the DUPLICATE command for this example includes TO updb.

    The following is an example of an RMAN DUPLICATE command:

    RMAN> RUN
          { 
            SET UNTIL SCN 439882;
            ALLOCATE AUXILIARY CHANNEL updb DEVICE TYPE sbt; 
            DUPLICATE TARGET DATABASE TO updb 
            NOFILENAMECHECK
            OPEN RESTRICTED;
          }
    
  8. In SQL*Plus, connect to the destination database as an administrative user.

  9. Use the ALTER SYSTEM statement to disable the RESTRICTED SESSION:

    ALTER SYSTEM DISABLE RESTRICTED SESSION;
    
  10. While still connected as an administrative user in SQL*Plus to the destination database, rename the database global name. After the RMAN DUPLICATE command, the destination database has the same global name as the source database, but the destination database must have its original name until the end of the upgrade. For example:

    ALTER DATABASE RENAME GLOBAL_NAME TO updb.example.com;
    
  11. At the destination database, disable any jobs that modify data that will be replicated from the source database. Query the DBA_JOBS data dictionary view to list the jobs.

  12. Upgrade the destination database to Oracle Database 11g Release 2. See the Oracle Database Upgrade Guide for instructions.

  13. If you have not done so already, configure your network and Oracle Net so that the source database and destination database can communicate with each other. See Oracle Database Net Services Administrator's Guide for instructions.

  14. Connect to the destination database as the Oracle Streams administrator in SQL*Plus. In this example, the destination database is updb.example.com.

  15. Create a database link to the source database. For example:

    CREATE DATABASE LINK orcl.example.com CONNECT TO strmadmin 
       IDENTIFIED BY password 
       USING 'orcl.example.com';
    
  16. Set the instantiation SCN for the entire database and all of the database objects. The RMAN DUPLICATE command duplicates the database up to one less than the SCN value specified in the UNTIL SCN clause. Therefore, you should subtract one from the until SCN value that you specified when you ran the DUPLICATE command in Step 7. In this example, the until SCN was set to 439882. Therefore, the instantiation SCN should be set to 439882 - 1, or 439881.

    BEGIN
      DBMS_APPLY_ADM.SET_GLOBAL_INSTANTIATION_SCN(
        source_database_name => 'orcl.example.com',
        instantiation_scn    => 439881,
        recursive            => TRUE);
    END;
    /
    
  17. Proceed to "Task 4: Setting Up Oracle Streams After Instantiation".

Task 4: Setting Up Oracle Streams After Instantiation

The specific instructions for setting up Oracle Streams after instantiation depend on which database is the capture database. Follow the instructions in the appropriate section:


See Also:

"Overview of Using Oracle Streams in the Database Upgrade Process" for information about the capture database

The Source Database Is the Capture Database

Complete the following steps to set up Oracle Streams after instantiation when the source database is the capture database:

  1. In SQL*Plus, connect to the destination database as the Oracle Streams administrator. In this example, the destination database is updb.example.com.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  2. Remove the Oracle Streams components that were cloned from the source database during instantiation:

    • If export/import was used for instantiation, then remove the ANYDATA queue that was cloned from the source database.

    • If RMAN was used for instantiation, then remove the ANYDATA queue and the capture process that were cloned from the source database.

    To remove the queue that was cloned from the source database, run the REMOVE_QUEUE procedure in the DBMS_STREAMS_ADM package. For example:

    BEGIN
      DBMS_STREAMS_ADM.REMOVE_QUEUE(
        queue_name              => 'strmadmin.capture_queue',
        cascade                 => FALSE,
        drop_unused_queue_table => TRUE);
    END;
    /
    

    To remove the capture process that was cloned from the source database, run the DROP_CAPTURE procedure in the DBMS_CAPTURE_ADM package. For example:

    BEGIN
      DBMS_CAPTURE_ADM.DROP_CAPTURE(
        capture_name          => 'capture_upgrade',
        drop_unused_rule_sets => TRUE);
    END;
    /
    
  3. Create an ANYDATA queue. This queue will stage changes propagated from the source database. For example:

    BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'strmadmin.destination_queue_table',
        queue_name  => 'strmadmin.destination_queue');
    END;
    /
    
  4. Connect to the source database as the Oracle Streams administrator. In this example, the source database is orcl.example.com.

  5. Create a database link to the destination database. For example:

    CREATE DATABASE LINK updb.example.com CONNECT TO strmadmin 
       IDENTIFIED BY password 
       USING 'updb.example.com';
    
  6. Create a propagation that propagates all changes from the source queue to the destination database created in Step 3. For example:

    BEGIN
      DBMS_STREAMS_ADM.ADD_GLOBAL_PROPAGATION_RULES(
        streams_name            => 'to_updb',
        source_queue_name       => 'strmadmin.capture_queue',
        destination_queue_name  => 'strmadmin.destination_queue@updb.example.com', 
        include_dml             => TRUE,
        include_ddl             => TRUE,
        include_tagged_lcr      => TRUE,
        source_database         => 'orcl.example.com');
    END;
    /
    
  7. Connect to the destination database as the Oracle Streams administrator.

  8. Create an apply process that applies all changes in the queue created in Step 3. For example:

    BEGIN
      DBMS_STREAMS_ADM.ADD_GLOBAL_RULES(
        streams_type       => 'apply',
        streams_name       => 'apply_upgrade',
        queue_name         => 'strmadmin.destination_queue',
        include_dml        => TRUE,
        include_ddl        => TRUE,
        include_tagged_lcr => TRUE,
        source_database    => 'orcl.example.com');
    END;
    /
    
  9. Proceed to "Task 5: Finishing the Upgrade and Removing Oracle Streams".

The Destination Database Is the Capture Database

Complete the following steps to set up Oracle Streams after instantiation when the destination database is the capture database:

  1. Complete the following steps if you used RMAN for instantiation. If you used export/import for instantiation, then proceed to Step 2.

    1. In SQL*Plus, connect to the destination database as the Oracle Streams administrator. In this example, the destination database is updb.example.com.

      See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

    2. Create an ANYDATA queue that will stage changes made to the source database during the upgrade process. For example:

      BEGIN
        DBMS_STREAMS_ADM.SET_UP_QUEUE(
          queue_table => 'strmadmin.destination_queue_table',
          queue_name  => 'strmadmin.destination_queue');
      END;
      /
      
    3. Configure a downstream capture process that will capture all supported changes made to the source database and stage these changes in the queue created in Step b.

      Ensure that you set the first_scn parameter in the CREATE_CAPTURE procedure to the value obtained for the data dictionary build in Step 2b in "The Destination Database Is the Capture Database". In this example, the first_scn parameter should be set to 1122610.

      The capture process can be a real-time downstream capture process or an archived-log downstream capture process. See Oracle Streams Replication Administrator's Guide for instructions. Do not start the capture process.

      "Preparing to Upgrade a Database with User-Defined Types" discusses a method for retaining changes to tables that contain user-defined types during the maintenance operation. If you are using this method, then ensure that the capture process does not attempt to capture changes to tables with user-defined types. See the Oracle Streams documentation for the source database for information about excluding database objects from an Oracle Streams configuration with rules.

  2. Create an apply process that applies all changes in the queue used by the downstream capture process. For example:

    BEGIN
      DBMS_STREAMS_ADM.ADD_GLOBAL_RULES(
        streams_type       => 'apply',
        streams_name       => 'apply_upgrade',
        queue_name         => 'strmadmin.destination_queue',
        include_dml        => TRUE,
        include_ddl        => TRUE,
        include_tagged_lcr => TRUE,
        source_database    => 'orcl.example.com');
    END;
    /
    
  3. Proceed to "Task 5: Finishing the Upgrade and Removing Oracle Streams".

A Third Database Is the Capture Database

This example assumes that the global name of the third database is thrd.example.com. Complete the following steps to set up Oracle Streams after instantiation when a third database is the capture database:

  1. In SQL*Plus, connect to the destination database as the Oracle Streams administrator. In this example, the destination database is updb.example.com.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  2. Create an ANYDATA queue. This queue will stage changes propagated from the capture database. For example:

    BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'strmadmin.destination_queue_table',
        queue_name  => 'strmadmin.destination_queue');
    END;
    /
    
  3. Connect to the capture database as the Oracle Streams administrator. In this example, the capture database is thrd.example.com.

  4. Create a database link to the destination database. For example:

    CREATE DATABASE LINK updb.example.com CONNECT TO strmadmin 
       IDENTIFIED BY password 
       USING 'updb.example.com';
    
  5. Create a propagation that propagates all changes from the source queue at the capture database to the destination queue created in Step 2. For example:

    BEGIN
      DBMS_STREAMS_ADM.ADD_GLOBAL_PROPAGATION_RULES(
        streams_name            => 'to_updb',
        source_queue_name       => 'strmadmin.capture_queue',
        destination_queue_name  => 'strmadmin.destination_queue@updb.example.com', 
        include_dml             => TRUE,
        include_ddl             => TRUE,
        include_tagged_lcr      => TRUE,
        source_database         => 'orcl.example.com');
    END;
    /
    
  6. Connect to the destination database as the Oracle Streams administrator. In this example, the destination database is updb.example.com.

  7. Create an apply process that applies all changes in the queue created in Step 2. For example:

    BEGIN
      DBMS_STREAMS_ADM.ADD_GLOBAL_RULES(
        streams_type       => 'apply',
        streams_name       => 'apply_upgrade',
        queue_name         => 'strmadmin.destination_queue',
        include_dml        => TRUE,
        include_ddl        => TRUE,
        include_tagged_lcr => TRUE,
        source_database    => 'orcl.example.com');
    END;
    /
    
  8. Complete the steps in "Task 5: Finishing the Upgrade and Removing Oracle Streams".

Task 5: Finishing the Upgrade and Removing Oracle Streams

Complete the following steps to finish the upgrade operation using Oracle Streams and remove Oracle Streams components:

  1. Connect to the destination database as the Oracle Streams administrator. In this example, the destination database is updb.example.com.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  2. Start the apply process. For example:

    BEGIN
      DBMS_APPLY_ADM.START_APPLY(
        apply_name  => 'apply_upgrade');
    END;
    /
    
  3. Connect to the capture database as the Oracle Streams administrator.

  4. Start the capture process. For example:

    BEGIN
      DBMS_CAPTURE_ADM.START_CAPTURE(
        capture_name  => 'capture_upgrade');
    END;
    /
    

    This step begins the process of replicating changes that were made to the source database during instantiation of the destination database.

  5. While still connected as the Oracle Streams administrator in SQL*Plus to the capture database, monitor the Oracle Streams environment until the apply process at the destination database has applied most of the changes from the source database.

    To determine whether the apply process at the destination database has applied most of the changes from the source database, complete the following steps:

    1. Query the enqueue message number of the capture process and the message number with the oldest system change number (SCN) for the apply process to see if they are nearly equal.

      For example, if the name of the capture process is capture_upgrade, and the name of the apply process is apply_upgrade, then run the following query at the capture database:

      COLUMN ENQUEUE_MESSAGE_NUMBER HEADING 'Captured SCN' FORMAT 99999999999
      COLUMN OLDEST_SCN_NUM HEADING 'Oldest Applied SCN' FORMAT 99999999999
      
      SELECT c.ENQUEUE_MESSAGE_NUMBER, a.OLDEST_SCN_NUM
        FROM V$STREAMS_CAPTURE c, V$STREAMS_APPLY_READER@updb.example.com a
        WHERE c.CAPTURE_NAME = 'CAPTURE_UPGRADE'
          AND a.APPLY_NAME   = 'APPLY_UPGRADE';
      

      When the two values returned by this query are nearly equal, most of the changes from the source database have been applied at the destination database, and you can proceed to the next step. At this point in the process, the values returned by this query might never be equal because the source database still allows changes.

      If this query returns no results, then ensure that the Oracle Streams clients in the environment are enabled by querying the STATUS column in the DBA_CAPTURE view at the capture database and the DBA_APPLY view at the destination database. If a propagation is used, you can check the status of the propagation by running the query in "Displaying Information About the Schedules for Propagation Jobs".

      If an Oracle Streams client is disabled, then try restarting it. If an Oracle Streams client will not restart, then troubleshoot the environment using the information in Chapter 30, "Identifying Problems in an Oracle Streams Environment".
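    For example, the following queries check the status of the Oracle Streams clients. Run the first query at the capture database and the second at the destination database (the ERROR_NUMBER and ERROR_MESSAGE columns are populated only if the capture process aborted):

    ```sql
    -- At the capture database: capture process status and any abort error
    SELECT CAPTURE_NAME, STATUS, ERROR_NUMBER, ERROR_MESSAGE
      FROM DBA_CAPTURE;

    -- At the destination database: apply process status
    SELECT APPLY_NAME, STATUS
      FROM DBA_APPLY;
    ```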

    2. Query the state of the apply process apply servers at the destination database to determine whether they have finished applying changes.

      For example, if the name of the apply process is apply_upgrade, then run the following query at the capture database:

      COLUMN STATE HEADING 'Apply Server State' FORMAT A20
       
      SELECT STATE
        FROM V$STREAMS_APPLY_SERVER@updb.example.com
        WHERE APPLY_NAME = 'APPLY_UPGRADE';
      

      When the state for all apply servers is IDLE, you can proceed to the next step.

  6. Connect to the destination database as the Oracle Streams administrator. In this example, the destination database is updb.example.com.

  7. Ensure that there are no apply errors by running the following query:

    SELECT COUNT(*) FROM DBA_APPLY_ERROR;
    

    If this query returns zero, then proceed to the next step. If this query shows errors in the error queue, then resolve these errors before continuing. See "Managing Apply Errors" for instructions.

  8. Disconnect all applications and users from the source database.

  9. Connect as an administrative user to the source database. In this example, the source database is orcl.example.com.

  10. Restrict access to the database. For example:

    ALTER SYSTEM ENABLE RESTRICTED SESSION;
    
  11. Connect as an administrative user in SQL*Plus to the capture database, and repeat the query you ran in Step 5a. When the two values returned by the query are equal, all of the changes from the source database have been applied at the destination database, and you can proceed to the next step.

  12. Connect as the Oracle Streams administrator in SQL*Plus to the destination database, and repeat the query you ran in Step 7. If this query returns zero, then move on to the next step. If this query shows errors in the error queue, then resolve these errors before continuing. See "Managing Apply Errors" for instructions.

  13. If you performed any actions that created, modified, or deleted job queue processes at the source database during the upgrade process, then perform the same actions at the destination database. See "Considerations for Job Queue Processes and PL/SQL Package Subprograms" for more information.

  14. If you invoked any Oracle-supplied PL/SQL package subprograms at the source database during the upgrade process that modified both user data and dictionary metadata at the same time, then invoke the same subprograms at the destination database. See "Considerations for Job Queue Processes and PL/SQL Package Subprograms" for more information.

  15. Shut down the source database. This database should not be opened again.

  16. Connect to the destination database as an administrative user.

  17. Change the global name of the database to match the source database. For example:

    ALTER DATABASE RENAME GLOBAL_NAME TO orcl.example.com;
    
  18. At the destination database, enable any jobs that you disabled earlier.

  19. Make the destination database available for applications and users. Redirect any applications and users that were connecting to the source database to the destination database. If necessary, reconfigure your network and Oracle Net so that systems that communicated with the source database now communicate with the destination database. See Oracle Database Net Services Administrator's Guide for instructions.

  20. At the destination database, remove the Oracle Streams components that are no longer needed. Connect as an administrative user to the destination database, and run the following procedure:


    Note:

    Running this procedure is dangerous. It removes the local Oracle Streams configuration. Ensure that you are ready to remove the Oracle Streams configuration at the destination database before running this procedure.

    EXEC DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();
    

    If you no longer need database supplemental logging at the destination database, then run the following statement to drop it:

    ALTER DATABASE DROP SUPPLEMENTAL LOG DATA 
      (PRIMARY KEY, UNIQUE, FOREIGN KEY) COLUMNS;
    

    If you no longer need the Oracle Streams administrator at the destination database, then run the following statement:

    DROP USER strmadmin CASCADE;
    
  21. If the capture database was a third database, then, at the third database, remove the Oracle Streams components that are no longer needed. Connect as an administrative user to the third database, and run the following procedure:


    Note:

    Running this procedure is dangerous. It removes the local Oracle Streams configuration. Ensure that you are ready to remove the Oracle Streams configuration at the third database before running this procedure.

    EXEC DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();
    

    If you no longer need database supplemental logging at the third database, then run the following statement to drop it:

    ALTER DATABASE DROP SUPPLEMENTAL LOG DATA 
      (PRIMARY KEY, UNIQUE, FOREIGN KEY) COLUMNS;
    

    If you no longer need the Oracle Streams administrator at the third database, then run the following statement:

    DROP USER strmadmin CASCADE;
    

The database upgrade is complete.


What's New in Oracle Streams?

This section describes new features of Oracle Streams for Oracle Database 11g and provides pointers to additional information.

This section contains these topics:

Oracle Database 11g Release 2 (11.2) New Features in Oracle Streams

The following Oracle Streams features are new in Oracle Database 11g Release 2 (11.2):

XStream

XStream provides application programming interfaces (APIs) that enable information sharing between Oracle databases and between Oracle databases and other systems. The other systems include Oracle systems, such as Oracle TimesTen, non-Oracle databases, non-RDBMS Oracle products, file systems, third-party software applications, and so on.

Statement DML Handlers

A new type of apply handler called a statement DML handler can process row LCRs in a customized way using a collection of SQL statements. Statement DML handlers typically perform better than procedure DML handlers because statement DML handlers require no PL/SQL processing.

Record Table Changes With Oracle Streams

The new MAINTAIN_CHANGE_TABLE procedure in the DBMS_STREAMS_ADM package makes it easy to configure an Oracle Streams environment that records the changes made to a table.

SQL Generation

SQL generation is the ability to generate the SQL statement required to perform the change encapsulated in a row logical change record (row LCR).




Oracle Streams Supports Compressed Tables

In prior releases of Oracle Database, Oracle Streams did not support the capture of changes to compressed tables. In Oracle Database 11g Release 2 (11.2) and later, Oracle Streams capture processes and synchronous captures can capture changes made to tables compressed using either basic table compression or OLTP table compression. In addition, apply processes can apply changes to compressed tables.


Note:

Capture processes can capture changes to compressed tables only if the compatibility level is set to 11.2.0 or higher at the source database. In a downstream capture configuration, the compatibility level must also be set to 11.2.0 or higher at the database running the capture process. Synchronous captures can capture changes to compressed tables only if the compatibility level is set to 11.2.0 or higher at the database.
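To verify the compatibility level before relying on this feature, you can query the COMPATIBLE initialization parameter at each database involved:

```sql
SELECT VALUE FROM V$PARAMETER WHERE NAME = 'compatible';
```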

Capture Processes and Apply Processes Support SecureFile LOBs

In prior releases of Oracle Database, Oracle Streams did not support SecureFile LOBs. In Oracle Database 11g Release 2 (11.2) and later, Oracle Streams capture processes can capture changes made to SecureFile CLOB, NCLOB, and BLOB columns, and Oracle Streams apply processes can apply changes to SecureFile CLOB, NCLOB, and BLOB columns.

New Keep Columns Declarative Rule-Based Transformation

The keep columns declarative rule-based transformation keeps a list of columns in a row logical change record (LCR) that satisfies the specified rule. The transformation deletes columns that are not in the list from the row LCR. You specify a keep columns declarative rule-based transformation using the KEEP_COLUMNS procedure in the DBMS_STREAMS_ADM package.
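As a sketch of how this might be used, the following call keeps only three columns in row LCRs for an assumed hr.employees table; the rule name is hypothetical and would be the name of an existing rule in your configuration:

```sql
BEGIN
  DBMS_STREAMS_ADM.KEEP_COLUMNS(
    rule_name   => 'strmadmin.employees_rule',       -- hypothetical rule name
    table_name  => 'hr.employees',
    column_list => 'employee_id,first_name,last_name',
    value_type  => '*',      -- transform both old and new column values
    operation   => 'ADD');   -- add the transformation to the rule
END;
/
```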

Automatic Split and Merge

Two new capture process parameters can enable automatic split and merge: split_threshold and merge_threshold. When these parameters are set to specify automatic split and merge, Oracle Scheduler jobs monitor the streams flowing from a capture process. When an Oracle Scheduler job identifies a problem with a stream, the job submits a new Oracle Scheduler job to split the problem stream off from the other streams flowing from the capture process. Other Oracle Scheduler jobs continue to monitor the stream, and, when the problem is corrected, an Oracle Scheduler job merges the stream back with the other streams.
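For example, automatic split and merge might be enabled with the SET_PARAMETER procedure as follows. The capture process name and the threshold values are illustrative assumptions:

```sql
BEGIN
  -- Split off a problem stream after it lags by 1800 seconds
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'strm01_capture',   -- hypothetical capture process name
    parameter    => 'split_threshold',
    value        => '1800');

  -- Merge the stream back when its lag drops below 60 seconds
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'strm01_capture',
    parameter    => 'merge_threshold',
    value        => '60');
END;
/
```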

New Apply Process Parameter: txn_age_spill_threshold

The apply process begins to spill messages from memory to hard disk for a particular transaction when the amount of time that any message in the transaction has been in memory exceeds the specified number of seconds in the txn_age_spill_threshold parameter.
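For example, the following call lowers the spill threshold to 600 seconds for the apply process used earlier in this chapter; the value shown is an illustrative assumption:

```sql
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'apply_upgrade',
    parameter  => 'txn_age_spill_threshold',
    value      => '600');   -- spill transactions with messages older than 600 seconds
END;
/
```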

Monitoring Jobs

The new START_MONITORING procedure in the UTL_SPADV package can create a monitoring job that monitors Oracle Streams performance continually at specified intervals. Other new procedures in this package enable you to manage monitoring jobs.

New DBA_RECOVERABLE_SCRIPT_HIST View

The new DBA_RECOVERABLE_SCRIPT_HIST view stores the results of recovery operations that were performed by the RECOVER_OPERATION procedure in the DBMS_STREAMS_ADM package.

Oracle Database 11g Release 1 (11.1) New Features in Oracle Streams

The following Oracle Streams features are new in Oracle Database 11g Release 1 (11.1):

Oracle Streams Topology and Oracle Streams Performance Advisor

The Oracle Streams topology identifies individual streams of messages and the Oracle Streams components configured in each stream. An Oracle Streams environment typically covers multiple databases, and the Oracle Streams topology provides a comprehensive view of the entire Oracle Streams environment.

The Oracle Streams Performance Advisor reports performance measurements for an Oracle Streams topology, including throughput and latency measurements. The Oracle Streams Performance Advisor also identifies bottlenecks in an Oracle Streams topology so that they can be corrected. In addition, the Oracle Streams Performance Advisor examines the Oracle Streams components in an Oracle Streams topology and recommends ways to improve their performance.

Automatic Data Type Conversion During Apply

During apply, an apply process automatically converts certain data types when there is a mismatch between the data type of a column in the row logical change record (row LCR) and the data type of the corresponding column in a table.

Simplified Way to Restore Default Values for Parameters

You can set a capture process parameter to its default value by specifying NULL for the value of the parameter in the DBMS_CAPTURE_ADM.SET_PARAMETER procedure. Similarly, you can set an apply process parameter to its default value by specifying NULL for the value of the parameter in the DBMS_APPLY_ADM.SET_PARAMETER procedure.
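For example, the following call restores the parallelism parameter of a capture process to its default; the capture process name is a hypothetical example:

```sql
BEGIN
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'strm01_capture',   -- hypothetical capture process name
    parameter    => 'parallelism',
    value        => NULL);              -- NULL restores the default value
END;
/
```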

Oracle Streams Supports Tables in a Flashback Data Archive

In prior releases of Oracle Database, Oracle Streams did not support the replication of changes to tables in a flashback data archive. In Oracle Database 11g Release 1 (11.1) and later, Oracle Streams supports tables in a flashback data archive.

Oracle Streams Supports Virtual Columns

In prior releases of Oracle Database, Oracle Streams did not support the replication of changes to tables with virtual columns. In Oracle Database 11g Release 1 (11.1) and later, Oracle Streams supports tables with virtual columns.

New Capture Process Parameter: skip_autofiltered_table_ddl

A new capture process parameter named skip_autofiltered_table_ddl enables you to capture data definition language (DDL) changes to database objects for which data manipulation language (DML) changes are automatically filtered.
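As a sketch, the parameter might be changed with SET_PARAMETER as follows; the capture process name is a hypothetical example:

```sql
BEGIN
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'strm01_capture',   -- hypothetical capture process name
    parameter    => 'skip_autofiltered_table_ddl',
    value        => 'N');   -- N: capture DDL for automatically filtered tables
END;
/
```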

New Apply Process Parameter: rtrim_on_implicit_conversion

A new apply process parameter named rtrim_on_implicit_conversion determines whether the apply process trims character data during automatic data type conversion.

Synchronous Capture

Synchronous capture is a new Oracle Streams client that captures data manipulation language (DML) changes made to tables immediately after the changes are committed.

Oracle Streams Support for XMLType Columns

XMLType is an Oracle-supplied type that you can use to store and query XML data in the database. Oracle Streams can capture, propagate, and apply changes to XMLType data.

Capture processes can capture changes to XMLType columns stored as CLOB columns, but capture processes cannot capture changes to XMLType columns stored object relationally or as binary XML. Apply processes can apply changes to XMLType columns stored as CLOB columns, stored object relationally, or stored as binary XML.

Oracle Streams Support for Transparent Data Encryption

Oracle Streams supports capturing, propagating, and applying changes to columns that have been encrypted using transparent data encryption. Oracle Streams supports columns that were encrypted at the column level or through tablespace encryption. Tablespace encryption enables you to encrypt an entire tablespace. All objects created in the encrypted tablespace are automatically encrypted, including all columns in the database objects in the tablespace. Once a column is encrypted, whether it is due to column encryption or tablespace encryption, Oracle Streams components handle the column data in the same way.

Split and Merge of a Stream Destination

You can easily split off an unavailable replica from a Streams replication configuration. Splitting the stream minimizes the time needed for the replica to "catch up" when it becomes available again. When the replica is caught up, it can be merged back into the original configuration. This feature uses three new procedures in the DBMS_STREAMS_ADM package: SPLIT_STREAMS, MERGE_STREAMS_JOB, and MERGE_STREAMS.

Track LCRs Through a Stream

The new SET_MESSAGE_TRACKING procedure in the DBMS_STREAMS_ADM package lets you specify a tracking label for logical change records (LCRs) generated by a database session. You can query the new V$STREAMS_MESSAGE_TRACKING view to track the LCRs through the stream and see how they were processed by each Oracle Streams client.

LCR tracking is useful if LCRs are not being applied as expected by one or more apply processes. When this happens, you can use LCR tracking to determine where the LCRs are stopping in the stream and address the problem at that location.

Also, the new message_tracking_frequency capture process parameter enables you to track LCRs automatically.
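As an illustration, a session might set a tracking label and then follow its LCRs with a query like the following; the label shown is a hypothetical example:

```sql
-- Label LCRs generated by changes made in this session
BEGIN
  DBMS_STREAMS_ADM.SET_MESSAGE_TRACKING(
    tracking_label => 'TRACK_LCRS');   -- hypothetical label
END;
/

-- Later, see how the labeled LCRs were processed by each Oracle Streams client
SELECT TRACKING_LABEL, COMPONENT_NAME, COMPONENT_TYPE, ACTION
  FROM V$STREAMS_MESSAGE_TRACKING;
```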




Compare and Converge Shared Database Objects

A new Oracle-supplied package called DBMS_COMPARISON enables you to compare the rows in a shared database object, such as a table, at two different databases. If differences are found in the database object, then this package can converge the database objects so that they are consistent.
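The following sketch shows how a comparison might be defined and run; the comparison name, schema, table, and database link are all illustrative assumptions:

```sql
-- Define a comparison of hr.employees between the local and a remote database
BEGIN
  DBMS_COMPARISON.CREATE_COMPARISON(
    comparison_name => 'compare_employees',       -- hypothetical name
    schema_name     => 'hr',
    object_name     => 'employees',
    dblink_name     => 'remote.example.com');     -- hypothetical database link
END;
/

-- Run the comparison; COMPARE returns FALSE if differences were found
DECLARE
  consistent BOOLEAN;
  scan_info  DBMS_COMPARISON.COMPARISON_TYPE;
BEGIN
  consistent := DBMS_COMPARISON.COMPARE(
    comparison_name => 'compare_employees',
    scan_info       => scan_info,
    perform_row_dif => TRUE);
  IF NOT consistent THEN
    DBMS_OUTPUT.PUT_LINE('Differences found; scan ID: ' || scan_info.scan_id);
  END IF;
END;
/
```

If differences are found, the CONVERGE procedure in the same package can then be used to make the two database objects consistent.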

Automated Alerts for Oracle Streams Clients and Thresholds

Enterprise Manager automatically alerts you when an Oracle Streams client becomes disabled or when an Oracle Streams-related threshold that you have defined is crossed.

Oracle Streams Jobs Use Oracle Scheduler

In past releases, Oracle Streams used jobs created by the DBMS_JOB package to perform jobs such as propagation and event notification, and the JOB_QUEUE_PROCESSES initialization parameter controlled the number of slave processes that were created.

In Oracle Database 11g Release 1 (11.1) and later, Oracle Streams uses Oracle Scheduler to perform these jobs. Oracle Scheduler automatically tunes the number of slave processes for these jobs based on the load on the computer system, and the JOB_QUEUE_PROCESSES initialization parameter is only used to specify the maximum number of slave processes. Therefore, the JOB_QUEUE_PROCESSES initialization parameter does not need to be set, unless you want to limit the number of slaves that can be created.


See Also:

"Propagation Jobs"

Notification Improvements

This release introduces several notification improvements.

New Error Messages for Easier Error Handling

The following apply error messages are new in Oracle Database 11g Release 1 (11.1):

In past releases, an ORA-01403 error was returned in these situations. These new error messages make it easier to handle apply errors in DML handlers and error handlers. If you have existing procedure handlers and error handlers, then you might need to modify them for the current release.

Combined Capture and Apply

Oracle Streams can improve propagation efficiency under certain conditions.


24 Monitoring Oracle Streams Implicit Capture

Both capture processes and synchronous captures perform implicit capture.

The following topics describe monitoring Oracle Streams implicit capture:


Note:

The Oracle Streams tool in Oracle Enterprise Manager is also an excellent way to monitor an Oracle Streams environment. See Oracle Database 2 Day + Data Replication and Integration Guide and the online Help for the Oracle Streams tool for more information.

Monitoring a Capture Process

This section provides sample queries that you can use to monitor Oracle Streams capture processes.

This section contains these topics:

Displaying the Queue, Rule Sets, and Status of Each Capture Process

You can display the following information about each capture process in a database by running the query in this section:

  • The capture process name

  • The name of the queue used by the capture process

  • The name of the positive rule set used by the capture process

  • The name of the negative rule set used by the capture process

  • The status of the capture process, which can be ENABLED, DISABLED, or ABORTED

To display this general information about each capture process in a database, run the following query:

COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Capture|Process|Queue' FORMAT A15
COLUMN RULE_SET_NAME HEADING 'Positive|Rule Set' FORMAT A15
COLUMN NEGATIVE_RULE_SET_NAME HEADING 'Negative|Rule Set' FORMAT A15
COLUMN STATUS HEADING 'Capture|Process|Status' FORMAT A15

SELECT CAPTURE_NAME, QUEUE_NAME, RULE_SET_NAME, NEGATIVE_RULE_SET_NAME, STATUS 
   FROM DBA_CAPTURE;

Your output looks similar to the following:

Capture         Capture                                         Capture
Process         Process         Positive        Negative        Process
Name            Queue           Rule Set        Rule Set        Status
--------------- --------------- --------------- --------------- ---------------
STRM01_CAPTURE  STREAMS_QUEUE   RULESET$_25     RULESET$_36     ENABLED

If the status of a capture process is ABORTED, then you can query the ERROR_NUMBER and ERROR_MESSAGE columns in the DBA_CAPTURE data dictionary view to determine the error.


See Also:

"Is the Capture Process Enabled?" for an example query that shows the error number and error message if a capture process is aborted

Displaying Session Information About Each Capture Process

The query in this section displays the following session information about each session associated with a capture process in a database:

  • The capture process component

  • The session identifier

  • The serial number

  • The operating system process identification number

  • The process name of the capture process in the form CPnn, where nn can include letters and numbers

To display this information for each capture process in a database, run the following query:

COLUMN ACTION HEADING 'Capture Process Component' FORMAT A25
COLUMN SID HEADING 'Session ID' FORMAT 99999
COLUMN SERIAL# HEADING 'Session|Serial|Number' FORMAT 99999999
COLUMN PROCESS HEADING 'Operating System|Process Number' FORMAT A20
COLUMN PROCESS_NAME HEADING 'Process|Name' FORMAT A7
 
SELECT /*+PARAM('_module_action_old_length',0)*/ ACTION,
       SID,
       SERIAL#,
       PROCESS,
       SUBSTR(PROGRAM,INSTR(PROGRAM,'(')+1,4) PROCESS_NAME
  FROM V$SESSION
  WHERE MODULE ='Streams' AND
        ACTION LIKE '%Capture%';

Your output looks similar to the following:

                                       Session
                                        Serial Operating System     Process
Capture Process Component Session ID    Number Process Number       Name
------------------------- ---------- --------- -------------------- -------
EMDBA$CAP - Capture               74         9 10019                CP01

See Also:

"Capture Process Subcomponents" for information about capture process parallelism

Displaying Change Capture Information About Each Capture Process

The query in this section displays the following information about each capture process in a database:

  • The name of the capture process.

  • The process number CPnn, where nn can include letters and numbers.

  • The session identifier.

  • The serial number of the session.

  • The current state of the capture process.

    See "Capture Process States".

  • The total number of redo entries passed by LogMiner to the capture process for detailed rule evaluation. A capture process converts a redo entry into a message and performs detailed rule evaluation on the message when capture process prefiltering cannot discard the change.

  • The total number of LCRs enqueued since the capture process was last started.

To display this information for each capture process in a database, run the following query:

COLUMN CAPTURE_NAME HEADING 'Capture|Name' FORMAT A7
COLUMN PROCESS_NAME HEADING 'Capture|Process|Number' FORMAT A7
COLUMN SID HEADING 'Session|ID' FORMAT 9999
COLUMN SERIAL# HEADING 'Session|Serial|Number' FORMAT 9999
COLUMN STATE HEADING 'State' FORMAT A20
COLUMN TOTAL_MESSAGES_CAPTURED HEADING 'Redo|Entries|Evaluated|In Detail' FORMAT 9999999
COLUMN TOTAL_MESSAGES_ENQUEUED HEADING 'Total|LCRs|Enqueued' FORMAT 9999999999

SELECT c.CAPTURE_NAME,
       SUBSTR(s.PROGRAM,INSTR(s.PROGRAM,'(')+1,4) PROCESS_NAME, 
       c.SID,
       c.SERIAL#, 
       c.STATE,
       c.TOTAL_MESSAGES_CAPTURED,
       c.TOTAL_MESSAGES_ENQUEUED 
  FROM V$STREAMS_CAPTURE c, V$SESSION s
  WHERE c.SID = s.SID AND
        c.SERIAL# = s.SERIAL#;

Your output looks similar to the following:

                                                          Redo
        Capture         Session                        Entries       Total
Capture Process Session  Serial                      Evaluated        LCRs
Name    Number       ID  Number State                In Detail    Enqueued
------- ------- ------- ------- -------------------- --------- -----------
CAPTURE CP01        954       3 CAPTURING CHANGES      3719085     3389713
_HNS

The number of redo entries scanned can be higher than the number of DML and DDL redo entries captured by a capture process. Only DML and DDL redo entries that satisfy the rule sets of a capture process are captured and enqueued into the capture process queue. Also, the total LCRs enqueued includes LCRs that contain transaction control statements. These row LCRs contain directives such as COMMIT and ROLLBACK. Therefore, the total LCRs enqueued is higher than the number of row changes and DDL changes enqueued by a capture process.


See Also:

"Row LCRs" for more information about transaction control statements

Displaying State Change and Message Creation Time for Each Capture Process

The query in this section displays the following information for each capture process in a database:

  • The name of the capture process

  • The current state of the capture process

    See "Capture Process States".

  • The date and time when the capture process state last changed

  • The date and time when the capture process last created an LCR

To display this information for each capture process in a database, run the following query:

COLUMN CAPTURE_NAME HEADING 'Capture|Name' FORMAT A15
COLUMN STATE HEADING 'State' FORMAT A27
COLUMN STATE_CHANGED HEADING 'State|Change Time'
COLUMN CREATE_MESSAGE HEADING 'Last Message|Create Time'

SELECT CAPTURE_NAME,
       STATE,
       TO_CHAR(STATE_CHANGED_TIME, 'HH24:MI:SS MM/DD/YY') STATE_CHANGED,
       TO_CHAR(CAPTURE_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY') CREATE_MESSAGE
  FROM V$STREAMS_CAPTURE;

Your output looks similar to the following:

Capture                                     State             Last Message
Name            State                       Change Time       Create Time
--------------- --------------------------- ----------------- -----------------
CAPTURE_SIMP    CAPTURING CHANGES           13:24:42 11/08/04 13:24:41 11/08/04

Displaying Elapsed Time Performing Capture Operations for Each Capture Process

The query in this section displays the following information for each capture process in a database:

  • The name of the capture process

  • The elapsed capture time, which is the amount of time (in seconds) spent scanning for changes in the redo log since the capture process was last started

  • The elapsed rule evaluation time, which is the amount of time (in seconds) spent evaluating rules since the capture process was last started

  • The elapsed enqueue time, which is the amount of time (in seconds) spent enqueuing messages since the capture process was last started

  • The elapsed LCR creation time, which is the amount of time (in seconds) spent creating logical change records (LCRs) since the capture process was last started

  • The elapsed pause time, which is the amount of time (in seconds) spent paused for flow control since the capture process was last started


Note:

All times for this query are displayed in seconds. The V$STREAMS_CAPTURE view displays elapsed time in centiseconds by default. A centisecond is one-hundredth of a second. The query in this section divides each elapsed time by one hundred to display the elapsed time in seconds.

To display this information for each capture process in a database, run the following query:

COLUMN CAPTURE_NAME HEADING 'Capture|Name' FORMAT A15
COLUMN ELAPSED_CAPTURE_TIME HEADING 'Elapsed|Capture|Time' FORMAT 99999999.99
COLUMN ELAPSED_RULE_TIME HEADING 'Elapsed|Rule|Evaluation|Time' FORMAT 99999999.99
COLUMN ELAPSED_ENQUEUE_TIME HEADING 'Elapsed|Enqueue|Time' FORMAT 99999999.99
COLUMN ELAPSED_LCR_TIME HEADING 'Elapsed|LCR|Creation|Time' FORMAT 99999999.99
COLUMN ELAPSED_PAUSE_TIME HEADING 'Elapsed|Pause|Time' FORMAT 99999999.99

SELECT CAPTURE_NAME,
       (ELAPSED_CAPTURE_TIME/100) ELAPSED_CAPTURE_TIME,
       (ELAPSED_RULE_TIME/100) ELAPSED_RULE_TIME,
       (ELAPSED_ENQUEUE_TIME/100) ELAPSED_ENQUEUE_TIME,
       (ELAPSED_LCR_TIME/100) ELAPSED_LCR_TIME,
       (ELAPSED_PAUSE_TIME/100) ELAPSED_PAUSE_TIME
  FROM V$STREAMS_CAPTURE;

Your output looks similar to the following:

                                  Elapsed                   Elapsed
                     Elapsed         Rule      Elapsed          LCR      Elapsed
Capture              Capture   Evaluation      Enqueue     Creation        Pause
Name                    Time         Time         Time         Time         Time
--------------- ------------ ------------ ------------ ------------ ------------
STM1$CAP             1213.92          .04        33.84       185.25       600.60

Displaying Information About Each Downstream Capture Process

A downstream capture process is a capture process that runs on a database other than the source database. You can display the following information about each downstream capture process in a database by running the query in this section:

  • The capture process name

  • The source database of the changes captured by the capture process

  • The name of the queue used by the capture process

  • The status of the capture process, which can be ENABLED, DISABLED, or ABORTED

  • Whether the downstream capture process uses a database link to the source database for administrative actions

To display this information about each downstream capture process in a database, run the following query:

COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A15
COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Capture|Process|Queue' FORMAT A15
COLUMN STATUS HEADING 'Capture|Process|Status' FORMAT A15
COLUMN USE_DATABASE_LINK HEADING 'Uses|Database|Link?' FORMAT A8

SELECT CAPTURE_NAME, 
       SOURCE_DATABASE, 
       QUEUE_NAME, 
       STATUS, 
       USE_DATABASE_LINK
   FROM DBA_CAPTURE
   WHERE CAPTURE_TYPE = 'DOWNSTREAM';

Your output looks similar to the following:

Capture                              Capture         Capture         Uses
Process         Source               Process         Process         Database
Name            Database             Queue           Status          Link?
--------------- -------------------- --------------- --------------- --------
STRM03_CAPTURE  DBS1.EXAMPLE.COM     STRM03_QUEUE    ENABLED         YES

In this case, the source database for the capture process is dbs1.example.com, but the local database running the capture process is not dbs1.example.com. Also, the capture process returned by this query uses a database link to the source database to perform administrative actions. The database link name is the same as the global name of the source database, which is dbs1.example.com in this case.

If the status of a capture process is ABORTED, then you can query the ERROR_NUMBER and ERROR_MESSAGE columns in the DBA_CAPTURE data dictionary view to determine the error.
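For example, a query along the following lines retrieves the error details for aborted capture processes (the COLUMN formats are a suggestion, not part of the view definition):

```sql
COLUMN CAPTURE_NAME HEADING 'Capture Name' FORMAT A20
COLUMN ERROR_NUMBER HEADING 'Error|Number' FORMAT 99999999
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A40

SELECT CAPTURE_NAME,
       ERROR_NUMBER,
       ERROR_MESSAGE
  FROM DBA_CAPTURE
  WHERE STATUS = 'ABORTED';
```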


Note:

At the source database for an Oracle Streams downstream capture process, you can query the V$ARCHIVE_DEST_STATUS view to display information about the downstream database. The following columns in the view relate to the downstream database:
  • The TYPE column shows DOWNSTREAM if redo log information is being shipped to a downstream capture database.

  • The DESTINATION column shows the name of the downstream capture database.
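A query along these lines, run at the source database, lists the downstream capture destinations (a sketch; add COLUMN formatting as needed):

```sql
SELECT DEST_ID,
       TYPE,
       DESTINATION,
       STATUS
  FROM V$ARCHIVE_DEST_STATUS
  WHERE TYPE = 'DOWNSTREAM';
```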



Displaying the Registered Redo Log Files for Each Capture Process

You can display information about the archived redo log files that are registered for each capture process in a database by running the query in this section. This query displays information about these files for both local capture processes and downstream capture processes.

The query displays the following information for each registered archived redo log file:

  • The name of a capture process that uses the file

  • The source database of the file

  • The sequence number of the file

  • The name and location of the file at the local site

  • Whether the file contains the beginning of a data dictionary build

  • Whether the file contains the end of a data dictionary build

To display this information about each registered archived redo log file in a database, run the following query:

COLUMN CONSUMER_NAME HEADING 'Capture|Process|Name' FORMAT A15
COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A10
COLUMN SEQUENCE# HEADING 'Sequence|Number' FORMAT 99999
COLUMN NAME HEADING 'Archived Redo Log|File Name' FORMAT A20
COLUMN DICTIONARY_BEGIN HEADING 'Dictionary|Build|Begin' FORMAT A10
COLUMN DICTIONARY_END HEADING 'Dictionary|Build|End' FORMAT A10

SELECT r.CONSUMER_NAME,
       r.SOURCE_DATABASE,
       r.SEQUENCE#, 
       r.NAME, 
       r.DICTIONARY_BEGIN, 
       r.DICTIONARY_END 
  FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
  WHERE r.CONSUMER_NAME = c.CAPTURE_NAME;  

Your output looks similar to the following:

Capture                                                  Dictionary Dictionary
Process         Source     Sequence Archived Redo Log    Build      Build
Name            Database     Number File Name            Begin      End
--------------- ---------- -------- -------------------- ---------- ----------
STRM02_CAPTURE  DBS2.EXAMP       15 /orc/dbs/log/arch2_1 NO         NO
                LE.COM              _15_478347508.arc
STRM02_CAPTURE  DBS2.EXAMP       16 /orc/dbs/log/arch2_1 NO         NO
                LE.COM              _16_478347508.arc 
STRM03_CAPTURE  DBS1.EXAMP       45 /remote_logs/arch1_1 YES        YES
                LE.COM              _45_478347335.arc
STRM03_CAPTURE  DBS1.EXAMP       46 /remote_logs/arch1_1 NO         NO
                LE.COM              _46_478347335.arc
STRM03_CAPTURE  DBS1.EXAMP       47 /remote_logs/arch1_1 NO         NO
                LE.COM              _47_478347335.arc

Assume that this query was run at the dbs2.example.com database, and that strm02_capture is a local capture process, and strm03_capture is a downstream capture process. The source database for the strm03_capture downstream capture process is dbs1.example.com. This query shows that there are two registered archived redo log files for strm02_capture and three registered archived redo log files for strm03_capture. This query shows the name and location of each of these files in the local file system.

Displaying the Redo Log Files that Are Required by Each Capture Process

A capture process needs the redo log file that includes the required checkpoint SCN, and all subsequent redo log files. You can query the REQUIRED_CHECKPOINT_SCN column in the DBA_CAPTURE data dictionary view to determine the required checkpoint SCN for a capture process. Redo log files before the redo log file that contains the required checkpoint SCN are no longer needed by the capture process. These redo log files can be stored offline if they are no longer needed for any other purpose. If you reset the start SCN for a capture process to a lower value in the future, then these redo log files might be needed.

The query displays the following information for each required archived redo log file:

  • The name of a capture process that uses the file

  • The source database of the file

  • The sequence number of the file

  • The name and location of the required redo log file at the local site

To display this information about each required archived redo log file in a database, run the following query:

COLUMN CONSUMER_NAME HEADING 'Capture|Process|Name' FORMAT A15
COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A10
COLUMN SEQUENCE# HEADING 'Sequence|Number' FORMAT 99999
COLUMN NAME HEADING 'Required|Archived Redo Log|File Name' FORMAT A40

SELECT r.CONSUMER_NAME,
       r.SOURCE_DATABASE,
       r.SEQUENCE#, 
       r.NAME 
  FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
  WHERE r.CONSUMER_NAME =  c.CAPTURE_NAME AND
        r.NEXT_SCN      >= c.REQUIRED_CHECKPOINT_SCN;  

Your output looks similar to the following:

Capture                             Required
Process         Source     Sequence Archived Redo Log
Name            Database     Number File Name
--------------- ---------- -------- ----------------------------------------
STRM02_CAPTURE  DBS2.EXAMP       16 /orc/dbs/log/arch2_1_16_478347508.arc
                LE.COM
STRM03_CAPTURE  DBS1.EXAMP       47 /remote_logs/arch1_1_47_478347335.arc
                LE.COM

Displaying SCN Values for Each Redo Log File Used by Each Capture Process

You can display information about the SCN values for archived redo log files that are registered for each capture process in a database by running the query in this section. This query displays the SCN values for these files for both local capture processes and downstream capture processes. This query also identifies redo log files that are no longer needed by any capture process at the local database.

The query displays the following information for each registered archived redo log file:

  • The capture process name of a capture process that uses the file

  • The name and location of the file at the local site

  • The lowest SCN value for the information contained in the redo log file

  • The lowest SCN value for the next redo log file in the sequence

  • Whether the redo log file is purgeable

To display this information about each registered archived redo log file in a database, run the following query:

COLUMN CONSUMER_NAME HEADING 'Capture|Process|Name' FORMAT A15
COLUMN NAME HEADING 'Archived Redo Log|File Name' FORMAT A25
COLUMN FIRST_SCN HEADING 'First SCN' FORMAT 99999999999
COLUMN NEXT_SCN HEADING 'Next SCN' FORMAT 99999999999
COLUMN PURGEABLE HEADING 'Purgeable?' FORMAT A10
 
SELECT r.CONSUMER_NAME,
       r.NAME, 
       r.FIRST_SCN,
       r.NEXT_SCN,
       r.PURGEABLE 
  FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
  WHERE r.CONSUMER_NAME = c.CAPTURE_NAME;

Your output looks similar to the following:

Capture
Process         Archived Redo Log
Name            File Name                    First SCN     Next SCN Purgeable?
--------------- ------------------------- ------------ ------------ ----------
CAPTURE_SIMP    /private1/ARCHIVE_LOGS/1_       509686       549100 YES
                3_502628294.dbf
 
CAPTURE_SIMP    /private1/ARCHIVE_LOGS/1_       549100       587296 YES
                4_502628294.dbf
 
CAPTURE_SIMP    /private1/ARCHIVE_LOGS/1_       587296       623107 NO
                5_502628294.dbf

The redo log files with YES for Purgeable? for all capture processes will never be needed by any capture process at the local database. These redo log files can be removed without affecting any existing capture process at the local database. The redo log files with NO for Purgeable? for one or more capture processes must be retained.

Displaying the Last Archived Redo Entry Available to Each Capture Process

For a local capture process, the last archived redo entry available is the last entry from the online redo log flushed to an archived log file. For a downstream capture process, the last archived redo entry available is the redo entry with the most recent system change number (SCN) in the last archived log file added to the LogMiner session used by the capture process.

You can display the following information about the last redo entry that was made available to each capture process by running the query in this section:

  • The name of the capture process

  • The identification number of the LogMiner session used by the capture process

  • The highest SCN available for the capture process

    For local capture, this SCN is the last redo SCN flushed to the log files. For downstream capture, this SCN is the last SCN added to LogMiner through the archive logs.

  • The timestamp of the highest SCN available for the capture process

    For local capture, this timestamp is the time the SCN was written to the log file. For downstream capture, this timestamp is the time of the most recent archive log (containing the most recent SCN) available to LogMiner.

The information displayed by this query is valid only for an enabled capture process.

Run the following query to display this information for each capture process:

COLUMN CAPTURE_NAME HEADING 'Capture|Name' FORMAT A20
COLUMN LOGMINER_ID HEADING 'LogMiner ID' FORMAT 9999
COLUMN AVAILABLE_MESSAGE_NUMBER HEADING 'Highest|Available SCN' FORMAT 9999999999
COLUMN AVAILABLE_MESSAGE_CREATE_TIME HEADING 'Time of|Highest|Available SCN'

SELECT CAPTURE_NAME,
       LOGMINER_ID,
       AVAILABLE_MESSAGE_NUMBER,
       TO_CHAR(AVAILABLE_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY') 
         AVAILABLE_MESSAGE_CREATE_TIME
  FROM V$STREAMS_CAPTURE;

Your output looks similar to the following:

                                               Time of
Capture                                Highest Highest
Name                 LogMiner ID Available SCN Available SCN
-------------------- ----------- ------------- -----------------
DB1$CAP                        1       1506751 09:46:11 06/29/09

Listing the Parameter Settings for Each Capture Process

The following query displays the current setting for each capture process parameter for each capture process in a database:

COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A25
COLUMN PARAMETER HEADING 'Parameter' FORMAT A30
COLUMN VALUE HEADING 'Value' FORMAT A10
COLUMN SET_BY_USER HEADING 'Set by|User?' FORMAT A10

SELECT CAPTURE_NAME,
       PARAMETER, 
       VALUE,
       SET_BY_USER  
  FROM DBA_CAPTURE_PARAMETERS;

Your output looks similar to the following:

Capture
Process                                                             Set by
Name                      Parameter                      Value      User?
------------------------- ------------------------------ ---------- ----------
DA$CAP                    CAPTURE_IDKEY_OBJECTS          N          NO
DA$CAP                    CAPTURE_SEQUENCE_NEXTVAL       N          NO
DA$CAP                    DISABLE_ON_LIMIT               N          NO
DA$CAP                    DOWNSTREAM_REAL_TIME_MINE      Y          NO
DA$CAP                    EXCLUDETRANS                              NO
DA$CAP                    EXCLUDEUSER                               NO
DA$CAP                    EXCLUDEUSERID                             NO
DA$CAP                    GETAPPLOPS                     Y          NO
DA$CAP                    GETREPLICATES                  N          NO
DA$CAP                    IGNORE_TRANSACTION                        NO
DA$CAP                    IGNORE_UNSUPPORTED_TABLE       *          NO
DA$CAP                    MAXIMUM_SCN                    INFINITE   NO
DA$CAP                    MAX_SGA_SIZE                   INFINITE   NO
DA$CAP                    MERGE_THRESHOLD                60         NO
DA$CAP                    MESSAGE_LIMIT                  INFINITE   NO
DA$CAP                    MESSAGE_TRACKING_FREQUENCY     2000000    NO
DA$CAP                    PARALLELISM                    1          NO
DA$CAP                    SKIP_AUTOFILTERED_TABLE_DDL    Y          NO
DA$CAP                    SPLIT_THRESHOLD                1800       NO
DA$CAP                    STARTUP_SECONDS                0          NO
DA$CAP                    TIME_LIMIT                     INFINITE   NO
DA$CAP                    TRACE_LEVEL                    0          NO
DA$CAP                    WRITE_ALERT_LOG                Y          NO
DA$CAP                    XOUT_CLIENT_EXISTS             N          NO

Note:

If the Set by User? column is NO for a parameter, then the parameter is set to its default value. If the Set by User? column is YES for a parameter, then the parameter was set by a user and might or might not be set to its default value.
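For example, a parameter that was set by a user can be returned to its default value by passing a NULL value to the SET_PARAMETER procedure in the DBMS_CAPTURE_ADM package. The capture process name strm01_capture below is a placeholder:

```sql
BEGIN
  -- Passing a NULL value resets the parameter to its default
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'strm01_capture',
    parameter    => 'parallelism',
    value        => NULL);
END;
/
```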

Determining the Applied SCN for All Capture Processes in a Database

The applied system change number (SCN) for a capture process is the SCN of the most recent message dequeued by the relevant apply processes. All changes below this applied SCN have been dequeued by all apply processes that apply changes captured by the capture process.

To display the applied SCN for all of the capture processes in a database, run the following query:

COLUMN CAPTURE_NAME HEADING 'Capture Process Name' FORMAT A30
COLUMN APPLIED_SCN HEADING 'Applied SCN' FORMAT 99999999999

SELECT CAPTURE_NAME, APPLIED_SCN FROM DBA_CAPTURE;

Your output looks similar to the following:

Capture Process Name           Applied SCN
------------------------------ -----------
CAPTURE_EMP                         177154

Determining Redo Log Scanning Latency for Each Capture Process

You can find the following information about each capture process by running the query in this section:

  • The redo log scanning latency, which specifies the number of seconds between the creation time of the most recent redo log entry scanned by a capture process and the current time. This number might be relatively large immediately after you start a capture process.

  • The seconds since last recorded status, which is the number of seconds since a capture process last recorded its status.

  • The current capture process time, which is the latest time when the capture process recorded its status.

  • The message creation time, which is the time when the data manipulation language (DML) or data definition language (DDL) change generated the redo data at the source database for the most recently captured LCR.

The information displayed by this query is valid only for an enabled capture process.

Run the following query to determine the redo scanning latency for each capture process:

COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A10
COLUMN LATENCY_SECONDS HEADING 'Latency|in|Seconds' FORMAT 999999
COLUMN LAST_STATUS HEADING 'Seconds Since|Last Status' FORMAT 999999
COLUMN CAPTURE_TIME HEADING 'Current|Process|Time'
COLUMN CREATE_TIME HEADING 'Message|Creation Time' FORMAT 999999

SELECT CAPTURE_NAME,
       ((SYSDATE - CAPTURE_MESSAGE_CREATE_TIME)*86400) LATENCY_SECONDS,
       ((SYSDATE - CAPTURE_TIME)*86400) LAST_STATUS,
       TO_CHAR(CAPTURE_TIME, 'HH24:MI:SS MM/DD/YY') CAPTURE_TIME,       
       TO_CHAR(CAPTURE_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY') CREATE_TIME
  FROM V$STREAMS_CAPTURE;

Your output looks similar to the following:

Capture    Latency               Current
Process         in Seconds Since Process           Message
Name       Seconds   Last Status Time              Creation Time
---------- ------- ------------- ----------------- -----------------
DA$CAP           1             1 12:33:39 07/14/10 12:33:39 07/14/10

The "Latency in Seconds" returned by this query is the difference between the current time (SYSDATE) and the "Message Creation Time." The "Seconds Since Last Status" returned by this query is the difference between the current time (SYSDATE) and the "Current Process Time."

Determining Message Enqueuing Latency for Each Capture Process

You can find the following information about each capture process by running the query in this section:

  • The message enqueuing latency, which specifies the number of seconds between when an entry was recorded in the redo log at the source database and when the message was enqueued by the capture process

  • The message creation time, which is the time when the data manipulation language (DML) or data definition language (DDL) change generated the redo data at the source database for the most recently enqueued message

  • The enqueue time, which is when the capture process enqueued the message into its queue

  • The message number of the enqueued message

The information displayed by this query is valid only for an enabled capture process.

Run the following query to determine the message enqueuing latency for each capture process:

COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A10
COLUMN LATENCY_SECONDS HEADING 'Latency|in|Seconds' FORMAT 999999
COLUMN CREATE_TIME HEADING 'Message Creation|Time' FORMAT A20
COLUMN ENQUEUE_TIME HEADING 'Enqueue Time' FORMAT A20
COLUMN ENQUEUE_MESSAGE_NUMBER HEADING 'Message|Number' FORMAT 9999999999

SELECT CAPTURE_NAME,
       (ENQUEUE_TIME-ENQUEUE_MESSAGE_CREATE_TIME)*86400 LATENCY_SECONDS, 
       TO_CHAR(ENQUEUE_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY') CREATE_TIME,
       TO_CHAR(ENQUEUE_TIME, 'HH24:MI:SS MM/DD/YY') ENQUEUE_TIME,
       ENQUEUE_MESSAGE_NUMBER
  FROM V$STREAMS_CAPTURE;

Your output looks similar to the following:

Capture    Latency
Process         in Message Creation                            Message
Name       Seconds Time                 Enqueue Time          Number
---------- ------- -------------------- -------------------- -------
CAPTURE          0 10:56:51 03/01/02    10:56:51 03/01/02     253962

The "Latency in Seconds" returned by this query is the difference between the "Enqueue Time" and the "Message Creation Time."

Displaying Information About Rule Evaluations for Each Capture Process

You can display the following information about rule evaluation for each capture process by running the query in this section:

  • The name of the capture process.

  • The number of changes discarded during prefiltering since the capture process was last started. The capture process determined that these changes definitely did not satisfy the capture process rule sets during prefiltering.

  • The number of changes kept during prefiltering since the capture process was last started. The capture process determined that these changes definitely satisfied the capture process rule sets during prefiltering. Such changes are converted into LCRs and enqueued into the capture process queue.

  • The total number of prefilter evaluations since the capture process was last started.

  • The number of undecided changes after prefiltering since the capture process was last started. These changes might or might not satisfy the capture process rule sets. Some of these changes might be filtered out after prefiltering without requiring full evaluation. Other changes require full evaluation to determine whether they satisfy the capture process rule sets.

  • The number of full evaluations since the capture process was last started. Full evaluations can be expensive. Therefore, capture process performance is best when this number is relatively low.

The information displayed by this query is valid only for an enabled capture process.

Run the following query to display this information for each capture process:

COLUMN CAPTURE_NAME HEADING 'Capture|Name' FORMAT A15
COLUMN TOTAL_PREFILTER_DISCARDED HEADING 'Prefilter|Changes|Discarded' 
  FORMAT 9999999999
COLUMN TOTAL_PREFILTER_KEPT HEADING 'Prefilter|Changes|Kept' FORMAT 9999999999
COLUMN TOTAL_PREFILTER_EVALUATIONS HEADING 'Prefilter|Evaluations' 
  FORMAT 9999999999
COLUMN UNDECIDED HEADING 'Undecided|After|Prefilter' FORMAT 9999999999
COLUMN TOTAL_FULL_EVALUATIONS HEADING 'Full|Evaluations' FORMAT 9999999999

SELECT CAPTURE_NAME,
       TOTAL_PREFILTER_DISCARDED,
       TOTAL_PREFILTER_KEPT,
       TOTAL_PREFILTER_EVALUATIONS,
       (TOTAL_PREFILTER_EVALUATIONS - 
         (TOTAL_PREFILTER_KEPT + TOTAL_PREFILTER_DISCARDED)) UNDECIDED,
       TOTAL_FULL_EVALUATIONS
  FROM V$STREAMS_CAPTURE;

Your output looks similar to the following:

                 Prefilter   Prefilter               Undecided
Capture            Changes     Changes   Prefilter       After        Full
Name             Discarded        Kept Evaluations   Prefilter Evaluations
--------------- ---------- ----------- ----------- ----------- -----------
CAPTURE_HNS         927409     3271491     4198900           0           9

The total number of prefilter evaluations equals the sum of the prefilter changes discarded, the prefilter changes kept, and the undecided changes.

Determining Which Capture Processes Use Combined Capture and Apply

A combined capture and apply environment is efficient because the capture process acts as the propagation sender, and the buffered queue is optimized to make replication of changes more efficient.

When a capture process uses combined capture and apply, the OPTIMIZATION column in the V$STREAMS_CAPTURE data dictionary view is greater than zero. When a capture process does not use combined capture and apply, the OPTIMIZATION column is 0 (zero).

To determine whether a capture process uses combined capture and apply, run the following query:

COLUMN CAPTURE_NAME HEADING 'Capture Name' FORMAT A30
COLUMN OPTIMIZATION HEADING 'Optimized?' FORMAT A10

SELECT CAPTURE_NAME, 
       DECODE(OPTIMIZATION,
                0, 'No',
                   'Yes') OPTIMIZATION
  FROM V$STREAMS_CAPTURE;

Your output looks similar to the following:

Capture Name                   Optimized?
------------------------------ ----------
CAPTURE_HNS                    Yes

This output indicates that the capture_hns capture process uses combined capture and apply.

Displaying Information About Split and Merge Operations

Splitting and merging an Oracle Streams destination is useful under the following conditions:

  • A single capture process captures changes that are sent to two or more apply processes.

  • An apply process stops accepting changes captured by the capture process. The apply process might stop accepting changes if, for example, the apply process is disabled, the database that contains the apply process goes down, there is a network problem, the computer system running the database that contains the apply process goes down, or for some other reason.

When these conditions are met, it is best to split the problem destination stream off from the other destination streams to avoid degraded performance. When the problem is corrected, the destination stream that was split off can be merged back into the other destination streams for the capture process.

By default, split and merge operations are performed automatically when Oracle Streams detects a problem destination. Two capture process parameters, split_threshold and merge_threshold, control automatic split and merge operations.
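For example, these thresholds can be adjusted with the SET_PARAMETER procedure in the DBMS_CAPTURE_ADM package. The capture process name and threshold values below are illustrative only:

```sql
BEGIN
  -- Split off a problem destination after 900 seconds of inactivity
  -- (the default split_threshold is 1800 seconds)
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'strm01_capture',
    parameter    => 'split_threshold',
    value        => '900');
  -- Merge the stream back when its lag drops below 30 seconds
  -- (the default merge_threshold is 60 seconds)
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'strm01_capture',
    parameter    => 'merge_threshold',
    value        => '30');
END;
/
```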

The following sections contain queries that you can run to monitor current and past automatic split and merge operations:


Note:

The queries in these sections show information about automatic split and merge operations only. They do not show information about operations performed manually using the SPLIT_STREAMS procedure in the DBMS_STREAMS_ADM package.


Displaying the Names of the Original and Cloned Oracle Streams Components

The query in this section shows the following information about the Oracle Streams components that are involved in a split and merge operation:

  • The name of the original capture process from which a destination stream was split off

  • The name of the cloned capture process that captures changes for the problem destination

  • The name of the original propagation or apply process that was part of the stream that was split off

    In a multiple-database configuration, a propagation sends changes from the capture process's queue to the apply process's queue, and a propagation is shown in this query. In a single-database configuration, an apply process dequeues changes from the queue that is used by the capture process, and an apply process is shown in this query.

  • The name of the cloned propagation or apply process that processes changes for the problem destination

  • The type of the Oracle Streams component that receives changes from the capture process, either PROPAGATION or APPLY

Run the following query to display this information:

COLUMN ORIGINAL_CAPTURE_NAME HEADING 'Original|Capture|Process' FORMAT A15
COLUMN CLONED_CAPTURE_NAME HEADING 'Cloned|Capture|Process' FORMAT A15
COLUMN ORIGINAL_STREAMS_NAME HEADING 'Original|Streams|Name' FORMAT A15
COLUMN CLONED_STREAMS_NAME HEADING 'Cloned|Streams|Name' FORMAT A15
COLUMN STREAMS_TYPE HEADING 'Streams|Type' FORMAT A11
 
SELECT ORIGINAL_CAPTURE_NAME,
       CLONED_CAPTURE_NAME,
       ORIGINAL_STREAMS_NAME,
       CLONED_STREAMS_NAME,
       STREAMS_TYPE
 FROM DBA_STREAMS_SPLIT_MERGE;

Your output looks similar to the following:

Original        Cloned          Original        Cloned
Capture         Capture         Streams         Streams         Streams
Process         Process         Name            Name            Type
--------------- --------------- --------------- --------------- -----------
DB$CAP          CLONED$_DB$CAP_ PROPAGATION$_17 CLONED$_PROPAGA PROPAGATION
                1                               TION$_17_2

See Also:

Oracle Streams Replication Administrator's Guide for more information about split and merge operations

Displaying the Actions and Thresholds for Split and Merge Operations

The query in this section shows the following information about the actions performed by the split and merge operation and the thresholds that were set for splitting and merging a problem destination:

  • The name of the original capture process from which a destination stream was split off

  • The script status of the split or merge job, one of GENERATING, NOT EXECUTED, EXECUTING, EXECUTED, or ERROR

  • The type of action performed by the job, one of SPLIT, MERGE, or MONITOR

    When a SPLIT job determines that a split must be performed, a row with SPLIT action type is inserted into the DBA_STREAMS_SPLIT_MERGE view.

    When the split operation is complete, the SPLIT action type row is copied to the DBA_STREAMS_SPLIT_MERGE_HIST view, and a MERGE job is created. A row with MERGE action type is inserted into the DBA_STREAMS_SPLIT_MERGE view. When the merge operation is complete, the MERGE action type row is moved to the DBA_STREAMS_SPLIT_MERGE_HIST view, and the SPLIT action type row, which was copied to the history view earlier, is deleted from the DBA_STREAMS_SPLIT_MERGE view.

    Each original capture process has a SPLIT job that monitors all of its destinations. This type of job displays the MONITOR action type in rows in the DBA_STREAMS_SPLIT_MERGE view. MONITOR action type rows are moved to the DBA_STREAMS_SPLIT_MERGE_HIST view only if the SPLIT job is disabled. A SPLIT job can be disabled either by setting the split_threshold capture process parameter to INFINITE or by dropping the capture process.

  • The capture process parameter threshold set for the operation, in seconds

    For SPLIT jobs, the threshold is set by the split_threshold capture process parameter. For MERGE jobs, the threshold is set by the merge_threshold capture process parameter.

  • The status of the action

    For SPLIT actions, the status can be SPLITTING, SPLIT DONE, or ERROR. The SPLITTING status indicates that the split operation is being performed. The SPLIT DONE status indicates that the split operation is complete. The ERROR status indicates that an error was returned during the split operation.

    For MERGE actions, the status can be NOTHING TO MERGE, MERGING, MERGE DONE, or ERROR. The NOTHING TO MERGE status indicates that a split was performed but the split stream is not yet ready to merge. The MERGING status indicates that the merge operation is being performed. The MERGE DONE status indicates that the merge operation is complete. The ERROR status indicates that an error was returned during the merge operation.

    For MONITOR actions, the status can be any of the SPLIT and MERGE status values. In addition, a MONITOR action can show NOTHING TO SPLIT or NONSPLITTABLE for its status. The NOTHING TO SPLIT status indicates that the streams flowing from the capture process are being processed at all destinations, and no stream should be split. The NONSPLITTABLE status indicates that it is not possible to split the stream for the capture process. A NONSPLITTABLE status is possible in the following cases:

    • The capture process is disabled or aborted.

    • The capture process's queue has at least one publisher in addition to the capture process. The additional publisher can be another capture process or a propagation that sends messages to the queue.

    • The capture process has only one destination. Split and merge operations are possible only when there are two or more destinations for the changes captured by the capture process.

  • The date and time when the job status was last updated

Run the following query to display this information:

COLUMN ORIGINAL_CAPTURE_NAME HEADING 'Original|Capture|Process' FORMAT A10
COLUMN SCRIPT_STATUS HEADING 'Script|Status' FORMAT A12
COLUMN ACTION_TYPE HEADING 'Action|Type' FORMAT A7
COLUMN ACTION_THRESHOLD HEADING 'Action|Threshold' FORMAT A15
COLUMN STATUS HEADING 'Status' FORMAT A16
COLUMN STATUS_UPDATE_TIME HEADING 'Status|Update|Time' FORMAT A15
 
SELECT ORIGINAL_CAPTURE_NAME,
       SCRIPT_STATUS,
       ACTION_TYPE,
       ACTION_THRESHOLD,
       STATUS,
       STATUS_UPDATE_TIME
 FROM DBA_STREAMS_SPLIT_MERGE
 ORDER BY STATUS_UPDATE_TIME DESC;

Your output looks similar to the following:

Original                                                         Status
Capture    Script       Action  Action                           Update
Process    Status       Type    Threshold       Status           Time
---------- ------------ ------- --------------- ---------------- ---------------
DB$CAP     EXECUTED     SPLIT   1800            SPLIT DONE       31-MAR-09 01.31
                                                                 .37.133788 PM

Displaying the Lag Time of the Cloned Capture Process

After a stream is split off from a capture process for a problem destination, you must correct the problem at the destination and ensure that the cloned capture process is enabled. When the cloned capture process is sending changes to the problem destination, and the apply process at the problem destination is applying these changes, an Oracle Scheduler job runs the MERGE_STREAMS_JOB procedure according to its schedule.

The MERGE_STREAMS_JOB procedure queries the CAPTURE_MESSAGE_CREATE_TIME in the GV$STREAMS_CAPTURE view. When the difference between CAPTURE_MESSAGE_CREATE_TIME of the cloned capture process and the original capture process is less than or equal to the value of the merge_threshold capture process parameter, the MERGE_STREAMS_JOB procedure determines that the streams are ready to merge. The MERGE_STREAMS_JOB procedure runs the MERGE_STREAMS procedure automatically to merge the streams.
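You can inspect the value that the MERGE_STREAMS_JOB procedure compares by querying the GV$STREAMS_CAPTURE view directly. The following query is a sketch of such a check; the difference between the times reported for the original and cloned capture processes approximates the lag that the job evaluates:

COLUMN CAPTURE_NAME HEADING 'Capture Process' FORMAT A20

SELECT CAPTURE_NAME,
       CAPTURE_MESSAGE_CREATE_TIME
  FROM GV$STREAMS_CAPTURE;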

The LAG column in the DBA_STREAMS_SPLIT_MERGE view tracks the time in seconds that the cloned capture process lags behind the original capture process. The following query displays the lag time:

COLUMN ORIGINAL_CAPTURE_NAME HEADING 'Original Capture Process' FORMAT A25
COLUMN CLONED_CAPTURE_NAME HEADING 'Cloned Capture Process' FORMAT A25
COLUMN LAG HEADING 'Lag' FORMAT 999999999999999
 
SELECT ORIGINAL_CAPTURE_NAME,
       CLONED_CAPTURE_NAME,
       LAG
 FROM DBA_STREAMS_SPLIT_MERGE;

Your output looks similar to the following:

Original Capture Process  Cloned Capture Process                 Lag
------------------------- ------------------------- ----------------
DB$CAP                    CLONED$_DB$CAP_1                       526

When the MERGE_STREAMS_JOB runs and the lag time is less than or equal to the value of the merge_threshold capture process parameter, the merge operation begins.


See Also:

Oracle Streams Replication Administrator's Guide for more information about split and merge operations

Displaying Information About the Split and Merge Jobs

The query in this section shows the following information about split and merge jobs:

  • The name of the original capture process from which a destination stream was split off

  • The owner of the job

  • The name of the job

  • The current state of the job, one of DISABLED, RETRY SCHEDULED, SCHEDULED, RUNNING, COMPLETED, BROKEN, FAILED, REMOTE, SUCCEEDED, or CHAIN_STALLED

    See Oracle Database Administrator's Guide for information about these job states.

  • The date and time when the job will run next

Run the following query to display this information:

COLUMN ORIGINAL_CAPTURE_NAME HEADING 'Original|Capture|Process' FORMAT A10
COLUMN JOB_OWNER HEADING 'Job Owner' FORMAT A10
COLUMN JOB_NAME HEADING 'Job Name' FORMAT A15
COLUMN JOB_STATE HEADING 'Job State' FORMAT A15
COLUMN JOB_NEXT_RUN_DATE HEADING 'Job Next|Run Date' FORMAT A20
 
SELECT ORIGINAL_CAPTURE_NAME,
       JOB_OWNER,
       JOB_NAME,
       JOB_STATE,
       JOB_NEXT_RUN_DATE
 FROM DBA_STREAMS_SPLIT_MERGE;

Your output looks similar to the following:

Original
Capture                                               Job Next
Process    Job Owner  Job Name        Job State       Run Date
---------- ---------- --------------- --------------- --------------------
DB$CAP     SYS        STREAMS_SPLITJO SCHEDULED       01-APR-09 01.14.55.0
                      B$_3                            00000 PM -07:00
DB$CAP     SYS        STREAMS_MERGEJO SCHEDULED       01-APR-09 01.17.08.0
                      B$_6                            00000 PM -07:00

See Also:

Oracle Streams Replication Administrator's Guide for more information about split and merge operations

Displaying Information About Past Split and Merge Operations

The query in this section shows the following historical information about split and merge operations that were performed in the past:

  • The name of the original capture process from which a destination stream was split off

  • The script status of the split or merge job

  • The type of action performed by the job, either SPLIT or MERGE

  • The status of the action performed by the job

    See "Displaying the Actions and Thresholds for Split and Merge Operations" for information about the status values.

  • The owner of the job

  • The name of the job

  • The recoverable script ID

Run the following query to display this information:

COLUMN ORIGINAL_CAPTURE_NAME HEADING 'Original|Capture|Process' FORMAT A8
COLUMN SCRIPT_STATUS HEADING 'Script|Status' FORMAT A12
COLUMN ACTION_TYPE HEADING 'Action|Type' FORMAT A8
COLUMN STATUS HEADING 'Status' FORMAT A10
COLUMN JOB_OWNER HEADING 'Job Owner' FORMAT A10
COLUMN JOB_NAME HEADING 'Job Name' FORMAT A10
COLUMN RECOVERABLE_SCRIPT_ID HEADING 'Recoverable|Script ID' FORMAT A15
 
SELECT ORIGINAL_CAPTURE_NAME,
       SCRIPT_STATUS,
       ACTION_TYPE,
       STATUS,
       JOB_OWNER,
       JOB_NAME,
       RECOVERABLE_SCRIPT_ID
 FROM DBA_STREAMS_SPLIT_MERGE_HIST;

Your output looks similar to the following:

Original
Capture  Script       Action                                    Recoverable
Process  Status       Type     Status     Job Owner  Job Name   Script ID
-------- ------------ -------- ---------- ---------- ---------- ---------------
DB1$CAP  EXECUTED     SPLIT    SPLIT DONE SYS        STREAMS_SP 6E5C6C49CDB5798
                                                     LITJOB$_9  3E040578C891704
                                                                87
 
DB1$CAP  EXECUTED     MERGE    MERGE DONE SYS        STREAMS_ME 6E5BA57554F1C4C
                                                     RGEJOB$_12 3E040578C89170A
                                                                1F

See Also:

Oracle Streams Replication Administrator's Guide for more information about split and merge operations

Monitoring Supplemental Logging

The following sections contain queries that you can run to monitor supplemental logging at a source database:

The total supplemental logging at a database is determined by the results shown in all three of the queries in these sections combined. For example, supplemental logging can be enabled for columns in a table even if no results for the table are returned by the query in the "Displaying Supplemental Log Groups at a Source Database" section. That is, supplemental logging can be enabled for the table if database supplemental logging is enabled or if the table is in a schema for which supplemental logging was enabled during preparation for instantiation.

Supplemental logging places additional column data into a redo log when an operation is performed. A capture process captures this additional information and places it in LCRs. An apply process that applies these captured LCRs might need this additional information to schedule or apply changes correctly.

Displaying Supplemental Log Groups at a Source Database

To check whether one or more log groups are specified for the table at the source database, run the following query:

COLUMN LOG_GROUP_NAME HEADING 'Log Group' FORMAT A20
COLUMN TABLE_NAME HEADING 'Table' FORMAT A15
COLUMN ALWAYS HEADING 'Conditional or|Unconditional' FORMAT A14
COLUMN LOG_GROUP_TYPE HEADING 'Type of Log Group' FORMAT A20

SELECT 
    LOG_GROUP_NAME, 
    TABLE_NAME, 
    DECODE(ALWAYS,
             'ALWAYS', 'Unconditional',
             'CONDITIONAL', 'Conditional') ALWAYS,
    LOG_GROUP_TYPE
  FROM DBA_LOG_GROUPS;

Your output looks similar to the following:

                                     Conditional or
Log Group            Table           Unconditional  Type of Log Group
-------------------- --------------- -------------- --------------------
LOG_GROUP_DEP_PK     DEPARTMENTS     Unconditional  USER LOG GROUP
SYS_C002105          REGIONS         Unconditional  PRIMARY KEY LOGGING
SYS_C002106          REGIONS         Conditional    FOREIGN KEY LOGGING
SYS_C002110          LOCATIONS       Unconditional  ALL COLUMN LOGGING
SYS_C002111          COUNTRIES       Conditional    ALL COLUMN LOGGING
LOG_GROUP_JOBS_CR    JOBS            Conditional    USER LOG GROUP

The output for the type of log group shows how the log group was created:

  • If the output is USER LOG GROUP, then the log group was created using the ADD SUPPLEMENTAL LOG GROUP clause of the ALTER TABLE statement.

  • Otherwise, the log group was created using the ADD SUPPLEMENTAL LOG DATA clause of the ALTER TABLE statement.
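For reference, the following illustrative statements show the two clauses, using sample tables from the hr schema:

ALTER TABLE hr.departments ADD SUPPLEMENTAL LOG GROUP log_group_dep_pk
  (department_id) ALWAYS;

ALTER TABLE hr.regions ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

The first statement creates an unconditional USER LOG GROUP; the second creates a PRIMARY KEY LOGGING log group with a system-generated name.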

If the type of log group is USER LOG GROUP, then you can list the columns in the log group by querying the DBA_LOG_GROUP_COLUMNS data dictionary view.
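For example, the following query is one way to list the columns in user-created log groups for tables owned by hr:

COLUMN LOG_GROUP_NAME HEADING 'Log Group' FORMAT A20
COLUMN TABLE_NAME HEADING 'Table' FORMAT A15
COLUMN COLUMN_NAME HEADING 'Column' FORMAT A15
COLUMN POSITION HEADING 'Position' FORMAT 9999

SELECT LOG_GROUP_NAME, TABLE_NAME, COLUMN_NAME, POSITION
  FROM DBA_LOG_GROUP_COLUMNS
  WHERE OWNER = 'HR';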


Note:

If the type of log group is not USER LOG GROUP, then the DBA_LOG_GROUP_COLUMNS data dictionary view does not contain information about the columns in the log group. Instead, Oracle supplementally logs the correct columns when an operation is performed on the table. For example, if the type of log group is PRIMARY KEY LOGGING, then Oracle logs the current primary key column(s) when a change is performed on the table.

Displaying Database Supplemental Logging Specifications

To display the database supplemental logging specifications, query the V$DATABASE dynamic performance view, as in the following example:

COLUMN log_min HEADING 'Minimum|Supplemental|Logging?' FORMAT A12
COLUMN log_pk HEADING 'Primary Key|Supplemental|Logging?' FORMAT A12
COLUMN log_fk HEADING 'Foreign Key|Supplemental|Logging?' FORMAT A12
COLUMN log_ui HEADING 'Unique|Supplemental|Logging?' FORMAT A12
COLUMN log_all HEADING 'All Columns|Supplemental|Logging?' FORMAT A12

SELECT SUPPLEMENTAL_LOG_DATA_MIN log_min, 
       SUPPLEMENTAL_LOG_DATA_PK log_pk, 
       SUPPLEMENTAL_LOG_DATA_FK log_fk,
       SUPPLEMENTAL_LOG_DATA_UI log_ui,
       SUPPLEMENTAL_LOG_DATA_ALL log_all
  FROM V$DATABASE;  
  

Your output looks similar to the following:

Minimum      Primary Key  Foreign Key  Unique        All Columns
Supplemental Supplemental Supplemental Supplemental  Supplemental
Logging?     Logging?     Logging?     Logging?      Logging?
------------ ------------ ------------ ------------- ------------
YES          YES          YES          YES           NO

These results show that minimum, primary key, foreign key, and unique key columns are being supplementally logged for all of the tables in the database. Because unique key columns are supplementally logged, bitmap index columns also are supplementally logged. However, supplemental logging is not enabled for all columns.

Displaying Supplemental Logging Specified During Preparation for Instantiation

Supplemental logging can be enabled when database objects are prepared for instantiation using one of the three procedures in the DBMS_CAPTURE_ADM package. A data dictionary view displays the supplemental logging enabled by each of these procedures: PREPARE_TABLE_INSTANTIATION, PREPARE_SCHEMA_INSTANTIATION, and PREPARE_GLOBAL_INSTANTIATION.

  • The DBA_CAPTURE_PREPARED_TABLES view displays the supplemental logging enabled by the PREPARE_TABLE_INSTANTIATION procedure.

  • The DBA_CAPTURE_PREPARED_SCHEMAS view displays the supplemental logging enabled by the PREPARE_SCHEMA_INSTANTIATION procedure.

  • The DBA_CAPTURE_PREPARED_DATABASE view displays the supplemental logging enabled by the PREPARE_GLOBAL_INSTANTIATION procedure.
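For example, the following illustrative call prepares the hr.regions table for instantiation and enables supplemental logging for its key columns. The supplemental_logging parameter also accepts other values, such as 'none' and 'all'; verify the accepted values for your release in the DBMS_CAPTURE_ADM package documentation:

BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name           => 'hr.regions',
    supplemental_logging => 'keys');
END;
/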

Each of these views has the following columns:

  • SUPPLEMENTAL_LOG_DATA_PK shows whether primary key supplemental logging was enabled by a procedure.

  • SUPPLEMENTAL_LOG_DATA_UI shows whether unique key and bitmap index supplemental logging was enabled by a procedure.

  • SUPPLEMENTAL_LOG_DATA_FK shows whether foreign key supplemental logging was enabled by a procedure.

  • SUPPLEMENTAL_LOG_DATA_ALL shows whether supplemental logging for all columns was enabled by a procedure.

Each of these columns can display one of the following values:

  • IMPLICIT means that the relevant procedure enabled supplemental logging for the columns.

  • EXPLICIT means that supplemental logging was enabled for the columns manually using an ALTER TABLE or ALTER DATABASE statement with an ADD SUPPLEMENTAL LOG DATA clause.

  • NO means that supplemental logging was not enabled for the columns using a prepare procedure or an ALTER TABLE or ALTER DATABASE statement with an ADD SUPPLEMENTAL LOG DATA clause. Supplemental logging might not be enabled for the columns. However, supplemental logging might be enabled for the columns at another level (table, schema, or database), or it might have been enabled using an ALTER TABLE statement with an ADD SUPPLEMENTAL LOG GROUP clause.
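For example, supplemental logging enabled manually with a statement such as the following illustrative one would be reported as EXPLICIT:

ALTER TABLE hr.locations ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;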

The following sections contain queries that display the supplemental logging enabled by these procedures:

Displaying Supplemental Logging Enabled by PREPARE_TABLE_INSTANTIATION

The following query displays the supplemental logging enabled by the PREPARE_TABLE_INSTANTIATION procedure for the tables in the hr schema:

COLUMN TABLE_NAME HEADING 'Table Name' FORMAT A15
COLUMN log_pk HEADING 'Primary Key|Supplemental|Logging' FORMAT A12
COLUMN log_fk HEADING 'Foreign Key|Supplemental|Logging' FORMAT A12
COLUMN log_ui HEADING 'Unique|Supplemental|Logging' FORMAT A12
COLUMN log_all HEADING 'All Columns|Supplemental|Logging' FORMAT A12

SELECT TABLE_NAME,
       SUPPLEMENTAL_LOG_DATA_PK log_pk, 
       SUPPLEMENTAL_LOG_DATA_FK log_fk,
       SUPPLEMENTAL_LOG_DATA_UI log_ui,
       SUPPLEMENTAL_LOG_DATA_ALL log_all
  FROM DBA_CAPTURE_PREPARED_TABLES
  WHERE TABLE_OWNER = 'HR';  
  

Your output looks similar to the following:

                Primary Key  Foreign Key  Unique         All Columns
                Supplemental Supplemental Supplemental   Supplemental
Table Name      Logging      Logging      Logging        Logging
--------------- ------------ ------------ -------------- ------------
COUNTRIES       NO           NO           NO             NO
REGIONS         IMPLICIT     IMPLICIT     IMPLICIT       NO
DEPARTMENTS     IMPLICIT     IMPLICIT     IMPLICIT       NO
LOCATIONS       EXPLICIT     NO           NO             NO
EMPLOYEES       NO           NO           NO             IMPLICIT
JOB_HISTORY     NO           NO           NO             NO
JOBS            NO           NO           NO             NO

These results show the following:

  • The PREPARE_TABLE_INSTANTIATION procedure enabled supplemental logging for the primary key, unique key, bitmap index, and foreign key columns in the hr.regions and hr.departments tables.

  • The PREPARE_TABLE_INSTANTIATION procedure enabled supplemental logging for all columns in the hr.employees table.

  • An ALTER TABLE statement with an ADD SUPPLEMENTAL LOG DATA clause enabled primary key supplemental logging for the hr.locations table.


Note:

Omit the WHERE clause in the query to list the information for all of the tables in the database.

Displaying Supplemental Logging Enabled by PREPARE_SCHEMA_INSTANTIATION

The following query displays the supplemental logging enabled by the PREPARE_SCHEMA_INSTANTIATION procedure:

COLUMN SCHEMA_NAME HEADING 'Schema Name' FORMAT A20
COLUMN log_pk HEADING 'Primary Key|Supplemental|Logging' FORMAT A12
COLUMN log_fk HEADING 'Foreign Key|Supplemental|Logging' FORMAT A12
COLUMN log_ui HEADING 'Unique|Supplemental|Logging' FORMAT A12
COLUMN log_all HEADING 'All Columns|Supplemental|Logging' FORMAT A12

SELECT SCHEMA_NAME,
       SUPPLEMENTAL_LOG_DATA_PK log_pk, 
       SUPPLEMENTAL_LOG_DATA_FK log_fk,
       SUPPLEMENTAL_LOG_DATA_UI log_ui,
       SUPPLEMENTAL_LOG_DATA_ALL log_all
  FROM DBA_CAPTURE_PREPARED_SCHEMAS;
  

Your output looks similar to the following:

                     Primary Key  Foreign Key  Unique         All Columns
                     Supplemental Supplemental Supplemental   Supplemental
Schema Name          Logging      Logging      Logging        Logging
-------------------- ------------ ------------ -------------- ------------
HR                   NO           NO           NO             IMPLICIT
OE                   IMPLICIT     IMPLICIT     IMPLICIT       NO

These results show the following:

  • The PREPARE_SCHEMA_INSTANTIATION procedure enabled supplemental logging for all columns in tables in the hr schema.

  • The PREPARE_SCHEMA_INSTANTIATION procedure enabled supplemental logging for the primary key, unique key, bitmap index, and foreign key columns in the tables in the oe schema.

Displaying Supplemental Logging Enabled by PREPARE_GLOBAL_INSTANTIATION

The following query displays the supplemental logging enabled by the PREPARE_GLOBAL_INSTANTIATION procedure:

COLUMN log_pk HEADING 'Primary Key|Supplemental|Logging' FORMAT A12
COLUMN log_fk HEADING 'Foreign Key|Supplemental|Logging' FORMAT A12
COLUMN log_ui HEADING 'Unique|Supplemental|Logging' FORMAT A12
COLUMN log_all HEADING 'All Columns|Supplemental|Logging' FORMAT A12

SELECT SUPPLEMENTAL_LOG_DATA_PK log_pk,
       SUPPLEMENTAL_LOG_DATA_FK log_fk,
       SUPPLEMENTAL_LOG_DATA_UI log_ui,
       SUPPLEMENTAL_LOG_DATA_ALL log_all
  FROM DBA_CAPTURE_PREPARED_DATABASE;
  

Your output looks similar to the following:

Primary Key  Foreign Key  Unique         All Columns
Supplemental Supplemental Supplemental   Supplemental
Logging      Logging      Logging        Logging
------------ ------------ -------------- ------------
IMPLICIT     IMPLICIT     IMPLICIT       NO

These results show that the PREPARE_GLOBAL_INSTANTIATION procedure enabled supplemental logging for the primary key, unique key, bitmap index, and foreign key columns in all of the tables in the database.

Monitoring a Synchronous Capture

This section provides sample queries that you can use to monitor Oracle Streams synchronous captures.

This section contains these topics:


Displaying the Queue and Rule Set of Each Synchronous Capture

You can display the following information about each synchronous capture in a database by running the query in this section:

  • The synchronous capture name

  • The name of the queue used by the synchronous capture

  • The name of the positive rule set used by the synchronous capture

  • The capture user for the synchronous capture

To display this general information about each synchronous capture in a database, run the following query:

COLUMN CAPTURE_NAME HEADING 'Synchronous|Capture Name' FORMAT A20
COLUMN QUEUE_NAME HEADING 'Synchronous|Capture Queue' FORMAT A20
COLUMN RULE_SET_NAME HEADING 'Positive Rule Set' FORMAT A20
COLUMN CAPTURE_USER HEADING 'Capture User' FORMAT A15

SELECT CAPTURE_NAME, QUEUE_NAME, RULE_SET_NAME, CAPTURE_USER
   FROM DBA_SYNC_CAPTURE;

Your output looks similar to the following:

Synchronous          Synchronous
Capture Name         Capture Queue        Positive Rule Set    Capture User
-------------------- -------------------- -------------------- ---------------
SYNC01_CAPTURE       STRM01_QUEUE         RULESET$_21          STRMADMIN
SYNC02_CAPTURE       STRM02_QUEUE         SYNC02_RULE_SET      HR

Displaying the Tables For Which Synchronous Capture Captures Changes

The DBA_SYNC_CAPTURE_TABLES view displays the tables whose DML changes are captured by any synchronous capture in the local database. The DBA_STREAMS_TABLE_RULES view has information about each synchronous capture name and the rules used by each synchronous capture. You can display the following information by running the query in this section:

  • The name of each synchronous capture

  • The name of each rule used by the synchronous capture

  • If the rule is a subset rule, then the type of subsetting operation covered by the rule

  • The owner of each table specified in each rule

  • The name of each table specified in each rule

  • Whether synchronous capture is enabled or disabled for the table. When synchronous capture is enabled for a table, it captures DML changes made to the table; when it is not enabled for a table, it does not.

To display this information, run the following query:

COLUMN STREAMS_NAME HEADING 'Synchronous|Capture Name' FORMAT A15
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A15
COLUMN SUBSETTING_OPERATION HEADING 'Subsetting|Operation' FORMAT A10
COLUMN TABLE_OWNER HEADING 'Table|Owner' FORMAT A10
COLUMN TABLE_NAME HEADING 'Table Name' FORMAT A15
COLUMN ENABLED HEADING 'Enabled?' FORMAT A8

SELECT r.STREAMS_NAME, 
       r.RULE_NAME, 
       r.SUBSETTING_OPERATION,
       t.TABLE_OWNER, 
       t.TABLE_NAME, 
       t.ENABLED
   FROM DBA_STREAMS_TABLE_RULES r,
        DBA_SYNC_CAPTURE_TABLES t
   WHERE r.STREAMS_TYPE = 'SYNC_CAPTURE' AND
         r.TABLE_OWNER  = t.TABLE_OWNER AND
         r.TABLE_NAME   = t.TABLE_NAME;

Your output looks similar to the following:

Synchronous                     Subsetting Table
Capture Name    Rule Name       Operation  Owner      Table Name      Enabled?
--------------- --------------- ---------- ---------- --------------- --------
SYNC01_CAPTURE  EMPLOYEES20                HR         EMPLOYEES       YES
SYNC02_CAPTURE  DEPARTMENTS24   DELETE     HR         DEPARTMENTS     YES
SYNC02_CAPTURE  DEPARTMENTS23   UPDATE     HR         DEPARTMENTS     YES
SYNC02_CAPTURE  DEPARTMENTS22   INSERT     HR         DEPARTMENTS     YES

This output indicates that synchronous capture sync01_capture captures DML changes made to the hr.employees table. This output also indicates that synchronous capture sync02_capture captures a subset of the changes to the hr.departments table.

If the ENABLED column shows NO for a table, then synchronous capture does not capture changes to the table. The ENABLED column shows NO when a table rule is added to a synchronous capture rule set by a procedure other than ADD_TABLE_RULES or ADD_SUBSET_RULES in the DBMS_STREAMS_ADM package. For example, if the ADD_RULE procedure in the DBMS_RULE_ADM package adds a table rule to a synchronous capture rule set, then the table appears when you query the DBA_SYNC_CAPTURE_TABLES view, but synchronous capture does not capture DML changes to the table. No results appear in the DBA_SYNC_CAPTURE_TABLES view for schema and global rules.

Viewing the Extra Attributes Captured by Implicit Capture

You can use the INCLUDE_EXTRA_ATTRIBUTE procedure in the DBMS_CAPTURE_ADM package to instruct a capture process or synchronous capture to capture one or more extra attributes and include the extra attributes in LCRs. The following query displays the extra attributes included in the LCRs captured by each capture process and synchronous capture in the local database:

COLUMN CAPTURE_NAME HEADING 'Capture Process or|Synchronous Capture' FORMAT A20
COLUMN ATTRIBUTE_NAME HEADING 'Attribute Name' FORMAT A15
COLUMN INCLUDE HEADING 'Include Attribute in LCRs?' FORMAT A30

SELECT CAPTURE_NAME, ATTRIBUTE_NAME, INCLUDE 
  FROM DBA_CAPTURE_EXTRA_ATTRIBUTES
  ORDER BY CAPTURE_NAME;

Your output looks similar to the following:

Capture Process or   Attribute Name  Include Attribute in LCRs?
Synchronous Capture
-------------------- --------------- ------------------------------
SYNC_CAPTURE         ROW_ID          NO
SYNC_CAPTURE         SERIAL#         NO
SYNC_CAPTURE         SESSION#        NO
SYNC_CAPTURE         THREAD#         NO
SYNC_CAPTURE         TX_NAME         YES
SYNC_CAPTURE         USERNAME        NO

Based on this output, the capture process or synchronous capture named sync_capture includes the transaction name (tx_name) in the LCRs that it captures, but it does not include any other extra attributes in these LCRs. To determine whether the name returned by the CAPTURE_NAME column is a capture process or a synchronous capture, query the DBA_CAPTURE and DBA_SYNC_CAPTURE views.
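For example, the tx_name attribute shown in this output could have been included with an illustrative call such as the following:

BEGIN
  DBMS_CAPTURE_ADM.INCLUDE_EXTRA_ATTRIBUTE(
    capture_name   => 'sync_capture',
    attribute_name => 'tx_name',
    include        => TRUE);
END;
/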


23 Monitoring the Oracle Streams Topology and Performance

The Oracle Streams Performance Advisor consists of the DBMS_STREAMS_ADVISOR_ADM PL/SQL package and a collection of data dictionary views. The Oracle Streams Performance Advisor enables you to monitor the topology and performance of an Oracle Streams environment. The Oracle Streams topology includes information about the components in an Oracle Streams environment, the links between the components, and the way information flows from capture to consumption. The Oracle Streams Performance Advisor also provides information about how Oracle Streams components are performing.

The following topics contain information about the Oracle Streams Performance Advisor:

About the Oracle Streams Topology

Oracle Streams enables you to send messages between multiple databases. An Oracle Streams environment can send the following types of messages:

The Oracle Streams topology is a representation of the databases in an Oracle Streams environment, the Oracle Streams components configured in these databases, and the flow of messages between these components.

The messages in the environment flow in separate stream paths. A stream path begins where a capture process, a synchronous capture, or an application generates messages and enqueues them. The messages can flow through one or more propagations and queues in their stream path. The stream path ends where the messages are dequeued by an apply process, a messaging client, or an application.

Currently, the Oracle Streams topology gathers information about a stream path only if the stream path ends with an apply process. The Oracle Streams topology does not track stream paths that end when a messaging client or an application dequeues messages.

About the Oracle Streams Performance Advisor

The Oracle Streams Performance Advisor consists of the DBMS_STREAMS_ADVISOR_ADM PL/SQL package and a collection of data dictionary views. You can use the ANALYZE_CURRENT_PERFORMANCE procedure in the DBMS_STREAMS_ADVISOR_ADM package to gather information about the Oracle Streams topology and about the performance of the Oracle Streams components in the topology.
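For example, an Oracle Streams administrator can gather the topology and performance information with a call such as the following; running the procedure more than once in the same session generally makes the rate-based statistics more meaningful, because they are computed from successive snapshots:

BEGIN
  DBMS_STREAMS_ADVISOR_ADM.ANALYZE_CURRENT_PERFORMANCE;
END;
/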

This section contains the following topics:

Oracle Streams Performance Advisor Data Dictionary Views

After information is gathered by the Oracle Streams Performance Advisor, you can view it by querying the following data dictionary views:

The topology information is stored permanently in the following data dictionary views: DBA_STREAMS_TP_DATABASE, DBA_STREAMS_TP_COMPONENT, and DBA_STREAMS_TP_COMPONENT_LINK.

The following views contain temporary information: DBA_STREAMS_TP_COMPONENT_STAT, DBA_STREAMS_TP_PATH_BOTTLENECK, and DBA_STREAMS_TP_PATH_STAT. Some of the data in these views is retained only for the user session that runs the ANALYZE_CURRENT_PERFORMANCE procedure. When this user session ends, this temporary information is purged.

Oracle Streams Components and Statistics

The DBMS_STREAMS_ADVISOR_ADM package gathers information about the following Oracle Streams components:

  • A QUEUE stores messages. The package gathers the following component-level statistics for queues:

    • ENQUEUE RATE

    • SPILL RATE

    • CURRENT QUEUE SIZE

  • A CAPTURE is a capture process. A capture process captures database changes in the redo log and enqueues the changes as logical change records (LCRs). Each capture process has the following subcomponents:

    • LOGMINER BUILDER is a builder server.

    • LOGMINER PREPARER is a preparer server.

    • LOGMINER READER is a reader server.

    • CAPTURE SESSION is the capture process session.

    The package gathers the following component-level statistics for each capture process (CAPTURE):

    • CAPTURE RATE

    • ENQUEUE RATE

    • LATENCY

    The package also gathers session-level statistics for capture process subcomponents.

  • A PROPAGATION SENDER sends messages from a source queue to a destination queue. The package gathers the following component-level statistics for propagation senders:

    • SEND RATE

    • BANDWIDTH

    • LATENCY

    The package also gathers session-level statistics for propagation senders.

  • A PROPAGATION RECEIVER enqueues messages sent by propagation senders into a destination queue. The package gathers session-level statistics for propagation receivers.

  • An APPLY is an apply process, which either applies messages directly or sends them to apply handlers. This type of component has the following subcomponents:

    • APPLY READER is a reader server.

    • APPLY COORDINATOR is a coordinator process.

    • APPLY SERVER is an apply server.

    The package gathers the following component-level statistics for this component (APPLY):

    • MESSAGE APPLY RATE

    • TRANSACTION APPLY RATE

    • LATENCY

    The package also gathers session-level statistics for the subcomponents.

When the package gathers session-level statistics for a component or subcomponent, the session-level statistics include the following:

  • IDLE percentage

  • FLOW CONTROL percentage

  • EVENT percentage for wait events
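The session-level statistics above are percentages of a session's activity. As an illustration only, the following Python sketch shows one way such percentages can be derived from activity samples; the category names mirror the statistics listed above, but the sampling data and the function are hypothetical and are not part of the DBMS_STREAMS_ADVISOR_ADM package.

```python
from collections import Counter

def session_percentages(samples):
    """Return each activity category as a percentage of all samples."""
    counts = Counter(samples)
    total = len(samples)
    return {category: 100.0 * n / total for category, n in counts.items()}

# Ten invented samples of what a capture session was doing when sampled
samples = ["IDLE"] * 6 + ["FLOW CONTROL"] * 3 + ["EVENT: log file sync"]
stats = session_percentages(samples)
print(stats["IDLE"])          # 60.0
print(stats["FLOW CONTROL"])  # 30.0
```

In this sketch, a session sampled as idle 6 times out of 10 reports an IDLE percentage of 60.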


Note:

Currently, the DBMS_STREAMS_ADVISOR_ADM package does not gather information about synchronous captures or messaging clients.


See Also:


About Stream Paths in an Oracle Streams Topology

In the Oracle Streams topology, a stream path is a flow of messages from a source to a destination. A stream path begins where a capture process, synchronous capture, or application enqueues messages into a queue. A stream path ends where an apply process dequeues the messages. The stream path might flow through multiple queues and propagations before it reaches an apply process. Therefore, a single stream path can consist of multiple source/destination component pairs before it reaches the last component.

The Oracle Streams topology assigns a number to each stream path so that you can monitor each one easily. The Oracle Streams topology also assigns a number to each link between two components in a stream path. The number specifies the position of the link in the overall stream path.

Table 23-1 shows the position of each link in a sample stream path.

Table 23-1 Position of Each Link in a Sample Stream Path

Start ComponentEnd ComponentPosition

Capture process

Queue

1

Queue

Propagation sender

2

Propagation sender

Propagation receiver

3

Propagation receiver

Queue

4

Queue

Apply process

5


When the Oracle Streams Performance Advisor gathers information about an Oracle Streams environment, it tracks stream paths by starting with each apply process and working backward to its source. When a capture process is the source, the Oracle Streams Performance Advisor tracks the path from the apply process back to the capture process. When a synchronous capture or an application that enqueues messages is the source, the Oracle Streams Performance Advisor tracks the path from the apply process back to the queue into which the messages are enqueued.
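The backward tracking described above can be pictured as walking a chain of links from each apply process to its source. The following Python sketch models this traversal with a hypothetical link table (destination component to source component); the component names are illustrative and do not come from the advisor views.

```python
# Hypothetical links: each destination component maps to its source component.
links_to_source = {
    "APPLY_SPOKE1": "DESTINATION_SPOKE1",     # apply process <- queue
    "DESTINATION_SPOKE1": "PROP_RECEIVER_1",  # queue <- propagation receiver
    "PROP_RECEIVER_1": "PROP_SENDER_1",       # receiver <- propagation sender
    "PROP_SENDER_1": "SOURCE_HNS",            # sender <- queue
    "SOURCE_HNS": "CAPTURE_HNS",              # queue <- capture process
}

def trace_path(apply_name, links):
    """Walk backward from an apply process until no upstream link remains."""
    path = [apply_name]
    while path[-1] in links:
        path.append(links[path[-1]])
    path.reverse()  # present the path from source to destination
    return path

print(trace_path("APPLY_SPOKE1", links_to_source))
# ['CAPTURE_HNS', 'SOURCE_HNS', 'PROP_SENDER_1',
#  'PROP_RECEIVER_1', 'DESTINATION_SPOKE1', 'APPLY_SPOKE1']
```

The traversal stops at the capture process when a capture process is the source; for a synchronous capture or an application, the chain would instead stop at the queue into which the messages are enqueued.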

The following sections describe sample replication environments and the stream paths in each one:


See Also:

Oracle Streams Replication Administrator's Guide for information about best practices for Oracle Streams replication environments

Separate Stream Paths in an Oracle Streams Environment

Consider an Oracle Streams environment with two databases. Each database captures changes made to the replicated database objects with a capture process and sends the changes to the other database, where they are applied by an apply process. The stream paths in this environment are completely separate.

Figure 23-1 shows an example of this type of Oracle Streams replication environment.

Figure 23-1 Oracle Streams Topology with Two Separate Stream Paths

Description of Figure 23-1 follows
Description of "Figure 23-1 Oracle Streams Topology with Two Separate Stream Paths"

Notice that the Oracle Streams Performance Advisor assigns a component ID to each Oracle Streams component and a path ID to each path. The Oracle Streams topology in Figure 23-1 shows the following information:

  • There are twelve Oracle Streams components in the Oracle Streams environment.

  • There are two stream paths in the Oracle Streams environment.

  • Stream path 1 starts with component 1 and ends with component 6.

  • Stream path 2 starts with component 7 and ends with component 12.

Shared Stream Paths in an Oracle Streams Replication Environment

When there are multiple apply processes that apply changes generated by a single source, a stream path splits into multiple stream paths. In this case, part of a stream path is shared, but the path splits into two or more distinct stream paths.

Figure 23-2 shows this type of Oracle Streams environment.

Figure 23-2 Oracle Streams Topology with Multiple Apply Processes for a Single Source

Description of Figure 23-2 follows
Description of "Figure 23-2 Oracle Streams Topology with Multiple Apply Processes for a Single Source"

The Oracle Streams topology in Figure 23-2 shows the following information:

  • There are ten Oracle Streams components in the Oracle Streams environment.

  • There are two stream paths in the Oracle Streams environment.

  • Stream path 1 starts with component 1 and ends with component 7.

  • Stream path 2 starts with component 1 and ends with component 10.

  • The messages flowing between component 1 and component 2 are in both path 1 and path 2.

About the Information Gathered by the Oracle Streams Performance Advisor

The ANALYZE_CURRENT_PERFORMANCE procedure in the DBMS_STREAMS_ADVISOR_ADM package gathers information about the Oracle Streams topology and the performance of Oracle Streams components. The procedure stores the information in a collection of data dictionary views. To use the Oracle Streams Performance Advisor effectively, it is important to understand how the procedure gathers information and calculates statistics.

The procedure takes snapshots of the Oracle Streams environment to gather information and calculate statistics. For some statistics, the information in a single snapshot is sufficient. For example, only one snapshot is needed to determine the current number of messages in a queue. However, to calculate other statistics, the procedure must compare two snapshots. These statistics include the rate, bandwidth, event, and flow control statistics. The first time the procedure is run in a user session, it takes two snapshots to calculate these statistics. In each subsequent run in the same user session, the procedure takes one snapshot and compares it with the snapshot taken during the previous run.

Table 23-2 illustrates how the procedure gathers information in each advisor run in a single user session.

Table 23-2 How the Oracle Streams Performance Advisor Gathers Information in a Session

Advisor RunInformation Gathered

1

  1. Take snapshot of statistics.

  2. Wait at least five seconds.

  3. Take another snapshot of statistics.

  4. Compare data from the first snapshot with data from the second snapshot to calculate performance statistics.

2

  1. Take snapshot of statistics.

  2. Compare data from the last snapshot in advisor run 1 with the snapshot taken in advisor run 2 to calculate performance statistics.

3

  1. Take snapshot of statistics.

  2. Compare data from the snapshot in advisor run 2 with the snapshot taken in advisor run 3 to calculate performance statistics.
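The snapshot comparison in the runs above can be sketched as a simple delta calculation: a rate statistic is the change in a cumulative counter divided by the elapsed time between two snapshots. The following Python sketch illustrates this under that assumption; the snapshot values and the function name are invented for illustration.

```python
def enqueue_rate(prev_snapshot, curr_snapshot):
    """Messages per second between two snapshots of a cumulative count."""
    delta_msgs = curr_snapshot["enqueued"] - prev_snapshot["enqueued"]
    delta_secs = curr_snapshot["time"] - prev_snapshot["time"]
    return delta_msgs / delta_secs

# First advisor run: two snapshots taken at least five seconds apart
snap1 = {"time": 0.0, "enqueued": 1000}
snap2 = {"time": 5.0, "enqueued": 1500}
print(enqueue_rate(snap1, snap2))   # 100.0 messages per second

# Second advisor run: one new snapshot, compared with the previous one
snap3 = {"time": 15.0, "enqueued": 3500}
print(enqueue_rate(snap2, snap3))   # 200.0
```

This also shows why the first run in a session must take two snapshots while later runs need only one: each run after the first can reuse the snapshot taken by the previous run.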


For the best results in an advisor run, meet the following criteria:

Gathering Information About the Oracle Streams Topology and Performance

To gather information about the Oracle Streams topology and Oracle Streams performance, complete the following steps:

  1. Identify the database that you will use to gather the information. An administrative user at this database must meet the following requirements:

    • The user must have access to a database link to each database that contains Oracle Streams components.

    • The user must have been granted privileges using the DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE procedure, and each database link must connect to a user at the remote database that has been granted privileges using the DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE procedure.

      If you configure an Oracle Streams administrator at each database with Oracle Streams components, then the Oracle Streams administrator has the necessary privileges. See Oracle Streams Replication Administrator's Guide for information about creating an Oracle Streams administrator.

    If no database in your environment meets these requirements, then choose a database, configure the necessary database links, and grant the necessary privileges to the users before proceeding.

    The Oracle Streams Performance Advisor running on an Oracle Database 11g Release 2 (11.2) database can monitor Oracle Database 10g Release 2 (10.2) and later databases. It cannot monitor databases before release 10.2.

  2. In SQL*Plus, connect to the database you identified in Step 1 as a user that meets the requirements listed in Step 1.

    For example, connect to the hub.example.com database as the Oracle Streams administrator.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  3. Run the ANALYZE_CURRENT_PERFORMANCE procedure in the DBMS_STREAMS_ADVISOR_ADM package:

    exec DBMS_STREAMS_ADVISOR_ADM.ANALYZE_CURRENT_PERFORMANCE;
    
  4. Optionally, rerun the ANALYZE_CURRENT_PERFORMANCE procedure one or more times in the same session that ran the procedure in Step 3:

    exec DBMS_STREAMS_ADVISOR_ADM.ANALYZE_CURRENT_PERFORMANCE;
    
  5. Run the following query to identify the advisor run ID for the information gathered in Step 4:

    SELECT DISTINCT ADVISOR_RUN_ID FROM DBA_STREAMS_TP_COMPONENT_STAT
       ORDER BY ADVISOR_RUN_ID;
    

    Your output is similar to the following:

    ADVISOR_RUN_ID
    --------------
                 1
                 2
    

    The Oracle Streams Performance Advisor assigns an advisor run ID to the statistics for each run. Use the last value in the output for the advisor run ID in the queries in "Viewing Performance Statistics for Oracle Streams Components". In this example, use 2 for the advisor run ID in the queries.

    Remember that the Oracle Streams Performance Advisor purges some of the performance statistics that it gathered when a user session ends. Therefore, run the performance statistics queries in the same session that ran the ANALYZE_CURRENT_PERFORMANCE procedure.

Complete these steps whenever you want to monitor the current performance of your Oracle Streams environment.

You should also run the ANALYZE_CURRENT_PERFORMANCE procedure when new Oracle Streams components are added to any database in the Oracle Streams environment. Running the procedure updates the Oracle Streams topology with information about any new components.


See Also:


Viewing the Oracle Streams Topology and Analyzing Oracle Streams Performance

This section contains several queries that you can use to view your Oracle Streams topology and monitor the performance of your Oracle Streams components. The queries specify the views described in "About the Oracle Streams Topology".

The queries in this section can be run in any Oracle Streams environment. However, the output shown for these queries is based on the sample Oracle Streams replication environment shown in Figure 23-3.

Figure 23-3 Sample Oracle Streams Replication Environment

Description of Figure 23-3 follows
Description of "Figure 23-3 Sample Oracle Streams Replication Environment"

The Oracle Database 2 Day + Data Replication and Integration Guide contains instructions for configuring the Oracle Streams replication environment shown in Figure 23-3. This environment contains both of the following types of stream paths:

This section contains the following topics:

Viewing the Oracle Streams Topology

To view the Oracle Streams topology, you must first gather information about the Oracle Streams environment using the DBMS_STREAMS_ADVISOR_ADM package. See "Gathering Information About the Oracle Streams Topology and Performance".

The following sections explain how to view different types of information in an Oracle Streams topology:

Viewing the Databases in the Oracle Streams Environment

You can view the following information about the databases in an Oracle Streams environment:

  • The global name of each database

  • The last time the Oracle Streams Performance Advisor was run at each database

  • The version number of each database

  • The compatibility level of each database

  • Whether each database has access to the Oracle Diagnostics Pack and Oracle Tuning Pack

To display this information, run the following query:

COLUMN GLOBAL_NAME HEADING 'Global Name' FORMAT A15
COLUMN LAST_QUERIED HEADING 'Last|Queried'
COLUMN VERSION HEADING 'Version' FORMAT A15
COLUMN COMPATIBILITY HEADING 'Compatibility' FORMAT A15
COLUMN MANAGEMENT_PACK_ACCESS HEADING 'Management Pack' FORMAT A20

SELECT GLOBAL_NAME, LAST_QUERIED, VERSION, COMPATIBILITY, MANAGEMENT_PACK_ACCESS
   FROM DBA_STREAMS_TP_DATABASE;

The following output shows the databases in the Oracle Streams replication environment described in "Viewing the Oracle Streams Topology and Analyzing Oracle Streams Performance":

                Last                                                           
Global Name     Queried   Version         Compatibility   Management Pack       
--------------- --------- --------------- --------------- --------------------  
HUB.EXAMPLE.COM 08-APR-08 11.1.0.7.0      11.1.0          DIAGNOSTIC+TUNING     
SPOKE1.EXAMPLE. 08-APR-08 11.1.0.7.0      11.1.0          DIAGNOSTIC+TUNING     
COM
SPOKE2.EXAMPLE. 08-APR-08 11.1.0.7.0      11.1.0          DIAGNOSTIC+TUNING     
COM

This output shows the following information about the databases in the Oracle Streams environment:

  • The Global Name column shows that the global names of the databases are hub.example.com, spoke1.example.com, and spoke2.example.com.

  • The Last Queried column shows that the Oracle Streams Performance Advisor was last run on April 8, 2008 at each database.

  • The Version column shows that the version of each database is 11.1.0.7.0.

  • The Compatibility column shows that the compatibility level of each database is 11.1.0.

  • The Management Pack column shows that each database has access to the Oracle Diagnostics Pack and Oracle Tuning Pack.


See Also:

Oracle Database Upgrade Guide for information about database compatibility

Viewing the Oracle Streams Components at Each Database

You can view the following information about the components in an Oracle Streams environment:

  • The component ID for each Oracle Streams component. The Oracle Streams topology assigns an ID number to each component and uses the number to track information about the component and about the stream path that flows through the component.

  • The name of the Oracle Streams component. For capture processes and apply processes, the query lists the name of each process. For queues, the query lists the name of each queue. For propagations, two Oracle Streams components are tracked in the Oracle Streams topology:

    • The name of a propagation sender is the source queue of the propagation and the destination queue and database to which the propagation sends messages. For example, a propagation sender with the strmadmin.source_hns source queue that sends messages to the strmadmin.destination_spoke1 destination queue at the spoke1.example.com database is shown in the following way:

      "STRMADMIN"."SOURCE_HNS"=>"STRMADMIN"."DESTINATION_SPOKE1"
         @SPOKE1.EXAMPLE.COM
      
    • The name of a propagation receiver is the source queue and database from which the messages are sent and the destination queue for the propagation. For example, a propagation receiver that gets messages from the strmadmin.source_hns source queue at the hub.example.com database and enqueues them into the strmadmin.destination_spoke1 destination queue is shown in the following way:

      "STRMADMIN"."SOURCE_HNS"@HUB.EXAMPLE.COM=>"STRMADMIN".
         "DESTINATION_SPOKE1"
      
  • The type of the Oracle Streams component. The following types are possible:

    • CAPTURE for capture processes

    • QUEUE for queues

    • PROPAGATION SENDER for propagation senders

    • PROPAGATION RECEIVER for propagation receivers

    • APPLY for apply processes

  • The database that contains the component

To display this information, run the following query:

COLUMN COMPONENT_ID HEADING 'ID' FORMAT 999
COLUMN COMPONENT_NAME HEADING 'Name' FORMAT A43
COLUMN COMPONENT_TYPE HEADING 'Type' FORMAT A20
COLUMN COMPONENT_DB HEADING 'Database' FORMAT A10

SELECT COMPONENT_ID, COMPONENT_NAME, COMPONENT_TYPE, COMPONENT_DB
   FROM DBA_STREAMS_TP_COMPONENT
   ORDER BY COMPONENT_ID;

The following output shows the components in the Oracle Streams replication environment described in "Viewing the Oracle Streams Topology and Analyzing Oracle Streams Performance":

  ID Name                                        Type                 Database 
---- ------------------------------------------- -------------------- ----------
   1 "STRMADMIN"."DESTINATION_SPOKE1"            QUEUE                HUB.EXAMPL
                                                                      E.COM
   2 "STRMADMIN"."DESTINATION_SPOKE2"            QUEUE                HUB.EXAMPL
                                                                      E.COM
   3 "STRMADMIN"."SOURCE_HNS"                    QUEUE                HUB.EXAMPL
                                                                      E.COM
   4 "STRMADMIN"."SOURCE_HNS"=>"STRMADMIN"."DEST PROPAGATION SENDER   HUB.EXAMPL
     INATION_SPOKE1"@SPOKE1.EXAMPLE.COM                               E.COM
   5 "STRMADMIN"."SOURCE_HNS"=>"STRMADMIN"."DEST PROPAGATION SENDER   HUB.EXAMPL
     INATION_SPOKE2"@SPOKE2.EXAMPLE.COM                               E.COM
   6 "STRMADMIN"."SOURCE_HNS"@SPOKE1.EXAMPLE.COM PROPAGATION RECEIVER HUB.EXAMPL
     =>"STRMADMIN"."DESTINATION_SPOKE1"                               E.COM
   7 "STRMADMIN"."SOURCE_HNS"@SPOKE2.EXAMPLE.COM PROPAGATION RECEIVER HUB.EXAMPL
     =>"STRMADMIN"."DESTINATION_SPOKE2"                               E.COM
   8 APPLY_SPOKE1                                APPLY                HUB.EXAMPL
                                                                      E.COM
   9 APPLY_SPOKE2                                APPLY                HUB.EXAMPL
                                                                      E.COM
  10 CAPTURE_HNS                                 CAPTURE              HUB.EXAMPL
                                                                      E.COM
  11 "STRMADMIN"."DESTINATION_SPOKE1"            QUEUE                SPOKE1.EXA
                                                                      MPLE.COM
  12 "STRMADMIN"."SOURCE_HNS"                    QUEUE                SPOKE1.EXA
                                                                      MPLE.COM
  13 "STRMADMIN"."SOURCE_HNS"=>"STRMADMIN"."DEST PROPAGATION SENDER   SPOKE1.EXA
     INATION_SPOKE1"@HUB.EXAMPLE.COM                                  MPLE.COM  
  14 "STRMADMIN"."SOURCE_HNS"@HUB.EXAMPLE.COM=>" PROPAGATION RECEIVER SPOKE1.EXA
     STRMADMIN"."DESTINATION_SPOKE1"                                  MPLE.COM
  15 APPLY_SPOKE1                                APPLY                SPOKE1.EXA
                                                                      MPLE.COM
  16 CAPTURE_HNS                                 CAPTURE              SPOKE1.EXA
                                                                      MPLE.COM
  17 "STRMADMIN"."DESTINATION_SPOKE2"            QUEUE                SPOKE2.EXA
                                                                      MPLE.COM
  18 "STRMADMIN"."SOURCE_HNS"                    QUEUE                SPOKE2.EXA
                                                                      MPLE.COM
  19 "STRMADMIN"."SOURCE_HNS"=>"STRMADMIN"."DEST PROPAGATION SENDER   SPOKE2.EXA
     INATION_SPOKE2"@HUB.EXAMPLE.COM                                  MPLE.COM  
  20 "STRMADMIN"."SOURCE_HNS"@HUB.EXAMPLE.COM=>" PROPAGATION RECEIVER SPOKE2.EXA
     STRMADMIN"."DESTINATION_SPOKE2"                                  MPLE.COM  
  21 APPLY_SPOKE2                                APPLY                SPOKE2.EXA
                                                                      MPLE.COM
  22 CAPTURE_HNS                                 CAPTURE              SPOKE2.EXA
                                                                      MPLE.COM

See Also:


Viewing Each Stream Path in an Oracle Streams Topology

You can view the following information about the stream paths in an Oracle Streams topology:

  • The path ID. The Oracle Streams topology assigns an ID number to each stream path it identifies. The path ID is associated with each link in the path. For example, a single path ID can be associated with the following component links:

    • Capture process to queue

    • Queue to propagation sender

    • Propagation sender to propagation receiver

    • Propagation receiver to queue

    • Queue to apply process

  • The source component ID. A source component is a component from which messages flow to another component.

  • The name of the source component. See "Viewing the Oracle Streams Components at Each Database" for information about how components are named in the query output.

  • The destination component ID. A destination component receives messages from another component.

  • The name of the destination component.

  • The position in the stream path shows the location of a particular link in a path. For example, a position might be the first link in a path, the second link in a path, and so on.

To display this information, run the following query:

COLUMN PATH_ID HEADING 'Path|ID' FORMAT 9999
COLUMN SOURCE_COMPONENT_ID HEADING 'Source|Component|ID' FORMAT 9999
COLUMN SOURCE_COMPONENT_NAME HEADING 'Source|Component|Name' FORMAT A20
COLUMN DESTINATION_COMPONENT_ID HEADING 'Dest|Component|ID' FORMAT 9999
COLUMN DESTINATION_COMPONENT_NAME HEADING 'Dest|Component|Name' FORMAT A15
COLUMN POSITION HEADING 'Position' FORMAT 9999
 
SELECT PATH_ID, 
       SOURCE_COMPONENT_ID, 
       SOURCE_COMPONENT_NAME,
       DESTINATION_COMPONENT_ID, 
       DESTINATION_COMPONENT_NAME,
       POSITION 
  FROM DBA_STREAMS_TP_COMPONENT_LINK
  ORDER BY PATH_ID, POSITION;

The following output shows the paths in the Oracle Streams topology for the components listed in "Viewing the Oracle Streams Components at Each Database":

         Source Source                    Dest Dest                            
 Path Component Component            Component Component                        
   ID        ID Name                        ID Name            Position         
----- --------- -------------------- --------- --------------- --------         
    1        16 CAPTURE_HNS                 12 "STRMADMIN"."SO        1         
                                               URCE_HNS"                        
    1        12 "STRMADMIN"."SOURCE_        13 "STRMADMIN"."SO        2         
                HNS"                           URCE_HNS"=>"STR                  
                                               MADMIN"."DESTIN                  
                                               ATION_SPOKE1"@H                  
                                               UB.EXAMPLE.COM                   
    1        13 "STRMADMIN"."SOURCE_         6 "STRMADMIN"."SO        3         
                HNS"=>"STRMADMIN"."D           URCE_HNS"@SPOKE                  
                ESTINATION_SPOKE1"@H           1.EXAMPLE.COM=>                  
                UB.EXAMPLE.COM                 "STRMADMIN"."DES                 
                                               TINATION_SPOKE1"                 
    1         6 "STRMADMIN"."SOURCE_         1 "STRMADMIN"."DE        4         
                HNS"@SPOKE1.EXAMPLE.           STINATION_SPOKE                  
                COM=>"STRMADMIN"."DE           1"                               
                STINATION_SPOKE1"                                               
    1         1 "STRMADMIN"."DESTINA         8 APPLY_SPOKE1           5         
                TION_SPOKE1"                                                    
    2        22 CAPTURE_HNS                 18 "STRMADMIN"."SO        1         
                                               URCE_HNS"                        
    2        18 "STRMADMIN"."SOURCE_        19 "STRMADMIN"."SO        2         
                HNS"                           URCE_HNS"=>"STR                  
                                               MADMIN"."DESTIN                  
                                               ATION_SPOKE2"@H                  
                                               UB.EXAMPLE.COM                   
    2        19 "STRMADMIN"."SOURCE_         7 "STRMADMIN"."SO        3         
                HNS"=>"STRMADMIN"."D           URCE_HNS"@SPOKE                  
                ESTINATION_SPOKE2"@H           2.EXAMPLE.COM=>                  
                UB.EXAMPLE.COM                 "STRMADMIN"."DES                 
                                               TINATION_SPOKE2"                 
    2         7 "STRMADMIN"."SOURCE_         2 "STRMADMIN"."DE        4         
                HNS"@SPOKE2.EXAMPLE.           STINATION_SPOKE                  
                COM=>"STRMADMIN"."DE           2"                               
                STINATION_SPOKE2"                                               
    2         2 "STRMADMIN"."DESTINA         9 APPLY_SPOKE2           5         
                TION_SPOKE2"                                                    
    3        10 CAPTURE_HNS                  3 "STRMADMIN"."SO        1         
                                               URCE_HNS"                        
    3         3 "STRMADMIN"."SOURCE_         4 "STRMADMIN"."SO        2         
                HNS"                           URCE_HNS"=>"STR                  
                                               MADMIN"."DESTIN                  
                                               ATION_SPOKE1"@S                  
                                               POKE1.EXAMPLE.CO                 
                                               M                                
    3         4 "STRMADMIN"."SOURCE_        14 "STRMADMIN"."SO        3         
                HNS"=>"STRMADMIN"."D           URCE_HNS"@HUB.E                  
                ESTINATION_SPOKE1"@S           XAMPLE.COM=>"ST                  
                POKE1.EXAMPLE.COM              RMADMIN"."DESTI                  
                                               NATION_SPOKE1"                   
    3        14 "STRMADMIN"."SOURCE_        11 "STRMADMIN"."DE        4         
                HNS"@HUB.EXAMPLE.COM           STINATION_SPOKE                  
                =>"STRMADMIN"."DESTI           1"                               
                NATION_SPOKE1"                                                  
    3        11 "STRMADMIN"."DESTINA        15 APPLY_SPOKE1           5         
                TION_SPOKE1"                                                    
    4        10 CAPTURE_HNS                  3 "STRMADMIN"."SO        1         
                                               URCE_HNS"                        
    4         3 "STRMADMIN"."SOURCE_         5 "STRMADMIN"."SO        2         
                HNS"                           URCE_HNS"=>"STR                  
                                               MADMIN"."DESTIN                  
                                               ATION_SPOKE2"@S                  
                                               POKE2.EXAMPLE.C                  
                                               OM                               
    4         5 "STRMADMIN"."SOURCE_        20 "STRMADMIN"."SO        3         
                HNS"=>"STRMADMIN"."D           URCE_HNS"@HUB.E                  
                ESTINATION_SPOKE2"@S           XAMPLE.COM=>"ST                  
                POKE2.EXAMPLE.COM              RMADMIN"."DESTI                  
                                               NATION_SPOKE2"                   
    4        20 "STRMADMIN"."SOURCE_        17 "STRMADMIN"."DE        4         
                HNS"@HUB.EXAMPLE.COM           STINATION_SPOKE                  
                =>"STRMADMIN"."DESTI           2"                               
                NATION_SPOKE2"                                                  
    4        17 "STRMADMIN"."DESTINA        21 APPLY_SPOKE2           5         
                TION_SPOKE2"                                                    

Viewing Performance Statistics for Oracle Streams Components

Together, the DBMS_STREAMS_ADVISOR_ADM package and the Oracle Streams topology views make up the Oracle Streams Performance Advisor. The Oracle Streams topology views enable you to display and analyze performance statistics for the Oracle Streams components in your environment.

To view performance statistics for Oracle Streams components, you must first gather information about the Oracle Streams environment using the DBMS_STREAMS_ADVISOR_ADM package. See "Gathering Information About the Oracle Streams Topology and Performance".

The following sections explain how to view performance statistics for Oracle Streams components:


Note:

The performance of Oracle Streams components depends on several factors, including the computer equipment used in the environment and the speed of the network.

Checking for Bottleneck Components in the Oracle Streams Topology

A bottleneck component is the busiest component or the component with the least amount of idle time. You can view the following information about the bottleneck components in an Oracle Streams environment:

  • The path ID of the path that includes the component.

  • The component ID for each Oracle Streams component. The Oracle Streams topology assigns an ID number to each component and uses the number to track information about the component and about the stream path that flows through the component.

  • The name of the Oracle Streams component. See "Viewing the Oracle Streams Components at Each Database" for information about how components are named in the query output.

  • The type of the Oracle Streams component. The following types are possible:
    The possible types are CAPTURE, PROPAGATION SENDER, PROPAGATION RECEIVER, APPLY, and QUEUE.

  • The database that contains the component

Run the following query to check for bottleneck components in your Oracle Streams environment:

COLUMN PATH_ID HEADING 'Path ID' FORMAT 999
COLUMN COMPONENT_ID HEADING 'Component ID' FORMAT 999
COLUMN COMPONENT_NAME HEADING 'Name' FORMAT A20
COLUMN COMPONENT_TYPE HEADING 'Type' FORMAT A20
COLUMN COMPONENT_DB HEADING 'Database' FORMAT A15
 
SELECT PATH_ID,
       COMPONENT_ID, 
       COMPONENT_NAME, 
       COMPONENT_TYPE, 
       COMPONENT_DB
   FROM DBA_STREAMS_TP_PATH_BOTTLENECK
   WHERE BOTTLENECK_IDENTIFIED='YES' AND
         ADVISOR_RUN_ID=2
   ORDER BY PATH_ID, COMPONENT_ID;

This example uses 2 for the ADVISOR_RUN_ID in the WHERE clause. Substitute the advisor run ID for the advisor run you want to query. See "Gathering Information About the Oracle Streams Topology and Performance" for information about determining the ADVISOR_RUN_ID.

The following output shows the bottleneck components for the components listed in "Viewing the Oracle Streams Components at Each Database":

Path ID Component ID Name                 Type                 Database        
------- ------------ -------------------- -------------------- ---------------  
      1            6 "STRMADMIN"."SOURCE_ PROPAGATION RECEIVER HUB.EXAMPLE.COM  
                     HNS"@SPOKE1.EXAMPLE.                                       
                     COM=>"STRMADMIN"."DE                                       
                     STINATION_SPOKE1"                                          
      3           10 CAPTURE_HNS          CAPTURE              HUB.EXAMPLE.COM  
      4           10 CAPTURE_HNS          CAPTURE              HUB.EXAMPLE.COM  

If this query returns no results, then the Oracle Streams Performance Advisor did not identify any bottleneck components in your environment. However, if this query returns one or more bottleneck components, then check the status of these components. If they are disabled, then you can enable them. If the components are enabled, then you can examine the components to see if they can be modified to perform better.
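For example, if a bottleneck component is a capture process or an apply process, you can check its status in the DBA_CAPTURE or DBA_APPLY view. The following is a sketch; the STATUS column shows ENABLED, DISABLED, or ABORTED:

```sql
-- Check whether a capture process identified as a bottleneck is enabled.
SELECT CAPTURE_NAME, STATUS FROM DBA_CAPTURE;

-- Check whether an apply process identified as a bottleneck is enabled.
SELECT APPLY_NAME, STATUS FROM DBA_APPLY;
```

If a process shows ABORTED, query the corresponding ERROR_MESSAGE column in the same view to determine why it stopped before restarting it.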

In some cases, the Oracle Streams Performance Advisor cannot determine whether a component is a bottleneck component. To view these components, set BOTTLENECK_IDENTIFIED to 'NO' when you query the DBA_STREAMS_TP_PATH_BOTTLENECK view. The output for the ADVISOR_RUN_REASON column shows why the Oracle Streams Performance Advisor could not determine whether the component is a bottleneck component. The following reasons can be specified in the ADVISOR_RUN_REASON column output:

  • PRE-11.1 DATABASE EXISTS means that the component is in a stream path that includes a database before Oracle Database 11g Release 1. Bottleneck analysis is not performed on these components.

  • DIAGNOSTIC PACK REQUIRED means that the component is in a stream path that includes a database that does not have the Oracle Diagnostics Pack. Bottleneck analysis is not performed on these components.

  • NO BOTTLENECK IDENTIFIED means that either no bottleneck was identified in a stream path or that there might be more than one bottleneck component in the stream path.
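These components can be listed with a variant of the preceding query that selects the ADVISOR_RUN_REASON column. As before, this sketch assumes advisor run ID 2; substitute the run ID for your advisor run:

```sql
COLUMN COMPONENT_NAME HEADING 'Name' FORMAT A20
COLUMN ADVISOR_RUN_REASON HEADING 'Reason' FORMAT A30

-- List components for which bottleneck analysis was inconclusive,
-- along with the reason reported by the Performance Advisor.
SELECT PATH_ID,
       COMPONENT_ID,
       COMPONENT_NAME,
       ADVISOR_RUN_REASON
   FROM DBA_STREAMS_TP_PATH_BOTTLENECK
   WHERE BOTTLENECK_IDENTIFIED='NO' AND
         ADVISOR_RUN_ID=2
   ORDER BY PATH_ID, COMPONENT_ID;
```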

Viewing Component-Level Statistics

You can view statistics for the Oracle Streams components in the Oracle Streams topology. The query in this section displays the following information for each component:

  • The ID of the path to which the component belongs

  • The name of the Oracle Streams component

  • The type of the Oracle Streams component. The following types are possible:
    The possible types are CAPTURE, PROPAGATION SENDER, PROPAGATION RECEIVER, APPLY, and QUEUE.

  • The statistic that was gathered for the component

  • The value and unit of the statistic. For example, a LATENCY statistic shows a number for the value and SECONDS for the unit. A TRANSACTION APPLY RATE statistic shows a number for the value and TRANSACTIONS PER SECOND for the unit.

The ANALYZE_CURRENT_PERFORMANCE procedure in the DBMS_STREAMS_ADVISOR_ADM package gathers the statistics returned by the query in this section. Therefore, the statistics returned by the query were the current statistics when the procedure was run. The statistics are not updated automatically.

Table 23-3 describes each of the statistics that can be returned by the query in this section:

Table 23-3 Component-Level Statistics for Oracle Streams Components

Component Type | Statistic | Unit | Description

CAPTURE

CAPTURE RATE

MESSAGES PER SECOND

The average number of database changes in the redo log scanned by the capture process each second.

A capture process captures and enqueues the scanned changes that satisfy its rule sets.

CAPTURE

ENQUEUE RATE

MESSAGES PER SECOND

The average number of logical change records (LCRs) enqueued by the capture process each second.

CAPTURE

LATENCY

SECONDS

The amount of time between when the last redo entry became available for the capture process and the time when the last redo entry scanned by the capture process was recorded in the redo log.

The purpose of the statistic is to show the amount of time between when a change is recorded in the redo log and when the redo record is scanned by the capture process.

The capture process might or might not enqueue a scanned change. A capture process only enqueues a change if the change satisfies its rule sets.

PROPAGATION SENDER

SEND RATE

MESSAGES PER SECOND

The average number of messages sent each second by the propagation sender.

PROPAGATION SENDER

BANDWIDTH

BYTES PER SECOND

The average number of bytes sent each second by the propagation sender.

PROPAGATION SENDER

LATENCY

SECONDS

The amount of time between when a message was created at the source database and when the message was sent to the destination queue by the propagation sender.

The value shown is for a single message that was sent from the source queue to the destination queue by the propagation sender. This message was the last message sent by the propagation sender when the ANALYZE_CURRENT_PERFORMANCE procedure was run.

Depending on the type of message sent by the propagation, message creation time is one of the following:

  • For captured LCRs, the time when the redo entry for the database change was recorded

  • For persistent LCRs, the time when the LCR was constructed

  • For user messages, the time when the message was enqueued

APPLY

MESSAGE APPLY RATE

MESSAGES PER SECOND

The average number of messages applied each second by the apply process.

A captured LCR or persistent LCR can be applied in one of the following ways:

  • The apply process makes the change encapsulated in the LCR to a database object.

  • The apply process passes the LCR to an apply handler.

  • If the LCR raises an error, then the apply process sends the LCR to the error queue.

A persistent user message can be applied in one of the following ways:

  • The apply process sends the message to a message handler.

  • If the message raises an error, then the apply process sends the message to the error queue.

APPLY

TRANSACTION APPLY RATE

TRANSACTIONS PER SECOND

The average number of transactions applied by the apply process each second. Transactions typically include multiple messages.

A transaction that includes captured LCRs or persistent LCRs can be applied in one of the following ways:

  • The apply process makes all of the changes in the transaction and commits the transaction.

  • The apply process passes all of the LCRs in the transaction to an apply handler.

  • If an LCR raises an error, then the apply process sends the transaction and all of the LCRs in the transaction to the error queue.

A transaction that includes persistent user messages can be applied in one of the following ways:

  • The apply process passes all of the messages in the transaction to a message handler.

  • If a message raises an error, then the apply process sends all of the messages in the transaction to the error queue.

APPLY

LATENCY

SECONDS

For apply processes, the amount of time between when the message was created at a source database and when the message was applied by the apply process at the destination database.

The value shown is for a single message that was applied by the apply process. This message was the last message applied when the ANALYZE_CURRENT_PERFORMANCE procedure was run.

Depending on the type of message applied, message creation time is one of the following:

  • For captured LCRs, the time when the redo entry for the database change was recorded

  • For persistent LCRs, the time when the LCR was constructed

  • For user messages, the time when the message was enqueued

QUEUE

ENQUEUE RATE

MESSAGES PER SECOND

The average number of messages enqueued into the queue each second.

QUEUE

SPILL RATE

MESSAGES PER SECOND

The average number of messages that spilled from the buffered queue to the queue table each second.

QUEUE

CURRENT QUEUE SIZE

NUMBER OF MESSAGES

The number of messages in the queue when the ANALYZE_CURRENT_PERFORMANCE procedure was run.

CAPTURE, PROPAGATION SENDER, PROPAGATION RECEIVER, and APPLY

EVENT (Top wait event)

PERCENT

The percentage of time that the Oracle Streams component spent waiting because of a wait event.

The Oracle Streams Performance Advisor only gathers information about the top three events for each component.

For example, a capture process might wait for a redo log file to become available.


The following are general considerations for these performance statistics:

  • Regarding rate, bandwidth, and event statistics, the time period is calculated as the time difference between the two snapshots used by the ANALYZE_CURRENT_PERFORMANCE procedure in the same user session. See "About the Information Gathered by the Oracle Streams Performance Advisor" for information about the snapshots. When a user session ends, the rate, bandwidth, and event statistics are purged.

  • When a latency statistic is -1 seconds, the ANALYZE_CURRENT_PERFORMANCE procedure could not gather statistics for the component when it was run. In most cases, this result indicates that the component was disabled when the procedure was run. For example, if the LATENCY statistic for an apply process is -1, then the component was probably disabled when the ANALYZE_CURRENT_PERFORMANCE procedure was run.
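Components with a -1 latency can be found directly. The following sketch queries the same DBA_STREAMS_TP_COMPONENT_STAT view used later in this section and assumes advisor run ID 2:

```sql
-- List components whose latency could not be gathered. In most cases
-- this means the component was disabled when the
-- ANALYZE_CURRENT_PERFORMANCE procedure was run.
SELECT COMPONENT_ID,
       COMPONENT_NAME,
       COMPONENT_TYPE
   FROM DBA_STREAMS_TP_COMPONENT_STAT
   WHERE STATISTIC_NAME = 'LATENCY' AND
         STATISTIC_VALUE = -1 AND
         ADVISOR_RUN_ID = 2;
```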

To display performance statistics for the components in an Oracle Streams topology, run the following query:

COLUMN PATH_ID HEADING 'Path|ID' FORMAT 999
COLUMN COMPONENT_ID HEADING 'Component|ID' FORMAT 999
COLUMN COMPONENT_NAME HEADING 'Name' FORMAT A20
COLUMN COMPONENT_TYPE HEADING 'Type' FORMAT A12
COLUMN STATISTIC_NAME HEADING 'Statistic' FORMAT A15
COLUMN STATISTIC_VALUE HEADING 'Value' FORMAT 99999999999.99
COLUMN STATISTIC_UNIT HEADING 'Unit' FORMAT A15 

SELECT DISTINCT
       cp.PATH_ID,
       cs.COMPONENT_ID,
       cs.COMPONENT_NAME,
       cs.COMPONENT_TYPE,
       cs.STATISTIC_NAME,
       cs.STATISTIC_VALUE,
       cs.STATISTIC_UNIT
   FROM DBA_STREAMS_TP_COMPONENT_STAT  cs,
        (SELECT PATH_ID, SOURCE_COMPONENT_ID AS COMPONENT_ID
        FROM DBA_STREAMS_TP_COMPONENT_LINK
        UNION
        SELECT PATH_ID, DESTINATION_COMPONENT_ID AS COMPONENT_ID
        FROM DBA_STREAMS_TP_COMPONENT_LINK) cp
   WHERE cs.ADVISOR_RUN_ID = 2 AND
         cs.SESSION_ID IS NULL AND
         cs.SESSION_SERIAL# IS NULL AND
         cs.COMPONENT_ID = cp.COMPONENT_ID
   ORDER BY PATH_ID, COMPONENT_ID, COMPONENT_NAME, COMPONENT_TYPE, STATISTIC_NAME;

This example uses 2 for the ADVISOR_RUN_ID in the WHERE clause. Substitute the advisor run ID for the advisor run you want to query. See "Gathering Information About the Oracle Streams Topology and Performance" for information about determining the ADVISOR_RUN_ID.

The following output shows a partial list of the performance statistics for the components listed in "Viewing the Oracle Streams Components at Each Database". Specifically, the following output shows performance statistics for the components in stream path 1 and stream path 3:

Path Component                                                                                  
  ID         ID Name                 Type         Statistic                 Value Unit           
---- ---------- -------------------- ------------ --------------- --------------- ---------------
   1          1 "STRMADMIN"."DESTINA QUEUE        CURRENT QUEUE S             .00 NUMBER OF MESSA
                TION_SPOKE1"                      IZE                             GES            
   1          1 "STRMADMIN"."DESTINA QUEUE        ENQUEUE RATE            2573.21 MESSAGES PER SE
                TION_SPOKE1"                                                      COND           
   1          1 "STRMADMIN"."DESTINA QUEUE        SPILL RATE                  .00 MESSAGES PER SE
                TION_SPOKE1"                                                      COND           
   1          6 "STRMADMIN"."SOURCE_ PROPAGATION  EVENT: CPU + Wa           32.55 PERCENT        
                HNS"@SPOKE1.EXAMPLE. RECEIVER     it for CPU                                     
                COM=>"STRMADMIN"."DE                                                             
                STINATION_SPOKE1"                                                                
   1          6 "STRMADMIN"."SOURCE_ PROPAGATION  EVENT: SQL*Net            23.62 PERCENT        
                HNS"@SPOKE1.EXAMPLE. RECEIVER     more data from                                 
                COM=>"STRMADMIN"."DE              client                                         
                STINATION_SPOKE1"                                                                
   1          6 "STRMADMIN"."SOURCE_ PROPAGATION  EVENT: latch: r            2.10 PERCENT        
                HNS"@SPOKE1.EXAMPLE. RECEIVER     ow cache object                                
                COM=>"STRMADMIN"."DE              s                                              
                STINATION_SPOKE1"                                                                
   1          8 APPLY_SPOKE1         APPLY        EVENT: CPU + Wa           23.10 PERCENT        
                                                  it for CPU                                     
   1          8 APPLY_SPOKE1         APPLY        EVENT: latch: r            1.31 PERCENT        
                                                  ow cache object                                
                                                  s                                              
   1          8 APPLY_SPOKE1         APPLY        EVENT: latch: s            1.57 PERCENT        
                                                  hared pool                                     
   1          8 APPLY_SPOKE1         APPLY        LATENCY                    2.13 SECONDS        
   1          8 APPLY_SPOKE1         APPLY        MESSAGE APPLY R        10004.00 MESSAGES PER SE
                                                  ATE                             COND           
   1          8 APPLY_SPOKE1         APPLY        TRANSACTION APP          100.00 TRANSACTIONS PE
                                                  LY RATE                         R SECOND       
   1         12 "STRMADMIN"."SOURCE_ QUEUE        CURRENT QUEUE S             .00 NUMBER OF MESSA
                HNS"                              IZE                             GES            
   1         12 "STRMADMIN"."SOURCE_ QUEUE        ENQUEUE RATE            9932.00 MESSAGES PER SE
                HNS"                                                              COND           
                                                                                                 
   1         12 "STRMADMIN"."SOURCE_ QUEUE        SPILL RATE                  .00 MESSAGES PER SE
                HNS"                                                              COND           
   1         13 "STRMADMIN"."SOURCE_ PROPAGATION  BANDWIDTH              32992.96 BYTES PER SECON
                HNS"=>"STRMADMIN"."D SENDER                                       D              
                ESTINATION_SPOKE1"@H                                                             
                UB.EXAMPLE.COM                                                                   
   1         13 "STRMADMIN"."SOURCE_ PROPAGATION  EVENT: CPU + Wa           35.96 PERCENT        
                HNS"=>"STRMADMIN"."D SENDER       it for CPU                                     
                ESTINATION_SPOKE1"@H                                                             
                UB.EXAMPLE.COM                                                                   
   1         13 "STRMADMIN"."SOURCE_ PROPAGATION  EVENT: SQL*Net              .26 PERCENT        
                HNS"=>"STRMADMIN"."D SENDER       message to dbli                                
                ESTINATION_SPOKE1"@H              nk                                             
                UB.EXAMPLE.COM                                                                   
   1         13 "STRMADMIN"."SOURCE_ PROPAGATION  EVENT: latch: r             .26 PERCENT        
                HNS"=>"STRMADMIN"."D SENDER       ow cache object                                
                ESTINATION_SPOKE1"@H              s                                              
                UB.EXAMPLE.COM                                                                   
   1         13 "STRMADMIN"."SOURCE_ PROPAGATION  LATENCY                    4.00 SECONDS        
                HNS"=>"STRMADMIN"."D SENDER                                                      
                ESTINATION_SPOKE1"@H                                                             
                UB.EXAMPLE.COM                                                                   
   1         13 "STRMADMIN"."SOURCE_ PROPAGATION  SEND RATE               2568.00 MESSAGES PER SE
                HNS"=>"STRMADMIN"."D SENDER                                       COND           
                ESTINATION_SPOKE1"@H                                                             
                UB.EXAMPLE.COM                                                                   
   1         16 CAPTURE_HNS          CAPTURE      CAPTURE RATE           10464.00 MESSAGES PER SE
                                                                                  COND           
   1         16 CAPTURE_HNS          CAPTURE      ENQUEUE RATE           10002.00 MESSAGES PER SE
                                                                                  COND           
   1         16 CAPTURE_HNS          CAPTURE      EVENT: CPU + Wa           11.02 PERCENT        
                                                  it for CPU                                     
   1         16 CAPTURE_HNS          CAPTURE      EVENT: CPU + Wa           35.96 PERCENT        
                                                  it for CPU                                     
   1         16 CAPTURE_HNS          CAPTURE      EVENT: SQL*Net             5.51 PERCENT        
                                                  message from db                                
                                                  link                                           
   1         16 CAPTURE_HNS          CAPTURE      LATENCY                    2.65 SECONDS        
.
.
.

Note:

This output is for illustrative purposes only. Actual performance characteristics vary depending on individual configurations and conditions.

You can analyze this output along with the output for the queries in "Viewing the Oracle Streams Components at Each Database" and "Viewing Each Stream Path in an Oracle Streams Topology".
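One way to combine these views is to restrict the component-level statistics to the components that the Performance Advisor identified as bottlenecks. This is a sketch, again assuming advisor run ID 2:

```sql
-- Show component-level statistics for bottleneck components only.
SELECT b.PATH_ID,
       cs.COMPONENT_NAME,
       cs.STATISTIC_NAME,
       cs.STATISTIC_VALUE,
       cs.STATISTIC_UNIT
   FROM DBA_STREAMS_TP_COMPONENT_STAT cs,
        DBA_STREAMS_TP_PATH_BOTTLENECK b
   WHERE b.BOTTLENECK_IDENTIFIED = 'YES' AND
         b.ADVISOR_RUN_ID = 2 AND
         cs.ADVISOR_RUN_ID = 2 AND
         cs.COMPONENT_ID = b.COMPONENT_ID AND
         cs.SESSION_ID IS NULL AND
         cs.SESSION_SERIAL# IS NULL
   ORDER BY b.PATH_ID, cs.COMPONENT_ID, cs.STATISTIC_NAME;
```

Filtering on SESSION_ID IS NULL and SESSION_SERIAL# IS NULL returns component-level rather than session-level statistics, matching the main query in this section.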


See Also:


Viewing Session-Level Statistics

You can view session-level statistics for the Oracle Streams components. The query in this section displays the following information for each session-level statistic:

  • The name of the Oracle Streams component

  • The type of the Oracle Streams component. The possible types are CAPTURE, PROPAGATION SENDER, PROPAGATION RECEIVER, and APPLY.

  • The type of the subcomponent. Only capture processes and apply processes have subcomponents.

    The following subcomponent types are possible for capture processes:

    • LOGMINER READER for a reader server of a capture process

    • LOGMINER PREPARER for a preparer server of a capture process

    • LOGMINER BUILDER for a builder server of a capture process

    • CAPTURE SESSION for a capture process session

    The following subcomponent types are possible for apply processes:

    • PROPAGATION SENDER+RECEIVER for sending LCRs from a capture process directly to an apply process in a combined capture and apply optimization

    • APPLY READER for a reader server

    • APPLY COORDINATOR for a coordinator process

    • APPLY SERVER for an apply server

  • The statistic that was gathered for the component

  • The value and unit of the statistic. Session-level statistics show PERCENT for the unit. The value is the percentage of time spent either IDLE, paused for FLOW CONTROL, or waiting for an EVENT.

The ANALYZE_CURRENT_PERFORMANCE procedure in the DBMS_STREAMS_ADVISOR_ADM package gathers the statistics returned by the query in this section. Therefore, the statistics returned by the query were the current statistics when the procedure was run. The statistics are not updated automatically.

Table 23-4 describes each of the statistics that can be returned by the query in this section:

Table 23-4 Session-Level Statistics for Oracle Streams Components

Statistic | Unit | Description

IDLE

PERCENT

The percentage of time that the session spent idle. When a session is idle, it is not performing any work.

FLOW CONTROL

PERCENT

The percentage of time that the session was paused for flow control. See "Capture Process States" for information about flow control.

EVENT (Top wait event)

PERCENT

The percentage of time that the session spent waiting because of a wait event.

The Oracle Streams Performance Advisor only gathers information about the top three events for each session.

For example, an apply server might wait for a dependent transaction to be applied before applying its transaction.


Regarding flow control and event statistics, the time period is calculated as the time difference between the two snapshots used by the ANALYZE_CURRENT_PERFORMANCE procedure in the same user session. See "About the Information Gathered by the Oracle Streams Performance Advisor" for information about the snapshots. When a user session ends, the flow control and event statistics are purged.

To display session-level performance statistics for the components in an Oracle Streams topology, run the following query:

COLUMN PATH_ID HEADING 'Path|ID' FORMAT 999
COLUMN COMPONENT_ID HEADING 'Component|ID' FORMAT 999
COLUMN COMPONENT_NAME HEADING 'Component|Name' FORMAT A20
COLUMN COMPONENT_TYPE HEADING 'Component|Type' FORMAT A10
COLUMN SUB_COMPONENT_TYPE HEADING 'Subcomponent|Type' FORMAT A17
COLUMN STATISTIC_NAME HEADING 'Statistic' FORMAT A15
COLUMN STATISTIC_VALUE HEADING 'Value' FORMAT 999.99
COLUMN STATISTIC_UNIT HEADING 'Unit' FORMAT A7
 
SELECT DISTINCT
       cp.PATH_ID,
       cs.COMPONENT_ID,
       cs.COMPONENT_NAME,
       cs.COMPONENT_TYPE,
       cs.SUB_COMPONENT_TYPE,
       cs.STATISTIC_NAME,
       cs.STATISTIC_VALUE,
       cs.STATISTIC_UNIT
   FROM DBA_STREAMS_TP_COMPONENT_STAT  cs,
        (SELECT PATH_ID, SOURCE_COMPONENT_ID AS COMPONENT_ID
        FROM DBA_STREAMS_TP_COMPONENT_LINK
        UNION
        SELECT PATH_ID, DESTINATION_COMPONENT_ID AS COMPONENT_ID
        FROM DBA_STREAMS_TP_COMPONENT_LINK) cp
   WHERE cs.ADVISOR_RUN_ID=2 AND
         cs.SESSION_ID IS NOT NULL AND
         cs.SESSION_SERIAL# IS NOT NULL AND
         cs.COMPONENT_ID = cp.COMPONENT_ID
   ORDER BY PATH_ID, COMPONENT_ID, COMPONENT_NAME, COMPONENT_TYPE, STATISTIC_NAME;

This example uses 2 for the ADVISOR_RUN_ID in the WHERE clause. Substitute the advisor run ID for the advisor run you want to query. See "Gathering Information About the Oracle Streams Topology and Performance" for information about determining the ADVISOR_RUN_ID.

The following output shows a partial list of the session-level performance statistics for the components listed in "Viewing the Oracle Streams Components at Each Database". Specifically, the following output shows session-level performance statistics for the components in stream path 1 and stream path 3:

Path Component Component            Component  Subcomponent                                     
  ID        ID Name                 Type       Type              Statistic         Value Unit    
---- --------- -------------------- ---------- ----------------- --------------- ------- ------- 
   1         6 "STRMADMIN"."SOURCE_ PROPAGATIO                   EVENT: CPU + Wa   32.55 PERCENT 
               HNS"@SPOKE1.EXAMPLE. N RECEIVER                   it for CPU                      
               COM=>"STRMADMIN"."DE                                                              
               STINATION_SPOKE1"                                                                 
   1         6 "STRMADMIN"."SOURCE_ PROPAGATIO                   EVENT: SQL*Net    23.62 PERCENT 
               HNS"@SPOKE1.EXAMPLE. N RECEIVER                   more data from                  
               COM=>"STRMADMIN"."DE                              client                          
               STINATION_SPOKE1"                                                                 
   1         6 "STRMADMIN"."SOURCE_ PROPAGATIO                   EVENT: latch: r    2.10 PERCENT 
               HNS"@SPOKE1.EXAMPLE. N RECEIVER                   ow cache object                 
               COM=>"STRMADMIN"."DE                              s                               
               STINATION_SPOKE1"                                                                 
   1         6 "STRMADMIN"."SOURCE_ PROPAGATIO                   FLOW CONTROL        .89 PERCENT 
               HNS"@SPOKE1.EXAMPLE. N RECEIVER                                                   
               COM=>"STRMADMIN"."DE                                                              
               STINATION_SPOKE1"                                                                 
   1         6 "STRMADMIN"."SOURCE_ PROPAGATIO                   IDLE              36.61 PERCENT 
               HNS"@SPOKE1.EXAMPLE. N RECEIVER                                                   
               COM=>"STRMADMIN"."DE                                                              
               STINATION_SPOKE1"                                                                 
   1         8 APPLY_SPOKE1         APPLY      APPLY READER      EVENT: CPU + Wa     .26 PERCENT 
                                                                 it for CPU                      
   1         8 APPLY_SPOKE1         APPLY      APPLY SERVER      EVENT: CPU + Wa   23.10 PERCENT 
                                                                 it for CPU                      
   1         8 APPLY_SPOKE1         APPLY      APPLY SERVER      EVENT: latch: r    1.31 PERCENT 
                                                                 ow cache object                 
                                                                 s                               
   1         8 APPLY_SPOKE1         APPLY      APPLY READER      EVENT: latch: s     .26 PERCENT 
                                                                 hared pool                      
   1         8 APPLY_SPOKE1         APPLY      APPLY SERVER      EVENT: latch: s    1.57 PERCENT 
                                                                 hared pool                      
   1         8 APPLY_SPOKE1         APPLY      APPLY COORDINATOR FLOW CONTROL        .00 PERCENT 
   1         8 APPLY_SPOKE1         APPLY      APPLY READER      FLOW CONTROL      10.76 PERCENT 
   1         8 APPLY_SPOKE1         APPLY      APPLY SERVER      FLOW CONTROL        .00 PERCENT 
   1         8 APPLY_SPOKE1         APPLY      APPLY COORDINATOR IDLE               6.21 PERCENT 
   1         8 APPLY_SPOKE1         APPLY      APPLY READER      IDLE               9.24 PERCENT 
   1         8 APPLY_SPOKE1         APPLY      APPLY SERVER      IDLE               8.53 PERCENT 
   1        13 "STRMADMIN"."SOURCE_ PROPAGATIO                   EVENT: CPU + Wa   21.65 PERCENT 
               HNS"=>"STRMADMIN"."D N SENDER                     it for CPU                      
               ESTINATION_SPOKE1"@H                                                              
               UB.EXAMPLE.COM                                                                    
   1        13 "STRMADMIN"."SOURCE_ PROPAGATIO                   EVENT: SQL*Net      .26 PERCENT 
               HNS"=>"STRMADMIN"."D N SENDER                     message to dbli                 
               ESTINATION_SPOKE1"@H                              nk                              
               UB.EXAMPLE.COM                                                                    
   1        13 "STRMADMIN"."SOURCE_ PROPAGATIO                   EVENT: latch: r     .26 PERCENT 
               HNS"=>"STRMADMIN"."D N SENDER                     ow cache object                 
               ESTINATION_SPOKE1"@H                              s                               
               UB.EXAMPLE.COM                                                                    
   1        13 "STRMADMIN"."SOURCE_ PROPAGATIO                   EVENT: latch: s     .26 PERCENT 
               HNS"=>"STRMADMIN"."D N SENDER                     hared pool                      
               ESTINATION_SPOKE1"@H                                                              
               UB.EXAMPLE.COM                                                                    
   1        13 "STRMADMIN"."SOURCE_ PROPAGATIO                   FLOW CONTROL       7.37 PERCENT 
               HNS"=>"STRMADMIN"."D N SENDER                                                     
               ESTINATION_SPOKE1"@H                                                              
               UB.EXAMPLE.COM                                                                    
   1        13 "STRMADMIN"."SOURCE_ PROPAGATIO                   IDLE              67.41 PERCENT 
               HNS"=>"STRMADMIN"."D N SENDER                                                     
               ESTINATION_SPOKE1"@H                                                              
               UB.EXAMPLE.COM                                                                    
   1        16 CAPTURE_HNS          CAPTURE    LOGMINER READER   EVENT: ARCH wai     .26 PERCENT 
                                                                 t on c/f tx acq                 
                                                                 uire 2                          
   1        16 CAPTURE_HNS          CAPTURE    CAPTURE SESSION   EVENT: CPU + Wa   35.96 PERCENT 
                                                                 it for CPU                      
   1        16 CAPTURE_HNS          CAPTURE    LOGMINER BUILDER  EVENT: CPU + Wa     .26 PERCENT 
                                                                 it for CPU                      
   1        16 CAPTURE_HNS          CAPTURE    LOGMINER PREPARER EVENT: CPU + Wa   11.02 PERCENT 
                                                                 it for CPU                      
   1        16 CAPTURE_HNS          CAPTURE    LOGMINER READER   EVENT: CPU + Wa     .26 PERCENT 
                                                                 it for CPU                      
   1        16 CAPTURE_HNS          CAPTURE    CAPTURE SESSION   EVENT: SQL*Net     5.51 PERCENT 
                                                                 message from db                 
                                                                 link                            
   1        16 CAPTURE_HNS          CAPTURE    CAPTURE SESSION   EVENT: SQL*Net      .26 PERCENT 
                                                                 message to dbli                 
                                                                 nk                              
   1        16 CAPTURE_HNS          CAPTURE    CAPTURE SESSION   EVENT: latch: r     .26 PERCENT 
                                                                 ow cache object                 
                                                                 s                               
   1        16 CAPTURE_HNS          CAPTURE    LOGMINER BUILDER  EVENT: latch: r    1.84 PERCENT 
                                                                 ow cache object                 
                                                                 s                               
   1        16 CAPTURE_HNS          CAPTURE    LOGMINER PREPARER EVENT: latch: r     .79 PERCENT 
                                                                 ow cache object                 
                                                                 s                               
   1        16 CAPTURE_HNS          CAPTURE    CAPTURE SESSION   EVENT: latch: s     .26 PERCENT 
                                                                 hared pool                      
   1        16 CAPTURE_HNS          CAPTURE    LOGMINER READER   EVENT: latch: s     .79 PERCENT 
                                                                 hared pool                      
   1        16 CAPTURE_HNS          CAPTURE    CAPTURE SESSION   FLOW CONTROL      16.27 PERCENT 
   1        16 CAPTURE_HNS          CAPTURE    LOGMINER BUILDER  FLOW CONTROL        .00 PERCENT 
   1        16 CAPTURE_HNS          CAPTURE    LOGMINER PREPARER FLOW CONTROL        .00 PERCENT 
   1        16 CAPTURE_HNS          CAPTURE    LOGMINER READER   FLOW CONTROL        .00 PERCENT 
   1        16 CAPTURE_HNS          CAPTURE    CAPTURE SESSION   IDLE              41.47 PERCENT 
   1        16 CAPTURE_HNS          CAPTURE    LOGMINER BUILDER  IDLE              97.90 PERCENT 
   1        16 CAPTURE_HNS          CAPTURE    LOGMINER PREPARER IDLE              88.19 PERCENT 
   1        16 CAPTURE_HNS          CAPTURE    LOGMINER READER   IDLE              98.69 PERCENT 
.
.
.
   3         4 "STRMADMIN"."SOURCE_ PROPAGATIO                   FLOW CONTROL       6.50 PERCENT 
               HNS"=>"STRMADMIN"."D N SENDER                                                     
               ESTINATION_SPOKE1"@S                                                              
               POKE1.EXAMPLE.COM                                                                 
   3         4 "STRMADMIN"."SOURCE_ PROPAGATIO                   IDLE              70.50 PERCENT 
               HNS"=>"STRMADMIN"."D N SENDER                                                     
               ESTINATION_SPOKE1"@S                                                              
               POKE1.EXAMPLE.COM                                                                 
   3        10 CAPTURE_HNS          CAPTURE    CAPTURE SESSION   EVENT: ARCH wai   52.23 PERCENT 
                                                                 t for archivelo                 
                                                                 g lock                          
   3        10 CAPTURE_HNS          CAPTURE    CAPTURE SESSION   EVENT: CPU + Wa    7.35 PERCENT 
                                                                 it for CPU                      
   3        10 CAPTURE_HNS          CAPTURE    CAPTURE SESSION   EVENT: control      .52 PERCENT 
                                                                 file sequential                 
                                                                  read                           
   3        10 CAPTURE_HNS          CAPTURE    CAPTURE SESSION   FLOW CONTROL       4.24 PERCENT 
   3        10 CAPTURE_HNS          CAPTURE    CAPTURE SESSION   IDLE               2.23 PERCENT 
   3        14 "STRMADMIN"."SOURCE_ PROPAGATIO                   EVENT: CPU + Wa    6.92 PERCENT 
               HNS"@HUB.EXAMPLE.COM N RECEIVER                   it for CPU                      
               =>"STRMADMIN"."DESTI                                                              
               NATION_SPOKE1"                                                                    
   3        14 "STRMADMIN"."SOURCE_ PROPAGATIO                   EVENT: latch: r    2.23 PERCENT 
               HNS"@HUB.EXAMPLE.COM N RECEIVER                   ow cache object                 
               =>"STRMADMIN"."DESTI                              s                               
               NATION_SPOKE1"                                                                    
   3        14 "STRMADMIN"."SOURCE_ PROPAGATIO                   EVENT: library     3.79 PERCENT 
               HNS"@HUB.EXAMPLE.COM N RECEIVER                   cache: mutex X                  
               =>"STRMADMIN"."DESTI                                                              
               NATION_SPOKE1"                                                                    
   3        14 "STRMADMIN"."SOURCE_ PROPAGATIO                   FLOW CONTROL        .67 PERCENT 
               HNS"@HUB.EXAMPLE.COM N RECEIVER                                                   
               =>"STRMADMIN"."DESTI                                                              
               NATION_SPOKE1"                                                                    
   3        14 "STRMADMIN"."SOURCE_ PROPAGATIO                   IDLE              85.04 PERCENT 
               HNS"@HUB.EXAMPLE.COM N RECEIVER                                                   
               =>"STRMADMIN"."DESTI                                                              
               NATION_SPOKE1"                                                                    
   3        15 APPLY_SPOKE1         APPLY      APPLY COORDINATOR EVENT: latch: r    4.20 PERCENT 
                                                                 ow cache object                 
                                                                 s                               
   3        15 APPLY_SPOKE1         APPLY      APPLY COORDINATOR EVENT: latch: s     .52 PERCENT 
                                                                 hared pool                      
   3        15 APPLY_SPOKE1         APPLY      APPLY READER      EVENT: latch: s     .26 PERCENT 
                                                                 hared pool                      
   3        15 APPLY_SPOKE1         APPLY      APPLY COORDINATOR FLOW CONTROL        .00 PERCENT 
   3        15 APPLY_SPOKE1         APPLY      APPLY READER      FLOW CONTROL       1.56 PERCENT 
   3        15 APPLY_SPOKE1         APPLY      APPLY SERVER      FLOW CONTROL        .00 PERCENT 
   3        15 APPLY_SPOKE1         APPLY      APPLY COORDINATOR IDLE              87.28 PERCENT 
   3        15 APPLY_SPOKE1         APPLY      APPLY READER      IDLE              96.88 PERCENT 
   3        15 APPLY_SPOKE1         APPLY      APPLY SERVER      IDLE              91.29 PERCENT 

Note:

  • This output is for illustrative purposes only. Actual performance characteristics vary depending on individual configurations and conditions.

  • You can view the session ID and serial number for each session by adding the SESSION_ID and SESSION_SERIAL# columns to the query on the DBA_STREAMS_TP_COMPONENT_STAT view.
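
The note above can be put into practice with a variation of the component-level query. The following is a sketch only, assuming the SESSION_ID and SESSION_SERIAL# columns of the DBA_STREAMS_TP_COMPONENT_STAT view and an advisor run ID of 2; substitute the advisor run ID for your own advisor run:

```sql
COLUMN SESSION_ID HEADING 'Session ID' FORMAT 9999
COLUMN SESSION_SERIAL# HEADING 'Serial#' FORMAT 9999

SELECT PATH_ID,
       COMPONENT_ID,
       SESSION_ID,
       SESSION_SERIAL#,
       STATISTIC_NAME,
       STATISTIC_VALUE
   FROM DBA_STREAMS_TP_COMPONENT_STAT
   WHERE ADVISOR_RUN_ID=2
   ORDER BY PATH_ID, COMPONENT_ID, STATISTIC_NAME;
```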





Viewing Statistics for the Stream Paths in an Oracle Streams Environment

The query in this section shows the following information for each stream path in the Oracle Streams topology:

  • Whether optimization mode for Oracle Streams is used for the path. When the OPTIMIZATION_MODE statistic is greater than 0 (zero) for a path, the path uses the combined capture and apply optimization. When the OPTIMIZATION_MODE statistic is 0 (zero) for a path, the path does not use the combined capture and apply optimization.

  • The MESSAGE RATE value is the average number of messages sent each second from the start of the path to the end of the path.

  • The TRANSACTION RATE value is the average number of transactions sent each second from the start of the path to the end of the path.

The time period for these statistics is calculated as the time difference between the two snapshots used by the ANALYZE_CURRENT_PERFORMANCE procedure in the same user session. See "About the Information Gathered by the Oracle Streams Performance Advisor" for information about the snapshots. When a user session ends, these statistics are purged.

To display this information, run the following query:

COLUMN PATH_ID HEADING 'Path ID' FORMAT 999
COLUMN STATISTIC_NAME HEADING 'Statistic' FORMAT A25
COLUMN STATISTIC_VALUE HEADING 'Value' FORMAT 99999999.99
COLUMN STATISTIC_UNIT HEADING 'Unit' FORMAT A25

SELECT PATH_ID,
       STATISTIC_NAME,
       STATISTIC_VALUE,
       STATISTIC_UNIT
   FROM DBA_STREAMS_TP_PATH_STAT
   WHERE ADVISOR_RUN_ID=2
   ORDER BY PATH_ID, STATISTIC_NAME;

This example uses 2 for the ADVISOR_RUN_ID in the WHERE clause. Substitute the advisor run ID for the advisor run you want to query. See "Gathering Information About the Oracle Streams Topology and Performance" for information about determining the ADVISOR_RUN_ID.
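If you do not know the most recent advisor run ID, you can derive it from the view itself. The following is a minimal sketch, assuming at least one advisor run has been performed in the current user session:

```sql
SELECT MAX(ADVISOR_RUN_ID) FROM DBA_STREAMS_TP_PATH_STAT;
```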

The following output shows the path statistics for the stream paths listed in "Viewing Each Stream Path in an Oracle Streams Topology":

Path ID Statistic                        Value Unit
------- ------------------------- ------------ -------------------------
      1 OPTIMIZATION_MODE                 1.00 NUMBER
      1 MESSAGE RATE                  10004.00 MESSAGES PER SECOND
      1 TRANSACTION RATE                100.00 TRANSACTIONS PER SECOND
      2 OPTIMIZATION_MODE                 1.00 NUMBER
      2 MESSAGE RATE                  10028.25 MESSAGES PER SECOND
      2 TRANSACTION RATE                100.37 TRANSACTIONS PER SECOND
      3 OPTIMIZATION_MODE                 1.00 NUMBER
      3 MESSAGE RATE                   9623.20 MESSAGES PER SECOND
      3 TRANSACTION RATE                 97.10 TRANSACTIONS PER SECOND
      4 OPTIMIZATION_MODE                 1.00 NUMBER
      4 MESSAGE RATE                  10180.05 MESSAGES PER SECOND
      4 TRANSACTION RATE                102.68 TRANSACTIONS PER SECOND

Note:

This output is for illustrative purposes only. Actual performance characteristics vary depending on individual configurations and conditions.

Using the UTL_SPADV Package

The UTL_SPADV package provides subprograms to collect and analyze statistics for the Oracle Streams components in a distributed database environment. The package uses the Oracle Streams Performance Advisor to gather statistics.

The COLLECT_STATS and START_MONITORING procedures use the Oracle Streams Performance Advisor to gather statistics about the Oracle Streams components and subcomponents in a distributed database environment. The SHOW_STATS procedure generates output that includes the statistics. The output is formatted so that it can be imported into a spreadsheet easily and analyzed.

The COLLECT_STATS procedure collects statistics each time it is called. The comp_stat_table and path_stat_table parameters specify the tables that store the performance statistics. By default, these tables are STREAMS$_ADVISOR_COMP_STAT and STREAMS$_ADVISOR_PATH_STAT, respectively.

You can also use the START_MONITORING procedure to create a monitoring job that monitors Oracle Streams performance continually at specified intervals. The monitoring job uses the COLLECT_STATS procedure to collect statistics. The START_MONITORING procedure populates the STREAMS$_PA_MONITORING table, and the SHOW_STATS_TABLE column in this table specifies the table that contains the performance statistics. You can use the ALTER_MONITORING procedure to modify a monitoring job, and you can use the STOP_MONITORING procedure to stop a monitoring job.

These procedures collect the same statistics as the Oracle Streams Performance Advisor. These statistics are described in Table 23-3, "Component-Level Statistics for Oracle Streams Components" and Table 23-4, "Session-Level Statistics for Oracle Streams Components".



See Also:

Oracle Database PL/SQL Packages and Types Reference for more information about the UTL_SPADV package

Collecting Oracle Streams Statistics Using the UTL_SPADV Package

To collect statistics using the UTL_SPADV package, complete the following steps:

  1. Identify the database that you will use to gather the information. An administrative user at this database must meet the following requirements:

    • The user must have access to a database link to each database that contains Oracle Streams components to monitor.

    • The user must have been granted privileges using the DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE procedure, and each database link must connect to a user at the remote database that has been granted privileges using the DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE procedure.

      If you configure an Oracle Streams administrator at each database with Oracle Streams components, then the Oracle Streams administrator has the necessary privileges. See Oracle Streams Replication Administrator's Guide for information about creating an Oracle Streams administrator.

    If no database in your environment meets these requirements, then choose a database, configure the necessary database links, and grant the necessary privileges to the users before proceeding.

  2. In SQL*Plus, connect to the database you identified in Step 1 as a user that meets the requirements listed in Step 1.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  3. Run the utlspadv.sql script in the rdbms/admin directory in ORACLE_HOME to load the UTL_SPADV package. For example:

    @utlspadv.sql
    
  4. Either collect the current Oracle Streams performance statistics once, or create a job that continually monitors Oracle Streams performance:

    • To collect the current Oracle Streams performance statistics once, run the COLLECT_STATS procedure:

      exec UTL_SPADV.COLLECT_STATS
      

      This example uses the default values for the parameters in the COLLECT_STATS procedure. Therefore, this example runs the Performance Advisor 10 times with 60 seconds between each run. These values correspond with the default values for the num_runs and interval parameters, respectively, in the COLLECT_STATS procedure.
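To override the defaults, pass the parameters explicitly. The following sketch assumes only the num_runs and interval parameters described above, collecting statistics five times with 30 seconds between each run:

```sql
BEGIN
  UTL_SPADV.COLLECT_STATS(
    interval => 30,
    num_runs => 5);
END;
/
```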

    • To create a job that continually monitors Oracle Streams performance:

      exec UTL_SPADV.START_MONITORING
      

      This example creates a monitoring job, and the monitoring job gathers performance statistics continually at set intervals. This example uses the default values for the parameters in the START_MONITORING procedure. Therefore, this example runs the Performance Advisor every 60 seconds. This value corresponds with the default value for the interval parameter in the START_MONITORING procedure. If an interval is specified in the START_MONITORING procedure, then the specified interval is used for the interval parameter in the COLLECT_STATS procedure.
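Similarly, you can specify an explicit interval when you create the monitoring job. The following sketch, which assumes only the interval parameter described above, creates a monitoring job that gathers statistics every 30 seconds:

```sql
BEGIN
  UTL_SPADV.START_MONITORING(
    interval => 30);
END;
/
```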

    These procedures include several parameters that you can use to adjust the way performance statistics are gathered. See Oracle Database PL/SQL Packages and Types Reference for more information.

You can show the statistics by running the SHOW_STATS procedure. See "Showing Oracle Streams Statistics Using the UTL_SPADV Package".


See Also:

Oracle Database PL/SQL Packages and Types Reference for more information about the UTL_SPADV package

Checking Whether an Oracle Streams Monitoring Job Is Currently Running

To check whether a monitoring job is running using the UTL_SPADV package, complete the following steps:

  1. Connect to the database as the user who submitted the monitoring job.

  2. Run the IS_MONITORING function. For example, to determine whether a monitoring job submitted by the current user with the full monitoring job name of STREAMS$_MONITORING_JOB is running, enter the following:

    SET SERVEROUTPUT ON
    DECLARE
      is_mon   BOOLEAN;
    BEGIN
      is_mon := UTL_SPADV.IS_MONITORING(
                  job_name    => 'STREAMS$_MONITORING_JOB',
                  client_name => NULL);
      IF is_mon=TRUE THEN
        DBMS_OUTPUT.PUT_LINE('The monitoring job is running.');
      ELSE
        DBMS_OUTPUT.PUT_LINE('No monitoring job was found.');
      END IF;
    END;
    /
    

The output displays the following text if a monitoring job with the specified full monitoring job name is currently running:

The monitoring job is running.

The output displays the following text if no monitoring job with the specified full monitoring job name is currently running:

No monitoring job was found.

Note:

When you submit a monitoring job, the client name and job name are concatenated to form the full monitoring job name. The client name for a monitoring job submitted by Oracle Enterprise Manager is always EM.

Altering an Oracle Streams Monitoring Job

To alter a monitoring job using the UTL_SPADV package, complete the following steps:

  1. Create a monitoring job if you have not done so already by completing the steps described in "Collecting Oracle Streams Statistics Using the UTL_SPADV Package". Ensure that you run the START_MONITORING procedure in Step 4.

  2. Connect to the database as the user who submitted the monitoring job. Only the user who submitted a monitoring job can alter the monitoring job, and each user can submit only one monitoring job at a time.

  3. Run the ALTER_MONITORING procedure. The following example sets the interval for the monitoring job to 120 seconds:

    BEGIN
      UTL_SPADV.ALTER_MONITORING(
        interval => 120);
    END;
    /
    

    After running this procedure, the monitoring job gathers statistics every 120 seconds.

Stopping an Oracle Streams Monitoring Job

To stop a monitoring job using the UTL_SPADV package, complete the following steps:

  1. Connect to the database as the user who submitted the monitoring job. Only the user who submitted a monitoring job can stop the monitoring job, and each user can submit only one monitoring job at a time.

  2. Run the STOP_MONITORING procedure:

    exec UTL_SPADV.STOP_MONITORING
    

The STOP_MONITORING procedure includes a purge parameter that you can use to purge the statistics gathered by the monitoring job from the result tables. By default, the purge parameter is set to FALSE, and the results are retained. Set the purge parameter to TRUE to purge the results.
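For example, the following sketch stops the monitoring job and purges its gathered statistics by setting the purge parameter described above:

```sql
BEGIN
  UTL_SPADV.STOP_MONITORING(
    purge => TRUE);
END;
/
```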


See Also:

Oracle Database PL/SQL Packages and Types Reference for more information about the UTL_SPADV package

Showing Oracle Streams Statistics Using the UTL_SPADV Package

The SHOW_STATS procedure displays the statistics that the Performance Advisor gathered and stored. Use the path_stat_table parameter to specify the table that contains the statistics.

When you gather statistics using the COLLECT_STATS procedure, this table is specified in the path_stat_table parameter in the COLLECT_STATS procedure. By default, the table name is STREAMS$_ADVISOR_PATH_STAT.
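For example, to show statistics gathered by the COLLECT_STATS procedure, you can pass the statistics table name explicitly. The following is a sketch, assuming the default STREAMS$_ADVISOR_PATH_STAT table:

```sql
SET SERVEROUTPUT ON SIZE 50000
BEGIN
  UTL_SPADV.SHOW_STATS(
    path_stat_table => 'STREAMS$_ADVISOR_PATH_STAT');
END;
/
```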

When you gather statistics using the START_MONITORING procedure, you can determine the name for this table by querying the SHOW_STATS_TABLE column in the STREAMS$_PA_MONITORING view. The default table for a monitoring job is STREAMS$_PA_SHOW_PATH_STAT.

To show statistics collected using the UTL_SPADV package and stored in the STREAMS$_ADVISOR_PATH_STAT table, complete the following steps:

  1. Collect statistics by completing the steps described in "Collecting Oracle Streams Statistics Using the UTL_SPADV Package".

  2. Connect to the database as the user who collected the statistics.

  3. If you are using a monitoring job, then query the SHOW_STATS_TABLE column in the STREAMS$_PA_MONITORING view to determine the name of the table that stores the statistics:

    SELECT SHOW_STATS_TABLE FROM STREAMS$_PA_MONITORING;
    
  4. Run the SHOW_STATS procedure.

    For example, if you are using a monitoring job and the default storage table, then run the following procedure:

    SET SERVEROUTPUT ON SIZE 50000
    BEGIN
      UTL_SPADV.SHOW_STATS(
        path_stat_table => 'STREAMS$_PA_SHOW_PATH_STAT');
    END;
    /
    

The output includes the following legend:

LEGEND
<statistics>= <capture> [ <queue> <psender> <preceiver> <queue> ] <apply>
<bottleneck>
<capture>   = '|<C>' <name> <msgs captured/sec> <msgs enqueued/sec> <latency>
                    'LMR' <idl%> <flwctrl%> <topevt%> <topevt>
                    'LMP' (<parallelism>) <idl%> <flwctrl%> <topevt%> <topevt>
                    'LMB' <idl%> <flwctrl%> <topevt%> <topevt>
                    'CAP' <idl%> <flwctrl%> <topevt%> <topevt>
                    'CAP+PS' <msgs sent/sec> <bytes sent/sec> <latency> <idl%>
<flwctrl%> <topevt%> <topevt>
<apply>     = '|<A>' <name> <msgs applied/sec> <txns applied/sec> <latency>
                    'PS+PR' <idl%> <flwctrl%> <topevt%> <topevt>
                    'APR' <idl%> <flwctrl%> <topevt%> <topevt>
                    'APC' <idl%> <flwctrl%> <topevt%> <topevt>
                    'APS' (<parallelism>) <idl%> <flwctrl%> <topevt%> <topevt>
<queue>     = '|<Q>' <name> <msgs enqueued/sec> <msgs spilled/sec> <msgs in
queue>
<psender>   = '|<PS>' <name> <msgs sent/sec> <bytes sent/sec> <latency> <idl%>
<flwctrl%> <topevt%> <topevt>
<preceiver> = '|<PR>' <name> <idl%> <flwctrl%> <topevt%> <topevt>
<bottleneck>= '|<B>' <name> <sub_name> <sessionid> <serial#> <topevt%> <topevt>

The following table describes the abbreviations used in the legend:

Abbreviation       Description
-----------------  ------------------------------------------------------------
A                  Apply process
APC                Coordinator process used by an apply process
APR                Reader server used by an apply process
APS                Apply server used by an apply process
B                  Bottleneck
C or CAP           Capture process
CAP+PS             Capture process session and propagation sender in a combined
                   capture and apply optimization
CCA                Combined capture and apply (Y indicates that it is used for
                   the path; N indicates that it is not used for the path.)
flwctrl            Flow control
idl                Idle
LMB                Builder server used by a capture process (LogMiner builder)
LMP                Preparer server used by a capture process (LogMiner preparer)
LMR                Reader server used by a capture process (LogMiner reader)
msgs               Messages
preceiver or PR    Propagation receiver
psender or PS      Propagation sender
PS+PR              Propagation sender and propagation receiver in a combined
                   capture and apply optimization in which the capture process
                   and apply process are running on the same database instance
Q                  Queue
serial#            Session serial number
sec                Second
sid                Session identifier
sub_name           Subcomponent name
topevt             Top event

The following is sample output for when an apply process is the last component in a path:

OUTPUT
PATH 1 RUN_ID 3 RUN_TIME 2009-JUL-02 05:59:38 CCA Y
|<C> DB2$CAP 10267 10040 3 LMR 95% 0% 3.3% "" LMP (1) 86.7% 0% 11.7% "" LMB 86.7% 0% 11.7% ""
CAP 71.7% 16.7% 11.7% "" |<Q> "STRMADMIN"."DB2$CAPQ" 2540.45 0 30 |<PS>
=>DB1.EXAMPLE.COM 2152.03 32992.96 4 59.2% 9.8% 0% "" |<PR> DB2.EXAMPLE.COM=> 98.5%
0% 0.6% "" |<Q> "STRMADMIN"."DB2$APPQ" 3657.03 0.01 460 |<A> APPLY$_DB2_2 10042 100 4
APR 93.3% 0% 6.7% "" APC 98.1% 0% 1.8% "" APS (4) 370% 0% 6.1% "" |<B> NO BOTTLENECK
IDENTIFIED
 
 
PATH 1 RUN_ID 4 RUN_TIME 2009-JUL-02 06:01:39 CCA Y
|<C> DB2$CAP 10464 10002 3 LMR 95% 0% 1.7% "" LMP (1) 83.3% 0% 16.7% "" LMB 85% 0% 15% "" 
CAP 62.9% 0% 35.7% "" |<Q> "STRMADMIN"."DB2$CAPQ" 2677.03 0.01 45 |<PS>
=>DB1.EXAMPLE.COM 2491.08 47883.46 4 65.5% 10.7% 0% "" |<PR> DB2.EXAMPLE.COM=> 0% 83.3% 
13.3% "" |<Q> "STRMADMIN"."DB2$APPQ" 2444.03 0.01 0 |<A> APPLY$_DB2_2 10004 100 3
APR 42.9% 57.1% 0% "" APC 90% 0% 10% "" APS (4) 346% 0% 10.3% "" |<B> NO BOTTLENECK
IDENTIFIED
.
.
.

Note:

This output is for illustrative purposes only. Actual performance characteristics vary depending on individual configurations and conditions.

Use the legend and the abbreviations to determine the statistics in the output. For example, the following output is for the db2$cap capture process in path 1, run ID 3:

|<C> DB2$CAP 10267 10040 3 LMR 95% 0% 3.3% "" LMP (1) 86.7% 0% 11.7% "" LMB 86.7% 0% 11.7% ""
CAP 71.7% 16.7% 11.7% ""

This output shows the following statistics:

  • The capture process captured an average of 10267 database changes each second.

  • The capture process enqueued an average of 10040 messages each second.

  • The capture process latency was 3 seconds.

  • The reader server (LMR) used by the capture process spent 95% of its time idle.

  • The reader server used by the capture process spent 0% of its time in flow control mode.

  • The reader server used by the capture process spent 3.3% of its time on the top wait event.

  • The preparer server (LMP) parallelism was 1.

  • The preparer server used by the capture process spent 86.7% of its time idle.

  • The preparer server used by the capture process spent 0% of its time in flow control mode.

  • The preparer server used by the capture process spent 11.7% of its time on the top wait event.

  • The builder server (LMB) used by the capture process spent 86.7% of its time idle.

  • The builder server used by the capture process spent 0% of its time in flow control mode.

  • The builder server used by the capture process spent 11.7% of its time on the top wait event.

  • The capture process session spent 71.7% of its time idle.

  • The capture process session spent 16.7% of its time in flow control mode.

  • The capture process session spent 11.7% of its time on the top wait event.

Oracle Streams Concepts and Administration, 11g Release 2 (11.2)

Oracle® Streams

Concepts and Administration

11g Release 2 (11.2)

E17069-07

August 2011


Oracle Streams Concepts and Administration, 11g Release 2 (11.2)

E17069-07

Copyright © 2002, 2011, Oracle and/or its affiliates. All rights reserved.

Primary Author:  Randy Urbano

Contributors:  Sundeep Abraham, Geeta Arora, Nimar Arora, Lance Ashdown, Ram Avudaiappan, Neerja Bhatt, Ragamayi Bhyravabhotla, Chipper Brown, Jack Chung, Alan Downing, Jacco Draaijer, Curt Elsbernd, Yong Feng, Jairaj Galagali, Lei Gao, Connie Green, Richard Huang, Thuvan Hoang, Lewis Kaplan, Joydip Kundu, Tianshu Li, Jing Liu, Edwina Lu, Raghu Mani, Rui Mao, Pat McElroy, Shailendra Mishra, Valarie Moore, Bhagat Nainani, Srikanth Nalla, Maria Pratt, Arvind Rajaram, Ashish Ray, Abhishek Saxena, Viv Schupmann, Vipul Shah, Neeraj Shodhan, Wayne Smith, Jim Stamos, Janet Stern, Mahesh Subramaniam, Bob Thome, Byron Wang, Wei Wang, James M. Wilson, Lik Wong, Jingwei Wu, Haobo Xu, Jun Yuan, David Zhang, Ying Zhang

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.

The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable:

U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, duplication, disclosure, modification, and adaptation shall be subject to the restrictions and license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License (December 2007). Oracle America, Inc., 500 Oracle Parkway, Redwood City, CA 94065.

This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.

This software or hardware and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.

Monitoring File Group and Tablespace Repositories

37 Monitoring File Group and Tablespace Repositories

A file group repository can contain multiple file groups and multiple versions of a particular file group. A tablespace repository is a collection of tablespace sets in a file group repository. Tablespace repositories are built on file group repositories, but tablespace repositories only contain the files required to move or copy tablespaces between databases. This chapter provides sample queries that you can use to monitor file group repositories and tablespace repositories.

The following topics describe monitoring file group and tablespace repositories:


Note:

The Oracle Streams tool in Oracle Enterprise Manager is also an excellent way to monitor an Oracle Streams environment. See the online Help for the Oracle Streams tool for more information.




Monitoring a File Group Repository

The queries in the following sections provide examples for monitoring a file group repository:

Displaying General Information About the File Groups in a Database

The query in this section displays the following information for each file group in the local database:

  • The file group owner

  • The file group name

  • Whether the files in a version of the file group are kept on disk if the version is purged

  • The minimum number of versions of the file group allowed

  • The maximum number of versions of the file group allowed

  • The number of days to retain a file group version after it is created

Run the following query to display this information for the local database:

COLUMN FILE_GROUP_OWNER HEADING 'File Group|Owner' FORMAT A10
COLUMN FILE_GROUP_NAME HEADING 'File Group|Name' FORMAT A10
COLUMN KEEP_FILES HEADING 'Keep|Files?' FORMAT A10
COLUMN MIN_VERSIONS HEADING 'Minimum|Number|of Versions' FORMAT 9999
COLUMN MAX_VERSIONS HEADING 'Maximum|Number|of Versions' FORMAT 9999999999
COLUMN RETENTION_DAYS HEADING 'Days to|Retain|a Version' FORMAT 9999999999.99

SELECT FILE_GROUP_OWNER,
       FILE_GROUP_NAME,
       KEEP_FILES,
       MIN_VERSIONS,
       MAX_VERSIONS,
       RETENTION_DAYS
  FROM DBA_FILE_GROUPS;

Your output looks similar to the following:

                                     Minimum     Maximum        Days to
File Group File Group Keep            Number      Number         Retain
Owner      Name       Files?     of Versions of Versions      a Version
---------- ---------- ---------- ----------- ----------- --------------
STRMADMIN  REPORTS    Y                    2  4294967295  4294967295.00

This output shows that the database has one file group with the following characteristics:

  • The file group owner is strmadmin.

  • The file group name is reports.

  • The files in a version are kept on disk when the version is purged, because the "Keep Files?" value is "Y" for the file group.

  • The minimum number of versions allowed is 2. If the file group automatically purges versions, then it will not purge a version if the purge would cause the total number of versions to drop below 2.

  • The file group allows an infinite number of versions. The number 4294967295 means an infinite number of versions.

  • The file group retains a version for an infinite number of days. The number 4294967295 means an infinite number of days.
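
You can change these settings with the ALTER_FILE_GROUP procedure in the DBMS_FILE_GROUP package. The following is a sketch that sets a finite retention policy for the reports file group shown above; the parameter names are assumptions based on the DBMS_FILE_GROUP package, so verify them in Oracle Database PL/SQL Packages and Types Reference for your release:

BEGIN
  DBMS_FILE_GROUP.ALTER_FILE_GROUP(
    file_group_name => 'strmadmin.reports',
    max_versions    => 10,    -- retain at most ten versions
    retention_days  => 90);   -- purge versions more than 90 days old
END;
/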

Displaying Information About File Group Versions

The query in this section displays the following information for each file group version in the local database:

  • The owner of the file group that contains the version

  • The name of the file group that contains the version

  • The version name

  • The version number

  • The name of the user who created the version

  • Comments for the version

Run the following query to display this information for the local database:

COLUMN FILE_GROUP_OWNER HEADING 'File Group|Owner' FORMAT A10
COLUMN FILE_GROUP_NAME HEADING 'File Group|Name' FORMAT A10
COLUMN VERSION_NAME HEADING 'Version Name' FORMAT A20
COLUMN VERSION HEADING 'Version|Number' FORMAT 99999999
COLUMN CREATOR HEADING 'Creator' FORMAT A10
COLUMN COMMENTS HEADING 'Comments' FORMAT A14

SELECT FILE_GROUP_OWNER,
       FILE_GROUP_NAME,
       VERSION_NAME,
       VERSION,
       CREATOR,
       COMMENTS
  FROM DBA_FILE_GROUP_VERSIONS;

Your output looks similar to the following:

File Group File Group                        Version
Owner      Name       Version Name            Number Creator    Comments
---------- ---------- -------------------- --------- ---------- --------------
STRMADMIN  REPORTS    SALES_REPORTS_V1             1 STRMADMIN  Sales reports
                                                                for week of 06
                                                                -FEB-2005
 
STRMADMIN  REPORTS    SALES_REPORTS_V2             2 STRMADMIN  Sales reports
                                                                for week of 13
                                                                -FEB-2005

Displaying Information About File Group Files

The query in this section displays the following information about each file in a file group version in the local database:

  • The owner of the file group that contains the file

  • The name of the file group that contains the file

  • The name of the version in the file group that contains the file

  • The file name

  • The directory object that contains the file

Run the following query to display this information for the local database:

COLUMN FILE_GROUP_OWNER HEADING 'File Group|Owner' FORMAT A10
COLUMN FILE_GROUP_NAME HEADING 'File Group|Name' FORMAT A10
COLUMN VERSION_NAME HEADING 'Version Name' FORMAT A20
COLUMN FILE_NAME HEADING 'File Name' FORMAT A15
COLUMN FILE_DIRECTORY HEADING 'File Directory|Object' FORMAT A15

SELECT FILE_GROUP_OWNER,
       FILE_GROUP_NAME,
       VERSION_NAME,
       FILE_NAME,
       FILE_DIRECTORY
  FROM DBA_FILE_GROUP_FILES;

Your output looks similar to the following:

File Group File Group                                      File Directory
Owner      Name       Version Name         File Name       Object
---------- ---------- -------------------- --------------- ---------------
STRMADMIN  REPORTS    SALES_REPORTS_V1     book_sales.htm  SALES_REPORTS1
STRMADMIN  REPORTS    SALES_REPORTS_V1     music_sales.htm SALES_REPORTS1
STRMADMIN  REPORTS    SALES_REPORTS_V2     book_sales.htm  SALES_REPORTS2
STRMADMIN  REPORTS    SALES_REPORTS_V2     music_sales.htm SALES_REPORTS2

Query the DBA_DIRECTORIES data dictionary view to determine the corresponding file system directory for a directory object.
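
For example, the following query lists each directory object and its corresponding file system directory:

COLUMN DIRECTORY_NAME HEADING 'Directory Object' FORMAT A20
COLUMN DIRECTORY_PATH HEADING 'Directory Path' FORMAT A50

SELECT DIRECTORY_NAME, DIRECTORY_PATH
  FROM DBA_DIRECTORIES;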

Monitoring a Tablespace Repository

The queries in the following sections provide examples for monitoring a tablespace repository:

Displaying Information About the Tablespaces in a Tablespace Repository

The query in this section displays the following information about each tablespace in the tablespace repository in the local database:

  • The owner of the file group that contains the tablespace in the tablespace repository

  • The name of the file group that contains the tablespace in the tablespace repository

  • The name of the version that contains the tablespace

  • The tablespace name

Run the following query to display this information for the local database:

COLUMN FILE_GROUP_OWNER HEADING 'File Group|Owner' FORMAT A15
COLUMN FILE_GROUP_NAME HEADING 'File Group|Name' FORMAT A15
COLUMN VERSION_NAME HEADING 'Version Name' FORMAT A15
COLUMN VERSION HEADING 'Version|Number' FORMAT 99999999
COLUMN TABLESPACE_NAME HEADING 'Tablespace Name' FORMAT A15

SELECT FILE_GROUP_OWNER,
       FILE_GROUP_NAME,
       VERSION_NAME,
       VERSION,
       TABLESPACE_NAME
  FROM DBA_FILE_GROUP_TABLESPACES;

Your output looks similar to the following:

File Group      File Group                        Version
Owner           Name            Version Name       Number Tablespace Name
--------------- --------------- --------------- --------- ---------------
STRMADMIN       SALES           V_Q1FY2005              1 SALES_TBS1
STRMADMIN       SALES           V_Q1FY2005              1 SALES_TBS2
STRMADMIN       SALES           V_Q2FY2005              3 SALES_TBS1
STRMADMIN       SALES           V_Q2FY2005              3 SALES_TBS2
STRMADMIN       SALES           V_Q1FY2005_R            4 SALES_TBS1
STRMADMIN       SALES           V_Q1FY2005_R            4 SALES_TBS2
STRMADMIN       SALES           V_Q2FY2005_R            5 SALES_TBS1
STRMADMIN       SALES           V_Q2FY2005_R            5 SALES_TBS2

Displaying Information About the Tables in a Tablespace Repository

The query in this section displays the following information about each table in the tablespace repository in the local database:

  • The owner of the file group that contains the table in the tablespace repository

  • The name of the file group that contains the table in the tablespace repository

  • The name of the version that contains the table

  • The table owner

  • The table name

  • The tablespace that contains the table

Run the following query to display this information for the local database:

COLUMN FILE_GROUP_OWNER HEADING 'File Group|Owner' FORMAT A10
COLUMN FILE_GROUP_NAME HEADING 'File Group|Name' FORMAT A10
COLUMN VERSION_NAME HEADING 'Version Name' FORMAT A15
COLUMN OWNER HEADING 'Table|Owner' FORMAT A10
COLUMN TABLE_NAME HEADING 'Table Name' FORMAT A15
COLUMN TABLESPACE_NAME HEADING 'Tablespace Name' FORMAT A15

SELECT FILE_GROUP_OWNER,
       FILE_GROUP_NAME,
       VERSION_NAME,
       OWNER,
       TABLE_NAME,
       TABLESPACE_NAME
  FROM DBA_FILE_GROUP_TABLES;

Your output looks similar to the following:

File Group File Group                 Table
Owner      Name       Version Name    Owner      Table Name      Tablespace Name
---------- ---------- --------------- ---------- --------------- ---------------
STRMADMIN  SALES      V_Q1FY2005      SL         ORDERS          SALES_TBS1
STRMADMIN  SALES      V_Q1FY2005      SL         ORDER_ITEMS     SALES_TBS1
STRMADMIN  SALES      V_Q1FY2005      SL         CUSTOMERS       SALES_TBS2
STRMADMIN  SALES      V_Q2FY2005      SL         ORDERS          SALES_TBS1
STRMADMIN  SALES      V_Q2FY2005      SL         ORDER_ITEMS     SALES_TBS1
STRMADMIN  SALES      V_Q2FY2005      SL         CUSTOMERS       SALES_TBS2
STRMADMIN  SALES      V_Q1FY2005_R    SL         ORDERS          SALES_TBS1
STRMADMIN  SALES      V_Q1FY2005_R    SL         ORDER_ITEMS     SALES_TBS1
STRMADMIN  SALES      V_Q1FY2005_R    SL         CUSTOMERS       SALES_TBS2
STRMADMIN  SALES      V_Q2FY2005_R    SL         ORDERS          SALES_TBS1
STRMADMIN  SALES      V_Q2FY2005_R    SL         ORDER_ITEMS     SALES_TBS1
STRMADMIN  SALES      V_Q2FY2005_R    SL         CUSTOMERS       SALES_TBS2

Displaying Export Information About Versions in a Tablespace Repository

To display export information about the versions in the tablespace repository in the local database, query the DBA_FILE_GROUP_EXPORT_INFO data dictionary view. This view displays information only for versions that contain a valid Data Pump export dump file. The query in this section displays the following export information about each version in the local database:

  • The name of the file group that contains the version

  • The name of the version

  • The export version of the export dump file. The export version corresponds to the version of Data Pump that performed the export.

  • The platform on which the export was performed

  • The date and time of the export

  • The global name of the exporting database

Run the following query to display this information for the local database:

COLUMN FILE_GROUP_NAME HEADING 'File Group|Name' FORMAT A10
COLUMN VERSION_NAME HEADING 'Version Name' FORMAT A13
COLUMN EXPORT_VERSION HEADING 'Export|Version' FORMAT A7
COLUMN PLATFORM_NAME HEADING 'Export Platform' FORMAT A17
COLUMN EXPORT_TIME HEADING 'Export Time' FORMAT A17
COLUMN SOURCE_GLOBAL_NAME HEADING 'Export|Database' FORMAT A10

SELECT FILE_GROUP_NAME,
       VERSION_NAME,
       EXPORT_VERSION,
       PLATFORM_NAME,
       TO_CHAR(EXPORT_TIME, 'HH24:MI:SS MM/DD/YY') EXPORT_TIME,
       SOURCE_GLOBAL_NAME
  FROM DBA_FILE_GROUP_EXPORT_INFO;

Your output looks similar to the following:

File Group               Export                                      Export
Name       Version Name  Version Export Platform   Export Time       Database
---------- ------------- ------- ----------------- ----------------- ----------
SALES      V_Q1FY2005    10.2.0  Linux IA (32-bit) 12:23:52 03/08/05 INST1.EXAM
                                                                     PLE.COM   
SALES      V_Q2FY2005    10.2.0  Linux IA (32-bit) 12:27:37 03/08/05 INST1.EXAM
                                                                     PLE.COM   
SALES      V_Q1FY2005_R  10.2.0  Linux IA (32-bit) 12:39:50 03/08/05 INST2.EXAM
                                                                     PLE.COM
SALES      V_Q2FY2005_R  10.2.0  Linux IA (32-bit) 12:46:04 03/08/05 INST2.EXAM
                                                                     PLE.COM
Using Information Provisioning

36 Using Information Provisioning

This chapter describes how to use information provisioning. This chapter includes an example that creates a tablespace repository, examples that transfer tablespaces between databases, and an example that uses a file group repository to store different versions of files.

The following topics describe using information provisioning:

Using a Tablespace Repository

The following procedures in the DBMS_STREAMS_TABLESPACE_ADM package can create a tablespace repository, add versioned tablespace sets to a tablespace repository, and copy versioned tablespace sets from a tablespace repository:

  • CLONE_TABLESPACES

  • ATTACH_TABLESPACES

  • DETACH_TABLESPACES

This section illustrates how to use a tablespace repository with an example scenario. In the scenario, the goal is to run quarterly reports on the sales tablespaces (sales_tbs1 and sales_tbs2). Sales are recorded in these tablespaces in the inst1.example.com database. The example clones the tablespaces quarterly and stores a new version of the tablespaces in the tablespace repository. The tablespace repository also resides in the inst1.example.com database. When a specific version of the tablespace set is required to run reports at a reporting database, it is copied from the tablespace repository and attached to the reporting database.

In this example scenario, the following databases are the reporting databases:

  • The inst2.example.com database, which shares a file system with inst1.example.com

  • The inst3.example.com database, which does not share a file system with inst1.example.com

The following sections describe how to create and populate the tablespace repository and how to use the tablespace repository to run reports at the other databases:

These examples must be run by an administrative user with the necessary privileges to run the procedures listed previously.


See Also:

Oracle Database PL/SQL Packages and Types Reference for more information about these procedures and the privileges required to run them

Creating and Populating a Tablespace Repository

This example creates a tablespace repository and adds a new version of a tablespace set to the repository after each quarter. The tablespace set consists of the sales tablespaces for a business: sales_tbs1 and sales_tbs2.

Figure 36-1 provides an overview of the tablespace repository created in this example:

Figure 36-1 Example Tablespace Repository


The following table shows the tablespace set versions created in this example, their directory objects, and the corresponding file system directory for each directory object.

Version      Directory Object   Corresponding File System Directory
v_q1fy2005   q1fy2005           /home/sales/q1fy2005
v_q2fy2005   q2fy2005           /home/sales/q2fy2005

This example makes the following assumptions:

  • The inst1.example.com database exists.

  • The sales_tbs1 and sales_tbs2 tablespaces exist in the inst1.example.com database.

The following steps create and populate a tablespace repository:

  1. Connect as an administrative user to the database where the sales tablespaces are modified with new sales data. In this example, connect to the inst1.example.com database.

    The administrative user must have the necessary privileges to run the procedures in the DBMS_STREAMS_TABLESPACE_ADM package and must have the necessary privileges to create directory objects.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  2. Create a directory object for the first quarter in fiscal year 2005 on inst1.example.com:

    CREATE OR REPLACE DIRECTORY q1fy2005 AS '/home/sales/q1fy2005';
    

    The specified file system directory must exist when you create the directory object.

  3. Create a directory object that corresponds to the directory that contains the data files for the tablespaces in the inst1.example.com database. For example, if the data files for the tablespaces are in the /orc/inst1/dbs directory, then create a directory object that corresponds to this directory:

    CREATE OR REPLACE DIRECTORY dbfiles_inst1 AS '/orc/inst1/dbs';
    
  4. Clone the tablespace set and add the first version of the tablespace set to the tablespace repository:

    DECLARE
      tbs_set DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
    BEGIN
      tbs_set(1) := 'sales_tbs1';
      tbs_set(2) := 'sales_tbs2';
      DBMS_STREAMS_TABLESPACE_ADM.CLONE_TABLESPACES(
        tablespace_names            => tbs_set,
        tablespace_directory_object => 'q1fy2005',
        file_group_name             => 'strmadmin.sales',
        version_name                => 'v_q1fy2005');
    END;
    /
    

    The sales file group is created automatically if it does not exist.
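
    To verify that the version was added, query the DBA_FILE_GROUP_VERSIONS data dictionary view:

    SELECT FILE_GROUP_OWNER, FILE_GROUP_NAME, VERSION_NAME, VERSION
      FROM DBA_FILE_GROUP_VERSIONS
      WHERE FILE_GROUP_NAME = 'SALES';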

  5. When the second quarter in fiscal year 2005 is complete, create a directory object for the second quarter in fiscal year 2005:

    CREATE OR REPLACE DIRECTORY q2fy2005 AS '/home/sales/q2fy2005';
    

    The specified file system directory must exist when you create the directory object.

  6. Clone the tablespace set and add the next version of the tablespace set to the tablespace repository at the inst1.example.com database:

    DECLARE
      tbs_set DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
    BEGIN
      tbs_set(1) := 'sales_tbs1';
      tbs_set(2) := 'sales_tbs2';
      DBMS_STREAMS_TABLESPACE_ADM.CLONE_TABLESPACES(
        tablespace_names            => tbs_set,
        tablespace_directory_object => 'q2fy2005',
        file_group_name             => 'strmadmin.sales',
        version_name                => 'v_q2fy2005');
    END;
    /
    

Steps 5 and 6 can be repeated whenever a quarter ends to store a version of the tablespace set for each quarter. Each time, create a directory object to store the tablespace files for the quarter, and specify a unique version name for the quarter.
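
For example, when the third quarter ends, a new directory object and version name follow the same pattern. In the following sketch, the q3fy2005 directory object, its file system directory, and the v_q3fy2005 version name are illustrative:

CREATE OR REPLACE DIRECTORY q3fy2005 AS '/home/sales/q3fy2005';

DECLARE
  tbs_set DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
BEGIN
  tbs_set(1) := 'sales_tbs1';
  tbs_set(2) := 'sales_tbs2';
  DBMS_STREAMS_TABLESPACE_ADM.CLONE_TABLESPACES(
    tablespace_names            => tbs_set,
    tablespace_directory_object => 'q3fy2005',
    file_group_name             => 'strmadmin.sales',
    version_name                => 'v_q3fy2005');
END;
/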

Using a Tablespace Repository for Remote Reporting with a Shared File System

This example runs reports at inst2.example.com on specific versions of the sales tablespaces stored in a tablespace repository at inst1.example.com. These two databases share a file system, and the reports that are run on inst2.example.com might make changes to the tablespaces. Therefore, the tablespaces are made read/write at inst2.example.com. When the reports are complete, a new version of the tablespace files is stored in a separate directory from the original version of the tablespace files.

Figure 36-2 provides an overview of how tablespaces in a tablespace repository are attached to a different database in this example:

Figure 36-2 Attaching Tablespaces with a Shared File System


Figure 36-3 provides an overview of how tablespaces are detached and placed in a tablespace repository in this example:

Figure 36-3 Detaching Tablespaces with a Shared File System


The following table shows the tablespace set versions in the tablespace repository when this example is complete. It shows the directory object for each version and the corresponding file system directory for each directory object. The versions that are new are created in this example. The versions that existed before this example were created in "Creating and Populating a Tablespace Repository".

Version       Directory Object   Corresponding File System Directory   New?
v_q1fy2005    q1fy2005           /home/sales/q1fy2005                  No
v_q1fy2005_r  q1fy2005_r         /home/sales/q1fy2005_r                Yes
v_q2fy2005    q2fy2005           /home/sales/q2fy2005                  No
v_q2fy2005_r  q2fy2005_r         /home/sales/q2fy2005_r                Yes

This example makes the following assumptions:

  • The inst1.example.com and inst2.example.com databases exist.

  • The inst1.example.com and inst2.example.com databases can access a shared file system.

  • Networking is configured between the databases so that these databases can communicate with each other.

  • A tablespace repository that contains a version of the sales tablespaces (sales_tbs1 and sales_tbs2) for various quarters exists in the inst1.example.com database. This tablespace repository was created and populated in the example "Creating and Populating a Tablespace Repository".

Complete the following steps:

  1. In SQL*Plus, connect to inst1.example.com as an administrative user.

    The administrative user must have the necessary privileges to create directory objects.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  2. Create a directory object that will store the tablespace files for the first quarter in fiscal year 2005 on inst1.example.com after the inst2.example.com database has completed reporting on this quarter:

    CREATE OR REPLACE DIRECTORY q1fy2005_r AS '/home/sales/q1fy2005_r';
    

    The specified file system directory must exist when you create the directory object.

  3. Connect to the inst2.example.com database as an administrative user.

    The administrative user must have the necessary privileges to run the procedures in the DBMS_STREAMS_TABLESPACE_ADM package, create directory objects, and create database links.

  4. Create two directory objects for the first quarter in fiscal year 2005 on inst2.example.com. These directory objects must have the same names and correspond to the same directories on the shared file system as the directory objects used by the tablespace repository in the inst1.example.com database for the first quarter:

    CREATE OR REPLACE DIRECTORY q1fy2005 AS '/home/sales/q1fy2005';
    
    CREATE OR REPLACE DIRECTORY q1fy2005_r AS '/home/sales/q1fy2005_r';
    
  5. Create a database link from inst2.example.com to the inst1.example.com database. For example:

    CREATE DATABASE LINK inst1.example.com CONNECT TO strmadmin 
       IDENTIFIED BY password 
       USING 'inst1.example.com';
    
  6. Attach the tablespace set to the inst2.example.com database from the strmadmin.sales file group in the inst1.example.com database:

    DECLARE
      tbs_set DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
    BEGIN
      DBMS_STREAMS_TABLESPACE_ADM.ATTACH_TABLESPACES(
        file_group_name            => 'strmadmin.sales',
        version_name               => 'v_q1fy2005',
        datafiles_directory_object => 'q1fy2005_r',
        repository_db_link         => 'inst1.example.com',
        tablespace_names           => tbs_set);
    END;
    /
    

    Notice that q1fy2005_r is specified for the datafiles_directory_object parameter. Therefore, the data files for the tablespaces and the export dump file are copied from the /home/sales/q1fy2005 location to the /home/sales/q1fy2005_r location by the procedure. The attached tablespaces in the inst2.example.com database use the data files in the /home/sales/q1fy2005_r location. The Data Pump import log file also is placed in this directory.

    The attached tablespaces use the data files in the /home/sales/q1fy2005_r location. However, the v_q1fy2005 version of the tablespaces in the tablespace repository consists of the files in the original /home/sales/q1fy2005 location.

  7. Make the tablespaces read/write at inst2.example.com:

    ALTER TABLESPACE sales_tbs1 READ WRITE;
    
    ALTER TABLESPACE sales_tbs2 READ WRITE;
    
  8. Run the reports on the data in the sales tablespaces at the inst2.example.com database. The reports make changes to the tablespaces.

  9. Detach the version of the tablespace set for the first quarter of 2005 from the inst2.example.com database:

    DECLARE
      tbs_set  DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
    BEGIN
      tbs_set(1) := 'sales_tbs1';
      tbs_set(2) := 'sales_tbs2';
      DBMS_STREAMS_TABLESPACE_ADM.DETACH_TABLESPACES(
        tablespace_names        => tbs_set,
        export_directory_object => 'q1fy2005_r',
        file_group_name         => 'strmadmin.sales',
        version_name            => 'v_q1fy2005_r',
        repository_db_link      => 'inst1.example.com');
    END;
    /
    

    Only one version of a tablespace set can be attached to a database at a time. Therefore, the version of the sales tablespaces for the first quarter of 2005 must be detached from inst2.example.com before the version of this tablespace set for the second quarter of 2005 can be attached.

    Also, notice that the specified export_directory_object is q1fy2005_r, and that the version_name is v_q1fy2005_r. After the detach operation, there are two versions of the tablespace files for the first quarter of 2005 stored in the tablespace repository on inst1.example.com: one version of the tablespaces before reporting and one version after reporting. These two versions have different version names and are stored in different directory objects.
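
    You can confirm that both first-quarter versions are stored in the repository by querying the DBA_FILE_GROUP_VERSIONS data dictionary view at inst1.example.com:

    SELECT VERSION_NAME, VERSION
      FROM DBA_FILE_GROUP_VERSIONS
      WHERE FILE_GROUP_NAME = 'SALES' AND
            VERSION_NAME IN ('V_Q1FY2005', 'V_Q1FY2005_R');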

  10. Connect to the inst1.example.com database as an administrative user.

  11. Create a directory object that will store the tablespace files for the second quarter in fiscal year 2005 on inst1.example.com after the inst2.example.com database has completed reporting on this quarter:

    CREATE OR REPLACE DIRECTORY q2fy2005_r AS '/home/sales/q2fy2005_r';
    

    The specified file system directory must exist when you create the directory object.

  12. Connect to the inst2.example.com database as an administrative user.

  13. Create two directory objects for the second quarter in fiscal year 2005 at inst2.example.com. These directory objects must have the same names and correspond to the same directories on the shared file system as the directory objects used by the tablespace repository in the inst1.example.com database for the second quarter:

    CREATE OR REPLACE DIRECTORY q2fy2005 AS '/home/sales/q2fy2005';
    
    CREATE OR REPLACE DIRECTORY q2fy2005_r AS '/home/sales/q2fy2005_r';
    
  14. Attach the tablespace set for the second quarter of 2005 to the inst2.example.com database from the sales file group in the inst1.example.com database:

    DECLARE
      tbs_set DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
    BEGIN
      DBMS_STREAMS_TABLESPACE_ADM.ATTACH_TABLESPACES(
        file_group_name            => 'strmadmin.sales',
        version_name               => 'v_q2fy2005',
        datafiles_directory_object => 'q2fy2005_r',
        repository_db_link         => 'inst1.example.com',
        tablespace_names           => tbs_set);
    END;
    /
    
  15. Make the tablespaces read/write at inst2.example.com:

    ALTER TABLESPACE sales_tbs1 READ WRITE;
    
    ALTER TABLESPACE sales_tbs2 READ WRITE;
    
  16. Run the reports on the data in the sales tablespaces at the inst2.example.com database. The reports make changes to the tablespaces.

  17. Detach the version of the tablespace set for the second quarter of 2005 from inst2.example.com:

    DECLARE
      tbs_set  DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
    BEGIN
      tbs_set(1) := 'sales_tbs1';
      tbs_set(2) := 'sales_tbs2';
      DBMS_STREAMS_TABLESPACE_ADM.DETACH_TABLESPACES(
        tablespace_names        => tbs_set,
        export_directory_object => 'q2fy2005_r',
        file_group_name         => 'strmadmin.sales',
        version_name            => 'v_q2fy2005_r',
        repository_db_link      => 'inst1.example.com');
    END;
    /
    

Steps 11-17 can be repeated whenever a quarter ends to run reports on each quarter.

Using a Tablespace Repository for Remote Reporting without a Shared File System

This example runs reports at inst3.example.com on specific versions of the sales tablespaces stored in a tablespace repository at inst1.example.com. These two databases do not share a file system, and the reports that are run on inst3.example.com do not make any changes to the tablespace. Therefore, the tablespaces remain read-only at inst3.example.com, and, when the reports are complete, there is no need for a new version of the tablespace files in the tablespace repository on inst1.example.com.

Figure 36-4 provides an overview of how tablespaces in a tablespace repository are attached to a different database in this example:

Figure 36-4 Attaching Tablespaces without a Shared File System


The following table shows the directory objects used in this example. It shows the existing directory objects that are associated with tablespace repository versions on the inst1.example.com database, and it shows the new directory objects created on the inst3.example.com database in this example. The directory objects that existed before this example were created in "Creating and Populating a Tablespace Repository".

Directory Object   Database            Version                  Corresponding File System Directory   New?
q1fy2005           inst1.example.com   v_q1fy2005               /home/sales/q1fy2005                  No
q2fy2005           inst1.example.com   v_q2fy2005               /home/sales/q2fy2005                  No
q1fy2005           inst3.example.com   Not associated with a    /usr/sales_data/fy2005q1              Yes
                                       tablespace repository
                                       version
q2fy2005           inst3.example.com   Not associated with a    /usr/sales_data/fy2005q2              Yes
                                       tablespace repository
                                       version

This example makes the following assumptions:

  • The inst1.example.com and inst3.example.com databases exist.

  • The inst1.example.com and inst3.example.com databases do not share a file system.

  • Networking is configured between the databases so that they can communicate with each other.

  • The sales tablespaces (sales_tbs1 and sales_tbs2) exist in the inst1.example.com database.

Complete the following steps:

  1. In SQL*Plus, connect to the inst3.example.com database as an administrative user.

    The administrative user must have the necessary privileges to run the procedures in the DBMS_STREAMS_TABLESPACE_ADM package, create directory objects, and create database links.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  2. Create a database link from inst3.example.com to the inst1.example.com database. For example:

    CREATE DATABASE LINK inst1.example.com CONNECT TO strmadmin 
       IDENTIFIED BY password 
       USING 'inst1.example.com';
    
  3. Create a directory object for the first quarter in fiscal year 2005 on inst3.example.com. Although inst3.example.com is a remote database that does not share a file system with inst1.example.com, the directory object must have the same name as the directory object used by the tablespace repository in the inst1.example.com database for the first quarter. However, the directory paths of the directory objects on inst1.example.com and inst3.example.com do not need to match.

    CREATE OR REPLACE DIRECTORY q1fy2005 AS '/usr/sales_data/fy2005q1';
    

    The specified file system directory must exist when you create the directory object.

  4. Connect to the inst1.example.com database as an administrative user.

    The administrative user must have the necessary privileges to run the procedures in the DBMS_FILE_TRANSFER package and create database links. This example uses the DBMS_FILE_TRANSFER package to copy the tablespace files from inst1.example.com to inst3.example.com. If some other method is used to transfer the files, then the privileges to run the procedures in the DBMS_FILE_TRANSFER package are not required.

  5. Create a database link from inst1.example.com to the inst3.example.com database. For example:

    CREATE DATABASE LINK inst3.example.com CONNECT TO strmadmin 
       IDENTIFIED BY password 
       USING 'inst3.example.com';
    

    This database link will be used to transfer files to the inst3.example.com database in Step 6.

  6. Copy the data file for each tablespace and the export dump file for the first quarter to the inst3.example.com database:

    BEGIN
      DBMS_FILE_TRANSFER.PUT_FILE(
        source_directory_object      => 'q1fy2005',
        source_file_name             => 'sales_tbs1.dbf',
        destination_directory_object => 'q1fy2005',
        destination_file_name        => 'sales_tbs1.dbf',
        destination_database         => 'inst3.example.com');
      DBMS_FILE_TRANSFER.PUT_FILE(
        source_directory_object      => 'q1fy2005',
        source_file_name             => 'sales_tbs2.dbf',
        destination_directory_object => 'q1fy2005',
        destination_file_name        => 'sales_tbs2.dbf',
        destination_database         => 'inst3.example.com');
      DBMS_FILE_TRANSFER.PUT_FILE(
        source_directory_object      => 'q1fy2005',
        source_file_name             => 'expdat16.dmp',
        destination_directory_object => 'q1fy2005',
        destination_file_name        => 'expdat16.dmp',
        destination_database         => 'inst3.example.com');
    END;
    /
    

    Before you run the PUT_FILE procedure for the export dump file, you can query the DBA_FILE_GROUP_FILES data dictionary view to determine the name and directory object of the export dump file. For example, run the following query to list this information for the export dump file in the v_q1fy2005 version:

    COLUMN FILE_NAME HEADING 'Export Dump|File Name' FORMAT A35
    COLUMN FILE_DIRECTORY HEADING 'Directory Object' FORMAT A35
    
    SELECT FILE_NAME, FILE_DIRECTORY FROM DBA_FILE_GROUP_FILES
      WHERE FILE_GROUP_NAME = 'SALES' AND
            VERSION_NAME    = 'V_Q1FY2005';
    
  7. Connect to the inst3.example.com database as an administrative user.

  8. Attach the tablespace set for the first quarter of 2005 to the inst3.example.com database from the sales file group in the inst1.example.com database:

    DECLARE
      tbs_set DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
    BEGIN
      DBMS_STREAMS_TABLESPACE_ADM.ATTACH_TABLESPACES(
        file_group_name            => 'strmadmin.sales',
        version_name               => 'v_q1fy2005',
        datafiles_directory_object => 'q1fy2005',
        repository_db_link         => 'inst1.example.com',
        tablespace_names           => tbs_set);
    END;
    /
    

    The tablespaces are read-only when they are attached. Because the reports on inst3.example.com do not change the tablespaces, the tablespaces can remain read-only.

  9. Run the reports on the data in the sales tablespaces at the inst3.example.com database.

  10. Drop the tablespaces and their contents at inst3.example.com:

    DROP TABLESPACE sales_tbs1 INCLUDING CONTENTS;
    
    DROP TABLESPACE sales_tbs2 INCLUDING CONTENTS;
    

    The tablespaces are dropped from the inst3.example.com database, but the tablespace files remain in the directory object.

  11. Create a directory object for the second quarter in fiscal year 2005 on inst3.example.com. The directory object must have the same name as the directory object used by the tablespace repository in the inst1.example.com database for the second quarter. However, the directory paths of the directory objects on inst1.example.com and inst3.example.com do not need to match.

    CREATE OR REPLACE DIRECTORY q2fy2005 AS '/usr/sales_data/fy2005q2';
    

    The specified file system directory must exist when you create the directory object.

  12. Connect to the inst1.example.com database as an administrative user.

  13. Copy the data file and the export dump file for the second quarter to the inst3.example.com database:

    BEGIN
      DBMS_FILE_TRANSFER.PUT_FILE(
        source_directory_object      => 'q2fy2005',
        source_file_name             => 'sales_tbs1.dbf',
        destination_directory_object => 'q2fy2005',
        destination_file_name        => 'sales_tbs1.dbf',
        destination_database         => 'inst3.example.com');
      DBMS_FILE_TRANSFER.PUT_FILE(
        source_directory_object      => 'q2fy2005',
        source_file_name             => 'sales_tbs2.dbf',
        destination_directory_object => 'q2fy2005',
        destination_file_name        => 'sales_tbs2.dbf',
        destination_database         => 'inst3.example.com');
      DBMS_FILE_TRANSFER.PUT_FILE(
        source_directory_object      => 'q2fy2005',
        source_file_name             => 'expdat18.dmp',
        destination_directory_object => 'q2fy2005',
        destination_file_name        => 'expdat18.dmp',
        destination_database         => 'inst3.example.com');
    END;
    /
    

    Before you run the PUT_FILE procedure for the export dump file, you can query the DBA_FILE_GROUP_FILES data dictionary view to determine the name and directory object of the export dump file. For example, run the following query to list this information for the export dump file in the v_q2fy2005 version:

    COLUMN FILE_NAME HEADING 'Export Dump|File Name' FORMAT A35
    COLUMN FILE_DIRECTORY HEADING 'Directory Object' FORMAT A35
    
    SELECT FILE_NAME, FILE_DIRECTORY FROM DBA_FILE_GROUP_FILES
      WHERE FILE_GROUP_NAME = 'SALES' AND
            VERSION_NAME    = 'V_Q2FY2005';
    
  14. Connect to the inst3.example.com database as an administrative user.

  15. Attach the tablespace set for the second quarter of 2005 to the inst3.example.com database from the sales file group in the inst1.example.com database:

    DECLARE
      tbs_set DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
    BEGIN
      DBMS_STREAMS_TABLESPACE_ADM.ATTACH_TABLESPACES(
        file_group_name            => 'strmadmin.sales',
        version_name               => 'v_q2fy2005',
        datafiles_directory_object => 'q2fy2005',
        repository_db_link         => 'inst1.example.com',
        tablespace_names           => tbs_set);
    END;
    /
    

    The tablespaces are read-only when they are attached. Because the reports on inst3.example.com do not change the tablespaces, the tablespaces can remain read-only.

  16. Run the reports on the data in the sales tablespaces at the inst3.example.com database.

  17. Drop the tablespaces and their contents:

    DROP TABLESPACE sales_tbs1 INCLUDING CONTENTS;
    
    DROP TABLESPACE sales_tbs2 INCLUDING CONTENTS;
    

    The tablespaces are dropped from the inst3.example.com database, but the tablespace files remain in the directory object.

Repeat Steps 11-17 at the end of each quarter to run reports on the data for that quarter.

Using a File Group Repository

The DBMS_FILE_GROUP package can create a file group repository, add versioned file groups to the repository, and copy versioned file groups from the repository. This section illustrates how to use a file group repository with a scenario that stores reports in the repository.

In this scenario, a business sells books and music over the internet. The business runs weekly reports on the sales data in the inst1.example.com database and stores these reports in two HTML files on a computer file system. The book_sales.htm file contains the report for book sales, and the music_sales.htm file contains the report for music sales. The business wants to store these weekly reports in a file group repository at the inst2.example.com remote database. Every week, the two reports are generated on the inst1.example.com database, transferred to the computer system running the inst2.example.com database, and added to the repository as a file group version. The file group repository stores all of the file group versions that contain the reports for each week.

Figure 36-5 provides an overview of the file group repository created in this example:

Figure 36-5 Example File Group Repository

Description of Figure 36-5 follows
Description of "Figure 36-5 Example File Group Repository"

The benefits of the file group repository are that it stores metadata about each file group version in the data dictionary and provides a standard interface for managing the file group versions. For example, when the business must view a specific sales report, it can query the data dictionary in the inst2.example.com database to determine the location of the report on the computer file system.
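For example, after the repository is populated, a query similar to the following lists the file name and directory object for each report file in a version. This query is a sketch that assumes the reports file group and the sales_reports_v1 version created later in this example:

COLUMN FILE_NAME HEADING 'Report|File Name' FORMAT A25
COLUMN FILE_DIRECTORY HEADING 'Directory Object' FORMAT A25

SELECT FILE_NAME, FILE_DIRECTORY FROM DBA_FILE_GROUP_FILES
  WHERE FILE_GROUP_NAME = 'REPORTS' AND
        VERSION_NAME    = 'SALES_REPORTS_V1';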

The following table shows the directory objects created in this example. It shows the directory object created on the inst1.example.com database to store new reports, and it shows the directory objects that are associated with file group repository versions on the inst2.example.com database.

Directory Object   Database            Version                                               Corresponding File System Directory
sales_reports      inst1.example.com   Not associated with a file group repository version   /home/sales_reports
sales_reports1     inst2.example.com   sales_reports_v1                                      /home/sales_reports/fg1
sales_reports2     inst2.example.com   sales_reports_v2                                      /home/sales_reports/fg2

This example makes the following assumptions:

The following steps configure and populate a file group repository at a remote database:

  1. Connect as an administrative user to the remote database that will contain the file group repository. In this example, connect to the inst2.example.com database.

    The administrative user must have the necessary privileges to create directory objects and run the procedures in the DBMS_FILE_GROUP package.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  2. Create a directory object to hold the first version of the file group:

    CREATE OR REPLACE DIRECTORY sales_reports1 AS '/home/sales_reports/fg1';
    

    The specified file system directory must exist when you create the directory object.

  3. Connect as an administrative user to the database that runs the reports. In this example, connect to the inst1.example.com database.

    The administrative user must have the necessary privileges to create directory objects.

  4. Create a directory object to hold the latest reports:

    CREATE OR REPLACE DIRECTORY sales_reports AS '/home/sales_reports';
    

    The specified file system directory must exist when you create the directory object.

  5. Create a database link to the inst2.example.com database:

    CREATE DATABASE LINK inst2.example.com CONNECT TO strmadmin 
       IDENTIFIED BY password 
       USING 'inst2.example.com';
    
  6. Run the reports on the inst1.example.com database. Running the reports should place the book_sales.htm and music_sales.htm files in the directory specified in Step 4.

  7. Transfer the report files from the computer system running the inst1.example.com database to the computer system running the inst2.example.com database using file transfer protocol (FTP) or some other method. Ensure that the files are copied to the directory that corresponds to the directory object created in Step 2.

  8. Connect as an administrative user to the remote database that will contain the file group repository. In this example, connect to the inst2.example.com database.

  9. Create the file group repository that will contain the reports:

    BEGIN
      DBMS_FILE_GROUP.CREATE_FILE_GROUP(
        file_group_name => 'strmadmin.reports');
    END;
    /
    

    The reports file group repository is created with the following default properties:

    • The minimum number of versions in the repository is 2. When the file group is purged, the number of versions cannot drop below 2.

    • The maximum number of versions is infinite. A file group version is never purged because of the number of versions in the repository.

    • The number of retention days is infinite. A file group version is never purged because of the length of time it has been in the repository.
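    If different limits are needed, then these defaults can be changed with the ALTER_FILE_GROUP procedure in the DBMS_FILE_GROUP package. The following call is a sketch only; the limits of 52 versions and 365 days are illustrative:

    BEGIN
      DBMS_FILE_GROUP.ALTER_FILE_GROUP(
        file_group_name => 'strmadmin.reports',
        max_versions    => 52,
        retention_days  => 365);
    END;
    /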

  10. Create the first version of the file group:

    BEGIN
      DBMS_FILE_GROUP.CREATE_VERSION(
        file_group_name => 'strmadmin.reports',
        version_name    => 'sales_reports_v1',
        comments        => 'Sales reports for week of 06-FEB-2005');
    END;
    /
    
  11. Add the report files to the file group version:

    BEGIN
     DBMS_FILE_GROUP.ADD_FILE(
        file_group_name  => 'strmadmin.reports',
        file_name        => 'book_sales.htm',
        file_type        => 'HTML',
        file_directory   => 'sales_reports1',
        version_name     => 'sales_reports_v1');
     DBMS_FILE_GROUP.ADD_FILE(
        file_group_name  => 'strmadmin.reports',
        file_name        => 'music_sales.htm',
        file_type        => 'HTML',
        file_directory   => 'sales_reports1',
        version_name     => 'sales_reports_v1');
    END;
    /
    
  12. Create a directory object on inst2.example.com to hold the next version of the file group:

    CREATE OR REPLACE DIRECTORY sales_reports2 AS '/home/sales_reports/fg2';
    

    The specified file system directory must exist when you create the directory object.

  13. At the end of the next week, run the reports on the inst1.example.com database. Running the reports should place new book_sales.htm and music_sales.htm files in the directory specified in Step 4. If necessary, remove the old files from this directory before running the reports.

  14. Transfer the report files from the computer system running the inst1.example.com database to the computer system running the inst2.example.com database using file transfer protocol (FTP) or some other method. Ensure that the files are copied to the directory that corresponds to the directory object created in Step 12.

  15. In SQL*Plus, connect to the inst2.example.com database as an administrative user.

  16. Create the next version of the file group:

    BEGIN
      DBMS_FILE_GROUP.CREATE_VERSION(
        file_group_name => 'strmadmin.reports',
        version_name    => 'sales_reports_v2',
        comments        => 'Sales reports for week of 13-FEB-2005');
    END;
    /
    
  17. Add the report files to the file group version:

    BEGIN
     DBMS_FILE_GROUP.ADD_FILE(
        file_group_name  => 'strmadmin.reports',
        file_name        => 'book_sales.htm',
        file_type        => 'HTML',
        file_directory   => 'sales_reports2',
        version_name     => 'sales_reports_v2');
     DBMS_FILE_GROUP.ADD_FILE(
        file_group_name  => 'strmadmin.reports',
        file_name        => 'music_sales.htm',
        file_type        => 'HTML',
        file_directory   => 'sales_reports2',
        version_name     => 'sales_reports_v2');
    END;
    /
    

The file group repository now contains two versions of the file group that contains the sales report files. Repeat steps 12-17 to add new versions of the file group to the repository.
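To confirm the contents of the repository, you can query the DBA_FILE_GROUP_VERSIONS data dictionary view at the inst2.example.com database. For example, the following query lists the name and comments for each version (the columns selected here are illustrative):

COLUMN VERSION_NAME HEADING 'Version Name' FORMAT A20
COLUMN COMMENTS HEADING 'Comments' FORMAT A40

SELECT VERSION_NAME, COMMENTS FROM DBA_FILE_GROUP_VERSIONS
  WHERE FILE_GROUP_NAME = 'REPORTS';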


See Also:



D Online Database Upgrade and Maintenance with Oracle Streams

This appendix describes how to use Oracle Streams to perform a database upgrade to the current release of Oracle Database from one of the following releases:

This appendix also describes how to perform some maintenance operations with Oracle Streams on an Oracle Database 11g Release 2 (11.2) database. These maintenance operations include migrating an Oracle database to a different platform or character set, upgrading user-created applications, and applying Oracle Database patches or patch sets.

The upgrade and maintenance operations described in this appendix use the features of Oracle Streams to achieve little or no database down time.

The following topics describe performing online database maintenance with Oracle Streams:


See Also:

Appendix E, "Online Upgrade of a 10.1 or Earlier Database with Oracle Streams" for instructions on performing an upgrade of a release before Oracle Database 10g Release 2 (10.2)

Overview of Using Oracle Streams for Upgrade and Maintenance Operations

Database upgrades can require substantial database down time. The following maintenance operations also typically require substantial database down time:

You can achieve these upgrade and maintenance operations with little or no down time by using the features of Oracle Streams. To do so, you use Oracle Streams to configure a replication environment with the following databases:

Specifically, you can use the following general steps to perform the upgrade or maintenance operation while the database is online:

  1. Create an empty destination database.

  2. Configure an Oracle Streams replication environment where the original database is the source database and a copy of the database is the destination database. The PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures in the DBMS_STREAMS_ADM package configure the Oracle Streams replication environment.

  3. Perform the upgrade or maintenance operation on the destination database. During this time the original source database is available online, and changes to the original source database are being captured by a capture process.

  4. Use Oracle Streams to apply the changes made to the source database at the destination database.

  5. When the destination database has caught up with the changes made at the source database, take the source database offline and make the destination database available for applications and users.
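For example, Step 2 might run a procedure call similar to the following at the capture database. This call is a sketch only: the database names are illustrative, it assumes that the entire database is maintained (maintain_mode set to GLOBAL), and the complete parameter list is documented in Oracle Database PL/SQL Packages and Types Reference:

DECLARE
  empty_tbs DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
BEGIN
  DBMS_STREAMS_ADM.PRE_INSTANTIATION_SETUP(
    maintain_mode        => 'GLOBAL',
    tablespace_names     => empty_tbs,
    source_database      => 'orcl.example.com',
    destination_database => 'stms.example.com',
    perform_actions      => TRUE,
    exclude_schemas      => '*',
    exclude_flags        => DBMS_STREAMS_ADM.EXCLUDE_FLAGS_UNSUPPORTED +
                            DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DML +
                            DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DDL);
END;
/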

Figure D-1 provides an overview of this process.

Figure D-1 Online Database Upgrade and Maintenance with Oracle Streams

Description of Figure D-1 follows
Description of "Figure D-1 Online Database Upgrade and Maintenance with Oracle Streams"

The Capture Database During the Upgrade or Maintenance Operation

During the upgrade or maintenance operation, the capture database is the database where the capture process is created. A local capture process can be created at the source database during the maintenance operation, or a downstream capture process can be created at the destination database or at a third database. If the destination database is the capture database, then a propagation from the capture database to the destination database is not needed. A downstream capture process reduces the resources required at the source database during the maintenance operation.


Note:

  • Before you begin the database upgrade or maintenance operation with Oracle Streams, decide which database will be the capture database.

  • If the RMAN DUPLICATE or CONVERT DATABASE command is used for database instantiation, then the destination database cannot be the capture database.


Assumptions for the Database Being Upgraded or Maintained

The instructions in this appendix assume that all of the following statements are true for the database being upgraded or maintained:

  • The database is not part of an existing Oracle Streams environment.

  • The database is not part of an existing logical standby environment.

  • The database is not part of an existing Advanced Replication environment.

  • No tables at the database are master tables for materialized views in other databases.

  • No messages are enqueued into user-created queues during the upgrade or maintenance operation.

Considerations for Job Slaves and PL/SQL Package Subprograms

If possible, ensure that no job slaves are created, modified, or deleted during the upgrade or maintenance operation, and that no Oracle-supplied PL/SQL package subprograms are invoked during the operation that modify both user data and data dictionary metadata at the same time. The following packages contain subprograms that modify both user data and data dictionary metadata at the same time: DBMS_RLS, DBMS_STATS, and DBMS_JOB.

It might be possible to perform such actions on the database if you ensure that the same actions are performed on the source database and destination database in Steps 19 and 20 in "Performing a Database Upgrade or Maintenance Operation Using Oracle Streams". For example, if a PL/SQL procedure gathers statistics on the source database during the maintenance operation, then the same PL/SQL procedure should be invoked at the destination database in Step 20.
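For example, if schema statistics are gathered at the source database during the maintenance operation, then the same call should be made at the destination database in Step 20. The schema name in the following call is illustrative:

BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'HR');
END;
/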

Unsupported Database Objects Are Excluded

The PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures in the DBMS_STREAMS_ADM package include the following parameters:

  • exclude_schemas

  • exclude_flags

These parameters specify which database objects to exclude from the Oracle Streams configuration. The examples in this appendix set these parameters to the following values:

exclude_schemas => '*',
exclude_flags   => DBMS_STREAMS_ADM.EXCLUDE_FLAGS_UNSUPPORTED + 
                   DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DML + 
                   DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DDL);

These values exclude any database objects that are not supported by Oracle Streams. The asterisk (*) specified for exclude_schemas indicates that some database objects in every schema in the database might be excluded from the replication environment. The value specified for the exclude_flags parameter indicates that DML and DDL changes for all unsupported database objects are excluded from the replication environment. Rules are placed in the negative rule sets for the capture processes to exclude these database objects.

To list unsupported database objects, query the DBA_STREAMS_UNSUPPORTED data dictionary view at the source database. If you use these parameter settings, then changes to the database objects listed in this view are not maintained by Oracle Streams during the maintenance operation. Therefore, Step 7 in "Task 1: Beginning the Operation" instructs you to ensure that no changes are made to these database objects during the database upgrade or maintenance operation.
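For example, the following query lists each unsupported database object and the reason that it is not supported:

COLUMN OWNER HEADING 'Owner' FORMAT A15
COLUMN TABLE_NAME HEADING 'Table Name' FORMAT A25
COLUMN REASON HEADING 'Reason' FORMAT A35

SELECT OWNER, TABLE_NAME, REASON FROM DBA_STREAMS_UNSUPPORTED;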


Note:

"Preparing for Upgrade or Maintenance of a Database with User-Defined Types" discusses a method for retaining changes to tables that contain user-defined types during the maintenance operation. If you are using this method, then tables that contain user-defined types can remain open during the maintenance operation.


See Also:

Oracle Database PL/SQL Packages and Types Reference for more information about the exclude_schemas and exclude_flags parameters

Preparing for a Database Upgrade or Maintenance Operation

The following sections describe tasks to complete before starting the database upgrade or maintenance operation with Oracle Streams:

Preparing for Downstream Capture

If you decided that the destination database or a third database will be the capture database, then you must prepare for downstream capture by configuring log file copying from the source database to the capture database. If you decided that the source database will be the capture database, then log file copying is not required. See "The Capture Database During the Upgrade or Maintenance Operation" for information about the decision.

Complete the following steps to prepare the source database to copy its redo log files to the capture database, and to prepare the capture database to accept these redo log files:

  1. Configure Oracle Net so that the source database can communicate with the capture database.

  2. Configure authentication at both databases to support the transfer of redo data.

    Redo transport sessions are authenticated using either the Secure Sockets Layer (SSL) protocol or a remote login password file. If the source database has a remote login password file, then copy it to the appropriate directory on the downstream capture database system. The password file must be the same at the source database and the downstream capture database.


    See Also:

    Oracle Data Guard Concepts and Administration for detailed information about authentication requirements for redo transport

  3. At the source database, set the following initialization parameters to configure redo transport services to transmit redo data from the source database to the downstream database:

    • LOG_ARCHIVE_DEST_n - Configure at least one LOG_ARCHIVE_DEST_n initialization parameter to transmit redo data to the downstream database. To do this, set the following attributes of this parameter:

      • SERVICE - Specify the network service name of the downstream database.

      • ASYNC or SYNC - Specify a redo transport mode.

        The advantage of specifying ASYNC is that it results in little or no effect on the performance of the source database. ASYNC is recommended to avoid affecting source database performance if the downstream database or network is performing poorly.

        The advantage of specifying SYNC is that redo data is sent to the downstream database faster than when ASYNC is specified. Also, specifying SYNC AFFIRM results in behavior that is similar to MAXIMUM AVAILABILITY standby protection mode. Note that specifying an ALTER DATABASE STANDBY DATABASE TO MAXIMIZE AVAILABILITY SQL statement has no effect on an Oracle Streams capture process.

      • NOREGISTER - Specify this attribute so that the location of the archived redo log files is not recorded in the downstream database control file.

      • VALID_FOR - Specify either (ONLINE_LOGFILE,PRIMARY_ROLE) or (ONLINE_LOGFILE,ALL_ROLES).

      • TEMPLATE - Specify a directory and format template for archived redo logs at the downstream database. The TEMPLATE attribute overrides the LOG_ARCHIVE_FORMAT initialization parameter settings at the downstream database. The TEMPLATE attribute is valid only with remote destinations. Ensure that the format uses all of the following variables at each source database: %t, %s, and %r.

      • DB_UNIQUE_NAME - The unique name of the downstream database. Use the name specified for the DB_UNIQUE_NAME initialization parameter at the downstream database.

      The following example is a LOG_ARCHIVE_DEST_n setting that specifies a capture database (DBS2.EXAMPLE.COM):

      LOG_ARCHIVE_DEST_2='SERVICE=DBS2.EXAMPLE.COM ASYNC NOREGISTER
         VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
         TEMPLATE=/usr/oracle/log_for_dbs1/dbs1_arch_%t_%s_%r.log
         DB_UNIQUE_NAME=dbs2'
      

      Tip:

      Specify a value for the TEMPLATE attribute that keeps log files from a remote source database separate from local database log files. In addition, if the downstream database contains log files from multiple source databases, then the log files from each source database should be kept separate from each other.

    • LOG_ARCHIVE_DEST_STATE_n - Set this initialization parameter that corresponds with the LOG_ARCHIVE_DEST_n parameter for the downstream database to ENABLE.

      For example, if the LOG_ARCHIVE_DEST_2 initialization parameter is set for the downstream database, then set the LOG_ARCHIVE_DEST_STATE_2 parameter in the following way:

      LOG_ARCHIVE_DEST_STATE_2=ENABLE 
      
    • LOG_ARCHIVE_CONFIG - Set the DG_CONFIG attribute in this initialization parameter to include the DB_UNIQUE_NAME of the source database and the downstream database.

      For example, if the DB_UNIQUE_NAME of the source database is dbs1, and the DB_UNIQUE_NAME of the downstream database is dbs2, then specify the following parameter:

      LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbs1,dbs2)'
      

      By default, the LOG_ARCHIVE_CONFIG parameter enables a database to both send and receive redo.


    See Also:

    Oracle Database Reference and Oracle Data Guard Concepts and Administration for more information about these initialization parameters

  4. At the downstream database, set the DG_CONFIG attribute in the LOG_ARCHIVE_CONFIG initialization parameter to include the DB_UNIQUE_NAME of the source database and the downstream database.

    For example, if the DB_UNIQUE_NAME of the source database is dbs1, and the DB_UNIQUE_NAME of the downstream database is dbs2, then specify the following parameter:

    LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbs1,dbs2)'
    

    By default, the LOG_ARCHIVE_CONFIG parameter enables a database to both send and receive redo.

  5. If you reset any initialization parameters while the instance is running at a database in Step 3 or Step 4, then you might want to reset them in the initialization parameter file as well, so that the new values are retained when the database is restarted.

    If you did not reset the initialization parameters while the instance was running, but instead reset them in the initialization parameter file in Step 3 or Step 4, then restart the database. The source database must be open when it sends redo log files to the capture database because the global name of the source database is sent to the capture database only if the source database is open.
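    For example, if the database uses a server parameter file, then a single ALTER SYSTEM statement can set a parameter for the running instance and record it in the server parameter file at the same time:

    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbs1,dbs2)' SCOPE=BOTH;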


See Also:

"Overview of Using Oracle Streams for Upgrade and Maintenance Operations" for more information about the capture database

Preparing for Upgrade or Maintenance of a Database with User-Defined Types

User-defined types include object types, REF values, varrays, and nested tables. Currently, Oracle Streams capture processes and apply processes do not support user-defined types. This section discusses using Oracle Streams to perform an upgrade or maintenance operation on a database that has user-defined types.

One option is to ensure that no data manipulation language (DML) or data definition language (DDL) changes are made to the tables that contain user-defined types during the operation. In this case, these tables are instantiated at the destination database, and no changes are made to these tables during the entire operation. After the operation is complete, make the tables that contain user-defined types read/write at the destination database.

However, if tables that contain user-defined types must remain open during the operation, then use the following general steps to retain changes to these types during the operation:

  1. At the source database, create one or more logging tables to store row changes to tables that include user-defined types. Each column in the logging table must use a data type that is supported by Oracle Streams.

  2. At the source database, create a DML trigger that fires on the tables that contain the user-defined data types. The trigger converts each row change into relational equivalents and logs the modified row in a logging table created in Step 1.

  3. Ensure that the capture process and propagation are configured to capture and, if necessary, propagate changes made to the logging table to the destination database. Changes to tables that contain user-defined types should not be captured or propagated. Therefore, ensure that the PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures include the logging tables and exclude the tables that contain user-defined types.

  4. At the destination database, configure the apply process to use a DML handler that processes the changes to the logging tables. The DML handler reconstructs the user-defined types from the relational equivalents and applies the modified changes to the tables that contain user-defined types.

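The following statements are a minimal sketch of Steps 1 and 2 for a hypothetical oe.orders table whose address column uses a user-defined type with street and city attributes. All object, column, and attribute names are illustrative, and a complete solution must also log UPDATE and DELETE operations:

CREATE TABLE oe.orders_log (
  order_id   NUMBER,
  street     VARCHAR2(60),
  city       VARCHAR2(30),
  operation  VARCHAR2(10));

CREATE OR REPLACE TRIGGER oe.orders_log_trig
  AFTER INSERT ON oe.orders
  FOR EACH ROW
BEGIN
  -- Log the relational equivalent of the row change
  INSERT INTO oe.orders_log VALUES
    (:NEW.order_id, :NEW.address.street, :NEW.address.city, 'INSERT');
END;
/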
For instructions, go to the My Oracle Support (formerly OracleMetaLink) Web site using a Web browser:

http://support.oracle.com/

Database bulletin 556742.1 describes extended data type support for Oracle Streams.


See Also:


Preparing for Upgrades to User-Created Applications

This section is relevant only if the operation entails upgrading user-created applications. During an upgrade of user-created applications, schema objects can be modified, and there might be logical dependencies that cannot be detected by the database alone. The following sections describe handling these issues during an application upgrade:

Handling Modifications to Schema Objects

If you are upgrading user-created applications, then, typically, schema objects in the database change to support the upgraded applications. In Oracle Streams, row logical change records (LCRs) contain information about row changes that result from DML statements. A declarative rule-based transformation or DML handler can modify row LCRs captured from the source database redo log so that the row LCRs can be applied to the altered tables at the destination database.

A rule-based transformation is any modification to a message that results when a rule in a positive rule set evaluates to TRUE. Declarative rule-based transformations cover a common set of transformation scenarios for row LCRs. Declarative rule-based transformations are run internally without using PL/SQL. You specify such a transformation using a procedure in the DBMS_STREAMS_ADM package. A declarative rule-based transformation can modify row LCRs during capture, propagation, or apply.

A DML handler is either a collection of SQL statements or a user procedure that processes row LCRs resulting from DML statements at a source database. An Oracle Streams apply process at a destination database can pass row LCRs to a DML handler, and the DML handler can modify the row LCRs.

The process for upgrading user-created applications using Oracle Streams can involve modifying and creating the schema objects at the destination database after instantiation. You can use one or more declarative rule-based transformations and DML handlers at the destination database to process changes from the source database so that they apply to the modified schema objects correctly. Declarative rule-based transformations and DML handlers can be used during application upgrade to account for differences between the source database and destination database.

In general, declarative rule-based transformations are easier to use than DML handlers. Therefore, when modifications to row LCRs are required, try to configure a declarative rule-based transformation first. If a declarative rule-based transformation is not sufficient, then use a DML handler. If row LCRs for tables that contain one or more LOB columns must be modified, then you should use a procedure DML handler and LOB assembly.
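For example, if a column is renamed at the destination database during the application upgrade, then a declarative rule-based transformation similar to the following can rename the column in row LCRs before they are applied. The rule, table, and column names in this call are illustrative:

BEGIN
  DBMS_STREAMS_ADM.RENAME_COLUMN(
    rule_name        => 'strmadmin.employees25',
    table_name       => 'hr.employees',
    from_column_name => 'phone',
    to_column_name   => 'phone_number',
    value_type       => '*',
    step_number      => 0,
    operation        => 'ADD');
END;
/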

Before you begin the database upgrade or maintenance operation, you should complete the following tasks to prepare your declarative rule-based transformations or DML handlers:


Note:

Custom rule-based transformations can also be used to modify row LCRs during an application upgrade. However, these modifications can also be accomplished using DML handlers, and DML handlers are more efficient than custom rule-based transformations.

Handling Logical Dependencies

In some cases, an apply process requires additional information to detect dependencies in row LCRs that are being applied in parallel. During application upgrades, an apply process might require additional information to detect dependencies in the following situations:

  • The application, rather than the database, enforces logical dependencies.

  • Schema objects have been modified to support the application upgrade, and a DML handler will modify row LCRs to account for differences between the source database and destination database.

A virtual dependency definition is a description of a dependency that is used by an apply process to detect dependencies between transactions at a destination database. A virtual dependency definition is not described as a constraint in the destination database data dictionary. Instead, it is specified using procedures in the DBMS_APPLY_ADM package. Virtual dependency definitions enable an apply process to detect dependencies that it would not be able to detect by using only the constraint information in the data dictionary. After dependencies are detected, an apply process schedules LCRs and transactions in the correct order for apply.

If virtual dependency definitions are required for your application upgrade, then learn about virtual dependency definitions and plan to configure them during the application upgrade.
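For example, if the application (rather than a foreign key constraint) enforces a relationship on a shared column, you can declare a value dependency with the SET_VALUE_DEPENDENCY procedure in the DBMS_APPLY_ADM package. This is a hedged sketch; the dependency, table, and column names are assumptions:

```sql
-- Sketch: declare a value dependency so that the apply process orders
-- transactions that share values in the listed column (names are hypothetical).
BEGIN
  DBMS_APPLY_ADM.SET_VALUE_DEPENDENCY(
    dependency_name => 'order_id_dep',
    object_name     => 'oe.orders',
    attribute_list  => 'order_id');
END;
/

BEGIN
  DBMS_APPLY_ADM.SET_VALUE_DEPENDENCY(
    dependency_name => 'order_id_dep',
    object_name     => 'oe.order_items',
    attribute_list  => 'order_id');
END;
/
```

With both tables added to the same dependency, row LCRs that share an order_id value are applied in the correct order.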


See Also:

"Apply Processes and Dependencies" for more information about virtual dependency definitions

Deciding Whether to Configure Oracle Streams Directly or Generate a Script

The PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures in the DBMS_STREAMS_ADM package configure the Oracle Streams replication environment during the upgrade or maintenance operation. These procedures can configure the Oracle Streams replication environment directly, or they can generate a script that configures the environment.

Using a procedure to configure replication directly is simpler than running a script, and the environment is configured immediately. However, you might choose to generate a script for the following reasons:

  • You want to review the actions performed by the procedure before configuring the environment.

  • You want to modify the script to customize the configuration.

To configure Oracle Streams directly when you run one of these procedures, set the perform_actions parameter to TRUE. The examples in this appendix assume that the procedures will configure Oracle Streams directly.

To generate a configuration script when you run one of these procedures, complete the following steps when you are instructed to run a procedure in this appendix:

  1. In SQL*Plus, connect as the Oracle Streams administrator to the database where you will run the procedure.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  2. Create a directory object to store the script that will be generated by the procedure. For example:

    CREATE DIRECTORY scripts_dir AS '/usr/scripts';
    
  3. Run the procedure. Ensure that the following parameters are set to generate a script:

    • Set the perform_actions parameter to FALSE.

    • Set the script_name parameter to the name of the script you want to generate.

    • Set the script_directory_object parameter to the directory object into which you want to place the script. This directory object was created in Step 2.

  4. Review or modify the script, if necessary.

  5. In SQL*Plus, connect as the Oracle Streams administrator to the database where you will run the script.

  6. Run the generated script. For example:

    @/usr/scripts/pre_instantiation.sql;
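The script-generation settings in Step 3 can be sketched as follows. This hedged example shows only the parameters that change; the remaining parameters (such as tablespace_names and the capture, propagation, and apply names) are set as in the full PRE_INSTANTIATION_SETUP examples later in this appendix:

```sql
-- Sketch: generate a configuration script instead of configuring directly.
-- SCRIPTS_DIR is the directory object created in Step 2; the script name
-- is an assumption. Other parameters are abbreviated here.
BEGIN
  DBMS_STREAMS_ADM.PRE_INSTANTIATION_SETUP(
    maintain_mode           => 'GLOBAL',
    source_database         => 'orcl.example.com',
    destination_database    => 'stms.example.com',
    perform_actions         => FALSE,                   -- do not configure directly
    script_name             => 'pre_instantiation.sql', -- script to generate
    script_directory_object => 'SCRIPTS_DIR');          -- from Step 2
END;
/
```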
    

Deciding Which Utility to Use for Instantiation

Before you begin the database upgrade or maintenance operation, decide whether you want to use Export/Import utilities (Data Pump or original) or the Recovery Manager (RMAN) utility to instantiate the destination database during the operation. Consider the following factors when you make this decision:

  • If you are migrating the database to a different platform, then you can use either Export/Import or the RMAN CONVERT DATABASE command. The RMAN DUPLICATE command does not support migrating a database to a different platform.

  • If you are migrating the database to a different character set, then you must use Export/Import, and the new character set must be a superset of the old character set. The RMAN DUPLICATE and CONVERT DATABASE commands do not support migrating a database to a different character set.

  • If RMAN is supported for the operation, then using RMAN for the instantiation might be faster than using Export/Import, especially if the database is large.

  • Oracle recommends that you do not use RMAN for instantiation in an environment where distributed transactions are possible. Doing so might cause in-doubt transactions that must be corrected manually.

  • If the RMAN DUPLICATE or CONVERT DATABASE command is used for database instantiation, then the destination database cannot be the capture database.

  • If you are upgrading from a prior release of Oracle Database to Oracle Database 11g Release 2 (11.2), then consider these additional factors:

    • If you use Export/Import, then you can make the destination database an Oracle Database 11g Release 2 (11.2) database at the beginning of the operation. Therefore, you do not need to upgrade the destination database after the instantiation.

    • If you use the RMAN DUPLICATE command, then the database release of the destination database must be the same as that of the source database.

    • If you use the RMAN CONVERT DATABASE command, then the database release of the destination database must be equal to or later than that of the source database.

Table D-1 describes when each instantiation method is supported, based on whether the platforms at the source and destination databases are the same or different, and whether the character sets at the source and destination databases are the same or different.

Table D-1 Instantiation Methods for Database Maintenance with Oracle Streams

Instantiation Method     Same Platform?  Different Platforms?  Same Character Set?  Different Character Sets?

Data Pump Export/Import  Yes             Yes                   Yes                  Yes

RMAN DUPLICATE           Yes             No                    Yes                  No

RMAN CONVERT DATABASE    No              Maybe                 Yes                  No


Only some platform combinations are supported by the RMAN CONVERT DATABASE command. You can use the DBMS_TDB package to determine whether a platform combination is supported.
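For example, with the source database open read-only, a platform check can be sketched as follows. The target platform name is an assumption; query V$DB_TRANSPORTABLE_PLATFORM for the valid names on your system:

```sql
-- Sketch: check whether the database can be converted to a target platform
-- with RMAN CONVERT DATABASE. Run while the database is open read-only.
SET SERVEROUTPUT ON
DECLARE
  db_ready BOOLEAN;
BEGIN
  db_ready := DBMS_TDB.CHECK_DB(
    target_platform_name => 'Linux x86 64-bit',      -- assumed target platform
    skip_option          => DBMS_TDB.SKIP_READONLY);
  IF db_ready THEN
    DBMS_OUTPUT.PUT_LINE('Platform combination is supported.');
  ELSE
    DBMS_OUTPUT.PUT_LINE('Conversion not possible; see preceding messages.');
  END IF;
END;
/
```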


See Also:


Performing a Database Upgrade or Maintenance Operation Using Oracle Streams

This section describes performing one of the following operations on an Oracle database:

You can use Oracle Streams to achieve little or no downtime during these operations. During the operation, the source database is the existing database on which you are performing the database operation. The capture database is the database on which the Oracle Streams capture process runs. The destination database is the database that will replace the source database at the end of the operation.

Complete the following tasks to perform a database maintenance operation using Oracle Streams:

Task 1: Beginning the Operation

Complete the following steps to begin the upgrade or maintenance operation using Oracle Streams:

  1. Create an empty destination database. If you are migrating the database to a different platform, then create the database on a computer system that uses the new platform. If you are migrating the database to a different character set, then create a database that uses the new character set.

    Ensure that the destination database has a different global name than the source database. This example assumes that the global name of the source database is orcl.example.com and the global name of the destination database during the database maintenance operation is stms.example.com. The global name of the destination database is changed when the destination database replaces the source database at the end of the maintenance operation.

    If you are not upgrading from a prior release of Oracle Database, then create an Oracle Database 11g Release 2 (11.2) database. See the Oracle installation guide for your operating system if you must install Oracle, and see the Oracle Database Administrator's Guide for information about creating a database.

    If you are upgrading from a prior release of Oracle Database, then the release of the empty database you create depends on the instantiation method you decided to use in "Deciding Which Utility to Use for Instantiation":

    • If you decided to use export/import for instantiation, then create an empty Oracle Database 11g Release 2 database. This database will be the destination database during the upgrade process.

      See the Oracle Database installation guide for your operating system if you must install Oracle Database, and see the Oracle Database Administrator's Guide for information about creating a database.

    • If you decided to use RMAN DUPLICATE for instantiation, then create an empty Oracle database that is the same release as the database you are upgrading.

      Specifically, if you are upgrading an Oracle Database 10g Release 2 (10.2) database, then create an Oracle Database 10g Release 2 database. Alternatively, if you are upgrading an Oracle Database 11g Release 1 (11.1) database, then create an Oracle Database 11g Release 1 database.

      This database will be the destination database during the upgrade process. Both the source database that is being upgraded and the destination database must be the same release of Oracle when you start the upgrade process.

      See the Oracle installation guide for your operating system if you must install Oracle, and see the Oracle Database Administrator's Guide for the release for information about creating a database.

    • If you decided to use RMAN CONVERT DATABASE for instantiation, then create an empty Oracle database that is a release equal to or later than the database you are upgrading.

      Specifically, if you are upgrading an Oracle Database 10g Release 2 (10.2) database, then create an Oracle Database 10g Release 2 database, an Oracle Database 11g Release 1 database, or an Oracle Database 11g Release 2 database. Alternatively, if you are upgrading an Oracle Database 11g Release 1 (11.1) database, then create an Oracle Database 11g Release 1 database or an Oracle Database 11g Release 2 database.

      This database will be the destination database during the upgrade process.

      See the Oracle installation guide for your operating system if you must install Oracle, and see the Oracle Database Administrator's Guide for the release for information about creating a database.

  2. Ensure that the source database is running in ARCHIVELOG mode. See Oracle Database Administrator's Guide for information about running a database in ARCHIVELOG mode.

  3. Create an undo tablespace at the capture database if one does not exist. For example, run the following statement while logged into the capture database as an administrative user:

    CREATE UNDO TABLESPACE undotbs_02
      DATAFILE '/u01/oracle/rbdb1/undo0201.dbf' SIZE 2M REUSE AUTOEXTEND ON;
    

    The capture process at the capture database uses the undo tablespace.

    See "The Capture Database During the Upgrade or Maintenance Operation" for more information about the capture database.

    See Oracle Database Administrator's Guide for more information about creating an undo tablespace.

  4. Ensure that the initialization parameters are set properly at both databases to support an Oracle Streams environment.

    For Oracle Database 11g Release 2 (11.2) databases, see Oracle Streams Replication Administrator's Guide for information about setting initialization parameters that are relevant to Oracle Streams.

    If you are upgrading from a prior release of Oracle Database, then for the source database, see the Oracle Streams documentation for the source database release.

  5. Configure an Oracle Streams administrator at each database, including the source database, destination database, and capture database (if the capture database is a third database). This example assumes that the name of the Oracle Streams administrator is strmadmin at each database.

    For Oracle Database 11g Release 2 (11.2) databases, see Oracle Streams Replication Administrator's Guide for instructions.

    If you are upgrading from a prior release of Oracle Database, then for the source database, see the Oracle Streams documentation for the source database release.

  6. If you are upgrading user-created applications, then supplementally log any columns at the source database that will be involved in a rule-based transformation, procedure DML handler, or value dependency. These columns must be unconditionally logged at the source database. See Oracle Streams Replication Administrator's Guide for information about specifying unconditional supplemental log groups for these columns.

  7. At the source database, ensure that no changes are made to the database objects that are not supported by Oracle Streams during the upgrade or maintenance operation. To list unsupported database objects, query the DBA_STREAMS_UNSUPPORTED data dictionary view.

    "Preparing for Upgrade or Maintenance of a Database with User-Defined Types" discusses a method for retaining changes to tables that contain user-defined types during the maintenance operation. If you are using this method, then tables that contain user-defined types can remain open during the operation.


Tip:

In Oracle Database 11g Release 1 (11.1) and later databases, you can use the ALTER TABLE statement with the READ ONLY clause to make a table read-only.
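For example, assuming a hypothetical table hr.jobs that must not change during the operation:

```sql
-- Make the table read-only for the duration of the operation (hypothetical table).
ALTER TABLE hr.jobs READ ONLY;

-- After the operation completes, make the table writable again.
ALTER TABLE hr.jobs READ WRITE;
```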

Task 2: Setting Up Oracle Streams Before Instantiation

The specific instructions for setting up Oracle Streams before instantiation depend on which database is the capture database. The PRE_INSTANTIATION_SETUP procedure always configures the capture process on the database where it is run. Therefore, this procedure must be run at the capture database.

When you run this procedure, you can specify that the procedure performs the configuration directly, or that the procedure generates a script that contains the configuration actions. See "Deciding Whether to Configure Oracle Streams Directly or Generate a Script". The examples in this section specify that the procedure performs the configuration directly.

Follow the instructions in the appropriate section:


Note:

When the PRE_INSTANTIATION_SETUP procedure is running with the perform_actions parameter set to TRUE, metadata about its configuration actions is recorded in the following data dictionary views: DBA_RECOVERABLE_SCRIPT, DBA_RECOVERABLE_SCRIPT_PARAMS, DBA_RECOVERABLE_SCRIPT_BLOCKS, and DBA_RECOVERABLE_SCRIPT_ERRORS. If the procedure stops because it encounters an error, then you can use the RECOVER_OPERATION procedure in the DBMS_STREAMS_ADM package to complete the configuration after you correct the conditions that caused the error. These views are not populated if a script is used to configure the replication environment.
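For example, if the procedure stops on an error, you might query the recoverable script views to identify the stopped operation and then resume it after correcting the problem. This is a hedged sketch; the script identifier is taken from the query, not hard-coded:

```sql
-- Find the stopped configuration operation and its status.
SELECT script_id, creation_time, status
  FROM DBA_RECOVERABLE_SCRIPT;

-- After correcting the error, roll the operation forward.
-- Substitute the script_id value returned by the query above.
BEGIN
  DBMS_STREAMS_ADM.RECOVER_OPERATION(
    script_id      => '&script_id',   -- RAW value from DBA_RECOVERABLE_SCRIPT
    operation_mode => 'FORWARD');
END;
/
```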


See Also:


The Source Database Is the Capture Database

Complete the following steps to set up Oracle Streams before instantiation when the source database is the capture database:

  1. Configure your network and Oracle Net so that the source database can communicate with the destination database. See Oracle Database Net Services Administrator's Guide for instructions.

  2. In SQL*Plus, connect to the source database as the Oracle Streams administrator. In this example, the source database is orcl.example.com.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  3. Create a database link to the destination database. For example:

    CREATE DATABASE LINK stms.example.com CONNECT TO strmadmin 
       IDENTIFIED BY password 
       USING 'stms.example.com';
    
  4. Run the PRE_INSTANTIATION_SETUP procedure:

    DECLARE
      empty_tbs  DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET; 
    BEGIN
      DBMS_STREAMS_ADM.PRE_INSTANTIATION_SETUP(
        maintain_mode           => 'GLOBAL',
        tablespace_names        => empty_tbs,
        source_database         => 'orcl.example.com',
        destination_database    => 'stms.example.com',
        perform_actions         => TRUE,
        script_name             => NULL,
        script_directory_object => NULL,
        capture_name            => 'capture_maint',
        capture_queue_table     => 'strmadmin.capture_q_table',
        capture_queue_name      => 'strmadmin.capture_q',
        propagation_name        => 'prop_maint',
        apply_name              => 'apply_maint',
        apply_queue_table       => 'strmadmin.apply_q_table',
        apply_queue_name        => 'strmadmin.apply_q',
        bi_directional          => FALSE,
        include_ddl             => TRUE,
        start_processes         => FALSE,
        exclude_schemas         => '*',
        exclude_flags           => DBMS_STREAMS_ADM.EXCLUDE_FLAGS_UNSUPPORTED + 
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DML + 
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DDL);
    END;
    /
    
  5. Proceed to "Task 3: Instantiating the Database".

The Destination Database Is the Capture Database

Complete the following steps to set up Oracle Streams before instantiation when the destination database is the capture database:

  1. Configure your network and Oracle Net so that the source database and destination database can communicate with each other. See Oracle Database Net Services Administrator's Guide for instructions.

  2. Ensure that log file shipping from the source database to the destination database is configured. See "Preparing for Downstream Capture" for instructions.

  3. In SQL*Plus, connect to the destination database as the Oracle Streams administrator. In this example, the destination database is stms.example.com.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  4. Create a database link to the source database. For example:

    CREATE DATABASE LINK orcl.example.com CONNECT TO strmadmin 
       IDENTIFIED BY password 
       USING 'orcl.example.com';
    
  5. Run the PRE_INSTANTIATION_SETUP procedure:

    DECLARE
      empty_tbs  DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET; 
    BEGIN
      DBMS_STREAMS_ADM.PRE_INSTANTIATION_SETUP(
        maintain_mode           => 'GLOBAL',
        tablespace_names        => empty_tbs,
        source_database         => 'orcl.example.com',
        destination_database    => 'stms.example.com',
        perform_actions         => TRUE,
        script_name             => NULL,
        script_directory_object => NULL,
        capture_name            => 'capture_maint',
        capture_queue_table     => 'strmadmin.streams_q_table',
        capture_queue_name      => 'strmadmin.streams_q',
        apply_name              => 'apply_maint',
        apply_queue_table       => 'strmadmin.streams_q_table',
        apply_queue_name        => 'strmadmin.streams_q',
        bi_directional          => FALSE,
        include_ddl             => TRUE,
        start_processes         => FALSE,
        exclude_schemas         => '*',
        exclude_flags           => DBMS_STREAMS_ADM.EXCLUDE_FLAGS_UNSUPPORTED + 
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DML + 
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DDL);
    END;
    /
    

    Notice that the propagation_name parameter is omitted because a propagation is not necessary when the destination database is the capture database and the downstream capture process and apply process use the same queue at the destination database.

    Also, notice that the capture process and apply process will share a queue named streams_q at the destination database.

  6. Proceed to "Task 3: Instantiating the Database".

A Third Database Is the Capture Database

This example assumes that the global name of the third database is thrd.example.com. Complete the following steps to set up Oracle Streams before instantiation when a third database is the capture database:

  1. Configure your network and Oracle Net so that the source database, destination database, and third database can communicate with each other. See Oracle Database Net Services Administrator's Guide for instructions.

  2. Ensure that log file shipping from the source database to the third database is configured. See "Preparing for Downstream Capture" for instructions.

  3. In SQL*Plus, connect to the third database as the Oracle Streams administrator. In this example, the third database is thrd.example.com.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  4. Create a database link to the source database. For example:

    CREATE DATABASE LINK orcl.example.com CONNECT TO strmadmin 
       IDENTIFIED BY password 
       USING 'orcl.example.com';
    
  5. Create a database link to the destination database. For example:

    CREATE DATABASE LINK stms.example.com CONNECT TO strmadmin 
       IDENTIFIED BY password 
       USING 'stms.example.com';
    
  6. Run the PRE_INSTANTIATION_SETUP procedure:

    DECLARE
      empty_tbs  DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET; 
    BEGIN
      DBMS_STREAMS_ADM.PRE_INSTANTIATION_SETUP(
        maintain_mode           => 'GLOBAL',
        tablespace_names        => empty_tbs,
        source_database         => 'orcl.example.com',
        destination_database    => 'stms.example.com',
        perform_actions         => TRUE,
        script_name             => NULL,
        script_directory_object => NULL,
        capture_name            => 'capture_maint',
        capture_queue_table     => 'strmadmin.capture_q_table',
        capture_queue_name      => 'strmadmin.capture_q',
        propagation_name        => 'prop_maint',
        apply_name              => 'apply_maint',
        apply_queue_table       => 'strmadmin.apply_q_table',
        apply_queue_name        => 'strmadmin.apply_q',
        bi_directional          => FALSE,
        include_ddl             => TRUE,
        start_processes         => FALSE,
        exclude_schemas         => '*',
        exclude_flags           => DBMS_STREAMS_ADM.EXCLUDE_FLAGS_UNSUPPORTED + 
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DML + 
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DDL);
    END;
    /
    
  7. Proceed to "Task 3: Instantiating the Database".

Task 3: Instantiating the Database

"Deciding Which Utility to Use for Instantiation" discusses different options for instantiating an entire database. Complete the steps in the appropriate section based on the instantiation option you are using:


See Also:

Oracle Streams Replication Administrator's Guide for more information about performing instantiations

Instantiating the Database Using Export/Import

Complete the following steps to instantiate an entire database with Data Pump:

  1. In SQL*Plus, connect to the source database as the Oracle Streams administrator.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  2. Create a directory object to hold the export dump file and export log file. For example:

    CREATE DIRECTORY dpump_dir AS '/usr/dpump_dir';
    
  3. While connected to the source database as the Oracle Streams administrator, determine the current system change number (SCN) of the source database:

    SET SERVEROUTPUT ON SIZE 1000000
    DECLARE
      current_scn NUMBER;
    BEGIN
      current_scn:= DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER;
          DBMS_OUTPUT.PUT_LINE('Current SCN: ' || current_scn);
    END;
    /
    

    The returned SCN value is specified for the FLASHBACK_SCN Data Pump export parameter in Step 4. Specifying the FLASHBACK_SCN export parameter, or a similar export parameter, ensures that the export is consistent to a single SCN. In this example, assume that the query returned 876606.

    After you perform this query, ensure that no DDL changes are made to the objects being exported until after the export is complete.

  4. On a command line, use Data Pump to export the source database.

    Perform the export by connecting as an administrative user who is granted EXP_FULL_DATABASE role. This user also must have READ and WRITE privilege on the directory object created in Step 2. This example connects as the Oracle Streams administrator strmadmin.

    The following example is a Data Pump export command:

    expdp strmadmin FULL=y DIRECTORY=DPUMP_DIR DUMPFILE=orc1.dmp FLASHBACK_SCN=876606
    

    See Also:

    Oracle Database Utilities for information about performing a Data Pump export

  5. In SQL*Plus, connect to the destination database as the Oracle Streams administrator.

  6. Create a directory object to hold the import dump file and import log file. For example:

    CREATE DIRECTORY dpump_dir AS '/usr/dpump_dir';
    
  7. Transfer the Data Pump export dump file orc1.dmp to the destination database. You can use the DBMS_FILE_TRANSFER package, binary FTP, or some other method to transfer the file to the destination database. After the file transfer, the export dump file should reside in the directory that corresponds to the directory object created in Step 6.

  8. On a command line at the destination database, use Data Pump to import the export dump file orc1.dmp. Ensure that no changes are made to the database tables until the import is complete. Performing the import automatically sets the instantiation SCN for the destination database and all of its objects.

    Perform the import by connecting as an administrative user who is granted IMP_FULL_DATABASE role. This user also must have READ and WRITE privilege on the directory object created in Step 6. This example connects as the Oracle Streams administrator strmadmin.

    Ensure that you set the STREAMS_CONFIGURATION import parameter to n.

    The following example is an import command:

    impdp strmadmin FULL=y DIRECTORY=DPUMP_DIR DUMPFILE=orc1.dmp STREAMS_CONFIGURATION=n
    

    See Also:

    Oracle Database Utilities for information about performing a Data Pump import
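The file transfer in Step 7 can be sketched with the DBMS_FILE_TRANSFER package. This hedged example assumes a database link from the source database to the destination database (such a link is created in "The Source Database Is the Capture Database") and the dpump_dir directory objects created in Steps 2 and 6:

```sql
-- Sketch: push the export dump file from the source database to the
-- destination database over a database link (run at the source database;
-- the link name is an assumption).
BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'DPUMP_DIR',         -- on the source database
    source_file_name             => 'orc1.dmp',
    destination_directory_object => 'DPUMP_DIR',         -- on the destination database
    destination_file_name        => 'orc1.dmp',
    destination_database         => 'STMS.EXAMPLE.COM'); -- database link name
END;
/
```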

Instantiating the Database Using the RMAN DUPLICATE Command

If you use the RMAN DUPLICATE command for instantiation on the same platform, then complete the following steps:

  1. Create a backup of the source database if one does not exist. RMAN requires a valid backup for duplication. In this example, create a backup of orcl.example.com if one does not exist.


    Note:

    A backup of the source database is not necessary if you use the FROM ACTIVE DATABASE option when you run the RMAN DUPLICATE command. For large databases, the FROM ACTIVE DATABASE option requires significant network resources. This example does not use this option.

  2. In SQL*Plus, connect as an administrative user to the source database. In this example, the source database is orcl.example.com.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  3. Determine the until SCN for the RMAN DUPLICATE command. For example:

    SET SERVEROUTPUT ON SIZE 1000000
    DECLARE
      until_scn NUMBER;
    BEGIN
      until_scn:= DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER;
          DBMS_OUTPUT.PUT_LINE('Until SCN: ' || until_scn);
    END;
    /
    

    Make a note of the until SCN value. This example assumes that the until SCN value is 748045. You will set the UNTIL SCN option to this value when you use RMAN to duplicate the database in Step 7, and you will use it as the instantiation SCN in "Task 4: Setting Up Oracle Streams After Instantiation".

  4. Archive the current online redo log. For example:

    ALTER SYSTEM ARCHIVE LOG CURRENT;
    
  5. Prepare your environment for database duplication, which includes preparing the destination database as an auxiliary instance for duplication. See the Oracle Database Backup and Recovery User's Guide for instructions.

  6. Start the RMAN client, and connect to the database orcl.example.com as TARGET and to the stms.example.com database as AUXILIARY. Connect to each database as an administrative user.

    See Oracle Database Backup and Recovery Reference for more information about the RMAN CONNECT command.

  7. Use the RMAN DUPLICATE command with the OPEN RESTRICTED option to instantiate the source database at the destination database. The OPEN RESTRICTED option is required. This option enables a restricted session in the duplicate database by issuing the following SQL statement: ALTER SYSTEM ENABLE RESTRICTED SESSION. RMAN issues this statement immediately before the duplicate database is opened.

    You can use the UNTIL SCN clause to specify an SCN for the duplication. Use the until SCN determined in Step 3 for this clause. Archived redo logs must be available for the until SCN specified and for higher SCN values. Therefore, Step 4 archived the redo log containing the until SCN.

    Ensure that you use TO database_name in the DUPLICATE command to specify the database name of the duplicate database. In this example, the database name of the duplicate database is stms. Therefore, the DUPLICATE command for this example includes TO stms.

    The following example is an RMAN DUPLICATE command:

    RMAN> RUN
          { 
            SET UNTIL SCN 748045;
            ALLOCATE AUXILIARY CHANNEL stms DEVICE TYPE sbt; 
            DUPLICATE TARGET DATABASE TO stms 
            NOFILENAMECHECK
            OPEN RESTRICTED;
          }
    
  8. In SQL*Plus, connect to the destination database as a system administrator. In this example, the destination database is stms.example.com.

  9. Rename the global name. After an RMAN database instantiation, the destination database has the same global name as the source database, but the destination database must have its original name until the end of the operation. Rename the global name of the destination database back to its original name with the following statement:

    ALTER DATABASE RENAME GLOBAL_NAME TO stms.example.com;
    
  10. If you are upgrading the database from a prior release to Oracle Database 11g Release 2, then upgrade the destination database. See the Oracle Database Upgrade Guide for instructions. If you are not upgrading the database, then skip this step and proceed to the next step.

  11. In SQL*Plus, connect to the destination database as the Oracle Streams administrator.

  12. Create a database link to the source database. For example:

    CREATE DATABASE LINK orcl.example.com CONNECT TO strmadmin 
       IDENTIFIED BY password 
       USING 'orcl.example.com';
    

    This database link is required because the POST_INSTANTIATION_SETUP procedure runs the SET_GLOBAL_INSTANTIATION_SCN procedure in the DBMS_APPLY_ADM package at the destination database, and the SET_GLOBAL_INSTANTIATION_SCN procedure requires the database link.

  13. If the source database and the capture database are the same database, then the database link from the source database to the destination database was also copied to the destination database during duplication. While still connected to the destination database as the Oracle Streams administrator in SQL*Plus, drop this copied database link:

    DROP DATABASE LINK stms.example.com;
    

See Also:

Oracle Database Backup and Recovery Reference for more information about the RMAN DUPLICATE command

Instantiating the Database Using the RMAN CONVERT DATABASE Command

If you use the RMAN CONVERT DATABASE command for instantiation to migrate the database to a different platform, then complete the following steps:

  1. Create a backup of the source database if one does not exist. RMAN requires a valid backup. In this example, create a backup of orcl.example.com if one does not exist.

  2. In SQL*Plus, connect to the source database as an administrative user.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  3. Archive the current online redo log. For example:

    ALTER SYSTEM ARCHIVE LOG CURRENT;
    
  4. Prepare your environment for database conversion, which includes opening the source database in read-only mode. Complete the following steps:

    1. If the source database is open, then shut it down and start it in read-only mode.

    2. Run the CHECK_DB and CHECK_EXTERNAL functions in the DBMS_TDB package. Check the results to ensure that the conversion is supported by the RMAN CONVERT DATABASE command.


    See Also:

    Oracle Database Backup and Recovery User's Guide for more information about these steps
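    As a sketch of the checks in this step, the following anonymous block can be run at the read-only source database. The platform name matches this example's target platform; with SERVEROUTPUT enabled, the functions print the reasons if a check fails.

    ```sql
    SET SERVEROUTPUT ON
    DECLARE
      db_ok  BOOLEAN;
      ext_ok BOOLEAN;
    BEGIN
      -- Check that the database can be transported to the target platform
      db_ok := DBMS_TDB.CHECK_DB('Linux IA (64-bit)', DBMS_TDB.SKIP_NONE);
      -- List external objects (directories, external tables, BFILEs) that
      -- the conversion does not transport automatically
      ext_ok := DBMS_TDB.CHECK_EXTERNAL;
    END;
    /
    ```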

  5. Determine the current SCN of the source database:

    SET SERVEROUTPUT ON SIZE 1000000
    DECLARE
      current_scn NUMBER;
    BEGIN
      current_scn:= DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER;
          DBMS_OUTPUT.PUT_LINE('Current SCN: ' || current_scn);
    END;
    /
    

    Make a note of the SCN value returned. You will use this number for the instantiation SCN in "Task 4: Setting Up Oracle Streams After Instantiation". For this example, assume that the returned value is 748044.

  6. Start the RMAN client, and connect as TARGET to the source database orcl.example.com as an administrative user.

    See Oracle Database Backup and Recovery Reference for more information about the RMAN CONNECT command.

  7. Run the CONVERT DATABASE command.

    Ensure that you use NEW DATABASE database_name in the CONVERT DATABASE command to specify the database name of the destination database. In this example, the database name of the destination database is stms. Therefore, the CONVERT DATABASE command for this example includes NEW DATABASE stms.

    The following example is an RMAN CONVERT DATABASE command for a destination database that is running on the Linux IA (64-bit) platform:

    CONVERT DATABASE NEW DATABASE 'stms'
              TRANSPORT SCRIPT '/tmp/convertdb/transportscript.sql'     
              TO PLATFORM 'Linux IA (64-bit)'
              DB_FILE_NAME_CONVERT = ('/home/oracle/dbs','/tmp/convertdb');
    
  8. Transfer the data files, PFILE, and SQL script produced by the RMAN CONVERT DATABASE command to the computer system that is running the destination database.

  9. On the computer system that is running the destination database, modify the SQL script so that the destination database always opens with restricted session enabled.

    An example script follows with the necessary modifications in bold font:

    -- The following commands will create a control file and use it
    -- to open the database.
    -- Data used by Recovery Manager will be lost.
    -- The contents of online logs will be lost and all backups will
    -- be invalidated. Use this only if online logs are damaged.
     
    -- After mounting the created controlfile, the following SQL
    -- statement will place the database in the appropriate
    -- protection mode:
    --  ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE
     
    STARTUP NOMOUNT PFILE='init_00gd2lak_1_0.ora'
    CREATE CONTROLFILE REUSE SET DATABASE "STMS" RESETLOGS  NOARCHIVELOG
        MAXLOGFILES 32
        MAXLOGMEMBERS 2
        MAXDATAFILES 32
        MAXINSTANCES 1
        MAXLOGHISTORY 226
    LOGFILE
      GROUP 1 '/tmp/convertdb/archlog1'  SIZE 25M,
      GROUP 2 '/tmp/convertdb/archlog2'  SIZE 25M
    DATAFILE
      '/tmp/convertdb/systemdf',
      '/tmp/convertdb/sysauxdf',
      '/tmp/convertdb/datafile1',
      '/tmp/convertdb/datafile2',
      '/tmp/convertdb/datafile3'
    CHARACTER SET WE8DEC
    ;
     
    -- NOTE: This ALTER SYSTEM statement is added to enable restricted session.
    
    ALTER SYSTEM ENABLE RESTRICTED SESSION;
    
    -- Database can now be opened zeroing the online logs.
    ALTER DATABASE OPEN RESETLOGS;
     
    -- No tempfile entries found to add.
    --
     
    set echo off
    prompt ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    prompt * Your database has been created successfully!
    prompt * There are many things to think about for the new database. Here
    prompt * is a checklist to help you stay on track:
    prompt * 1. You may want to redefine the location of the directory objects.
    prompt * 2. You may want to change the internal database identifier (DBID) 
    prompt *    or the global database name for this database. Use the 
    prompt *    NEWDBID Utility (nid).
    prompt ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     
    SHUTDOWN IMMEDIATE 
    -- NOTE: This startup has the UPGRADE parameter.
    -- The startup already has restricted session enabled, so no change is needed.
    STARTUP UPGRADE PFILE='init_00gd2lak_1_0.ora'
    @@ ?/rdbms/admin/utlirp.sql 
    SHUTDOWN IMMEDIATE 
    -- NOTE: The startup below is generated without the RESTRICT clause.
    -- Add the RESTRICT clause.
    STARTUP RESTRICT PFILE='init_00gd2lak_1_0.ora'
    -- The following step will recompile all PL/SQL modules.
    -- It may take several hours to complete.
    @@ ?/rdbms/admin/utlrp.sql 
    set feedback 6;
    

    Other changes to the script might be necessary. For example, the data file locations and PFILE location might need to be changed to point to the correct locations on the destination database computer system.

  10. In SQL*Plus, connect to the destination database as a system administrator.

  11. Rename the global name. After an RMAN database instantiation, the destination database has the same global name as the source database, but the destination database must have its original name until the end of the maintenance operation. Rename the global name of the destination database back to its original name with the following statement:

    ALTER DATABASE RENAME GLOBAL_NAME TO stms.example.com;
    
  12. If you are upgrading the database from a prior release to Oracle Database 11g Release 2, then upgrade the destination database. See the Oracle Database Upgrade Guide for instructions. If you are not upgrading the database, then skip this step and proceed to the next step.

  13. Connect to the destination database as the Oracle Streams administrator using the new global name.

  14. Create a database link to the source database. For example:

    CREATE DATABASE LINK orcl.example.com CONNECT TO strmadmin 
       IDENTIFIED BY password 
       USING 'orcl.example.com';
    

    This database link is required because the POST_INSTANTIATION_SETUP procedure runs the SET_GLOBAL_INSTANTIATION_SCN procedure in the DBMS_APPLY_ADM package at the destination database, and the SET_GLOBAL_INSTANTIATION_SCN procedure requires the database link.

  15. If the source database and the capture database are the same database, then while still connected as the Oracle Streams administrator in SQL*Plus to the destination database, drop the database link from the source database to the destination database that was cloned from the source database:

    DROP DATABASE LINK stms.example.com;
    

Task 4: Setting Up Oracle Streams After Instantiation

To set up Oracle Streams after instantiation, run the POST_INSTANTIATION_SETUP procedure. The POST_INSTANTIATION_SETUP procedure must be run at the database where the PRE_INSTANTIATION_SETUP procedure was run in "Task 2: Setting Up Oracle Streams Before Instantiation".

When you run the POST_INSTANTIATION_SETUP procedure, you can specify that the procedure performs the configuration directly, or that the procedure generates a script that contains the configuration actions. See "Deciding Whether to Configure Oracle Streams Directly or Generate a Script". The examples in this section specify that the procedure performs the configuration directly.

The parameter values specified in the PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures must match, except for the values of the following parameters: perform_actions, script_name, script_directory_object, and start_processes. In this example, all of the parameter values match in the two procedures.

It is important to set the instantiation_scn parameter in the POST_INSTANTIATION_SETUP procedure correctly. Follow these instructions when you set this parameter:

  • If RMAN was used for instantiation, then set the instantiation_scn parameter to the value determined during instantiation. This value was determined when you completed the instantiation in "Instantiating the Database Using the RMAN DUPLICATE Command" or "Instantiating the Database Using the RMAN CONVERT DATABASE Command".

    The source database and third database examples in this section set the instantiation_scn parameter to 748044 for the following reasons:

    • If the RMAN DUPLICATE command was used for instantiation, then the command duplicates the database up to one less than the SCN value specified in the UNTIL SCN clause. Therefore, you should subtract one from the UNTIL SCN value that you specified when you ran the DUPLICATE command in Step 7 in "Instantiating the Database Using the RMAN DUPLICATE Command". In this example, the UNTIL SCN was set to 748045. Therefore, the instantiation_scn parameter should be set to 748045 - 1, or 748044.

    • If the RMAN CONVERT DATABASE command was used for instantiation, then the instantiation_scn parameter should be set to the SCN value determined immediately before running the CONVERT DATABASE command. This value was determined in Step 5 in "Instantiating the Database Using the RMAN CONVERT DATABASE Command".

  • If Export/Import was used for instantiation, then the instantiation SCN was set during import, and the instantiation_scn parameter must be set to NULL. The destination database example in this section sets the instantiation_scn to NULL because RMAN cannot be used for database instantiation when the destination database is the capture database.

The specific instructions for setting up Oracle Streams after instantiation depend on which database is the capture database. Follow the instructions in the appropriate section:


Note:

When the POST_INSTANTIATION_SETUP procedure is running with the perform_actions parameter set to TRUE, metadata about its configuration actions is recorded in the following data dictionary views: DBA_RECOVERABLE_SCRIPT, DBA_RECOVERABLE_SCRIPT_PARAMS, DBA_RECOVERABLE_SCRIPT_BLOCKS, and DBA_RECOVERABLE_SCRIPT_ERRORS. If the procedure stops because it encounters an error, then you can use the RECOVER_OPERATION procedure in the DBMS_STREAMS_ADM package to complete the configuration after you correct the conditions that caused the error. These views are not populated if a script is used to configure the replication environment.


See Also:


The Source Database Is the Capture Database

Complete the following steps to set up Oracle Streams after instantiation when the source database is the capture database:

  1. In SQL*Plus, connect to the source database as the Oracle Streams administrator. In this example, the source database is orcl.example.com.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  2. Run the POST_INSTANTIATION_SETUP procedure:

    DECLARE
      empty_tbs  DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET; 
    BEGIN
      DBMS_STREAMS_ADM.POST_INSTANTIATION_SETUP(
        maintain_mode           => 'GLOBAL',
        tablespace_names        => empty_tbs,
        source_database         => 'orcl.example.com',
        destination_database    => 'stms.example.com',
        perform_actions         => TRUE,
        script_name             => NULL,
        script_directory_object => NULL,
        capture_name            => 'capture_maint',
        capture_queue_table     => 'strmadmin.capture_q_table',
        capture_queue_name      => 'strmadmin.capture_q',
        propagation_name        => 'prop_maint',
        apply_name              => 'apply_maint',
        apply_queue_table       => 'strmadmin.apply_q_table',
        apply_queue_name        => 'strmadmin.apply_q',
        bi_directional          => FALSE,
        include_ddl             => TRUE,
        start_processes         => FALSE,
        instantiation_scn       => 748044, -- NULL if Export/Import instantiation
        exclude_schemas         => '*',
        exclude_flags           => DBMS_STREAMS_ADM.EXCLUDE_FLAGS_UNSUPPORTED + 
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DML + 
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DDL);
    END;
    /
    

    Ensure that the instantiation_scn parameter is set to NULL if export/import was used for instantiation instead of RMAN.

  3. Proceed to "Task 5: Finishing the Upgrade or Maintenance Operation and Removing Oracle Streams".

The Destination Database Is the Capture Database

Complete the following steps to set up Oracle Streams after instantiation when the destination database is the capture database:

  1. In SQL*Plus, connect to the destination database as the Oracle Streams administrator. In this example, the destination database is stms.example.com.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  2. Run the POST_INSTANTIATION_SETUP procedure:

    DECLARE
      empty_tbs  DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET; 
    BEGIN
      DBMS_STREAMS_ADM.POST_INSTANTIATION_SETUP(
        maintain_mode           => 'GLOBAL',
        tablespace_names        => empty_tbs,
        source_database         => 'orcl.example.com',
        destination_database    => 'stms.example.com',
        perform_actions         => TRUE,
        script_name             => NULL,
        script_directory_object => NULL,
        capture_name            => 'capture_maint',
        capture_queue_table     => 'strmadmin.streams_q_table',
        capture_queue_name      => 'strmadmin.streams_q',
        apply_name              => 'apply_maint',
        apply_queue_table       => 'strmadmin.streams_q_table',
        apply_queue_name        => 'strmadmin.streams_q',
        bi_directional          => FALSE,
        include_ddl             => TRUE,
        start_processes         => FALSE,
        instantiation_scn       => NULL,
        exclude_schemas         => '*',
        exclude_flags           => DBMS_STREAMS_ADM.EXCLUDE_FLAGS_UNSUPPORTED + 
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DML + 
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DDL);
    END;
    /
    

    Notice that the propagation_name parameter is omitted because a propagation is not necessary when the destination database is the capture database.

  3. Proceed to "Task 5: Finishing the Upgrade or Maintenance Operation and Removing Oracle Streams".

A Third Database Is the Capture Database

This example assumes that the global name of the third database is thrd.example.com. Complete the following steps to set up Oracle Streams after instantiation when a third database is the capture database:

  1. In SQL*Plus, connect to the third database as the Oracle Streams administrator. In this example, the third database is thrd.example.com.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  2. Run the POST_INSTANTIATION_SETUP procedure:

    DECLARE
      empty_tbs  DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET; 
    BEGIN
      DBMS_STREAMS_ADM.POST_INSTANTIATION_SETUP(
        maintain_mode           => 'GLOBAL',
        tablespace_names        => empty_tbs,
        source_database         => 'orcl.example.com',
        destination_database    => 'stms.example.com',
        perform_actions         => TRUE,
        script_name             => NULL,
        script_directory_object => NULL,
        capture_name            => 'capture_maint',
        capture_queue_table     => 'strmadmin.capture_q_table',
        capture_queue_name      => 'strmadmin.capture_q',
        propagation_name        => 'prop_maint',
        apply_name              => 'apply_maint',
        apply_queue_table       => 'strmadmin.apply_q_table',
        apply_queue_name        => 'strmadmin.apply_q',
        bi_directional          => FALSE,
        include_ddl             => TRUE,
        start_processes         => FALSE,
        instantiation_scn       => 748044, -- NULL if Export/Import instantiation
        exclude_schemas         => '*',
        exclude_flags           => DBMS_STREAMS_ADM.EXCLUDE_FLAGS_UNSUPPORTED + 
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DML + 
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DDL);
    END;
    /
    

    Ensure that the instantiation_scn parameter is set to NULL if export/import was used for instantiation instead of RMAN.

  3. Proceed to "Task 5: Finishing the Upgrade or Maintenance Operation and Removing Oracle Streams".

Task 5: Finishing the Upgrade or Maintenance Operation and Removing Oracle Streams

Complete the following steps to finish the upgrade or maintenance operation and remove Oracle Streams components:

  1. At the destination database, disable any imported jobs that modify data that will be replicated from the source database. Query the DBA_JOBS data dictionary view to list the jobs.
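    For example, the imported jobs can be listed and disabled with a query and calls such as the following; the job number 123 is a hypothetical value taken from the DBA_JOBS output.

    ```sql
    -- List the jobs that were brought over by the instantiation
    SELECT JOB, LOG_USER, BROKEN, WHAT
      FROM DBA_JOBS;

    -- Mark a job as broken so that it does not run; 123 is a hypothetical
    -- job number from the query above
    BEGIN
      DBMS_JOB.BROKEN(job => 123, broken => TRUE);
      COMMIT;
    END;
    /
    ```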

  2. If you are applying a patch or patch set, then apply the patch or patch set to the destination database. Follow the instructions included with the patch or patch set. If you are not applying a patch or patch set, then skip this step and proceed to the next step.

  3. If you are upgrading user-created applications, then, at the destination database, you might need to complete the following steps:

    1. Modify the schema objects in the database to support the upgraded user-created applications.

    2. Configure one or more declarative rule-based transformations and procedure DML handlers that modify row LCRs from the source database so that the apply process applies these row LCRs to the modified schema objects correctly. For example, if a column name was changed to support the upgraded user-created applications, then a declarative rule-based transformation should rename the column in a row LCR that involves the column.

      See "Handling Modifications to Schema Objects".

    3. Configure one or more virtual dependency definitions if row LCRs might contain logical dependencies that cannot be detected by the apply process alone.

      See "Handling Logical Dependencies".

  4. In SQL*Plus, connect to the destination database as an administrative user.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  5. Use the ALTER SYSTEM statement to disable restricted session:

    ALTER SYSTEM DISABLE RESTRICTED SESSION;
    
  6. In SQL*Plus, connect to the destination database as the Oracle Streams administrator.

  7. Start the apply process. For example:

    BEGIN
      DBMS_APPLY_ADM.START_APPLY(
        apply_name  => 'apply_maint');
    END;
    /
    
  8. In SQL*Plus, connect to the capture database as the Oracle Streams administrator.

  9. Start the capture process. For example:

    BEGIN
      DBMS_CAPTURE_ADM.START_CAPTURE(
        capture_name  => 'capture_maint');
    END;
    /
    

    This step begins the process of replicating changes that were made to the source database during instantiation of the destination database.

  10. Monitor the Oracle Streams environment until the apply process at the destination database has applied most of the changes from the source database.

    To determine whether the apply process at the destination database has applied most of the changes from the source database, complete the following steps:

    1. Query the enqueue message number of the capture process and the message with the oldest system change number (SCN) for the apply process to see if they are nearly equal.

      For example, if the name of the capture process is capture_maint, and the name of the apply process is apply_maint, then run the following query at the capture database:

      COLUMN ENQUEUE_MESSAGE_NUMBER HEADING 'Captured SCN' FORMAT 99999999999
      COLUMN OLDEST_SCN_NUM HEADING 'Oldest Applied SCN' FORMAT 99999999999
      
      SELECT c.ENQUEUE_MESSAGE_NUMBER, a.OLDEST_SCN_NUM
        FROM V$STREAMS_CAPTURE c, V$STREAMS_APPLY_READER@stms.example.com a
        WHERE c.CAPTURE_NAME = 'CAPTURE_MAINT'
          AND a.APPLY_NAME   = 'APPLY_MAINT';
      

      When the two values returned by this query are nearly equal, most of the changes from the source database have been applied at the destination database, and you can proceed to the next step. At this point in the process, the values returned by this query might never be equal because the source database still allows changes.

      If this query returns no results, then ensure that the Oracle Streams clients in the environment are enabled by querying the STATUS column in the DBA_CAPTURE view at the capture database and the DBA_APPLY view at the destination database. If a propagation is used, you can check the status of the propagation by running the query in "Displaying Information About the Schedules for Propagation Jobs".

      If an Oracle Streams client is disabled, then try restarting it. If an Oracle Streams client will not restart, then troubleshoot the environment using the information in Chapter 30, "Identifying Problems in an Oracle Streams Environment".

    2. Query the state of the apply process apply servers at the destination database to determine whether they have finished applying changes.

      For example, if the name of the apply process is apply_maint, then run the following query at the source database:

      COLUMN STATE HEADING 'Apply Server State' FORMAT A20
       
      SELECT STATE
        FROM V$STREAMS_APPLY_SERVER@stms.example.com
        WHERE APPLY_NAME = 'APPLY_MAINT';
      

      When the state for all apply servers is IDLE, you can proceed to the next step.
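    The troubleshooting checks mentioned in this step can be sketched as follows; run the first query at the capture database and the second at the destination database.

    ```sql
    -- At the capture database: the capture process status should be ENABLED
    SELECT CAPTURE_NAME, STATUS FROM DBA_CAPTURE;

    -- At the destination database: the apply process status should be ENABLED
    SELECT APPLY_NAME, STATUS FROM DBA_APPLY;
    ```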

  11. Connect to the destination database as the Oracle Streams administrator.

  12. Ensure that there are no apply errors by running the following query:

    SELECT COUNT(*) FROM DBA_APPLY_ERROR;
    

    If this query returns zero, then move on to the next step. If this query shows errors in the error queue, then resolve these errors before continuing. See "Managing Apply Errors" for instructions.
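    If the count is nonzero, a query such as the following shows the failed transactions and their error messages:

    ```sql
    COLUMN APPLY_NAME FORMAT A15
    COLUMN LOCAL_TRANSACTION_ID FORMAT A12
    COLUMN ERROR_MESSAGE FORMAT A45

    SELECT APPLY_NAME, LOCAL_TRANSACTION_ID, ERROR_MESSAGE
      FROM DBA_APPLY_ERROR;
    ```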

  13. Disconnect all applications and users from the source database.

  14. Connect to the source database as an administrative user.

  15. Restrict access to the database. For example:

    ALTER SYSTEM ENABLE RESTRICTED SESSION;
    
  16. While connected as an administrative user in SQL*Plus to the source database, repeat the query from the first substep of Step 10. When the two values returned by the query are equal, all of the changes from the source database have been applied at the destination database, and you can move on to the next step.

  17. Connect to the destination database as the Oracle Streams administrator.

  18. Repeat the query you ran in Step 12. If this query returns zero, then move on to the next step. If this query shows errors in the error queue, then resolve these errors before continuing. See "Managing Apply Errors" for instructions.

  19. If you performed any actions that created, modified, or deleted job slaves at the source database during the upgrade or maintenance operation, then perform the same actions at the destination database. See "Considerations for Job Slaves and PL/SQL Package Subprograms" for more information.

  20. If you invoked any Oracle-supplied PL/SQL package subprograms at the source database during the upgrade or maintenance operation that modified both user data and dictionary metadata at the same time, then invoke the same subprograms at the destination database. See "Considerations for Job Slaves and PL/SQL Package Subprograms" for more information.

  21. Remove the Oracle Streams components that are no longer needed from both databases, including the ANYDATA queues, supplemental logging specifications, the capture process, the propagation if one exists, and the apply process. Connect as the Oracle Streams administrator in SQL*Plus to the capture database, and run the CLEANUP_INSTANTIATION_SETUP procedure to remove the Oracle Streams components at both databases.

    If the capture database is the source database or a third database, then run the following procedure:

    DECLARE
      empty_tbs  DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET; 
    BEGIN
      DBMS_STREAMS_ADM.CLEANUP_INSTANTIATION_SETUP(
        maintain_mode           => 'GLOBAL',
        tablespace_names        => empty_tbs,
        source_database         => 'orcl.example.com',
        destination_database    => 'stms.example.com',
        perform_actions         => TRUE,
        script_name             => NULL,
        script_directory_object => NULL,
        capture_name            => 'capture_maint',
        capture_queue_table     => 'strmadmin.capture_q_table',
        capture_queue_name      => 'strmadmin.capture_q',
        propagation_name        => 'prop_maint',
        apply_name              => 'apply_maint',
        apply_queue_table       => 'strmadmin.apply_q_table',
        apply_queue_name        => 'strmadmin.apply_q',
        bi_directional          => FALSE,
        change_global_name      => TRUE);
    END;
    /
    

    If the capture database is the destination database, then run the following procedure:

    DECLARE
      empty_tbs  DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET; 
    BEGIN
      DBMS_STREAMS_ADM.CLEANUP_INSTANTIATION_SETUP(
        maintain_mode           => 'GLOBAL',
        tablespace_names        => empty_tbs,
        source_database         => 'orcl.example.com',
        destination_database    => 'stms.example.com',
        perform_actions         => TRUE,
        script_name             => NULL,
        script_directory_object => NULL,
        capture_name            => 'capture_maint',
        capture_queue_table     => 'strmadmin.streams_q_table',
        capture_queue_name      => 'strmadmin.streams_q',
        apply_name              => 'apply_maint',
        apply_queue_table       => 'strmadmin.streams_q_table',
        apply_queue_name        => 'strmadmin.streams_q',
        bi_directional          => FALSE,
        change_global_name      => TRUE);
    END;
    /
    

    Notice that the propagation_name parameter is omitted because a propagation is not necessary when the destination database is the capture database.

    Both sample procedures in this step rename the global name of the destination database to orcl.example.com because the change_global_name parameter is set to TRUE.

  22. Shut down the source database. This database should not be opened again.

  23. At the destination database, enable any jobs that you disabled earlier.

  24. Make the destination database available for applications and users. Redirect any applications and users that were connecting to the source database to the destination database. If necessary, reconfigure your network and Oracle Net so that systems that communicated with the source database now communicate with the destination database. See Oracle Database Net Services Administrator's Guide for instructions.

  25. If you no longer need the Oracle Streams administrator at the destination database, then connect as an administrative user in SQL*Plus to the destination database, and run the following statement:

    DROP USER strmadmin CASCADE;
    

The upgrade or maintenance operation is complete.


A How Oracle Streams Works with Other Database Components

This appendix describes how Oracle Streams works with other Oracle Database components.

This appendix includes these topics:

Oracle Streams and Oracle Real Application Clusters

The following topics describe how Oracle Streams works with Oracle Real Application Clusters (Oracle RAC):


See Also:

Oracle Streams Replication Administrator's Guide for information about best practices for Oracle Streams in an Oracle RAC environment

Capture Processes and Oracle Real Application Clusters

A capture process can capture changes in an Oracle Real Application Clusters (Oracle RAC) environment. If you use one or more capture processes and Oracle RAC in the same environment, then all archived logs that contain changes to be captured by a capture process must be available for all instances in the Oracle RAC environment. In an Oracle RAC environment, a capture process reads changes made by all instances. Any processes used by a single capture process run on a single instance in an Oracle RAC environment.

Each capture process is started and stopped on the owner instance for its ANYDATA queue, even if the start or stop procedure is run on a different instance. Also, a capture process follows its queue to a different instance if the current owner instance becomes unavailable. The queue itself follows the rules for primary instance and secondary instance ownership.

If the owner instance for a queue table containing a queue used by a capture process becomes unavailable, then queue ownership is transferred automatically to another instance in the cluster. In addition, if the capture process was enabled when the owner instance became unavailable, then the capture process is restarted automatically on the new owner instance. If the capture process was disabled when the owner instance became unavailable, then the capture process remains disabled on the new owner instance.

LogMiner supports the LOG_ARCHIVE_DEST_n initialization parameter, and Oracle Streams capture processes use LogMiner to capture changes from the redo log. If an archived log file is inaccessible from one destination, then a local capture process can read it from another accessible destination. On an Oracle RAC database, this ability also enables you to use cross instance archival (CIA) such that each instance archives its files to all other instances. This solution cannot detect or resolve gaps caused by missing archived log files. Hence, it can be used only to complement an existing solution to have the archived files shared between all instances.

In a downstream capture process environment, the source database can be a single instance database or a multi-instance Oracle RAC database. The downstream database can be a single instance database or a multi-instance Oracle RAC database, regardless of whether the source database is single instance or multi-instance.


See Also:


Synchronous Capture and Oracle Real Application Clusters

A synchronous capture can capture changes in an Oracle Real Application Clusters (Oracle RAC) environment. In an Oracle RAC environment, synchronous capture reads changes made by all instances.

For the best performance with synchronous capture in an Oracle RAC environment, changes to independent sets of tables should be captured by separate synchronous captures. For example, if different applications use different sets of database objects in the database, then configure a separate synchronous capture to capture changes to the database objects for each application. In this case, each synchronous capture should use a different queue and queue table.

Combined Capture and Apply and Oracle Real Application Clusters

Combined capture and apply can be used in an Oracle Real Application Clusters (Oracle RAC) environment. In an Oracle RAC environment, the capture process and apply process can be on the same instance, on different instances in a single Oracle RAC database, or on different databases. When the capture process and apply process are on different instances in the same database or on different databases, you must configure a propagation between the capture process's queue and the apply process's queue for combined capture and apply to be used.

Queues and Oracle Real Application Clusters

You can configure a queue to stage LCRs and user messages in an Oracle Real Application Clusters (Oracle RAC) environment. In an Oracle RAC environment, only the owner instance can have a buffer for a queue, but different instances can have buffers for different queues. A buffered queue is System Global Area (SGA) memory associated with a queue.

Oracle Streams processes and jobs support primary instance and secondary instance specifications for queue tables. If you use these specifications, then the secondary instance assumes ownership of a queue table when the primary instance becomes unavailable, and ownership is transferred back to the primary instance when it becomes available again.

You can set primary and secondary instance specifications using the ALTER_QUEUE_TABLE procedure in the DBMS_AQADM package. The DBA_QUEUE_TABLES data dictionary view contains information about the owner instance for a queue table. A queue table can contain multiple queues. In this case, each queue in a queue table has the same owner instance as the queue table.
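For example, the following statements (using a hypothetical queue table name and instance numbers) designate instance 1 as the primary instance and instance 2 as the secondary instance, and then display the ownership information:

```sql
-- Hypothetical queue table name and instance numbers
BEGIN
  DBMS_AQADM.ALTER_QUEUE_TABLE(
    queue_table        => 'strmadmin.streams_queue_table',
    primary_instance   => 1,
    secondary_instance => 2);
END;
/

-- Display ownership information for the queue table
SELECT QUEUE_TABLE, PRIMARY_INSTANCE, SECONDARY_INSTANCE, OWNER_INSTANCE
  FROM DBA_QUEUE_TABLES
  WHERE OWNER = 'STRMADMIN' AND QUEUE_TABLE = 'STREAMS_QUEUE_TABLE';
```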


See Also:


Propagations and Oracle Real Application Clusters

A propagation can propagate messages from one queue to another in an Oracle Real Application Clusters (Oracle RAC) environment. A propagation job running on an instance propagates logical change records (LCRs) from any queue owned by that instance to destination queues.

Any propagation to an Oracle RAC database is made over database links. The database links must be configured to connect to the destination instance that owns the queue that will receive the messages.

If the owner instance for a queue table containing a destination queue for a propagation becomes unavailable, then queue ownership is transferred automatically to another instance in the cluster. If both the primary and secondary instance for a queue table containing a destination queue become unavailable, then queue ownership is transferred automatically to another instance in the cluster. In this case, if the primary or secondary instance becomes available again, then ownership is transferred back to one of them accordingly.

A queue-to-queue propagation to a buffered destination queue uses a service to provide transparent failover in an Oracle RAC environment. That is, a propagation job for a queue-to-queue propagation automatically connects to the instance that owns the destination queue. The service used by a queue-to-queue propagation always runs on the owner instance of the destination queue. This service is created only for buffered queues in an Oracle RAC database. If you plan to use buffered messaging with an Oracle RAC database, then messages can be enqueued into a buffered queue on any instance. If messages are enqueued on an instance that does not own the queue, then the messages are sent to the correct instance, but it is more efficient to enqueue messages on the instance that owns the queue. You can use the service to connect to the owner instance of the queue before enqueuing messages into a buffered queue.

Because the queue service always runs on the owner instance of the queue, transparent failover can occur when Oracle RAC instances fail. When multiple queue-to-queue propagations use a single database link, the connect description for each queue-to-queue propagation changes automatically to propagate messages to the correct destination queue.

In contrast, queue-to-dblink propagations do not use services. Queue-to-dblink propagations require you to repoint your database links if the owner instance in an Oracle RAC database that contains the destination queue for the propagation fails. To make the propagation job connect to the correct instance on the destination database, manually reconfigure the database link from the source database to connect to the instance that owns the destination queue. You do not need to modify a propagation that uses a re-created database link.

The NAME column in the DBA_SERVICES data dictionary view contains the service name for a queue. The NETWORK_NAME column in the DBA_QUEUES data dictionary view contains the network name for a queue. Do not manage the services for queue-to-queue propagations in any way. Oracle manages them automatically. For queue-to-dblink propagations, use the network name as the service name in the connect string of the database link to connect to the correct instance.
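For example, the following query (assuming a hypothetical queue named streams_queue owned by strmadmin) displays the network name to use as the service name in the connect string of the database link for a queue-to-dblink propagation:

```sql
-- Hypothetical queue owner and queue name
SELECT NAME, NETWORK_NAME
  FROM DBA_QUEUES
  WHERE OWNER = 'STRMADMIN' AND NAME = 'STREAMS_QUEUE';
```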


Note:

If a queue contains or will contain captured LCRs in an Oracle RAC environment, then use queue-to-queue propagations to propagate messages to an Oracle RAC destination database. If a queue-to-dblink propagation propagates captured LCRs to an Oracle RAC destination database, then this propagation must use an instance-specific database link that refers to the owner instance of the destination queue. If such a propagation connects to any other instance, then the propagation raises an error.

Apply Processes and Oracle Real Application Clusters

You can configure an Oracle Streams apply process to apply changes in an Oracle Real Application Clusters (Oracle RAC) environment. Each apply process is started and stopped on the owner instance for its ANYDATA queue, even if the start or stop procedure is run on a different instance. An apply coordinator process, its corresponding apply reader server, and all of its apply servers run on a single instance.

If the owner instance for a queue table containing a queue used by an apply process becomes unavailable, then queue ownership is transferred automatically to another instance in the cluster. Also, an apply process will follow its queue to a different instance if the current owner instance becomes unavailable. The queue itself follows the rules for primary instance and secondary instance ownership. In addition, if the apply process was enabled when the owner instance became unavailable, then the apply process is restarted automatically on the new owner instance. If the apply process was disabled when the owner instance became unavailable, then the apply process remains disabled on the new owner instance.


See Also:


Oracle Streams and Transparent Data Encryption

The following topics describe how Oracle Streams works with Transparent Data Encryption:


See Also:

Oracle Database Advanced Security Administrator's Guide for information about transparent data encryption

Capture Processes and Transparent Data Encryption

A local capture process can capture changes to columns that have been encrypted using transparent data encryption. A downstream capture process can capture changes to columns that have been encrypted only if the downstream database shares a wallet with the source database. A wallet can be shared through a network file system (NFS), or it can be copied from one computer system to another manually. When a wallet is shared with a downstream database, ensure that the ENCRYPTION_WALLET_LOCATION parameter in the sqlnet.ora file at the downstream database specifies the wallet location.

If you copy a wallet to a downstream database, then ensure that you copy the wallet from the source database to the downstream database whenever the wallet at the source database changes. Do not perform any operations on the wallet at the downstream database, such as changing the encryption key for a replicated table.

Encrypted columns in row logical change records (row LCRs) captured by a local or downstream capture process are decrypted when the row LCRs are staged in a buffered queue. If row LCRs spill to disk in a database with transparent data encryption enabled, then Oracle Streams transparently encrypts any encrypted columns while the row LCRs are stored on disk.


Note:

A capture process supports encrypted columns only if the redo logs used by the capture process were generated by a database with a compatibility level of 11.0.0 or higher. The compatibility level is controlled by the COMPATIBLE initialization parameter.
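To check the compatibility level of the database that generates the redo logs, you can query the COMPATIBLE initialization parameter:

```sql
SELECT VALUE
  FROM V$PARAMETER
  WHERE NAME = 'compatible';
```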

Synchronous Capture and Transparent Data Encryption

A synchronous capture can capture changes to columns that have been encrypted using transparent data encryption. Encrypted columns in row logical change records (row LCRs) captured by a synchronous capture remain encrypted when the row LCRs are staged in a persistent queue.

Explicit Capture and Transparent Data Encryption

You can use explicit capture to construct and enqueue row logical change records (row LCRs) for columns that are encrypted in database tables. However, you cannot specify that columns are encrypted when you construct the row LCRs. Therefore, when explicitly captured row LCRs are staged in a queue, all of the columns in the row LCRs are decrypted.

Queues and Transparent Data Encryption

A persistent queue can store row logical change records (row LCRs) captured by a synchronous capture, and these row LCRs can contain changes to columns that were encrypted using transparent data encryption. The row LCRs remain encrypted while they are stored in the persistent queue. Explicitly captured row LCRs cannot contain encrypted columns.

A buffered queue can store row LCRs that contain changes captured by a capture process, and these row LCRs can contain changes to columns that were encrypted using transparent data encryption. When row LCRs with encrypted columns are stored in buffered queues, the columns are decrypted. When row LCRs spill to disk, Oracle Streams transparently encrypts any encrypted columns while the row LCRs are stored on disk.


Note:

For Oracle Streams to encrypt columns transparently, the encryption master key must be stored in the wallet on the local database, and the wallet must be open. The following statements set the master key and open the wallet:
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY key-password;
 
ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY key-password;

Propagations and Transparent Data Encryption

A propagation can propagate row logical change records (row LCRs) that contain changes to columns that were encrypted using transparent data encryption. When a propagation propagates row LCRs with encrypted columns, the encrypted columns are decrypted while the row LCRs are transferred over the network. You can use the features of Oracle Advanced Security to encrypt data transfers over the network if necessary.


See Also:


Apply Processes and Transparent Data Encryption

An apply process can dequeue and process implicitly captured row logical change records (row LCRs) that contain columns encrypted using transparent data encryption. When row LCRs with encrypted columns are dequeued by an apply process, the encrypted columns are decrypted. These row LCRs with decrypted columns can be sent to an apply handler for custom processing, or they can be applied directly. When row LCRs are applied, and the modified table contains encrypted columns, any changes to encrypted columns are encrypted when they are applied.

When row LCRs contain encrypted columns, but the corresponding columns at the destination database are not encrypted, the preserve_encryption apply process parameter controls apply process behavior:

  • If the preserve_encryption parameter is set to Y, then the apply process raises an error when row LCRs contain encrypted columns, but the corresponding columns at the destination database are not encrypted. When an error is raised, the row LCR is not applied, and all of the row LCRs in the transaction are moved to the error queue.

  • If the preserve_encryption parameter is set to N, then the apply process applies the row changes when row LCRs contain encrypted columns, but the corresponding columns at the destination database are not encrypted.
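For example, the following procedure call (with a hypothetical apply process name) sets preserve_encryption to N so that such row changes are applied to the unencrypted destination columns:

```sql
-- Hypothetical apply process name
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'strm01_apply',
    parameter  => 'preserve_encryption',
    value      => 'N');
END;
/
```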

When an apply process moves implicitly captured row LCRs with encrypted columns to the error queue, the encrypted columns are encrypted when the row LCRs are in the error queue. Row LCRs are implicitly captured using capture processes and synchronous captures.

Messaging Clients and Transparent Data Encryption

A messaging client can dequeue implicitly captured row LCRs that contain columns encrypted using transparent data encryption. When row LCRs with encrypted columns are dequeued by a messaging client, the encrypted columns are decrypted.

Manual Dequeue and Transparent Data Encryption

A user or application can dequeue implicitly captured row LCRs that contain columns encrypted using transparent data encryption. When row LCRs with encrypted columns are dequeued, the encrypted columns are decrypted.

Oracle Streams and Flashback Data Archive

Oracle Streams supports tables in a flashback data archive. Capture processes can capture data manipulation language (DML) and data definition language (DDL) changes made to these tables. Synchronous captures can capture DML changes made to these tables. Apply processes can apply changes encapsulated in logical change records (LCRs) to these tables.

Oracle Streams capture processes and apply processes also support the following DDL statements:


Note:

Oracle Streams does not capture or apply changes made to internal tables used by a flashback data archive.

Oracle Streams and Recovery Manager (RMAN)

The following topics describe how Oracle Streams works with Recovery Manager (RMAN):

RMAN and Instantiation

You can use RMAN to instantiate database objects during the configuration of an Oracle Streams replication environment. The RMAN DUPLICATE and CONVERT DATABASE commands can instantiate an entire database, and the RMAN TRANSPORT TABLESPACE command can instantiate a tablespace or set of tablespaces.


See Also:

Oracle Streams Replication Administrator's Guide for information about using RMAN for instantiation

RMAN and Archived Redo Log Files Required by a Capture Process

Some Recovery Manager (RMAN) deletion policies and commands delete archived redo log files. If one of these RMAN policies or commands is used on a database that generates redo log files for one or more capture processes, then ensure that the RMAN commands do not delete archived redo log files that are required by a capture process.

The following sections describe the behavior of RMAN deletion policies and commands for local capture processes and downstream capture processes.


See Also:


RMAN and Local Capture Processes

When a local capture process is configured, RMAN does not delete archived redo log files that are required by the local capture process unless there is space pressure in the fast recovery area. Specifically, RMAN does not delete archived redo log files that contain changes with system change number (SCN) values that are equal to or greater than the required checkpoint SCN for the local capture process. This is the default RMAN behavior for all RMAN deletion policies and DELETE commands, including DELETE ARCHIVELOG and DELETE OBSOLETE.
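You can display the required checkpoint SCN for each capture process with the following query; archived redo log files that contain changes with SCN values at or above this SCN are still required by the capture process:

```sql
SELECT CAPTURE_NAME, REQUIRED_CHECKPOINT_SCN
  FROM DBA_CAPTURE;
```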

When there is not enough space in the fast recovery area to write a new log file, RMAN automatically deletes one or more archived redo log files. Oracle Database writes warnings to the alert log when RMAN automatically deletes an archived redo log file that is required by a local capture process.

When backups of the archived redo log files are taken on the local capture process database, Oracle recommends the following RMAN deletion policy:

CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP integer TIMES 
   TO DEVICE TYPE deviceSpecifier;

This deletion policy requires that a log file be backed up integer times before it is considered for deletion.

When no backups of the archived redo log files are taken on the local capture process database, no specific deletion policy is recommended. By default, RMAN does not delete archived redo log files that are required by a local capture process.

RMAN and Downstream Capture Processes

When a downstream capture process captures database changes made at a source database, ensure that no RMAN deletion policy or command deletes an archived redo log file until after it is transferred from the source database to the downstream capture process database.

The following are considerations for specific RMAN deletion policies and commands that delete archived redo log files:

  • The RMAN command CONFIGURE ARCHIVELOG DELETION POLICY sets a deletion policy that determines when archived redo log files in the fast recovery area are eligible for deletion. The deletion policy also applies to all RMAN DELETE commands, including DELETE ARCHIVELOG and DELETE OBSOLETE.

    The following settings determine the behavior at the source database:

    • A deletion policy set TO SHIPPED TO STANDBY does not delete a log file until after it is transferred to a downstream capture process database that requires the file. These log files might or might not have been processed by the downstream capture process. Automatic deletion occurs when there is not enough space in the fast recovery area to write a new log file.

    • A deletion policy set TO APPLIED ON STANDBY does not delete a log file until after it is transferred to a downstream capture process database that requires the file and the source database marks the log file as applied. The source database marks a log file as applied when the minimum required checkpoint SCN of all of the downstream capture processes for the source database is greater than the highest SCN in the log file.

    • A deletion policy set to BACKED UP integer TIMES TO DEVICE TYPE requires that a log file be backed up integer times before it is considered for deletion. A log file can be deleted even if the log file has not been processed by a downstream capture process that requires it.

    • A deletion policy set TO NONE means that a log file can be deleted when there is space pressure on the fast recovery area, even if the log file has not been processed by a downstream capture process that requires it.

  • The RMAN command DELETE ARCHIVELOG deletes archived redo log files that meet all of the following conditions:

    • The log files satisfy the condition specified in the DELETE ARCHIVELOG command.

    • The log files can be deleted according to the CONFIGURE ARCHIVELOG DELETION POLICY. For example, if the policy is set TO SHIPPED TO STANDBY, then this command does not delete a log file until after it is transferred to any downstream capture process database that requires it.

    This behavior applies when the database is mounted or open.

    If archived redo log files are not deleted because they contain changes required by a downstream capture process, then RMAN displays a warning message about skipping the delete operation for these files.

  • The RMAN command DELETE OBSOLETE permanently purges the archived redo log files that meet all of the following conditions:

    • The log files are obsolete according to the retention policy.

    • The log files can be deleted according to the CONFIGURE ARCHIVELOG DELETION POLICY. For example, if the policy is set TO SHIPPED TO STANDBY, then this command does not delete a log file until after it is transferred to any downstream capture process database that requires it.

    This behavior applies when the database is mounted or open.

  • The RMAN command BACKUP ARCHIVELOG ALL DELETE INPUT copies the archived redo log files and deletes the original files after completing the backup. This command does not delete the log file until after it is transferred to a downstream capture process database when the following conditions are met:

    • The database is mounted or open.

    • The log file is required by a downstream capture process.

    • The deletion policy is set TO SHIPPED TO STANDBY.

    If archived redo log files are not deleted because they contain changes required by a downstream capture process, then RMAN displays a warning message about skipping the delete operation for these files.

Oracle recommends one of the following RMAN deletion policies at the source database for a downstream capture process:

  • When backups of the archived redo log files are taken on the source database, set the deletion policy to the following:

    CONFIGURE ARCHIVELOG DELETION POLICY TO SHIPPED TO STANDBY 
       BACKED UP integer TIMES TO DEVICE TYPE deviceSpecifier;
    
  • When no backups of the archived redo log files are taken on the source database, set the deletion policy to the following:

    CONFIGURE ARCHIVELOG DELETION POLICY TO SHIPPED TO STANDBY;
    

Note:

At a downstream capture process database, archived redo log files transferred from a source database are not managed by RMAN.

The Recovery Catalog and Oracle Streams

Oracle Streams supports replicating a recovery catalog in a one-way replication environment. Bi-directional replication of a recovery catalog is not supported.


See Also:


Oracle Streams and Distributed Transactions

You can perform distributed transactions using either of the following methods:

Oracle Streams replicates to the destination database the changes made at the source database during a distributed transaction performed using either of these two methods. An apply process at the destination database applies the changes in a transaction after the transaction has committed.

However, the distributed transaction state is not replicated or sent. The destination database or client application does not inherit the in-doubt or prepared state of such a transaction. Also, Oracle Streams does not replicate or send the changes using the same global transaction identifier used at the source database for XA transactions.

XA transactions can be performed in two ways:

Oracle Streams supports replication of changes made by loosely coupled XA branches regardless of the COMPATIBLE initialization parameter value. Oracle Streams supports replication of changes made by tightly coupled branches on an Oracle RAC source database only if the COMPATIBLE initialization parameter is set to 11.2.0 or higher.


See Also:


Oracle Streams and Oracle Database Vault

Oracle Database Vault restricts access to specific areas in an Oracle database from any user, including users who have administrative access. If you are using Oracle Streams in an Oracle Database Vault environment, then the following privileges and roles are required:

To authorize an apply user for a realm, run the DBMS_MACADM.ADD_AUTH_TO_REALM procedure and specify the realm and the apply user. For example, to authorize apply user strmadmin for the sales realm, run the following procedure:

BEGIN
  DBMS_MACADM.ADD_AUTH_TO_REALM(
    realm_name => 'sales',
    grantee    => 'strmadmin');
END;
/

In addition, the user who performs the following actions must be granted the BECOME USER system privilege:

Granting the BECOME USER system privilege to the user who performs these actions is not required if Oracle Database Vault is not installed. You can revoke the BECOME USER system privilege from the user after completing one of these actions, if necessary.

See Oracle Database Vault Administrator's Guide.

Monitoring Rules

27 Monitoring Rules

The following topics describe monitoring rules, rule sets, and evaluation contexts:


Note:

The Oracle Streams tool in Oracle Enterprise Manager is also an excellent way to monitor an Oracle Streams environment. See the online Help for the Oracle Streams tool for more information.

Displaying All Rules Used by All Oracle Streams Clients

Oracle Streams rules are created using the DBMS_STREAMS_ADM package or the Oracle Streams tool in Oracle Enterprise Manager. Oracle Streams rules in the rule sets for an Oracle Streams client determine the behavior of the Oracle Streams client. Oracle Streams clients include capture processes, propagations, apply processes, and messaging clients. The rule sets for an Oracle Streams client can also contain rules created using the DBMS_RULE_ADM package, and these rules also determine the behavior of the Oracle Streams client.

For example, if a rule in the positive rule set for a capture process evaluates to TRUE for DML changes to the hr.employees table, then the capture process captures DML changes to this table. However, if a rule in the negative rule set for a capture process evaluates to TRUE for DML changes to the hr.employees table, then the capture process discards DML changes to this table.

You query the following data dictionary views to display all rules in the rule sets for Oracle Streams clients, including Oracle Streams rules and rules created using the DBMS_RULE_ADM package:

In addition, these two views display the current rule condition for each rule and whether the rule condition has been modified.

The query in this section displays the following information about all of the rules used by Oracle Streams clients in a database:

Run the following query to display this information:

COLUMN STREAMS_NAME HEADING 'Oracle|Streams|Name' FORMAT A14
COLUMN STREAMS_TYPE HEADING 'Oracle|Streams|Type' FORMAT A11
COLUMN RULE_NAME HEADING 'Rule|Name' FORMAT A12
COLUMN RULE_SET_TYPE HEADING 'Rule Set|Type' FORMAT A8
COLUMN STREAMS_RULE_TYPE HEADING 'Oracle|Streams|Rule|Level' FORMAT A7
COLUMN SCHEMA_NAME HEADING 'Schema|Name' FORMAT A6
COLUMN OBJECT_NAME HEADING 'Object|Name' FORMAT A11
COLUMN RULE_TYPE HEADING 'Rule|Type' FORMAT A4

SELECT STREAMS_NAME, 
       STREAMS_TYPE,
       RULE_NAME,
       RULE_SET_TYPE,
       STREAMS_RULE_TYPE,
       SCHEMA_NAME,
       OBJECT_NAME,
       RULE_TYPE
  FROM DBA_STREAMS_RULES;

Your output looks similar to the following:

                                                 Oracle
Oracle         Oracle                            Streams
Streams        Streams     Rule         Rule Set Rule    Schema Object      Rule
Name           Type        Name         Type     Level   Name   Name        Type
-------------- ----------- ------------ -------- ------- ------ ----------- ----
STRM01_CAPTURE CAPTURE     JOBS4        POSITIVE TABLE   HR     JOBS        DML
STRM01_CAPTURE CAPTURE     JOBS5        POSITIVE TABLE   HR     JOBS        DDL
DBS1_TO_DBS2   PROPAGATION HR18         POSITIVE SCHEMA  HR                 DDL
DBS1_TO_DBS2   PROPAGATION HR17         POSITIVE SCHEMA  HR                 DML
APPLY          APPLY       HR20         POSITIVE SCHEMA  HR                 DML
APPLY          APPLY       JOB_HISTORY2 NEGATIVE TABLE   HR     JOB_HISTORY DML
OE             DEQUEUE     RULE$_28     POSITIVE

This output provides the following information about the rules used by Oracle Streams clients in the database:

The ALL_STREAMS_RULES and DBA_STREAMS_RULES views also contain information about the rule sets used by an Oracle Streams client, the current and original rule condition for Oracle Streams rules, whether the rule condition has been changed, the subsetting operation and DML condition for each Oracle Streams subset rule, the source database specified for each Oracle Streams rule, and information about the message type and message variable for Oracle Streams messaging rules.

The following data dictionary views also display Oracle Streams rules:

These views display Oracle Streams rules only. They do not display any manual modifications to these rules made by the DBMS_RULE_ADM package, and they do not display rules created using the DBMS_RULE_ADM package. These views can display the original rule condition for each rule only. They do not display the current rule condition for a rule if the rule condition was modified after the rule was created.

Displaying the Oracle Streams Rules Used by a Specific Oracle Streams Client

To determine which rules are in a rule set used by a particular Oracle Streams client, you can query the DBA_STREAMS_RULES data dictionary view. For example, suppose a database is running an apply process named strm01_apply.

The following sections describe how to determine which rules are in a rule set used by a particular Oracle Streams client:

Displaying the Rules in the Positive Rule Set for an Oracle Streams Client

The following query displays all of the rules in the positive rule set for an apply process named strm01_apply:

COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A10
COLUMN RULE_NAME HEADING 'Rule|Name' FORMAT A12
COLUMN STREAMS_RULE_TYPE HEADING 'Oracle Streams|Rule|Level' FORMAT A7
COLUMN SCHEMA_NAME HEADING 'Schema|Name' FORMAT A6
COLUMN OBJECT_NAME HEADING 'Object|Name' FORMAT A11
COLUMN RULE_TYPE HEADING 'Rule|Type' FORMAT A4
COLUMN SOURCE_DATABASE HEADING 'Source' FORMAT A10
COLUMN INCLUDE_TAGGED_LCR HEADING 'Apply|Tagged|LCRs?' FORMAT A9

SELECT RULE_OWNER,
       RULE_NAME,
       STREAMS_RULE_TYPE,
       SCHEMA_NAME,
       OBJECT_NAME,
       RULE_TYPE,
       SOURCE_DATABASE,
       INCLUDE_TAGGED_LCR
  FROM DBA_STREAMS_RULES
  WHERE STREAMS_NAME  = 'STRM01_APPLY' AND
        RULE_SET_TYPE = 'POSITIVE';

If this query returns any rows, then the apply process applies LCRs containing changes that evaluate to TRUE for the rules.

Your output looks similar to the following:

                           Oracle Streams                                    Apply
           Rule            Rule    Schema Object      Rule            Tagged
Rule Owner Name            Level   Name   Name        Type Source     LCRs?
---------- --------------- ------- ------ ----------- ---- ---------- ---------
STRMADMIN  HR20            SCHEMA  HR                 DML   DBS1.EXAM NO
                                                            PLE.COM
STRMADMIN  HR21            SCHEMA  HR                 DDL   DBS1.EXAM NO
                                                            PLE.COM

Assuming the rule conditions for the Oracle Streams rules returned by this query have not been modified, these results show that the apply process applies LCRs containing DML changes and DDL changes to the hr schema and that the LCRs originated at the dbs1.example.com database. The rules in the positive rule set that instruct the apply process to apply these LCRs are owned by the strmadmin user and are named hr20 and hr21. Also, the apply process applies an LCR that satisfies one of these rules only if the tag in the LCR is NULL.

If the rule condition for an Oracle Streams rule has been modified, then you must check the current rule condition to determine the effect of the rule on an Oracle Streams client. Oracle Streams rules whose rule condition has been modified have NO for the SAME_RULE_CONDITION column.

Displaying the Rules in the Negative Rule Set for an Oracle Streams Client

The following query displays all of the rules in the negative rule set for an apply process named strm01_apply:

COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A10
COLUMN RULE_NAME HEADING 'Rule|Name' FORMAT A15
COLUMN STREAMS_RULE_TYPE HEADING 'Oracle Streams|Rule|Level' FORMAT A7
COLUMN SCHEMA_NAME HEADING 'Schema|Name' FORMAT A6
COLUMN OBJECT_NAME HEADING 'Object|Name' FORMAT A11
COLUMN RULE_TYPE HEADING 'Rule|Type' FORMAT A4
COLUMN SOURCE_DATABASE HEADING 'Source' FORMAT A10
COLUMN INCLUDE_TAGGED_LCR HEADING 'Apply|Tagged|LCRs?' FORMAT A9

SELECT RULE_OWNER,
       RULE_NAME,
       STREAMS_RULE_TYPE,
       SCHEMA_NAME,
       OBJECT_NAME,
       RULE_TYPE,
       SOURCE_DATABASE,
       INCLUDE_TAGGED_LCR
  FROM DBA_STREAMS_RULES
  WHERE STREAMS_NAME  = 'STRM01_APPLY' AND
        RULE_SET_TYPE = 'NEGATIVE';

If this query returns any rows, then the apply process discards LCRs containing changes that evaluate to TRUE for the rules.

Your output looks similar to the following:

                           Oracle Streams                                    Apply
           Rule            Rule    Schema Object      Rule            Tagged
Rule Owner Name            Level   Name   Name        Type Source     LCRs?
---------- --------------- ------- ------ ----------- ---- ---------- ---------
STRMADMIN  JOB_HISTORY22   TABLE   HR     JOB_HISTORY DML  DBS1.EXAMP YES
                                                           LE.COM
STRMADMIN  JOB_HISTORY23   TABLE   HR     JOB_HISTORY DDL  DBS1.EXAMP YES
                                                           LE.COM

Assuming the rule conditions for the Oracle Streams rules returned by this query have not been modified, these results show that the apply process discards LCRs containing DML changes and DDL changes to the hr.job_history table and that the LCRs originated at the dbs1.example.com database. The rules in the negative rule set that instruct the apply process to discard these LCRs are owned by the strmadmin user and are named job_history22 and job_history23. Also, the apply process discards an LCR that satisfies one of these rules regardless of the value of the tag in the LCR.

If the rule condition for an Oracle Streams rule has been modified, then you must check the current rule condition to determine the effect of the rule on an Oracle Streams client. Oracle Streams rules whose rule condition has been modified have NO for the SAME_RULE_CONDITION column.

Displaying the Current Condition for a Rule

If you know the name of a rule, then you can display its rule condition. For example, consider the rule returned by the query in "Displaying the Oracle Streams Rules Used by a Specific Oracle Streams Client". The name of the rule is hr1, and you can display its condition by running the following query:

SET LONG  8000
SET PAGES 8000
SELECT RULE_CONDITION "Current Rule Condition"
  FROM DBA_STREAMS_RULES 
  WHERE RULE_NAME  = 'HR1' AND
        RULE_OWNER = 'STRMADMIN';

Your output looks similar to the following:

Current Rule Condition
--------------------------------------------------------------------------------
((((:dml.get_object_owner() = 'HR') and :dml.get_source_database_name() = 'DA.EX
AMPLE.COM' )) and (:dml.get_compatible() <= dbms_streams.compatible_11_2))

Displaying Modified Rule Conditions for Oracle Streams Rules

It is possible to modify the rule condition of an Oracle Streams rule. These modifications can change the behavior of the Oracle Streams clients using the Oracle Streams rule. In addition, some modifications can degrade rule evaluation performance.

The following query displays the rule name, the original rule condition, and the current rule condition for each Oracle Streams rule whose condition has been modified:

COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A12
COLUMN ORIGINAL_RULE_CONDITION HEADING 'Original Rule Condition' FORMAT A33
COLUMN RULE_CONDITION HEADING 'Current Rule Condition' FORMAT A33

SET LONG  8000
SET PAGES 8000
SELECT RULE_NAME, ORIGINAL_RULE_CONDITION, RULE_CONDITION
  FROM DBA_STREAMS_RULES 
  WHERE SAME_RULE_CONDITION = 'NO';

Your output looks similar to the following:

Rule Name    Original Rule Condition           Current Rule Condition
------------ --------------------------------- ---------------------------------
HR20         ((:dml.get_object_owner() = 'HR') ((:dml.get_object_owner() = 'HR')
              and :dml.is_null_tag() = 'Y' )    and :dml.is_null_tag() = 'Y' and
                                                :dml.get_object_name() != 'JOB_H
                                               ISTORY')

In this example, the output shows that the condition of the hr20 rule has been modified. Originally, this schema rule evaluated to TRUE for all changes to the hr schema. The current modified condition for this rule evaluates to TRUE for all changes to the hr schema, except for DML changes to the hr.job_history table.


Note:

The query in this section applies only to Oracle Streams rules. It does not apply to rules created using the DBMS_RULE_ADM package because these rules always show NULL for the ORIGINAL_RULE_CONDITION column and NULL for the SAME_RULE_CONDITION column.

Displaying the Evaluation Context for Each Rule Set

The following query displays the default evaluation context for each rule set in a database:

COLUMN RULE_SET_OWNER HEADING 'Rule Set|Owner' FORMAT A10
COLUMN RULE_SET_NAME HEADING 'Rule Set Name' FORMAT A20
COLUMN RULE_SET_EVAL_CONTEXT_OWNER HEADING 'Eval Context|Owner' FORMAT A12
COLUMN RULE_SET_EVAL_CONTEXT_NAME HEADING 'Eval Context Name' FORMAT A30

SELECT RULE_SET_OWNER, 
       RULE_SET_NAME, 
       RULE_SET_EVAL_CONTEXT_OWNER,
       RULE_SET_EVAL_CONTEXT_NAME
  FROM DBA_RULE_SETS;

Your output looks similar to the following:

Rule Set                        Eval Context
Owner      Rule Set Name        Owner        Eval Context Name
---------- -------------------- ------------ ------------------------------
STRMADMIN  RULESET$_2           SYS          STREAMS$_EVALUATION_CONTEXT
STRMADMIN  STRM02_QUEUE_R       STRMADMIN    AQ$_STRM02_QUEUE_TABLE_V
STRMADMIN  APPLY_OE_RS          STRMADMIN    OE_EVAL_CONTEXT
STRMADMIN  OE_QUEUE_R           STRMADMIN    AQ$_OE_QUEUE_TABLE_V
STRMADMIN  AQ$_1_RE             STRMADMIN    AQ$_OE_QUEUE_TABLE_V
SUPPORT    RS                   SUPPORT      EVALCTX
OE         NOTIFICATION_QUEUE_R OE           AQ$_NOTIFICATION_QUEUE_TABLE_V

Displaying Information About the Tables Used by an Evaluation Context

The following query displays information about the tables used by an evaluation context named evalctx, which is owned by the support user:

COLUMN TABLE_ALIAS HEADING 'Table Alias' FORMAT A20
COLUMN TABLE_NAME HEADING 'Table Name' FORMAT A40

SELECT TABLE_ALIAS,
       TABLE_NAME
  FROM DBA_EVALUATION_CONTEXT_TABLES
  WHERE EVALUATION_CONTEXT_OWNER = 'SUPPORT' AND
        EVALUATION_CONTEXT_NAME = 'EVALCTX';

Your output looks similar to the following:

Table Alias          Table Name
-------------------- ----------------------------------------
PROB                 problems

Displaying Information About the Variables Used in an Evaluation Context

The following query displays information about the variables used by an evaluation context named evalctx, which is owned by the support user:

COLUMN VARIABLE_NAME HEADING 'Variable Name' FORMAT A15
COLUMN VARIABLE_TYPE HEADING 'Variable Type' FORMAT A15
COLUMN VARIABLE_VALUE_FUNCTION HEADING 'Variable Value|Function' FORMAT A20
COLUMN VARIABLE_METHOD_FUNCTION HEADING 'Variable Method|Function' FORMAT A20

SELECT VARIABLE_NAME,
       VARIABLE_TYPE,
       VARIABLE_VALUE_FUNCTION,
       VARIABLE_METHOD_FUNCTION
  FROM DBA_EVALUATION_CONTEXT_VARS
  WHERE EVALUATION_CONTEXT_OWNER = 'SUPPORT' AND
        EVALUATION_CONTEXT_NAME = 'EVALCTX';

Your output looks similar to the following:

                                Variable Value       Variable Method
Variable Name   Variable Type   Function             Function
--------------- --------------- -------------------- --------------------
CURRENT_TIME    DATE            timefunc

Displaying All of the Rules in a Rule Set

The query in this section displays the following information about all of the rules in a rule set:

  • The owner of the rule.

  • The name of the rule.

  • The evaluation context for the rule, if any. If a rule does not have an evaluation context, and no evaluation context is specified in the ADD_RULE procedure when the rule is added to a rule set, then it inherits the evaluation context of the rule set.

  • The evaluation context owner, if the rule has an evaluation context.

For example, to display this information for each rule in a rule set named oe_queue_r that is owned by the user strmadmin, run the following query:

COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A10
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A20
COLUMN RULE_EVALUATION_CONTEXT_NAME HEADING 'Eval Context Name' FORMAT A27
COLUMN RULE_EVALUATION_CONTEXT_OWNER HEADING 'Eval Context|Owner' FORMAT A11

SELECT R.RULE_OWNER, 
       R.RULE_NAME, 
       R.RULE_EVALUATION_CONTEXT_NAME,
       R.RULE_EVALUATION_CONTEXT_OWNER
  FROM DBA_RULES R, DBA_RULE_SET_RULES RS 
  WHERE RS.RULE_SET_OWNER = 'STRMADMIN' AND 
        RS.RULE_SET_NAME = 'OE_QUEUE_R' AND 
  RS.RULE_NAME = R.RULE_NAME AND 
  RS.RULE_OWNER = R.RULE_OWNER;

Your output looks similar to the following:

                                                            Eval Contex
Rule Owner Rule Name            Eval Context Name           Owner
---------- -------------------- --------------------------- -----------
STRMADMIN  HR1                  STREAMS$_EVALUATION_CONTEXT SYS
STRMADMIN  APPLY_LCRS           STREAMS$_EVALUATION_CONTEXT SYS
STRMADMIN  OE_QUEUE$3
STRMADMIN  APPLY_ACTION

Displaying the Condition for Each Rule in a Rule Set

The following query displays the condition for each rule in a rule set named hr_queue_r that is owned by the user strmadmin:

SET LONGCHUNKSIZE 4000
SET LONG 4000
COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A15
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A15
COLUMN RULE_CONDITION HEADING 'Rule Condition' FORMAT A45

SELECT R.RULE_OWNER, 
       R.RULE_NAME, 
       R.RULE_CONDITION
  FROM DBA_RULES R, DBA_RULE_SET_RULES RS 
  WHERE RS.RULE_SET_OWNER = 'STRMADMIN' AND 
        RS.RULE_SET_NAME = 'HR_QUEUE_R' AND 
  RS.RULE_NAME = R.RULE_NAME AND 
  RS.RULE_OWNER = R.RULE_OWNER;

Your output looks similar to the following:

Rule Owner      Rule Name       Rule Condition
--------------- --------------- ---------------------------------------------
STRMADMIN       APPLY_ACTION     hr.get_hr_action(tab.user_data) = 'APPLY'
STRMADMIN       APPLY_LCRS      :dml.get_object_owner() = 'HR' AND  (:dml.get
                                _object_name() = 'DEPARTMENTS' OR 
                                :dml.get_object_name() = 'EMPLOYEES')

STRMADMIN       HR_QUEUE$3      hr.get_hr_action(tab.user_data) != 'APPLY'

Listing Each Rule that Contains a Specified Pattern in Its Condition

To list each rule in a database that contains a specified pattern in its condition, you can query the DBA_RULES data dictionary view and use the DBMS_LOB.INSTR function to search for the pattern in the rule conditions. For example, the following query lists each rule that contains the pattern 'HR' in its condition:

COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A30
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A30

SELECT RULE_OWNER, RULE_NAME FROM DBA_RULES 
  WHERE DBMS_LOB.INSTR(RULE_CONDITION, 'HR', 1, 1) > 0;

Your output looks similar to the following:

Rule Owner                     Rule Name
------------------------------ ------------------------------
STRMADMIN                      DEPARTMENTS4
STRMADMIN                      DEPARTMENTS5
STRMADMIN                      DEPARTMENTS6

Displaying Aggregate Statistics for All Rule Set Evaluations

You can query the V$RULE_SET_AGGREGATE_STATS dynamic performance view to display statistics for all rule set evaluations since the database instance last started.

The query in this section contains the following information about rule set evaluations:

  • The number of rule set evaluations.

  • The number of rule set evaluations that were instructed to stop on the first hit.

  • The number of rule set evaluations that were instructed to evaluate only simple rules.

  • The number of times a rule set was evaluated without issuing any SQL. Generally, issuing SQL to evaluate rules is more expensive than evaluating rules without issuing SQL.

  • The number of centiseconds of CPU time used for rule set evaluation.

  • The number of centiseconds spent on rule set evaluation.

  • The number of SQL executions issued to evaluate a rule in a rule set.

  • The number of rule conditions processed during rule set evaluation.

  • The number of TRUE rules returned to the rules engine clients.

  • The number of MAYBE rules returned to the rules engine clients.

  • The number of times the following types of functions were called during rule set evaluation: variable value function, variable method function, and evaluation function.

Run the following query to display this information:

COLUMN NAME HEADING 'Name of Statistic' FORMAT A55
COLUMN VALUE HEADING 'Value' FORMAT 999999999

SELECT NAME, VALUE FROM V$RULE_SET_AGGREGATE_STATS;

Your output looks similar to the following:

Name of Statistic                                            Value
------------------------------------------------------- ----------
rule set evaluations (all)                                    5584
rule set evaluations (first_hit)                              5584
rule set evaluations (simple_rules_only)                      3675
rule set evaluations (SQL free)                               5584
rule set evaluation time (CPU)                                 179
rule set evaluation time (elapsed)                            1053
rule set SQL executions                                          0
rule set conditions processed                                11551
rule set true rules                                             10
rule set maybe rules                                           328
rule set user function calls (variable value function)         182
rule set user function calls (variable method function)      12794
rule set user function calls (evaluation function)            3857

Note:

A centisecond is one-hundredth of a second. So, for example, this output shows 1.79 seconds of CPU time and 10.53 seconds of elapsed time.
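You can also have the query perform the centisecond-to-second conversion for you. The following variation on the query above divides the two time statistics by 100 to report seconds directly:

```sql
SELECT NAME, (VALUE/100) SECONDS
  FROM V$RULE_SET_AGGREGATE_STATS
  WHERE NAME LIKE 'rule set evaluation time%';
```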

Displaying Information About Evaluations for Each Rule Set

You can query the V$RULE_SET dynamic performance view to display information about evaluations for each rule set since the database instance last started. The query in this section contains the following information about each rule set in a database:

  • The owner of the rule set.

  • The name of the rule set.

  • The total number of evaluations of the rule set since the database instance last started.

  • The total number of times SQL was executed to evaluate rules since the database instance last started. Generally, issuing SQL to evaluate rules is more expensive than evaluating rules without issuing SQL.

  • The total number of evaluations on the rule set that did not issue SQL to evaluate rules since the database instance last started.

  • The total number of TRUE rules returned to the rules engine clients using the rule set since the database instance last started.

  • The total number of MAYBE rules returned to the rules engine clients using the rule set since the database instance last started.

Run the following query to display this information for each rule set in the database:

COLUMN OWNER HEADING 'Rule Set|Owner' FORMAT A9
COLUMN NAME HEADING 'Rule Set|Name' FORMAT A11
COLUMN EVALUATIONS HEADING 'Total|Evaluations' FORMAT 99999999
COLUMN SQL_EXECUTIONS HEADING 'SQL|Executions' FORMAT 99999999
COLUMN SQL_FREE_EVALUATIONS HEADING 'SQL Free|Evaluations' FORMAT 99999999
COLUMN TRUE_RULES HEADING 'True|Rules' FORMAT 999999999
COLUMN MAYBE_RULES HEADING 'Maybe|Rules' FORMAT 99999999

SELECT OWNER, 
       NAME, 
       EVALUATIONS,
       SQL_EXECUTIONS,
       SQL_FREE_EVALUATIONS,
       TRUE_RULES,
       MAYBE_RULES
  FROM V$RULE_SET;

Your output looks similar to the following:

Rule Set  Rule Set          Total        SQL    SQL Free       True     Maybe
Owner     Name        Evaluations Executions Evaluations      Rules     Rules
--------- ----------- ----------- ---------- ----------- ---------- ---------
SYS       ALERT_QUE_R           3          0           0          2         0
STRMADMIN RULESET$_4           86          0           0         43         1
STRMADMIN RULESET$_11         458          0           0         11         0
STRMADMIN RULESET$_9           87          0           0          1        42
STRMADMIN RULESET$_7           87          0           0         44         1

Note:

Querying the V$RULE_SET view can have a negative impact on performance if a database has a large library cache.

Determining the Resources Used by Evaluation of Each Rule Set

You can query the V$RULE_SET dynamic performance view to determine the resources used by evaluation of a rule set since the database instance last started. If a rule set was evaluated more than one time since the database instance last started, then some statistics are cumulative, including statistics for the amount of CPU time, evaluation time, and shared memory bytes used.

The query in this section contains the following information about each rule set in a database:

  • The owner of the rule set

  • The name of the rule set

  • The total number of seconds of CPU time used to evaluate the rule set since the database instance last started

  • The total number of seconds used to evaluate the rule set since the database instance last started

  • The total number of shared memory bytes used to evaluate the rule set since the database instance last started

Run the following query to display this information for each rule set in the database:

COLUMN OWNER HEADING 'Rule Set|Owner' FORMAT A15
COLUMN NAME HEADING 'Rule Set Name' FORMAT A15
COLUMN CPU_SECONDS HEADING 'Seconds|of CPU|Time' FORMAT 999999.999
COLUMN ELAPSED_SECONDS HEADING 'Seconds of|Evaluation|Time' FORMAT 999999.999
COLUMN SHARABLE_MEM HEADING 'Bytes|of Shared|Memory' FORMAT 999999999

SELECT OWNER, 
       NAME, 
       (CPU_TIME/100) CPU_SECONDS,
       (ELAPSED_TIME/100) ELAPSED_SECONDS,
       SHARABLE_MEM
  FROM V$RULE_SET;

Your output looks similar to the following:

                                    Seconds  Seconds of      Bytes
Rule Set                             of CPU  Evaluation  of Shared
Owner           Rule Set Name          Time        Time     Memory
--------------- --------------- ----------- ----------- ----------
SYS             ALERT_QUE_R            .230        .490      25120
STRMADMIN       RULESET$_4             .060        .970      25097
STRMADMIN       RULESET$_11            .040        .030      25098
STRMADMIN       RULESET$_9             .220       3.040      25505
STRMADMIN       RULESET$_7             .040        .380      21313

Note:

Querying the V$RULE_SET view can have a negative impact on performance if a database has a large library cache.
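When you are looking for the rule sets that consume the most resources, it can help to order the same query by CPU time. The following variation lists rule sets in descending order of CPU seconds used, so the most expensive rule sets appear first:

```sql
SELECT OWNER,
       NAME,
       (CPU_TIME/100) CPU_SECONDS,
       (ELAPSED_TIME/100) ELAPSED_SECONDS
  FROM V$RULE_SET
  ORDER BY CPU_TIME DESC;
```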

Displaying Evaluation Statistics for a Rule

You can query the V$RULE dynamic performance view to display evaluation statistics for a particular rule since the database instance last started. The query in this section displays the following information about the rule:

  • The total number of times the rule evaluated to TRUE since the database instance last started.

  • The total number of times the rule evaluated to MAYBE since the database instance last started.

  • The total number of evaluations on the rule that issued SQL since the database instance last started. Generally, issuing SQL to evaluate a rule is more expensive than evaluating the rule without issuing SQL.

For example, run the following query to display this information for the locations25 rule in the strmadmin schema:

COLUMN TRUE_HITS HEADING 'True Evaluations' FORMAT 99999999999
COLUMN MAYBE_HITS HEADING 'Maybe Evaluations' FORMAT 99999999999
COLUMN SQL_EVALUATIONS HEADING 'SQL Evaluations' FORMAT 99999999999

SELECT TRUE_HITS, MAYBE_HITS, SQL_EVALUATIONS 
  FROM V$RULE
  WHERE RULE_OWNER = 'STRMADMIN' AND
        RULE_NAME  = 'LOCATIONS25';

Your output looks similar to the following:

True Evaluations Maybe Evaluations SQL Evaluations
---------------- ----------------- ---------------
            1518               154               0

25 Monitoring Oracle Streams Queues and Propagations

The following topics describe monitoring Oracle Streams queues and propagations:


Note:

The Oracle Streams tool in Oracle Enterprise Manager is also an excellent way to monitor an Oracle Streams environment. See Oracle Database 2 Day + Data Replication and Integration Guide and the online Help for the Oracle Streams tool for more information.




Monitoring Queues and Messaging

The following topics describe displaying information about queues and messaging:

Displaying the ANYDATA Queues in a Database

To display all of the ANYDATA queues in a database, run the following query:

COLUMN OWNER HEADING 'Owner' FORMAT A10
COLUMN NAME HEADING 'Queue Name' FORMAT A28
COLUMN QUEUE_TABLE HEADING 'Queue Table' FORMAT A22
COLUMN USER_COMMENT HEADING 'Comment' FORMAT A15

SELECT q.OWNER, q.NAME, t.QUEUE_TABLE, q.USER_COMMENT
  FROM DBA_QUEUES q, DBA_QUEUE_TABLES t
  WHERE t.OBJECT_TYPE = 'SYS.ANYDATA' AND
        q.QUEUE_TABLE = t.QUEUE_TABLE AND
        q.OWNER       = t.OWNER;

Your output looks similar to the following:

Owner      Queue Name                   Queue Table            Comment
---------- ---------------------------- ---------------------- ---------------
STRMADMIN  DB$APPQ                      DB$APPQT
STRMADMIN  AQ$_DB$APPQT_E               DB$APPQT               exception queue
STRMADMIN  DA$CAPQ                      DA$CAPQT
STRMADMIN  AQ$_DA$CAPQT_E               DA$CAPQT               exception queue
IX         STREAMS_QUEUE                STREAMS_QUEUE_TABLE
IX         AQ$_STREAMS_QUEUE_TABLE_E    STREAMS_QUEUE_TABLE    exception queue

An exception queue is created automatically when you create an ANYDATA queue.


See Also:

"Managing Queues"

Viewing the Messaging Clients in a Database

You can view the messaging clients in a database by querying the DBA_STREAMS_MESSAGE_CONSUMERS data dictionary view. The query in this section displays the following information about each messaging client:

  • The name of the messaging client

  • The owner and name of the queue used by the messaging client

  • The name of the positive rule set used by the messaging client

  • The name of the negative rule set used by the messaging client

Run the following query to view this information about messaging clients:

COLUMN STREAMS_NAME HEADING 'Messaging|Client' FORMAT A25
COLUMN QUEUE_OWNER HEADING 'Queue|Owner' FORMAT A10
COLUMN QUEUE_NAME HEADING 'Queue Name' FORMAT A18
COLUMN RULE_SET_NAME HEADING 'Positive|Rule Set' FORMAT A11
COLUMN NEGATIVE_RULE_SET_NAME HEADING 'Negative|Rule Set' FORMAT A11

SELECT STREAMS_NAME, 
       QUEUE_OWNER, 
       QUEUE_NAME, 
       RULE_SET_NAME, 
       NEGATIVE_RULE_SET_NAME 
  FROM DBA_STREAMS_MESSAGE_CONSUMERS;

Your output looks similar to the following:

Messaging                 Queue                         Positive    Negative
Client                    Owner      Queue Name         Rule Set    Rule Set
------------------------- ---------- ------------------ ----------- -----------
SCHEDULER_PICKUP          SYS        SCHEDULER$_JOBQ    RULESET$_8
SCHEDULER_COORDINATOR     SYS        SCHEDULER$_JOBQ    RULESET$_4
HR                        STRMADMIN  STREAMS_QUEUE      RULESET$_15

See Also:

Chapter 3, "Oracle Streams Staging and Propagation" for more information about messaging clients

Viewing Message Notifications

You can configure a message notification to send a notification when a message that can be dequeued by a messaging client is enqueued into a queue. The notification can be sent to an e-mail address, to an HTTP URL, or to a PL/SQL procedure. Run the following query to view the message notifications configured in a database:

COLUMN STREAMS_NAME HEADING 'Messaging|Client' FORMAT A10
COLUMN QUEUE_OWNER HEADING 'Queue|Owner' FORMAT A5
COLUMN QUEUE_NAME HEADING 'Queue Name' FORMAT A20
COLUMN NOTIFICATION_TYPE HEADING 'Notification|Type' FORMAT A15
COLUMN NOTIFICATION_ACTION HEADING 'Notification|Action' FORMAT A25

SELECT STREAMS_NAME, 
       QUEUE_OWNER, 
       QUEUE_NAME, 
       NOTIFICATION_TYPE, 
       NOTIFICATION_ACTION 
  FROM DBA_STREAMS_MESSAGE_CONSUMERS
  WHERE NOTIFICATION_TYPE IS NOT NULL;

Your output looks similar to the following:

Messaging  Queue                      Notification    Notification
Client     Owner Queue Name           Type            Action
---------- ----- -------------------- --------------- -------------------------
OE         OE    NOTIFICATION_QUEUE   MAIL            mary.smith@example.com



Determining the Consumer of Each Message in a Persistent Queue

To determine the consumer for each message in a persistent queue, query AQ$queue_table_name in the queue owner's schema, where queue_table_name is the name of the queue table. For example, to find the consumers of the messages in the oe_q_table_any queue table, run the following query:

COLUMN MSG_ID HEADING 'Message ID' FORMAT 9999
COLUMN MSG_STATE HEADING 'Message State' FORMAT A13
COLUMN CONSUMER_NAME HEADING 'Consumer' FORMAT A30

SELECT MSG_ID, MSG_STATE, CONSUMER_NAME FROM AQ$OE_Q_TABLE_ANY;

Your output looks similar to the following:

Message ID                       Message State Consumer
-------------------------------- ------------- ------------------------------
B79AC412AE6E08CAE034080020AE3E0A PROCESSED     OE
B79AC412AE6F08CAE034080020AE3E0A PROCESSED     OE
B79AC412AE7008CAE034080020AE3E0A PROCESSED     OE

Note:

This query lists only messages in a persistent queue, not captured LCRs or other messages in a buffered queue.


See Also:

Oracle Streams Advanced Queuing User's Guide for an example that enqueues messages into an ANYDATA queue

Viewing the Contents of Messages in a Persistent Queue

To view the contents of a payload that is encapsulated within an ANYDATA payload in an ANYDATA queue, query the queue table using the Accessdata_type member functions of the ANYDATA type, where data_type is the type of payload to view.


See Also:

Oracle Streams Advanced Queuing User's Guide for an example that enqueues the messages shown in the queries in this section into an ANYDATA queue

For example, to view the contents of a payload of type NUMBER in a queue with a queue table named oe_q_table_any, run the following query as the queue owner:

SELECT qt.user_data.AccessNumber() "Numbers in Queue" 
  FROM strmadmin.oe_q_table_any qt;

Your output looks similar to the following:

Numbers in Queue
----------------
              16

Similarly, to view the contents of a payload of type VARCHAR2 in a queue with a queue table named oe_q_table_any, run the following query:

SELECT qt.user_data.AccessVarchar2() "Varchar2s in Queue"
   FROM strmadmin.oe_q_table_any qt;

Your output looks similar to the following:

Varchar2s in Queue
--------------------------------------------------------------------------------
Chemicals - SW

To view the contents of a user-defined data type, you query the queue table using a custom function that you create. For example, to view the contents of a payload of oe.cust_address_typ, create a function similar to the following:

CREATE OR REPLACE FUNCTION oe.view_cust_address_typ(
in_any IN ANYDATA) 
RETURN oe.cust_address_typ
IS
  address   oe.cust_address_typ;
  num_var   NUMBER;
BEGIN
  IF (in_any.GetTypeName() = 'OE.CUST_ADDRESS_TYP') THEN
    num_var := in_any.GetObject(address);
    RETURN address;
  ELSE RETURN NULL;
  END IF;
END;
/

GRANT EXECUTE ON oe.view_cust_address_typ TO strmadmin;

GRANT EXECUTE ON oe.cust_address_typ TO strmadmin;

Query the queue table using the function, as in the following example:

SELECT oe.view_cust_address_typ(qt.user_data) "Customer Addresses"
  FROM strmadmin.oe_q_table_any qt 
  WHERE qt.user_data.GetTypeName() = 'OE.CUST_ADDRESS_TYP';

Your output looks similar to the following:

Customer Addresses(STREET_ADDRESS, POSTAL_CODE, CITY, STATE_PROVINCE, COUNTRY_ID
--------------------------------------------------------------------------------
CUST_ADDRESS_TYP('1646 Brazil Blvd', '361168', 'Chennai', 'Tam', 'IN')

Monitoring Buffered Queues

A buffered queue includes the following storage areas:

  • Oracle Streams pool memory associated with the queue, which contains messages that were captured or enqueued into the buffered queue

  • Part of the queue table for the queue, which stores messages that have spilled from memory

Buffered queues are stored in the Oracle Streams pool, and the Oracle Streams pool is a portion of memory in the SGA that is used by Oracle Streams. In an Oracle Streams environment, LCRs captured by a capture process are always stored in the buffered queue of an ANYDATA queue. Users and applications can also enqueue messages into buffered queues, and these buffered queues can be part of ANYDATA queues or typed queues.

Buffered queues enable Oracle databases to optimize messages by storing them in the SGA instead of always storing them in a queue table. Captured LCRs always are stored in buffered queues, but other types of messages can be stored in buffered queues or persistently in queue tables. Messages in a buffered queue can spill from memory if they have been staged in the buffered queue for a period of time without being dequeued, or if there is not enough space in memory to hold all of the messages. Messages that spill from memory are stored in the appropriate queue table.
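Because spilled messages must be read back from the queue table, a consistently high spill rate can indicate that the Oracle Streams pool is too small. One way to check, using the V$BUFFERED_QUEUES columns for message counts, is to compute the percentage of spilled messages in each buffered queue:

```sql
SELECT QUEUE_SCHEMA,
       QUEUE_NAME,
       SPILL_MSGS,
       NUM_MSGS,
       ROUND(100 * SPILL_MSGS / NUM_MSGS, 1) PCT_SPILLED
  FROM V$BUFFERED_QUEUES
  WHERE NUM_MSGS > 0;
```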

The following sections describe queries that monitor buffered queues:

Determining the Number of Messages in Each Buffered Queue

The V$BUFFERED_QUEUES dynamic performance view contains information about the number of messages in a buffered queue. The messages can be captured LCRs, buffered LCRs, or buffered user messages.

You can determine the following information about each buffered queue in a database by running the query in this section:

  • The queue owner

  • The queue name

  • The number of messages currently in memory

  • The number of messages that have spilled from memory into the queue table

  • The total number of messages in the buffered queue, which includes the messages in memory and the messages spilled to the queue table

To display this information, run the following query:

COLUMN QUEUE_SCHEMA HEADING 'Queue Owner' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Queue Name' FORMAT A15
COLUMN MEM_MSG HEADING 'Messages|in Memory' FORMAT 99999999
COLUMN SPILL_MSGS HEADING 'Messages|Spilled' FORMAT 99999999
COLUMN NUM_MSGS HEADING 'Total Messages|in Buffered Queue' FORMAT 99999999

SELECT QUEUE_SCHEMA, 
       QUEUE_NAME, 
       (NUM_MSGS - SPILL_MSGS) MEM_MSG, 
       SPILL_MSGS, 
       NUM_MSGS
  FROM V$BUFFERED_QUEUES;

Your output looks similar to the following:

                                     Messages      Messages      Total Messages
Queue Owner     Queue Name          in Memory       Spilled   in Buffered Queue
--------------- --------------- ------------- ------------- -------------------
STRMADMIN       STREAMS_QUEUE             534            21                 555

Viewing the Capture Processes for the LCRs in Each Buffered Queue

A capture process is a queue publisher that enqueues captured LCRs into a buffered queue. These LCRs can be propagated to other queues subsequently. By querying the V$BUFFERED_PUBLISHERS dynamic performance view, you can display each capture process that captured the LCRs in the buffered queue. These LCRs might have been captured at the local database, or they might have been captured at a remote database and propagated to the queue specified in the query.

The query in this section assumes that the buffered queues in the local database only store captured LCRs, not buffered LCRs or buffered user messages. The query displays the following information about each capture process:

  • The name of a capture process that captured the LCRs in the buffered queue

  • If the capture process is running on a remote database, and the captured LCRs have been propagated to the local queue, then the name of the queue and database from which the captured LCRs were last propagated

  • The name of the local queue staging the captured LCRs

  • The total number of LCRs captured by a capture process that have been staged in the buffered queue since the database instance was last started

  • The message number of the LCR last enqueued into the buffered queue from the sender

  • The percentage of the Streams pool used at the capture process database

  • The state of the publisher. The capture process is the publisher, and the following states are possible:

    • PUBLISHING MESSAGES

    • IN FLOW CONTROL: TOO MANY UNBROWSED MESSAGES

    • IN FLOW CONTROL: OVERSPILLED MESSAGES

    • IN FLOW CONTROL: INSUFFICIENT MEMORY AND UNBROWSED MESSAGES

To display this information, run the following query:

COLUMN SENDER_NAME HEADING 'Capture|Process' FORMAT A10
COLUMN SENDER_ADDRESS HEADING 'Sender Queue' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Queue Name' FORMAT A10
COLUMN CNUM_MSGS HEADING 'Number|of LCRs|Enqueued' FORMAT 99999999
COLUMN LAST_ENQUEUED_MSG HEADING 'Last|Enqueued|LCR' FORMAT 9999999999
COLUMN MEMORY_USAGE HEADING 'Percent|Streams|Pool|Used' FORMAT 999
COLUMN PUBLISHER_STATE HEADING 'Publisher|State' FORMAT A10
 
SELECT SENDER_NAME,
       SENDER_ADDRESS,
       QUEUE_NAME,        
       CNUM_MSGS, 
       LAST_ENQUEUED_MSG,
       MEMORY_USAGE,
       PUBLISHER_STATE
  FROM V$BUFFERED_PUBLISHERS;

Your output looks similar to the following:

                                                            Percent
                                         Number        Last Streams
Capture                                 of LCRs    Enqueued    Pool Publisher
Process    Sender Queue    Queue Name  Enqueued         LCR    Used State
---------- --------------- ---------- --------- ----------- ------- ----------
DB1$CAP                    DB1$CAPQ        3670     1002253      21 PUBLISHING
                                                                     MESSAGES
 
DB2$CAP    "STRMADMIN"."DB DB2$APPQ        3427      981066      21 PUBLISHING
           2$CAPQ"@DB2.EXA                                           MESSAGES
           MPLE.COM

This output shows the following:

  • 3670 LCRs from the local db1$cap capture process were enqueued into the local queue named db1$capq. The capture process is local because the Sender Queue column is NULL. The message number of the last enqueued LCR from this capture process was 1002253. 21% of the Streams pool is used at the capture process database, and the capture process is publishing messages normally.

  • 3427 LCRs from the db2$cap capture process running on a remote database were propagated from a queue named db2$capq on database db2.example.com to the local queue named db2$appq. The message number of the last enqueued LCR from this sender was 981066. 21% of the Streams pool is used at the remote capture process database, and the capture process is publishing messages normally.

Displaying Information About Propagations that Send Buffered Messages

The query in this section displays the following information about each propagation that sends buffered messages from a buffered queue in the local database:

  • The name of the propagation

  • The queue owner

  • The queue name

  • The name of the database link used by the propagation

  • The status of the propagation schedule

To display this information, run the following query:

COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN QUEUE_SCHEMA HEADING 'Queue|Owner' FORMAT A10
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A15
COLUMN DBLINK HEADING 'Database|Link' FORMAT A10
COLUMN SCHEDULE_STATUS HEADING 'Schedule Status' FORMAT A20

SELECT p.PROPAGATION_NAME,
       s.QUEUE_SCHEMA,
       s.QUEUE_NAME,
       s.DBLINK,
       s.SCHEDULE_STATUS
  FROM DBA_PROPAGATION p, V$PROPAGATION_SENDER s
  WHERE p.SOURCE_QUEUE_OWNER      = s.QUEUE_SCHEMA AND
        p.SOURCE_QUEUE_NAME       = s.QUEUE_NAME AND
        p.DESTINATION_QUEUE_OWNER = s.DST_QUEUE_SCHEMA AND
        p.DESTINATION_QUEUE_NAME  = s.DST_QUEUE_NAME;

Your output looks similar to the following:

                Queue      Queue           Database
Propagation     Owner      Name            Link       Schedule Status
--------------- ---------- --------------- ---------- --------------------
PROPAGATION$_6  STRMADMIN  DB1$CAPQ        "STRMADMIN SCHEDULE OPTIMIZED
                                           "."DB1$APP
                                           Q"@DB2.EXA
                                           MPLE.COM

When the SCHEDULE_STATUS column in the V$PROPAGATION_SENDER view shows SCHEDULE OPTIMIZED for a propagation, it means that the propagation is part of a combined capture and apply optimization.

Displaying the Number of Messages and Bytes Sent By Propagations

The query in this section displays the number of messages and the number of bytes sent by each propagation that sends buffered messages from a buffered queue in the local database. Specifically, the query displays the following information:

  • The name of the propagation

  • The queue name

  • The name of the database link used by the propagation

  • The total number of messages sent since the database instance was last started

  • The total number of bytes sent since the database instance was last started

To display this information, run the following query:

COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A15
COLUMN DBLINK HEADING 'Database|Link' FORMAT A20
COLUMN TOTAL_MSGS HEADING 'Total|Messages' FORMAT 99999999
COLUMN TOTAL_BYTES HEADING 'Total|Bytes' FORMAT 999999999999

SELECT p.PROPAGATION_NAME,
       s.QUEUE_NAME,
       s.DBLINK,
       s.TOTAL_MSGS,
       s.TOTAL_BYTES
  FROM DBA_PROPAGATION p, V$PROPAGATION_SENDER s
  WHERE p.SOURCE_QUEUE_OWNER      = s.QUEUE_SCHEMA AND
        p.SOURCE_QUEUE_NAME       = s.QUEUE_NAME AND
        p.DESTINATION_QUEUE_OWNER = s.DST_QUEUE_SCHEMA AND
        p.DESTINATION_QUEUE_NAME  = s.DST_QUEUE_NAME;

Your output looks similar to the following:

                Queue           Database            Total     Total
Propagation     Name            Link                 Messages     Bytes
--------------- --------------- -------------------- --------- ---------
MULT1_TO_MULT3  STREAMS_QUEUE   MULT3.EXAMPLE.COM           79     71467
MULT1_TO_MULT2  STREAMS_QUEUE   MULT2.EXAMPLE.COM           79     71467

Displaying Performance Statistics for Propagations that Send Buffered Messages

The query in this section displays the amount of time that a propagation sending buffered messages spends performing various tasks. Each propagation sends messages from the source queue to the destination queue. Specifically, the query displays the following information:

  • The name of the propagation

  • The queue name

  • The name of the database link used by the propagation

  • The amount of time spent dequeuing messages from the queue since the database instance was last started, in seconds

  • The amount of time spent pickling messages since the database instance was last started, in seconds. Pickling involves changing a message in memory into a series of bytes that can be sent over a network.

  • The amount of time spent propagating messages since the database instance was last started, in seconds

To display this information, run the following query:

COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A13
COLUMN DBLINK HEADING 'Database|Link' FORMAT A9
COLUMN ELAPSED_DEQUEUE_TIME HEADING 'Dequeue|Time' FORMAT 99999999.99
COLUMN ELAPSED_PICKLE_TIME HEADING 'Pickle|Time' FORMAT 99999999.99
COLUMN ELAPSED_PROPAGATION_TIME HEADING 'Propagation|Time' FORMAT 99999999.99

SELECT p.PROPAGATION_NAME,
       s.QUEUE_NAME,
       s.DBLINK,
       (s.ELAPSED_DEQUEUE_TIME / 100) ELAPSED_DEQUEUE_TIME,
       (s.ELAPSED_PICKLE_TIME / 100) ELAPSED_PICKLE_TIME,
       (s.ELAPSED_PROPAGATION_TIME / 100) ELAPSED_PROPAGATION_TIME
  FROM DBA_PROPAGATION p, V$PROPAGATION_SENDER s
  WHERE p.SOURCE_QUEUE_OWNER      = s.QUEUE_SCHEMA AND
        p.SOURCE_QUEUE_NAME       = s.QUEUE_NAME AND
        p.DESTINATION_QUEUE_OWNER = s.DST_QUEUE_SCHEMA AND
        p.DESTINATION_QUEUE_NAME  = s.DST_QUEUE_NAME;

Your output looks similar to the following:

                Queue         Database       Dequeue       Pickle  Propagation
Propagation     Name          Link              Time         Time         Time
--------------- ------------- --------- ------------ ------------ ------------
MULT1_TO_MULT2  STREAMS_QUEUE MULT2.EXA        30.65        45.10        10.91
                              MPLE.COM
MULT1_TO_MULT3  STREAMS_QUEUE MULT3.EXA        25.36        37.07         8.35
                              MPLE.COM

Viewing the Propagations Dequeuing Messages from Each Buffered Queue

Propagations are queue subscribers that can dequeue messages. By querying the V$BUFFERED_SUBSCRIBERS dynamic performance view, you can display all the propagations that can dequeue buffered messages.

Apply processes also are queue subscribers. This query joins with the DBA_PROPAGATION view to limit the output to propagations only and to show the propagation name of each propagation.

The query in this section displays the following information about each propagation that can dequeue messages from queues:

  • The name of the propagation.

  • The owner and name of the queue to which the propagation subscribes

    This queue is the source queue for the propagation.

  • The subscriber address

    For a propagation, the subscriber address is the propagation's destination queue and destination database

  • The time when the propagation last started

  • The cumulative number of messages dequeued by the propagation since the database last started

  • The total number of messages dequeued by the propagation since the propagation last started

  • The message number of the message most recently dequeued by the propagation

To display this information, run the following query:

COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A11
COLUMN QUEUE_SCHEMA HEADING 'Queue|Owner' FORMAT A5
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A5
COLUMN SUBSCRIBER_ADDRESS HEADING 'Subscriber|Address' FORMAT A15
COLUMN STARTUP_TIME HEADING 'Startup|Time' FORMAT A9
COLUMN CNUM_MSGS HEADING 'Cumulative|Messages' FORMAT 99999999
COLUMN TOTAL_DEQUEUED_MSG HEADING 'Total|Messages' FORMAT 99999999
COLUMN LAST_DEQUEUED_NUM HEADING 'Last|Dequeued|Message|Number' FORMAT 99999999
 
SELECT p.PROPAGATION_NAME,
       s.QUEUE_SCHEMA,
       s.QUEUE_NAME,
       s.SUBSCRIBER_ADDRESS,
       s.STARTUP_TIME,
       s.CNUM_MSGS,          
       s.TOTAL_DEQUEUED_MSG,
       s.LAST_DEQUEUED_NUM
FROM DBA_PROPAGATION p, V$BUFFERED_SUBSCRIBERS s
WHERE p.SOURCE_QUEUE_OWNER = s.QUEUE_SCHEMA AND
      p.SOURCE_QUEUE_NAME  = s.QUEUE_NAME AND 
      p.PROPAGATION_NAME   = s.SUBSCRIBER_NAME AND
      s.SUBSCRIBER_ADDRESS LIKE '%' || p.DESTINATION_DBLINK;

Your output looks similar to the following:

                                                                            Last
                                                                        Dequeued
            Queue Queue Subscriber      Startup   Cumulative     Total   Message
Propagation Owner Name  Address         Time        Messages  Messages    Number
----------- ----- ----- --------------- --------- ---------- --------- ---------
PROPAGATION STRMA DB1$C "STRMADMIN"."DB 25-JUN-09      11079     11079   1525762
$_5         DMIN  APQ   1$APPQ"@DB2.EXA
                        MPLE.COM

Note:

If there are multiple propagations using the same database link but propagating messages to different queues at the destination database, then the statistics returned by this query are approximate rather than accurate.

Displaying Performance Statistics for Propagations That Receive Buffered Messages

The query in this section displays the amount of time that each propagation receiving buffered messages spends performing various tasks. Each propagation receives the messages and enqueues them into the destination queue for the propagation. Specifically, the query displays the following information:

  • The name of the source queue from which messages are propagated.

  • The name of the source database.

  • The amount of time spent unpickling messages since the database instance was last started, in seconds. Unpickling involves changing a series of bytes that can be sent over a network back into a buffered message in memory.

  • The amount of time spent evaluating rules for propagated messages since the database instance was last started, in seconds.

  • The amount of time spent enqueuing messages into the destination queue for the propagation since the database instance was last started, in seconds.

To display this information, run the following query:

COLUMN SRC_QUEUE_NAME HEADING 'Source|Queue|Name' FORMAT A20
COLUMN SRC_DBNAME HEADING 'Source|Database' FORMAT A20
COLUMN ELAPSED_UNPICKLE_TIME HEADING 'Unpickle|Time' FORMAT 99999999.99
COLUMN ELAPSED_RULE_TIME HEADING 'Rule|Evaluation|Time' FORMAT 99999999.99
COLUMN ELAPSED_ENQUEUE_TIME HEADING 'Enqueue|Time' FORMAT 99999999.99

SELECT SRC_QUEUE_NAME,
       SRC_DBNAME,
       (ELAPSED_UNPICKLE_TIME / 100) ELAPSED_UNPICKLE_TIME,
       (ELAPSED_RULE_TIME / 100) ELAPSED_RULE_TIME,
       (ELAPSED_ENQUEUE_TIME / 100) ELAPSED_ENQUEUE_TIME
  FROM V$PROPAGATION_RECEIVER;

Your output looks similar to the following:

Source                                                    Rule
Queue                Source              Unpickle   Evaluation      Enqueue
Name                 Database                Time         Time         Time
-------------------- -------------------- ------------ ------------ ------------
STREAMS_QUEUE        MULT2.EXAMPLE.COM           45.65         5.44        45.85
STREAMS_QUEUE        MULT3.EXAMPLE.COM           53.35         8.01        50.41

Viewing the Apply Processes Dequeuing Messages from Each Buffered Queue

Apply processes are queue subscribers that can dequeue messages. By querying the V$BUFFERED_SUBSCRIBERS dynamic performance view, you can display all the apply processes that can dequeue messages.

This query joins with the V$BUFFERED_QUEUES view to show the name of the queue, and with the DBA_APPLY view to confirm that each subscriber is an apply process. In addition, propagations also are queue subscribers, so this query limits the output to subscribers whose SUBSCRIBER_ADDRESS is NULL to return only apply processes.

The query in this section displays the following information about the apply processes that can dequeue messages from queues:

  • The name of the apply process

  • The queue owner

  • The queue name

  • The time when the apply process last started

  • The cumulative number of messages dequeued by the apply process since the database last started

  • The total number of messages dequeued by the apply process since the apply process last started

  • The message number of the message most recently dequeued by the apply process

To display this information, run the following query:

COLUMN SUBSCRIBER_NAME HEADING 'Apply Process' FORMAT A16
COLUMN QUEUE_SCHEMA HEADING 'Queue|Owner' FORMAT A5
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A5
COLUMN STARTUP_TIME HEADING 'Startup|Time' FORMAT A9
COLUMN CNUM_MSGS HEADING 'Cumulative|Messages' FORMAT 99999999
COLUMN TOTAL_DEQUEUED_MSG HEADING 'Number of|Dequeued|Messages' 
  FORMAT 99999999
COLUMN LAST_DEQUEUED_NUM HEADING 'Last|Dequeued|Message|Number' FORMAT 99999999

SELECT s.SUBSCRIBER_NAME,
       q.QUEUE_SCHEMA,
       q.QUEUE_NAME, 
       s.STARTUP_TIME,
       s.CNUM_MSGS,          
       s.TOTAL_DEQUEUED_MSG,
       s.LAST_DEQUEUED_NUM
FROM V$BUFFERED_QUEUES q, V$BUFFERED_SUBSCRIBERS s, DBA_APPLY a
WHERE q.QUEUE_ID = s.QUEUE_ID AND 
      s.SUBSCRIBER_ADDRESS IS NULL AND
      s.SUBSCRIBER_NAME = a.APPLY_NAME;

Your output looks similar to the following:

                                                                  Last
                                                   Number of  Dequeued
                 Queue Queue Startup   Cumulative   Dequeued   Message
Apply Process    Owner Name  Time        Messages   Messages    Number
---------------- ----- ----- --------- ---------- ---------- ---------
APPLY$_DB2_2     STRMA DB2$A 25-JUN-09      11039      11039   1509859
                 DMIN  PPQ

Monitoring Oracle Streams Propagations and Propagation Jobs

The following topics describe monitoring propagations and propagation jobs:

Displaying the Queues and Database Link for Each Propagation

You can display information about each propagation by querying the DBA_PROPAGATION data dictionary view. This view contains information about each propagation whose source queue is at the local database.

The query in this section displays the following information about each propagation:

  • The propagation name

  • The source queue name

  • The database link used by the propagation

  • The destination queue name

  • The status of the propagation, either ENABLED, DISABLED, or ABORTED

  • Whether the propagation is a queue-to-queue propagation

To display this information about each propagation in a database, run the following query:

COLUMN PROPAGATION_NAME        HEADING 'Propagation|Name'   FORMAT A19
COLUMN SOURCE_QUEUE_NAME       HEADING 'Source|Queue|Name'  FORMAT A17
COLUMN DESTINATION_DBLINK      HEADING 'Database|Link'      FORMAT A9
COLUMN DESTINATION_QUEUE_NAME  HEADING 'Dest|Queue|Name'    FORMAT A15
COLUMN STATUS                  HEADING 'Status'             FORMAT A8
COLUMN QUEUE_TO_QUEUE          HEADING 'Queue-|to-|Queue?'  FORMAT A6
 
SELECT PROPAGATION_NAME,
       SOURCE_QUEUE_NAME,
       DESTINATION_DBLINK, 
       DESTINATION_QUEUE_NAME,
       STATUS,
       QUEUE_TO_QUEUE
  FROM DBA_PROPAGATION;

Your output looks similar to the following:

                    Source                      Dest                     Queue-
Propagation         Queue             Database  Queue                    to-
Name                Name              Link      Name            Status   Queue?
------------------- ----------------- --------- --------------- -------- ------
PROPAGATION$_6      DA$CAPQ           DB.EXAMPL DA$APPQ         ENABLED  TRUE
                                      E.COM

Determining the Source Queue and Destination Queue for Each Propagation

You can determine the source queue and destination queue for each propagation by querying the DBA_PROPAGATION data dictionary view.

The query in this section displays the following information about each propagation:

  • The propagation name

  • The source queue owner

  • The source queue name

  • The database that contains the source queue

  • The destination queue owner

  • The destination queue name

  • The database that contains the destination queue

To display this information about each propagation in a database, run the following query:

COLUMN PROPAGATION_NAME HEADING 'Propagation|Name' FORMAT A20
COLUMN SOURCE_QUEUE_OWNER HEADING 'Source|Queue|Owner' FORMAT A10
COLUMN 'Source Queue' HEADING 'Source|Queue' FORMAT A15
COLUMN DESTINATION_QUEUE_OWNER HEADING 'Dest|Queue|Owner'   FORMAT A10
COLUMN 'Destination Queue' HEADING 'Destination|Queue' FORMAT A15

SELECT p.PROPAGATION_NAME,
       p.SOURCE_QUEUE_OWNER,
       p.SOURCE_QUEUE_NAME ||'@'|| 
       g.GLOBAL_NAME "Source Queue",
       p.DESTINATION_QUEUE_OWNER,
       p.DESTINATION_QUEUE_NAME ||'@'|| 
       p.DESTINATION_DBLINK "Destination Queue"
  FROM DBA_PROPAGATION p, GLOBAL_NAME g;

Your output looks similar to the following:

                     Source                     Dest
Propagation          Queue      Source          Queue      Destination
Name                 Owner      Queue           Owner      Queue
-------------------- ---------- --------------- ---------- ---------------
PROPAGATION$_6       STRMADMIN  DA$CAPQ@DA.EXAM STRMADMIN  DA$APPQ@DB.EXAM
                                PLE.COM                    PLE.COM

Determining the Rule Sets for Each Propagation

The query in this section displays the following information for each propagation:

  • The propagation name

  • The owner of the positive rule set for the propagation

  • The name of the positive rule set used by the propagation

  • The owner of the negative rule set used by the propagation

  • The name of the negative rule set used by the propagation

To display this general information about each propagation in a database, run the following query:

COLUMN PROPAGATION_NAME HEADING 'Propagation|Name' FORMAT A20
COLUMN RULE_SET_OWNER HEADING 'Positive|Rule Set|Owner' FORMAT A10
COLUMN RULE_SET_NAME HEADING 'Positive Rule|Set Name' FORMAT A15
COLUMN NEGATIVE_RULE_SET_OWNER HEADING 'Negative|Rule Set|Owner' FORMAT A10
COLUMN NEGATIVE_RULE_SET_NAME HEADING 'Negative Rule|Set Name' FORMAT A15

SELECT PROPAGATION_NAME, 
       RULE_SET_OWNER, 
       RULE_SET_NAME, 
       NEGATIVE_RULE_SET_OWNER, 
       NEGATIVE_RULE_SET_NAME
  FROM DBA_PROPAGATION;

Your output looks similar to the following:

                     Positive                   Negative
Propagation          Rule Set   Positive Rule   Rule Set   Negative Rule
Name                 Owner      Set Name        Owner      Set Name
-------------------- ---------- --------------- ---------- ---------------
PROPAGATION$_6       STRMADMIN  RULESET$_7      STRMADMIN  RULESET$_9

Displaying Information About the Schedules for Propagation Jobs

The query in this section displays the following information about the propagation schedules for each propagation job used by a propagation in the database:

  • The name of the propagation

  • The latency of the propagation job, which is the maximum wait time to propagate a new message during the duration when all other messages in the queue to the relevant destination have already been propagated

  • Whether the propagation job is enabled

  • The name of the process that most recently executed the schedule

  • The number of consecutive times schedule execution has failed, if any

    After 16 consecutive failures, a propagation job is aborted automatically.

  • Whether the propagation is queue-to-queue or queue-to-dblink

  • The error message text of the last unsuccessful propagation execution

Run this query at the database that contains the source queue:

COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN LATENCY HEADING 'Latency|in Seconds' FORMAT 99999
COLUMN SCHEDULE_DISABLED HEADING 'Status' FORMAT A8
COLUMN PROCESS_NAME HEADING 'Process' FORMAT A8
COLUMN FAILURES HEADING 'Failures' FORMAT 999
COLUMN QUEUE_TO_QUEUE HEADING 'Queue|to|Queue'
COLUMN LAST_ERROR_MSG HEADING 'Last Error|Message' FORMAT A15
 
SELECT p.PROPAGATION_NAME,
       s.LATENCY,
       DECODE(s.SCHEDULE_DISABLED,
                'Y', 'Disabled',
                'N', 'Enabled') SCHEDULE_DISABLED,
       s.PROCESS_NAME,
       s.FAILURES,
       p.QUEUE_TO_QUEUE,
       s.LAST_ERROR_MSG
  FROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION p
  WHERE s.MESSAGE_DELIVERY_MODE = 'BUFFERED'
    AND s.DESTINATION LIKE '%' || p.DESTINATION_DBLINK
    AND s.SCHEMA = p.SOURCE_QUEUE_OWNER
    AND s.QNAME  = p.SOURCE_QUEUE_NAME
  ORDER BY PROPAGATION_NAME;

Your output looks similar to the following:

                                                      Queue
                   Latency                            to     Last Error
Propagation     in Seconds Status   Process  Failures Queue  Message
--------------- ---------- -------- -------- -------- ------ ---------------
PROPAGATION$_6          19 Enabled  CS00            0 TRUE
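If the Failures column is nonzero, investigate the Last Error Message column. Because a propagation job is aborted automatically after 16 consecutive failures, you might also need to restart it once the underlying problem is fixed. The following PL/SQL block is a sketch; the propagation name is a placeholder taken from the sample output:

```
-- Restart an aborted or disabled propagation job.
-- The propagation name below is a placeholder; substitute your own.
BEGIN
  DBMS_PROPAGATION_ADM.START_PROPAGATION(
    propagation_name => 'PROPAGATION$_6');
END;
/
```

After restarting the propagation, rerun the schedule query to verify that the Status column shows Enabled and that the Failures count has been reset.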

Determining the Total Number of Messages and Bytes Propagated

A propagation can be queue-to-queue or queue-to-database link (queue-to-dblink). A queue-to-queue propagation always has its own exclusive propagation job to propagate messages from the source queue to the destination queue. Because each propagation job has its own propagation schedule, the propagation schedule of each queue-to-queue propagation can be managed separately. All queue-to-dblink propagations that share the same database link have a single propagation schedule.
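Whether a propagation is queue-to-queue is fixed when the propagation is created. As a sketch (all queue, link, and propagation names are placeholders), the queue_to_queue parameter of the DBMS_PROPAGATION_ADM.CREATE_PROPAGATION procedure controls this choice:

```
-- Create a queue-to-queue propagation so that it gets its own
-- exclusive propagation job and schedule.
-- All names below are placeholders; substitute your own.
BEGIN
  DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
    propagation_name   => 'my_propagation',
    source_queue       => 'strmadmin.src_queue',
    destination_queue  => 'strmadmin.dst_queue',
    destination_dblink => 'db2.example.com',
    queue_to_queue     => TRUE);
END;
/
```

Setting queue_to_queue to FALSE (or omitting it) instead creates a queue-to-dblink propagation, which shares its schedule with any other queue-to-dblink propagations that use the same database link.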

The query in this section displays the following information for each propagation:

  • The name of the propagation

  • The total time spent by the system executing the propagation schedule

  • The total number of messages propagated by the propagation schedule

  • The total number of bytes propagated by the propagation schedule

Run the following query to display this information for each propagation with a source queue at the local database:

COLUMN PROPAGATION_NAME HEADING 'Propagation|Name' FORMAT A20
COLUMN TOTAL_TIME HEADING 'Total Time|Executing|in Seconds' FORMAT 999999
COLUMN TOTAL_NUMBER HEADING 'Total Messages|Propagated' FORMAT 999999999
COLUMN TOTAL_BYTES HEADING 'Total Bytes|Propagated' FORMAT 9999999999999

SELECT p.PROPAGATION_NAME, s.TOTAL_TIME, s.TOTAL_NUMBER, s.TOTAL_BYTES 
  FROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION p
  WHERE s.DESTINATION LIKE '%' || p.DESTINATION_DBLINK
    AND s.SCHEMA = p.SOURCE_QUEUE_OWNER
    AND s.QNAME  = p.SOURCE_QUEUE_NAME
    AND s.MESSAGE_DELIVERY_MODE = 'BUFFERED';

Your output looks similar to the following:

                     Total Time
Propagation           Executing Total Messages    Total Bytes
Name                 in Seconds     Propagated     Propagated
-------------------- ---------- -------------- --------------
PROPAGATION$_6                0         432615       94751013

See Also:

Oracle Streams Advanced Queuing User's Guide and Oracle Database Reference for more information about the DBA_QUEUE_SCHEDULES data dictionary view

Displaying Information About Propagation Senders

A propagation sender sends messages from a source queue to a destination queue.

The query in this section displays the following information about each propagation sender in a database:

  • The name of the propagation

  • The session identifier of the propagation sender

  • The session serial number of the propagation sender

  • The operating system process identification number of the propagation sender

  • The state of the propagation sender

In a combined capture and apply optimization, the capture process acts as the propagation sender and transmits messages directly to the propagation receiver. When a propagation is part of a combined capture and apply optimization, this query shows the capture process session ID, session serial number, operating system process ID, and state.

When a propagation is not part of a combined capture and apply optimization, this query shows the propagation job session ID, session serial number, operating system process ID, and state.

To view this information, run the following query:

COLUMN PROPAGATION_NAME HEADING 'Propagation|Name' FORMAT A11
COLUMN SESSION_ID HEADING 'Session ID' FORMAT 9999
COLUMN SERIAL# HEADING 'Session|Serial Number' FORMAT 9999
COLUMN SPID HEADING 'Operating System|Process ID' FORMAT A24

COLUMN STATE HEADING 'State' FORMAT A16

SELECT p.PROPAGATION_NAME, 
       s.SESSION_ID, 
       s.SERIAL#, 
       s.SPID, 
       s.STATE
  FROM DBA_PROPAGATION p, V$PROPAGATION_SENDER s
  WHERE p.SOURCE_QUEUE_OWNER      = s.QUEUE_SCHEMA AND
        p.SOURCE_QUEUE_NAME       = s.QUEUE_NAME AND
        p.DESTINATION_QUEUE_OWNER = s.DST_QUEUE_SCHEMA AND
        p.DESTINATION_QUEUE_NAME  = s.DST_QUEUE_NAME;

Your output looks similar to the following:

Propagation                  Session Operating System
Name        Session ID Serial Number Process ID               State
----------- ---------- ------------- ------------------------ ----------------
PROPAGATION         61            17 21145                    Waiting on empty
$_6                                                            queue

Note:

When column SCHEDULE_STATUS in the V$PROPAGATION_SENDER view shows SCHEDULE OPTIMIZED, it means that the propagation is part of a combined capture and apply optimization.

Displaying Information About Propagation Receivers

A propagation receiver enqueues messages sent by propagation senders into a destination queue. The query in this section displays the following information about each propagation receiver in a database:

  • The name of the propagation

  • The session ID of the propagation receiver

  • The session serial number of the propagation receiver

  • The operating system process identification number of the propagation receiver

  • The state of the propagation receiver

To view this information, run the following query:

COLUMN PROPAGATION_NAME HEADING 'Propagation|Name' FORMAT A15
COLUMN SESSION_ID HEADING 'Session ID' FORMAT 999999
COLUMN SERIAL# HEADING 'Session|Serial|Number' FORMAT 999999
COLUMN SPID HEADING 'Operating|System|Process ID' FORMAT 999999
COLUMN STATE HEADING 'State' FORMAT A16
 
SELECT PROPAGATION_NAME, 
       SESSION_ID, 
       SERIAL#, 
       SPID, 
       STATE
  FROM V$PROPAGATION_RECEIVER;

Your output looks similar to the following:

                           Session Operating
Propagation                 Serial System
Name            Session ID  Number Process ID               State
--------------- ---------- ------- ------------------------ ----------------
PROPAGATION$_5          60       5 21050                    Waiting for mess
                                                            age from propaga
                                                            tion sender

Displaying Session Information About Each Propagation

The query in this section displays the following session information about each session associated with a propagation in a database:

  • The Oracle Streams component

  • The session identifier

  • The serial number

  • The operating system process identification number

  • The process names of the propagation sender and propagation receiver

To display this information for each propagation in a database, run the following query:

COLUMN ACTION HEADING 'Streams Component' FORMAT A28
COLUMN SID HEADING 'Session ID' FORMAT 99999
COLUMN SERIAL# HEADING 'Session|Serial|Number' FORMAT 9999999
COLUMN PROCESS HEADING 'Operating System|Process Number' FORMAT A20
COLUMN PROCESS_NAME HEADING 'Process|Names' FORMAT A7
 
SELECT /*+PARAM('_module_action_old_length',0)*/ ACTION,
       SID,
       SERIAL#,
       PROCESS,
       SUBSTR(PROGRAM,INSTR(PROGRAM,'(')+1,4) PROCESS_NAME
  FROM V$SESSION
  WHERE MODULE ='Streams' AND
        ACTION LIKE '%Propagation%';

Your output looks similar to the following:

                                        Session
                                         Serial Operating System     Process
Streams Component            Session ID   Number Process Number       Names
---------------------------- ---------- -------- -------------------- -------
APPLY$_DB_3 - Propagation Re         60        5 21048                TNS
ceiver CCA
 
PROPAGATION$_6 - Propagation         61       17 21145                CS00
Sender CCA

The CCA in the Streams component sample output indicates that the propagation is part of a combined capture and apply optimization. The TNS process name indicates that the propagation receiver was initiated remotely by a capture process.


12 Combined Capture and Apply Optimization

The following topics contain information about the combined capture and apply optimization:

About Combined Capture and Apply Optimization

For improved efficiency, a capture process can create a propagation sender to transmit logical change records (LCRs) directly to a propagation receiver under specific conditions. The propagation receiver enqueues the LCRs into the buffered queue portion of the destination queue, and an apply process dequeues the LCRs. This optimization is called combined capture and apply.

Combined Capture and Apply Requirements

Combined capture and apply can be used when the capture process and apply process run on the same database instance or on different databases.

When the capture process and apply process run on the same database instance, combined capture and apply is possible only if all of the following conditions are met:

When the capture process and apply process run on different databases, or on different instances in the same database, combined capture and apply is possible only if all of the following conditions are met:


Note:

  • Combined capture and apply is not possible with synchronous capture.

  • Combined capture and apply is not possible when an Oracle Database 10g or earlier database is part of the configuration.

  • The combined capture and apply requirements are different in Oracle Database 11g Release 2 (11.2) and Oracle Database 11g Release 1 (11.1). If a database in a combined capture and apply optimization is an 11.1 database, then the 11.1 requirements must be met. See Oracle Streams Concepts and Administration for the 11.1 release for information about these requirements.


How to Use Combined Capture and Apply

After you meet the requirements for combined capture and apply, you do not need to perform any other configuration tasks to use it. The capture process automatically detects that combined capture and apply is possible when it is started. The capture process then creates a propagation sender to establish a connection with the propagation receiver, and the propagation sender sends captured LCRs directly to the propagation receiver.

If combined capture and apply is used, and you change the configuration so that it no longer meets the requirements of combined capture and apply, then the capture process detects this change and restarts. After the capture process restarts, it no longer uses combined capture and apply.

If combined capture and apply is not used, and you change the configuration so that it meets the requirements of combined capture and apply, then combined capture and apply is used automatically when the capture process is restarted. In this case, you must restart the capture process manually. It is not restarted automatically.

How to Determine Whether Combined Capture and Apply Is Being Used

Check the following dynamic performance views to determine whether combined capture and apply is used:
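For example, you can query the V$STREAMS_CAPTURE dynamic performance view while the capture process is running. In this sketch, a value greater than 0 in the OPTIMIZATION column indicates that combined capture and apply is being used:

SELECT CAPTURE_NAME, OPTIMIZATION
   FROM V$STREAMS_CAPTURE;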

Combined Capture and Apply and Point-in-Time Recovery

When you use combined capture and apply in a single-source replication environment, the Oracle Streams clients handle point-in-time recovery of the destination database automatically. The Oracle Streams clients include the capture process, the propagation, and the apply process that form the combined capture and apply optimization.

In a single-source replication environment that uses combined capture and apply, complete these general steps to perform point-in-time recovery on the destination database:

  1. Stop the capture process and apply process, and disable the propagation.

  2. Perform the point-in-time recovery on the destination database.

  3. Ensure that the capture process has access to the archived redo log files for the previous point in time.

  4. Start the apply process.

  5. Enable the propagation.

  6. Start the capture process.

When you follow these steps, the capture process determines its start SCN automatically, and no other steps are required.
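As an illustration only, the stop and restart steps might look like the following calls, assuming a capture process named strm01_capture, a propagation named prop01, and an apply process named apply01 (these names are hypothetical):

BEGIN
  DBMS_CAPTURE_ADM.STOP_CAPTURE(capture_name => 'strm01_capture');
  DBMS_APPLY_ADM.STOP_APPLY(apply_name => 'apply01');
  DBMS_PROPAGATION_ADM.STOP_PROPAGATION(propagation_name => 'prop01');
END;
/

After the point-in-time recovery is complete and the required archived redo log files are available, restart the clients in the order shown in the steps:

BEGIN
  DBMS_APPLY_ADM.START_APPLY(apply_name => 'apply01');
  DBMS_PROPAGATION_ADM.START_PROPAGATION(propagation_name => 'prop01');
  DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'strm01_capture');
END;
/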


See Also:

Oracle Streams Replication Administrator's Guide for more information about performing point-in-time recovery in an Oracle Streams replication environment


15 Managing Oracle Streams Implicit Capture

Both capture processes and synchronous captures perform implicit capture. This chapter contains instructions for managing implicit capture.

The following topics describe managing Oracle Streams implicit capture:

Each task described in this chapter should be completed by an Oracle Streams administrator who has been granted the appropriate privileges, unless specified otherwise.

Managing a Capture Process

A capture process captures changes in a redo log, reformats each captured change into a logical change record (LCR), and enqueues the LCR into an ANYDATA queue.

The following topics describe managing a capture process:


See Also:


Starting a Capture Process

You run the START_CAPTURE procedure in the DBMS_CAPTURE_ADM package to start an existing capture process. For example, the following procedure starts a capture process named strm01_capture:

BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'strm01_capture');
END;
/

Note:

If a new capture process will use a new LogMiner data dictionary, then, when you first start the new capture process, some time might be required to populate the new LogMiner data dictionary. A new LogMiner data dictionary is created if a non-NULL first SCN value was specified when the capture process was created.


See Also:

Oracle Database 2 Day + Data Replication and Integration Guide for instructions about starting a capture process with Oracle Enterprise Manager

Stopping a Capture Process

You run the STOP_CAPTURE procedure in the DBMS_CAPTURE_ADM package to stop an existing capture process. For example, the following procedure stops a capture process named strm01_capture:

BEGIN
  DBMS_CAPTURE_ADM.STOP_CAPTURE(
    capture_name => 'strm01_capture');
END;
/

See Also:

Oracle Database 2 Day + Data Replication and Integration Guide for instructions about stopping a capture process with Oracle Enterprise Manager

Managing the Rule Set for a Capture Process

This section contains instructions for completing the following tasks:

Specifying a Rule Set for a Capture Process

You can specify one positive rule set and one negative rule set for a capture process. The capture process captures a change if it evaluates to TRUE for at least one rule in the positive rule set and evaluates to FALSE for all the rules in the negative rule set. The negative rule set is evaluated before the positive rule set.

Specifying a Positive Rule Set for a Capture Process

You specify an existing rule set as the positive rule set for an existing capture process using the rule_set_name parameter in the ALTER_CAPTURE procedure. This procedure is in the DBMS_CAPTURE_ADM package.

For example, the following procedure sets the positive rule set for a capture process named strm01_capture to strm02_rule_set.

BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name  => 'strm01_capture',
    rule_set_name => 'strmadmin.strm02_rule_set');
END;
/
Specifying a Negative Rule Set for a Capture Process

You specify an existing rule set as the negative rule set for an existing capture process using the negative_rule_set_name parameter in the ALTER_CAPTURE procedure. This procedure is in the DBMS_CAPTURE_ADM package.

For example, the following procedure sets the negative rule set for a capture process named strm01_capture to strm03_rule_set.

BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name           => 'strm01_capture',
    negative_rule_set_name => 'strmadmin.strm03_rule_set');
END;
/

Adding Rules to a Rule Set for a Capture Process

To add rules to a rule set for an existing capture process, you can run one of the following procedures in the DBMS_STREAMS_ADM package and specify the existing capture process:

Except for the ADD_SUBSET_RULES procedure, these procedures can add rules to either the positive rule set or the negative rule set for a capture process. The ADD_SUBSET_RULES procedure can add rules only to the positive rule set for a capture process.

Adding Rules to the Positive Rule Set for a Capture Process

The following example runs the ADD_TABLE_RULES procedure in the DBMS_STREAMS_ADM package to add rules to the positive rule set of a capture process named strm01_capture:

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      =>  'hr.departments',
    streams_type    =>  'capture',
    streams_name    =>  'strm01_capture',
    queue_name      =>  'strmadmin.streams_queue',
    include_dml     =>  TRUE,
    include_ddl     =>  TRUE,
    inclusion_rule  =>  TRUE);
END;
/

Running this procedure performs the following actions:

  • Creates two rules. One rule evaluates to TRUE for DML changes to the hr.departments table, and the other rule evaluates to TRUE for DDL changes to the hr.departments table. The rule names are system generated.

  • Adds the two rules to the positive rule set associated with the capture process because the inclusion_rule parameter is set to TRUE.

  • Prepares the hr.departments table for instantiation by running the PREPARE_TABLE_INSTANTIATION procedure in the DBMS_CAPTURE_ADM package.

  • Enables supplemental logging for any primary key, unique key, bitmap index, and foreign key columns in the hr.departments table. When the PREPARE_TABLE_INSTANTIATION procedure is run, the default value (keys) is specified for the supplemental_logging parameter.

If the capture process is performing downstream capture, then the table is prepared for instantiation and supplemental logging is enabled for key columns only if the downstream capture process uses a database link to the source database. If a downstream capture process does not use a database link to the source database, then the table must be prepared for instantiation manually and supplemental logging must be enabled manually.
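In the manual case, a sketch of the preparation at the source database might look like the following, using the default keys setting for supplemental logging described above:

BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name           => 'hr.departments',
    supplemental_logging => 'keys');
END;
/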

Adding Rules to the Negative Rule Set for a Capture Process

The following example runs the ADD_TABLE_RULES procedure in the DBMS_STREAMS_ADM package to add rules to the negative rule set of a capture process named strm01_capture:

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      =>  'hr.job_history',
    streams_type    =>  'capture',
    streams_name    =>  'strm01_capture',
    queue_name      =>  'strmadmin.streams_queue',
    include_dml     =>  TRUE,
    include_ddl     =>  TRUE,
    inclusion_rule  =>  FALSE);
END;
/

Running this procedure performs the following actions:

  • Creates two rules. One rule evaluates to TRUE for DML changes to the hr.job_history table, and the other rule evaluates to TRUE for DDL changes to the hr.job_history table. The rule names are system generated.

  • Adds the two rules to the negative rule set associated with the capture process, because the inclusion_rule parameter is set to FALSE.

Removing a Rule from a Rule Set for a Capture Process

You remove a rule from the rule set for a capture process if you no longer want the capture process to capture the changes specified in the rule. For example, assume that the departments3 rule specifies that DML changes to the hr.departments table be captured. If you no longer want a capture process to capture changes to the hr.departments table, then remove the departments3 rule from its rule set.

You remove a rule from a rule set for an existing capture process by running the REMOVE_RULE procedure in the DBMS_STREAMS_ADM package. For example, the following procedure removes a rule named departments3 from the positive rule set of a capture process named strm01_capture.

BEGIN
  DBMS_STREAMS_ADM.REMOVE_RULE(
    rule_name        => 'departments3',
    streams_type     => 'capture',
    streams_name     => 'strm01_capture',
    drop_unused_rule => TRUE,
    inclusion_rule   => TRUE);
END;
/

In this example, the drop_unused_rule parameter in the REMOVE_RULE procedure is set to TRUE, which is the default setting. Therefore, if the rule being removed is not in any other rule set, then it will be dropped from the database. If the drop_unused_rule parameter is set to FALSE, then the rule is removed from the rule set, but it is not dropped from the database.

If the inclusion_rule parameter is set to FALSE, then the REMOVE_RULE procedure removes the rule from the negative rule set for the capture process, not the positive rule set.

To remove all of the rules in a rule set for the capture process, specify NULL for the rule_name parameter when you run the REMOVE_RULE procedure.
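For example, the following sketch, modeled on the previous example, removes all of the rules in the positive rule set of the strm01_capture capture process:

BEGIN
  DBMS_STREAMS_ADM.REMOVE_RULE(
    rule_name        => NULL,
    streams_type     => 'capture',
    streams_name     => 'strm01_capture',
    drop_unused_rule => TRUE,
    inclusion_rule   => TRUE);
END;
/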

Removing a Rule Set for a Capture Process

You remove a rule set from an existing capture process using the ALTER_CAPTURE procedure in the DBMS_CAPTURE_ADM package. This procedure can remove the positive rule set, negative rule set, or both. Specify TRUE for the remove_rule_set parameter to remove the positive rule set for the capture process. Specify TRUE for the remove_negative_rule_set parameter to remove the negative rule set for the capture process.

For example, the following procedure removes both the positive and negative rule set from a capture process named strm01_capture.

BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name             => 'strm01_capture',
    remove_rule_set          => TRUE,
    remove_negative_rule_set => TRUE);
END;
/

Note:

If a capture process does not have a positive or negative rule set, then the capture process captures all supported changes to all objects in the database, excluding database objects in the SYS, SYSTEM, and CTXSYS schemas.

Setting a Capture Process Parameter

Set a capture process parameter using the SET_PARAMETER procedure in the DBMS_CAPTURE_ADM package. Capture process parameters control the way a capture process operates.

For example, the following procedure sets the parallelism parameter for a capture process named strm01_capture to 4.

BEGIN
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'strm01_capture',
    parameter    => 'parallelism',
    value        => '4');
END;
/

Note:

  • Setting the parallelism parameter automatically stops and restarts a capture process.

  • The value parameter is always entered as a VARCHAR2 value, even if the parameter value is a number.

  • If the value parameter is set to NULL or is not specified, then the parameter is set to its default value.
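
To verify a parameter setting, you can query the DBA_CAPTURE_PARAMETERS view. For example, the following query shows the parameter values for the strm01_capture capture process (capture process names are typically stored in uppercase):

SELECT PARAMETER, VALUE, SET_BY_USER
   FROM DBA_CAPTURE_PARAMETERS
   WHERE CAPTURE_NAME = 'STRM01_CAPTURE';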



See Also:


Setting the Capture User for a Capture Process

The capture user is the user who captures all DML changes and DDL changes that satisfy the capture process rule sets. Set the capture user for a capture process using the capture_user parameter in the ALTER_CAPTURE procedure in the DBMS_CAPTURE_ADM package.

To change the capture user, the user who invokes the ALTER_CAPTURE procedure must be granted the DBA role. Only the SYS user can set the capture_user to SYS.

For example, the following procedure sets the capture user for a capture process named strm01_capture to hr.

BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name => 'strm01_capture',
    capture_user => 'hr');
END;
/

Running this procedure grants the new capture user enqueue privilege on the queue used by the capture process and configures the user as a secure queue user of the queue. In addition, ensure that the capture user has the following privileges:

These privileges can be granted to the capture user directly or through roles.

In addition, the capture user must be granted EXECUTE privilege on all packages, including Oracle-supplied packages, that are invoked in rule-based transformations run by the capture process. These privileges must be granted directly to the capture user. They cannot be granted through roles.
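As a hypothetical illustration, if a rule-based transformation run by the capture process invokes the Oracle-supplied DBMS_LOB package, then a direct grant such as the following would be required for a capture user named hr:

GRANT EXECUTE ON DBMS_LOB TO hr;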


Note:

If Oracle Database Vault is installed, follow the steps outlined in "Oracle Streams and Oracle Data Vault" to ensure the correct privileges and roles have been granted.

Managing the Checkpoint Retention Time for a Capture Process

The checkpoint retention time is the amount of time that a capture process retains checkpoints before purging them automatically.

Set the checkpoint retention time for a capture process using the checkpoint_retention_time parameter in the ALTER_CAPTURE procedure of the DBMS_CAPTURE_ADM package.

This section contains these topics:

Setting the Checkpoint Retention Time for a Capture Process to a New Value

When you set the checkpoint retention time, you can specify partial days with decimal values. For example, run the following procedure to specify that a capture process named strm01_capture should purge checkpoints automatically every ten days and twelve hours:

BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name              => 'strm01_capture',
    checkpoint_retention_time => 10.5);
END;
/
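You can confirm the new setting by querying the DBA_CAPTURE view:

SELECT CAPTURE_NAME, CHECKPOINT_RETENTION_TIME
   FROM DBA_CAPTURE;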

Setting the Checkpoint Retention Time for a Capture Process to Infinite

To specify that a capture process should not purge checkpoints automatically, set the checkpoint retention time to DBMS_CAPTURE_ADM.INFINITE. For example, the following procedure sets the checkpoint retention time for a capture process named strm01_capture to infinite:

BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name              => 'strm01_capture',
    checkpoint_retention_time => DBMS_CAPTURE_ADM.INFINITE);
END;
/

Adding an Archived Redo Log File to a Capture Process Explicitly

You can add an archived redo log file to a capture process manually using the following statement:

ALTER DATABASE REGISTER LOGICAL LOGFILE 
   file_name FOR capture_process;

Here, file_name is the name of the archived redo log file being added, and capture_process is the name of the capture process that will use the redo log file at the downstream database. The capture_process is equivalent to the logminer_session_name and must be specified. The redo log file must be present at the site running the capture process.

For example, to add the /usr/log_files/1_3_486574859.dbf archived redo log file to a capture process named strm03_capture, issue the following statement:

ALTER DATABASE REGISTER LOGICAL LOGFILE '/usr/log_files/1_3_486574859.dbf' 
  FOR 'strm03_capture';

See Also:

Oracle Database SQL Language Reference for more information about the ALTER DATABASE statement and Oracle Data Guard Concepts and Administration for more information about registering redo log files

Setting the First SCN for an Existing Capture Process

You can set the first SCN for an existing capture process.

The specified first SCN must meet the following requirements:

  • It must be greater than the current first SCN for the capture process.

  • It must be less than or equal to the current applied SCN for the capture process. However, this requirement does not apply if the current applied SCN for the capture process is zero.

  • It must be less than or equal to the required checkpoint SCN for the capture process.

You can determine the current first SCN, applied SCN, and required checkpoint SCN for each capture process in a database using the following query:

SELECT CAPTURE_NAME, FIRST_SCN, APPLIED_SCN, REQUIRED_CHECKPOINT_SCN
   FROM DBA_CAPTURE;

When you reset a first SCN for a capture process, information below the new first SCN setting is purged from the LogMiner data dictionary for the capture process automatically. Therefore, after the first SCN is reset for a capture process, the start SCN for the capture process cannot be set lower than the new first SCN. Also, redo log files that contain information before the new first SCN setting will never be needed by the capture process.
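Because such redo log files are never needed, you can query the DBA_LOGMNR_PURGED_LOG data dictionary view to identify them. For example:

SELECT FILE_NAME FROM DBA_LOGMNR_PURGED_LOG;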

For example, the following procedure sets the first SCN for a capture process named strm01_capture to 351232 using the ALTER_CAPTURE procedure in the DBMS_CAPTURE_ADM package:

BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name => 'strm01_capture',
    first_scn    => 351232);
END;
/

Note:

  • If the specified first SCN is higher than the current start SCN for the capture process, then the start SCN is set automatically to the new value of the first SCN.

  • If you must capture changes in the redo log from a point in time in the past, then you can create a capture process and specify a first SCN that corresponds to a previous data dictionary build in the redo log. The BUILD procedure in the DBMS_CAPTURE_ADM package performs a data dictionary build in the redo log.

  • You can query the DBA_LOGMNR_PURGED_LOG data dictionary view to determine which redo log files will never be needed by any capture process.


Setting the Start SCN for an Existing Capture Process

You can set the start SCN for an existing capture process. Typically, you reset the start SCN for a capture process if point-in-time recovery must be performed on one of the destination databases that receive changes from the capture process.

The specified start SCN must be greater than or equal to the first SCN for the capture process. When you reset a start SCN for a capture process, ensure that the required redo log files are available to the capture process.

You can determine the first SCN for each capture process in a database using the following query:

SELECT CAPTURE_NAME, FIRST_SCN FROM DBA_CAPTURE;

For example, to set the start SCN for a capture process named strm01_capture to 750338, complete the following steps:

  1. Stop the capture process. See "Stopping a Capture Process" for instructions.

  2. Run the ALTER_CAPTURE procedure to set the start SCN:

    BEGIN
      DBMS_CAPTURE_ADM.ALTER_CAPTURE(
        capture_name => 'strm01_capture',
        start_scn    => 750338);
    END;
    /
    
  3. Start the capture process. See "Starting a Capture Process" for instructions.


See Also:


Specifying Whether Downstream Capture Uses a Database Link

You specify whether an existing downstream capture process uses a database link to the source database for administrative purposes using the ALTER_CAPTURE procedure in the DBMS_CAPTURE_ADM package. Set the use_database_link parameter to TRUE to specify that the downstream capture process uses a database link, or set it to FALSE to specify that the downstream capture process does not use a database link.

If you want a capture process that is not currently using a database link to begin using one, then specify TRUE for the use_database_link parameter. In this case, a database link with the same name as the global name of the source database must exist at the downstream database.

If you want a capture process that is currently using a database link to stop using it, then specify FALSE for the use_database_link parameter. In this case, some administration must be performed manually after you alter the capture process. For example, if you add new capture process rules using the DBMS_STREAMS_ADM package, then you must prepare the objects relating to the rules for instantiation manually at the source database.

If you specify NULL for the use_database_link parameter, then the current value of this parameter for the capture process is not changed.

To create a database link to the source database dbs1.example.com and specify that this capture process uses the database link, complete the following steps:

  1. In SQL*Plus, connect to the downstream database as the Oracle Streams administrator.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  2. Create the database link to the source database. Ensure that the database link connects to the Oracle Streams administrator at the source database. For example:

    CREATE DATABASE LINK dbs1.example.com CONNECT TO strmadmin 
       IDENTIFIED BY password 
       USING 'dbs1.example.com';
    
  3. Alter the capture process to use the database link. For example:

    BEGIN
      DBMS_CAPTURE_ADM.ALTER_CAPTURE(
        capture_name       => 'strm05_capture',
        use_database_link  => TRUE);
    END;
    /
    

Dropping a Capture Process

You run the DROP_CAPTURE procedure in the DBMS_CAPTURE_ADM package to drop an existing capture process. For example, the following procedure drops a capture process named strm02_capture:

BEGIN
  DBMS_CAPTURE_ADM.DROP_CAPTURE(
    capture_name          => 'strm02_capture',
    drop_unused_rule_sets => TRUE);
END;
/

Because the drop_unused_rule_sets parameter is set to TRUE, this procedure also drops any rule sets used by the strm02_capture capture process, unless a rule set is used by another Oracle Streams client. If the drop_unused_rule_sets parameter is set to TRUE, then both the positive rule set and negative rule set for the capture process might be dropped. If this procedure drops a rule set, then it also drops any rules in the rule set that are not in another rule set.


Note:

The status of a capture process must be DISABLED or ABORTED before it can be dropped. You cannot drop an ENABLED capture process.
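You can check the status of a capture process by querying the DBA_CAPTURE view, and then stop the capture process if necessary before dropping it:

SELECT CAPTURE_NAME, STATUS FROM DBA_CAPTURE;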

Managing a Synchronous Capture

A synchronous capture uses an internal mechanism to capture data manipulation language (DML) changes made to tables. A synchronous capture reformats each captured change into a logical change record (LCR), and enqueues the LCR into an ANYDATA queue.

This section contains these topics:


See Also:


Managing the Rule Set for a Synchronous Capture

This section contains instructions for completing the following tasks:

Specifying a Rule Set for a Synchronous Capture

You can specify one positive rule set for a synchronous capture. The synchronous capture captures a change if it evaluates to TRUE for at least one rule in the positive rule set.

You specify an existing rule set as the positive rule set for an existing synchronous capture using the rule_set_name parameter in the ALTER_SYNC_CAPTURE procedure. This procedure is in the DBMS_CAPTURE_ADM package.

For example, the following procedure sets the positive rule set for a synchronous capture named sync_capture to sync_rule_set.

BEGIN
  DBMS_CAPTURE_ADM.ALTER_SYNC_CAPTURE(
    capture_name  => 'sync_capture',
    rule_set_name => 'strmadmin.sync_rule_set');
END;
/

Note:

You cannot remove the rule set for a synchronous capture.

Adding Rules to a Rule Set for a Synchronous Capture

To add rules to a rule set for an existing synchronous capture, you can run one of the following procedures in the DBMS_STREAMS_ADM package and specify the existing synchronous capture:

The following example runs the ADD_TABLE_RULES procedure in the DBMS_STREAMS_ADM package to add rules to the positive rule set of a synchronous capture named sync_capture:

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      =>  'hr.departments',
    streams_type    =>  'sync_capture',
    streams_name    =>  'sync_capture',
    queue_name      =>  'strmadmin.streams_queue',
    include_dml     =>  TRUE);
END;
/

Running this procedure performs the following actions:

  • Creates one rule that evaluates to TRUE for DML changes to the hr.departments table. The rule name is system generated.

  • Adds the rule to the positive rule set associated with the synchronous capture.

  • Prepares the hr.departments table for instantiation by running the PREPARE_SYNC_INSTANTIATION function in the DBMS_CAPTURE_ADM package.


Note:

  • A synchronous capture captures changes to a table only if the ADD_TABLE_RULES or ADD_SUBSET_RULES procedure was used to add the rule or rules for the table to the synchronous capture rule set. Synchronous capture does not capture changes to a table if a table or subset rule is added to its rule set using the ADD_RULE procedure in the DBMS_RULE_ADM package. In addition, a synchronous capture ignores all non-table and non-subset rules in its rule set, including global and schema rules.

  • When the ADD_TABLE_RULES or the ADD_SUBSET_RULES procedure adds rules to a synchronous capture rule set, the procedure must obtain an exclusive lock on the specified table. If there are outstanding transactions on the specified table, then the procedure waits until it can obtain a lock.


Removing a Rule from a Rule Set for a Synchronous Capture

You remove a rule from the rule set for a synchronous capture if you no longer want the synchronous capture to capture the changes specified in the rule. For example, assume that the departments3 rule specifies that DML changes to the hr.departments table be captured. If you no longer want a synchronous capture to capture changes to the hr.departments table, then remove the departments3 rule from its rule set.

You remove a rule from a rule set for an existing synchronous capture by running the REMOVE_RULE procedure in the DBMS_STREAMS_ADM package. For example, the following procedure removes a rule named departments3 from the positive rule set of a synchronous capture named sync_capture.

BEGIN
  DBMS_STREAMS_ADM.REMOVE_RULE(
    rule_name        => 'departments3',
    streams_type     => 'sync_capture',
    streams_name     => 'sync_capture',
    drop_unused_rule => TRUE);
END;
/

In this example, the drop_unused_rule parameter in the REMOVE_RULE procedure is set to TRUE, which is the default setting. Therefore, if the rule being removed is not in any other rule set, then it will be dropped from the database. If the drop_unused_rule parameter is set to FALSE, then the rule is removed from the rule set, but it is not dropped from the database.

To remove all of the rules in a rule set for the synchronous capture, specify NULL for the rule_name parameter when you run the REMOVE_RULE procedure.

Setting the Capture User for a Synchronous Capture

The capture user is the user who captures all DML changes that satisfy the synchronous capture rule set. Set the capture user for a synchronous capture using the capture_user parameter in the ALTER_SYNC_CAPTURE procedure in the DBMS_CAPTURE_ADM package.

To change the capture user, the user who invokes the ALTER_SYNC_CAPTURE procedure must be granted the DBA role. Only the SYS user can set the capture_user to SYS.

For example, the following procedure sets the capture user for a synchronous capture named sync_capture to hr.

BEGIN
  DBMS_CAPTURE_ADM.ALTER_SYNC_CAPTURE(
    capture_name => 'sync_capture',
    capture_user => 'hr');
END;
/

Running this procedure grants the new capture user enqueue privilege on the queue used by the synchronous capture and configures the user as a secure queue user of the queue. In addition, ensure that the capture user has the following privileges:

These privileges can be granted to the capture user directly or through roles.

In addition, the capture user must be granted EXECUTE privilege on all packages, including Oracle-supplied packages, that are invoked in rule-based transformations run by the synchronous capture. These privileges must be granted directly to the capture user. They cannot be granted through roles.


Note:

If Oracle Database Vault is installed, follow the steps outlined in "Oracle Streams and Oracle Data Vault" to ensure the correct privileges and roles have been granted.

Dropping a Synchronous Capture

You run the DROP_CAPTURE procedure in the DBMS_CAPTURE_ADM package to drop an existing synchronous capture. For example, the following procedure drops a synchronous capture named sync_capture:

BEGIN
  DBMS_CAPTURE_ADM.DROP_CAPTURE(
    capture_name          => 'sync_capture',
    drop_unused_rule_sets => TRUE);
END;
/

Because the drop_unused_rule_sets parameter is set to TRUE, this procedure also drops any rule sets used by the sync_capture synchronous capture, unless a rule set is used by another Oracle Streams client. If the drop_unused_rule_sets parameter is set to TRUE, then the rule set for the synchronous capture might be dropped. If this procedure drops a rule set, then it also drops any rules in the rule set that are not in another rule set.

Managing Extra Attributes in Captured LCRs

You can use the INCLUDE_EXTRA_ATTRIBUTE procedure in the DBMS_CAPTURE_ADM package to instruct a capture process or a synchronous capture to capture one or more extra attributes. You can also use this procedure to instruct a capture process or synchronous capture to exclude an extra attribute that it is capturing currently.

The extra attributes are the following:

This section contains instructions for completing the following tasks:

Including Extra Attributes in Implicitly Captured LCRs

To include an extra attribute in the LCRs captured by a capture process or synchronous capture, run the INCLUDE_EXTRA_ATTRIBUTE procedure, and set the include parameter to TRUE. For example, to instruct a capture process or synchronous capture named strm01_capture to include the transaction name in each LCR that it captures, run the following procedure:

BEGIN
  DBMS_CAPTURE_ADM.INCLUDE_EXTRA_ATTRIBUTE(
    capture_name   => 'strm01_capture',
    attribute_name => 'tx_name',
    include        => TRUE);
END;
/

Excluding Extra Attributes from Implicitly Captured LCRs

To exclude an extra attribute from the LCRs captured by a capture process or synchronous capture, run the INCLUDE_EXTRA_ATTRIBUTE procedure, and set the include parameter to FALSE. For example, to instruct a capture process or synchronous capture named strm01_capture to exclude the transaction name from each LCR that it captures, run the following procedure:

BEGIN
  DBMS_CAPTURE_ADM.INCLUDE_EXTRA_ATTRIBUTE(
    capture_name   => 'strm01_capture',
    attribute_name => 'tx_name',
    include        => FALSE);
END;
/
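Whether you are including or excluding attributes, you can query the DBA_CAPTURE_EXTRA_ATTRIBUTES view to see the current setting for each extra attribute:

SELECT CAPTURE_NAME, ATTRIBUTE_NAME, INCLUDE
   FROM DBA_CAPTURE_EXTRA_ATTRIBUTES;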

Switching From a Capture Process to a Synchronous Capture

This section describes how to switch from a capture process to a synchronous capture. Typically, a synchronous capture is used to capture data manipulation language (DML) changes to a relatively small number of tables. You might decide to make this switch if you are currently capturing changes to a small number of tables with a capture process instead of a synchronous capture.

You should not switch from a capture process to a synchronous capture if any of the following conditions are true:

This section uses an example to describe how to switch from a capture process to a synchronous capture. Table 15-1 shows the Oracle Streams components in the sample environment before the switch and after the switch.

Table 15-1 Sample Switch From a Capture Process to a Synchronous Capture

Oracle Streams Component        Before Switch      After Switch
------------------------------  -----------------  -----------------
Capture Process                 cap_proc           None
Capture Process Rule Set        cap_rules          None
Synchronous Capture             None               sync_cap
Synchronous Capture Rule Set    None               cap_rules
Propagation                     cap_proc_prop      sync_cap_prop
Propagation Rule Set            prop_rules         prop_rules
Source Queue                    cap_proc_source    sync_cap_source
Destination Queue               cap_proc_dest      sync_cap_dest
Apply Process                   apply_cap_proc     apply_sync_cap
Apply Process Rule Set          apply_rules        apply_rules


In Table 15-1, notice that the Oracle Streams environment uses the same rule sets before the switch and after the switch. Also, for the example in this section, assume that the source database is db1.example.com and the destination database is db2.example.com.


Note:

The example in this section assumes that the Oracle Streams environment only involves two databases. If you are using a directed network to send changes through multiple databases, then you might need to configure additional propagations and queues for the new synchronous capture stream of changes, and you might need to drop additional propagations and queues that were used by the capture process stream.

To switch from a capture process to a synchronous capture, complete the following steps:

  1. In SQL*Plus, log in to the source database as the Oracle Streams administrator.

    This example assumes that the Oracle Streams administrator is strmadmin at each database. See Oracle Streams Replication Administrator's Guide for information about creating an Oracle Streams administrator.

  2. Stop the capture process.

    In this example, run the following procedure:

    BEGIN
      DBMS_CAPTURE_ADM.STOP_CAPTURE(
        capture_name => 'cap_proc');
    END;
    /
    
  3. In SQL*Plus, log in to the destination database as the Oracle Streams administrator.

  4. Create a commit-time queue for the apply process that will apply the changes that were captured by the synchronous capture.

    In this example, run the following procedure:

    BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'strmadmin.sync_cap_dest_qt',
        queue_name  => 'strmadmin.sync_cap_dest');
    END;
    /
    
  5. Create an apply process that applies the changes in the queue created in Step 4. Ensure that the apply_captured parameter is set to FALSE. Also, ensure that the rule_set_name parameter specifies the rule set used by the existing apply process.

    In this example, run the following procedure:

    BEGIN
      DBMS_APPLY_ADM.CREATE_APPLY(
        queue_name     => 'strmadmin.sync_cap_dest',
        apply_name     => 'apply_sync_cap',
        rule_set_name  => 'strmadmin.apply_rules',
        apply_captured => FALSE);
    END;
    /
    

    Ensure that the apply process is configured properly for your environment. Specifically, ensure that the new apply process is configured properly regarding the following items:

    • Apply user

    • Apply handlers

    • Apply tag

    If appropriate, then ensure that the new apply process is configured in the same way as the existing apply process regarding these items.

    See Oracle Streams Replication Administrator's Guide for information about creating an apply process.

  6. In SQL*Plus, log in to the source database as the Oracle Streams administrator.

  7. Create a commit-time queue for the synchronous capture.

    In this example, run the following procedure:

    BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'strmadmin.sync_cap_source_qt',
        queue_name  => 'strmadmin.sync_cap_source');
    END;
    /
    

    See Oracle Streams Replication Administrator's Guide for information about configuring queues.

  8. Create a propagation that sends changes from the queue created in Step 7 to the queue created in Step 4. Ensure that the rule_set_name parameter specifies the rule set used by the existing propagation.

    In this example, run the following procedure:

    BEGIN
      DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
        propagation_name   => 'sync_cap_prop',
        source_queue       => 'strmadmin.sync_cap_source',
        destination_queue  => 'strmadmin.sync_cap_dest',
        destination_dblink => 'db2.example.com',
        rule_set_name      => 'strmadmin.prop_rules');
    END;
    /
    

    See Oracle Streams Replication Administrator's Guide for information about creating propagations.

  9. Create a synchronous capture. Ensure that the queue_name parameter specifies the queue created in Step 7. Also, ensure that the rule_set_name parameter specifies the rule set used by the existing capture process.

    In this example, run the following procedure:

    BEGIN
      DBMS_CAPTURE_ADM.CREATE_SYNC_CAPTURE(
        queue_name    => 'strmadmin.sync_cap_source',
        capture_name  => 'sync_cap',
        rule_set_name => 'strmadmin.cap_rules');
    END;
    /
    

    The specified rule set must only contain rules that were created using the ADD_TABLE_RULES and ADD_SUBSET_RULES procedures in the DBMS_STREAMS_ADM package. If the current capture process rule set contains other types of rules, then create a rule set for the synchronous capture and use the ADD_TABLE_RULES and ADD_SUBSET_RULES procedures to add rules to the new rule set.

    In addition, a synchronous capture cannot have a negative rule set. If the current capture process has a negative rule set, and you want the synchronous capture to behave the same as the capture process, then add rules to the positive synchronous capture rule set that result in the same behavior.

    If the existing capture process uses a capture user that is not the Oracle Streams administrator, then ensure that you use the capture_user parameter in the CREATE_SYNC_CAPTURE procedure to specify the correct capture user for the new synchronous capture.

    See Oracle Streams Replication Administrator's Guide for information about configuring synchronous capture.

  10. Verify that the tables that are configured for synchronous capture are the same as the ones configured for the existing capture process by running the following query:

    SELECT * FROM DBA_SYNC_CAPTURE_TABLES ORDER BY TABLE_OWNER, TABLE_NAME;
    

    If any table is missing or not enabled, then use the ADD_TABLE_RULES or ADD_SUBSET_RULES procedure to add the table.

  11. Prepare the replicated tables for instantiation. The replicated tables are the tables for which the synchronous capture captures changes.

    For example, if the synchronous capture captures changes to the hr.employees and hr.departments tables, then run the following function:

    SET SERVEROUTPUT ON
    DECLARE
      tables       DBMS_UTILITY.UNCL_ARRAY;
      prepare_scn  NUMBER;
    BEGIN
      tables(1) := 'hr.departments';
      tables(2) := 'hr.employees';
      prepare_scn := DBMS_CAPTURE_ADM.PREPARE_SYNC_INSTANTIATION(
                        table_names => tables);
      DBMS_OUTPUT.PUT_LINE('Prepare SCN = ' || prepare_scn);
    END;
    /
    

    The returned prepare system change number (SCN) is used in Steps 13, 17, and 18. This example assumes that the prepare SCN is 2700000.

    All of the replicated tables must be included in one call to the PREPARE_SYNC_INSTANTIATION function.

    See Oracle Streams Replication Administrator's Guide for more information about preparing database objects for instantiation.

  12. In SQL*Plus, log in to the destination database as the Oracle Streams administrator.

  13. Set the apply process that applies changes from the capture process to stop applying changes when it reaches the SCN returned in Step 11 plus 1.

    For example, if the prepare SCN is 2700000, then run the following procedure to set the maximum_scn parameter to 2700001 (2700000 + 1):

    BEGIN
      DBMS_APPLY_ADM.SET_PARAMETER(
        apply_name   => 'apply_cap_proc',
        parameter    => 'maximum_scn',
        value        => '2700001');
    END;
    /
    
  14. In SQL*Plus, log in to the source database as the Oracle Streams administrator.

  15. Start the capture process that you stopped in Step 2.

    In this example, run the following procedure:

    BEGIN
      DBMS_CAPTURE_ADM.START_CAPTURE(
        capture_name => 'cap_proc');
    END;
    /
    
  16. In SQL*Plus, log in to the destination database as the Oracle Streams administrator.

  17. Wait until the apply process that applies changes that were captured by the capture process has reached the SCN specified in Step 13. When this event occurs, the apply process is automatically disabled with error ORA-26717 to indicate that the SCN limit has been reached.

    To determine if the apply process has reached this point, query the DBA_APPLY view. In this example, run the following query:

    SELECT 1 FROM DBA_APPLY 
       WHERE STATUS       = 'DISABLED' AND
             ERROR_NUMBER = 26717 AND 
             APPLY_NAME   = 'APPLY_CAP_PROC';
    

    Do not proceed to the next step until this query returns a row.

  18. Set the instantiation SCN for the replicated tables to the SCN value returned in Step 11.

    In this example, run the following procedures:

    BEGIN
      DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
        source_object_name    => 'hr.employees',
        source_database_name  => 'db1.example.com',
        instantiation_scn     => 2700000);
    END;
    /
    
    BEGIN
      DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
        source_object_name    => 'hr.departments',
        source_database_name  => 'db1.example.com',
        instantiation_scn     => 2700000);
    END;
    /
    

    See Oracle Streams Replication Administrator's Guide for more information about setting the instantiation SCN.

  19. Start the apply process that you created in Step 5.

    In this example, run the following procedure:

    BEGIN
      DBMS_APPLY_ADM.START_APPLY(
        apply_name => 'apply_sync_cap');
    END;
    /
    
  20. Drop the apply process that applied changes that were captured by the capture process.

    In this example, run the following procedure:

    BEGIN
      DBMS_APPLY_ADM.DROP_APPLY(
        apply_name => 'apply_cap_proc');
    END;
    /
    
  21. If it is no longer needed, then drop the queue that was used by the apply process that you dropped in Step 20.

    In this example, run the following procedure:

    BEGIN
      DBMS_STREAMS_ADM.REMOVE_QUEUE(
        queue_name  => 'strmadmin.cap_proc_dest',
        drop_unused_queue_table => TRUE);
    END;
    /
    
  22. In SQL*Plus, log in to the source database as the Oracle Streams administrator.

  23. Stop the capture process.

    In this example, run the following procedure:

    BEGIN
      DBMS_CAPTURE_ADM.STOP_CAPTURE(
        capture_name => 'cap_proc');
    END;
    /
    
  24. Drop the capture process.

    In this example, run the following procedure:

    BEGIN
      DBMS_CAPTURE_ADM.DROP_CAPTURE(
        capture_name => 'cap_proc');
    END;
    /
    
  25. Drop the propagation that sent changes that were captured by the capture process.

    In this example, run the following procedure:

    BEGIN
      DBMS_PROPAGATION_ADM.DROP_PROPAGATION(
        propagation_name => 'cap_proc_prop');
    END;
    /
    
  26. If it is no longer needed, then drop the queue that was used by the capture process and propagation that you dropped in Steps 24 and 25.

    In this example, run the following procedure:

    BEGIN
      DBMS_STREAMS_ADM.REMOVE_QUEUE(
        queue_name  => 'strmadmin.cap_proc_source',
        drop_unused_queue_table => TRUE);
    END;
    /
    

If you have a bi-directional replication environment, then you can perform these steps independently to switch from a capture process to synchronous capture in both directions.

Switching from a Synchronous Capture to a Capture Process

This section describes how to switch from a synchronous capture to a capture process. You might decide to make this switch for one or more of the following reasons:

This section uses an example to describe how to switch from a synchronous capture to a capture process. Table 15-2 shows the Oracle Streams components in the sample environment before the switch and after the switch.

Table 15-2 Sample Switch From a Synchronous Capture to a Capture Process

Oracle Streams Component        Before Switch      After Switch
------------------------------  -----------------  -----------------
Synchronous Capture             sync_cap           None
Synchronous Capture Rule Set    cap_rules          None
Capture Process                 None               cap_proc
Capture Process Rule Set        None               cap_rules
Propagation                     sync_cap_prop      cap_proc_prop
Propagation Rule Set            prop_rules         prop_rules
Source Queue                    sync_cap_source    cap_proc_source
Destination Queue               sync_cap_dest      cap_proc_dest
Apply Process                   apply_sync_cap     apply_cap_proc
Apply Process Rule Set          apply_rules        apply_rules


In Table 15-2, notice that the Oracle Streams environment uses the same rule sets before the switch and after the switch. Also, for the example in this section, assume that the source database is db1.example.com and the destination database is db2.example.com.


Note:

The example in this section assumes that the Oracle Streams environment only involves two databases. If you are using a directed network to send changes through multiple databases, then you might need to configure additional propagations and queues for the new capture process stream of changes, and you might need to drop additional propagations and queues that were used by the synchronous capture stream.

To switch from a synchronous capture to a capture process, complete the following steps:

  1. Ensure that the source database is running in ARCHIVELOG mode. See "ARCHIVELOG Mode and a Capture Process" and Oracle Database Administrator's Guide for more information.

  2. In SQL*Plus, log in to the destination database as the Oracle Streams administrator.

    This example assumes that the Oracle Streams administrator is strmadmin at each database. See Oracle Streams Replication Administrator's Guide for information about creating an Oracle Streams administrator.

  3. Create the queue for the apply process that will apply the changes that were captured by the capture process.

    In this example, run the following procedure:

    BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'strmadmin.cap_proc_dest_qt',
        queue_name  => 'strmadmin.cap_proc_dest');
    END;
    /
    

    See Oracle Streams Replication Administrator's Guide for information about configuring queues.

  4. Create an apply process that applies the changes in the queue created in Step 3. Ensure that the apply_captured parameter is set to TRUE. Also, ensure that the rule_set_name parameter specifies the rule set used by the existing apply process.

    In this example, run the following procedure:

    BEGIN
      DBMS_APPLY_ADM.CREATE_APPLY(
        queue_name     => 'strmadmin.cap_proc_dest',
        apply_name     => 'apply_cap_proc',
        rule_set_name  => 'strmadmin.apply_rules',
        apply_captured => TRUE);
    END;
    /
    

    Ensure that the apply process is configured properly for your environment. Specifically, ensure that the new apply process is configured properly regarding the following items:

    • Apply user

    • Apply handlers

    • Apply tag

    If appropriate, then ensure that the new apply process is configured in the same way as the existing apply process regarding these items.

    See Oracle Streams Replication Administrator's Guide for information about creating an apply process.

  5. Stop the apply process that applies changes captured by the synchronous capture.

    In this example, run the following procedure:

    BEGIN
      DBMS_APPLY_ADM.STOP_APPLY(
        apply_name => 'apply_sync_cap');
    END;
    /
    
  6. In SQL*Plus, log in to the source database as the Oracle Streams administrator.

  7. Create the queue for the capture process.

    In this example, run the following procedure:

    BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'strmadmin.cap_proc_source_qt',
        queue_name  => 'strmadmin.cap_proc_source');
    END;
    /
    
  8. Create a propagation that sends changes from the queue created in Step 7 to the queue created in Step 3. Ensure that the rule_set_name parameter specifies the rule set used by the existing propagation.

    In this example, run the following procedure:

    BEGIN
      DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
        propagation_name   => 'cap_proc_prop',
        source_queue       => 'strmadmin.cap_proc_source',
        destination_queue  => 'strmadmin.cap_proc_dest',
        destination_dblink => 'db2.example.com',
        rule_set_name      => 'strmadmin.prop_rules');
    END;
    /
    

    See Oracle Streams Replication Administrator's Guide for information about creating propagations.

  9. Create a capture process. Ensure that the parameters are set properly in the CREATE_CAPTURE procedure:

    • Set the queue_name parameter to the queue created in Step 7.

    • Set the rule_set_name parameter to the rule set used by the existing synchronous capture.

    • If the existing synchronous capture uses a capture user that is not the Oracle Streams administrator, then set the capture_user parameter to the correct capture user for the new capture process.

    In this example, run the following procedure:

    BEGIN
      DBMS_CAPTURE_ADM.CREATE_CAPTURE(
        queue_name    => 'strmadmin.cap_proc_source',
        capture_name  => 'cap_proc',
        rule_set_name => 'strmadmin.cap_rules');
    END;
    /
    

    See Oracle Streams Replication Administrator's Guide for more information about configuring a capture process.

  10. Prepare the replicated tables for instantiation. The replicated tables are the tables for which the capture process captures changes.

    For example, if the capture process captures changes to the hr.employees and hr.departments tables, then run the following procedures:

    BEGIN
      DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
        table_name           => 'hr.employees',
        supplemental_logging => 'keys');
    END;
    /
    
    BEGIN
      DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
        table_name           => 'hr.departments',
        supplemental_logging => 'keys');
    END;
    /
    

    See Oracle Streams Replication Administrator's Guide for more information about preparing database objects for instantiation.

  11. Lock all of the replicated tables in SHARE MODE.

    In this example, run the following SQL statement:

    LOCK TABLE hr.employees, hr.departments IN SHARE MODE;
    
  12. Determine the current system change number (SCN) by running the following query:

    SELECT CURRENT_SCN FROM V$DATABASE;
    

    The returned switch SCN is used in Steps 15 and 18. This example assumes that the switch SCN is 2700000.

  13. Run a COMMIT statement to release the lock on the replicated tables:

    COMMIT;
    
  14. In SQL*Plus, log in to the destination database as the Oracle Streams administrator.

  15. Set the apply process that applies changes from the synchronous capture to stop applying changes when it reaches the SCN returned in Step 12 plus 1.

    For example, if the switch SCN is 2700000, then run the following procedure to set the maximum_scn parameter to 2700001 (2700000 + 1):

    BEGIN
      DBMS_APPLY_ADM.SET_PARAMETER(
        apply_name   => 'apply_sync_cap',
        parameter    => 'maximum_scn',
        value        => '2700001');
    END;
    /
    
  16. Start the apply process that applies changes from the synchronous capture.

    In this example, run the following procedure:

    BEGIN
      DBMS_APPLY_ADM.START_APPLY(
        apply_name => 'apply_sync_cap');
    END;
    /
    
  17. Wait until the apply process that applies changes that were captured by the synchronous capture has reached the SCN specified in Step 15. When this event occurs, the apply process is automatically disabled with error ORA-26717 to indicate that the SCN limit has been reached.

    To determine if the apply process has reached this point, query the DBA_APPLY view. In this example, run the following query:

    SELECT 1 FROM DBA_APPLY 
       WHERE STATUS       = 'DISABLED' AND
             ERROR_NUMBER = 26717 AND 
             APPLY_NAME   = 'APPLY_SYNC_CAP';
    

    Do not proceed to the next step until this query returns a row.

  18. Set the instantiation SCN for the replicated tables to the SCN value returned in Step 12.

    In this example, run the following procedures:

    BEGIN
      DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
        source_object_name    => 'hr.employees',
        source_database_name  => 'db1.example.com',
        instantiation_scn     => 2700000);
    END;
    /
    
    BEGIN
      DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
        source_object_name    => 'hr.departments',
        source_database_name  => 'db1.example.com',
        instantiation_scn     => 2700000);
    END;
    /
    

    See Oracle Streams Replication Administrator's Guide for more information about setting the instantiation SCN.

  19. Start the apply process that you created in Step 4.

    In this example, run the following procedure:

    BEGIN
      DBMS_APPLY_ADM.START_APPLY(
        apply_name => 'apply_cap_proc');
    END;
    /
    
  20. Drop the apply process that applied changes that were captured by the synchronous capture.

    In this example, run the following procedure:

    BEGIN
      DBMS_APPLY_ADM.DROP_APPLY(
        apply_name => 'apply_sync_cap');
    END;
    /
    
  21. If it is no longer needed, then drop the queue that was used by the apply process that you dropped in Step 20.

    In this example, run the following procedure:

    BEGIN
      DBMS_STREAMS_ADM.REMOVE_QUEUE(
        queue_name              => 'strmadmin.sync_cap_dest',
        drop_unused_queue_table => TRUE);
    END;
    /
    
  22. In SQL*Plus, log in to the source database as the Oracle Streams administrator.

  23. Start the capture process that you created in Step 9.

    In this example, run the following procedure:

    BEGIN
      DBMS_CAPTURE_ADM.START_CAPTURE(
        capture_name => 'cap_proc');
    END;
    /
    
  24. Drop the synchronous capture.

    In this example, run the following procedure:

    BEGIN
      DBMS_CAPTURE_ADM.DROP_CAPTURE(
        capture_name => 'sync_cap');
    END;
    /
    
  25. Drop the propagation that sent changes that were captured by the synchronous capture.

    In this example, run the following procedure:

    BEGIN
      DBMS_PROPAGATION_ADM.DROP_PROPAGATION(
        propagation_name => 'sync_cap_prop');
    END;
    /
    
  26. If it is no longer needed, then drop the queue that was used by the synchronous capture and propagation that you dropped in Steps 24 and 25.

    In this example, run the following procedure:

    BEGIN
      DBMS_STREAMS_ADM.REMOVE_QUEUE(
        queue_name              => 'strmadmin.sync_cap_source',
        drop_unused_queue_table => TRUE);
    END;
    /
    

If you have a bi-directional replication environment, then you can perform these steps independently to switch from a synchronous capture to a capture process in both directions.


9 Advanced Propagation Concepts

The following topics contain conceptual information about staging messages in queues and propagating messages from one queue to another:


See Also:


Propagation Jobs

An Oracle Streams propagation is configured internally using Oracle Scheduler. Therefore, a propagation job is a job that propagates messages from a source queue to a destination queue. Like other Oracle Scheduler jobs, propagation jobs have an owner, and they use slave processes (jnnn) as needed to execute jobs.

The following procedures can create a propagation job when they create a propagation:

When one of these procedures creates a propagation, a new propagation job is created in the following cases:

This section contains the following topics:


Note:

The source queue owner performs the propagation, but the propagation job is owned by the user who creates it. These two users might or might not be the same.


See Also:


Propagation Scheduling and Oracle Streams Propagations

A propagation schedule specifies how often a propagation job propagates messages from a source queue to a destination queue. Each queue-to-queue propagation has its own propagation job and propagation schedule, but queue-to-dblink propagations that use the same propagation job have the same propagation schedule.

A default propagation schedule is established when a new propagation job is created by a procedure in the DBMS_STREAMS_ADM or DBMS_PROPAGATION_ADM package.

The default schedule has the following properties:

  • The start time is SYSDATE().

  • The duration is NULL, which means infinite.

  • The next time is NULL, which means that propagation restarts as soon as it finishes the current duration.

  • The latency is three seconds, which is the wait time after a queue becomes empty to resubmit the propagation job. Therefore, the latency is the maximum wait, in seconds, in the propagation window for a message to be propagated after it is enqueued.

You can alter the schedule for a propagation job using the ALTER_PROPAGATION_SCHEDULE procedure in the DBMS_AQADM package. Changes made to a propagation job affect all propagations that use the propagation job.
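
For example, the following call reduces the latency of a propagation schedule to five seconds. The queue name and database link shown here are assumptions carried over from the sample environment used earlier in this book; substitute the source queue and destination database link for your own propagation job:

    BEGIN
      DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE(
        queue_name  => 'strmadmin.cap_proc_source',  -- assumed source queue
        destination => 'db2.example.com',            -- assumed database link
        latency     => 5);
    END;
    /

A lower latency reduces the maximum wait before an enqueued message is propagated, at the cost of resubmitting the propagation job more frequently.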

Propagation Jobs and RESTRICTED SESSION

When the restricted session is enabled during system startup by issuing a STARTUP RESTRICT statement, propagation jobs with enabled propagation schedules do not propagate messages. When the restricted session is disabled, each propagation schedule that is enabled and ready to run will run when there is an available slave process.

When the restricted session is enabled in a running database by the SQL statement ALTER SYSTEM ENABLE RESTRICTED SESSION, any running propagation job continues to run to completion. However, any new propagation job submitted for a propagation schedule is not started. Therefore, propagation for an enabled schedule can eventually come to a halt.

Oracle Streams Data Dictionary for Propagations

When a database object is prepared for instantiation at a source database, an Oracle Streams data dictionary is populated automatically at the database where changes to the object are captured by a capture process. The Oracle Streams data dictionary is a multiversioned copy of some of the information in the primary data dictionary at a source database. The Oracle Streams data dictionary maps object numbers, object version information, and internal column numbers from the source database into table names, column names, and column data types. This mapping keeps each captured LCR as small as possible, because the message can store numbers rather than names internally.

The mapping information in the Oracle Streams data dictionary at the source database is needed to evaluate rules at any database that propagates the captured LCRs from the source database. To make this mapping information available to a propagation, Oracle automatically populates a multiversioned Oracle Streams data dictionary at each database that has an Oracle Streams propagation. Oracle automatically sends internal messages that contain relevant information from the Oracle Streams data dictionary at the source database to all other databases that receive captured LCRs from the source database.

The Oracle Streams data dictionary information contained in these internal messages in a queue might or might not be propagated by a propagation. Which Oracle Streams data dictionary information to propagate depends on the rule sets for the propagation. When a propagation encounters Oracle Streams data dictionary information for a table, the propagation rule sets are evaluated with partial information that includes the source database name, table name, and table owner. If the partial rule evaluation of these rule sets determines that there might be relevant LCRs for the given table from the specified database, then the Oracle Streams data dictionary information for the table is propagated.

When Oracle Streams data dictionary information is propagated to a destination queue, it is incorporated into the Oracle Streams data dictionary at the database that contains the destination queue, in addition to being enqueued into the destination queue. Therefore, a propagation reading the destination queue in a directed networks configuration can forward LCRs immediately without waiting for the Oracle Streams data dictionary to be populated. In this way, the Oracle Streams data dictionary for a source database always reflects the correct state of the relevant database objects for the LCRs relating to these database objects.

Binary File Propagation

You can propagate a binary file between databases by using Oracle Streams. To do so, you put one or more BFILE attributes in a message payload and then propagate the message to a remote queue. Each BFILE referenced in the payload is transferred to the remote database after the message is propagated, but before the message propagation is committed. The directory object and filename of each propagated BFILE are preserved, but you can map the directory object to different directories on the source and destination databases. The message payload can be a BFILE wrapped in an ANYDATA payload, or the message payload can be one or more BFILE attributes of an object wrapped in an ANYDATA payload.

The following are not supported in a message payload:

Propagating a BFILE in Oracle Streams has the same restrictions as the procedure DBMS_FILE_TRANSFER.PUT_FILE.


See Also:

Oracle Database Administrator's Guide, and Oracle Database PL/SQL Packages and Types Reference for more information about transferring files with the DBMS_FILE_TRANSFER package


6 Rule-Based Transformations

A rule-based transformation is any modification to a message that results when a rule in a positive rule set evaluates to TRUE. There are two types of rule-based transformations: declarative and custom.

The following topics contain information about rule-based transformations:

Declarative Rule-Based Transformations

Declarative rule-based transformations cover a set of common transformation scenarios for row LCRs.

You specify (or declare) such a transformation using one of the following procedures in the DBMS_STREAMS_ADM package:

When you specify a declarative rule-based transformation, you specify the rule that is associated with it. When the specified rule evaluates to TRUE for a row LCR, Oracle Streams performs the declarative transformation internally on the row LCR, without invoking PL/SQL.
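
For example, a declarative rename-table transformation can be added to a rule with the RENAME_TABLE procedure in the DBMS_STREAMS_ADM package. The rule name below is an assumption for illustration; specify an existing DML rule in your environment:

    BEGIN
      DBMS_STREAMS_ADM.RENAME_TABLE(
        rule_name       => 'strmadmin.employees12',  -- assumed existing DML rule
        from_table_name => 'hr.employees',
        to_table_name   => 'hr.emp',
        step_number     => 0,
        operation       => 'ADD');
    END;
    /

When the specified rule evaluates to TRUE for a row LCR, the table name in the LCR is changed from hr.employees to hr.emp.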

Declarative rule-based transformations provide the following advantages:


Note:

Declarative rule-based transformations can transform row LCRs only. These row LCRs can be captured LCRs or persistent LCRs. Therefore, a DML rule must be specified when you run one of the procedures to add a declarative transformation. If a DDL rule is specified, then an error is raised.

Custom Rule-Based Transformations

Custom rule-based transformations require a user-defined PL/SQL function to perform the transformation. The function takes as input an ANYDATA object containing a message and returns either an ANYDATA object containing the transformed message or an array that contains zero or more ANYDATA encapsulations of a message. A custom rule-based transformation function that returns one message is a one-to-one transformation function. A custom rule-based transformation function that can return more than one message in an array is a one-to-many transformation function. One-to-one transformation functions are supported for any type of Oracle Streams client, but one-to-many transformation functions are supported only for Oracle Streams capture processes and synchronous captures.

To specify a custom rule-based transformation, use the DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION procedure. You can use a custom rule-based transformation to modify captured LCRs, persistent LCRs, and persistent user messages.
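A call to this procedure might look like the following sketch, where the rule name and the transformation function name are assumed for illustration:

BEGIN
  DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION(
    rule_name          => 'strmadmin.customers_rule',      -- assumed existing rule
    transform_function => 'strmadmin.number_to_varchar2'); -- assumed PL/SQL function
END;
/

The transform_function parameter names a PL/SQL function that takes an ANYDATA object as input and returns an ANYDATA object.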

For example, a custom rule-based transformation can be used when the data type of a particular column in a table is different at two different databases. The column might be a NUMBER column in the source database and a VARCHAR2 column in the destination database. In this case, the transformation takes as input an ANYDATA object containing a row LCR with a NUMBER data type for a column and returns an ANYDATA object containing a row LCR with a VARCHAR2 data type for the same column.
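The NUMBER-to-VARCHAR2 scenario above could be implemented along the following lines. This is a sketch, not a complete implementation: the function name and the column name credit_limit are hypothetical, and error handling is omitted.

CREATE OR REPLACE FUNCTION strmadmin.number_to_varchar2(in_any IN ANYDATA)
RETURN ANYDATA
IS
  lcr     SYS.LCR$_ROW_RECORD;
  rc      PLS_INTEGER;
  col_any ANYDATA;
  num_val NUMBER;
BEGIN
  -- Only transform row LCRs; return any other message unchanged
  IF in_any.GETTYPENAME <> 'SYS.LCR$_ROW_RECORD' THEN
    RETURN in_any;
  END IF;
  rc := in_any.GETOBJECT(lcr);
  -- Assume the NUMBER column is named credit_limit (hypothetical)
  col_any := lcr.GET_VALUE('NEW', 'credit_limit');
  IF col_any IS NOT NULL THEN
    rc := col_any.GETNUMBER(num_val);
    -- Replace the NUMBER value with its VARCHAR2 representation
    lcr.SET_VALUE('NEW', 'credit_limit',
                  ANYDATA.CONVERTVARCHAR2(TO_CHAR(num_val)));
  END IF;
  RETURN ANYDATA.CONVERTOBJECT(lcr);
END;
/

Because this function returns a single ANYDATA object, it is a one-to-one transformation function.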

Other examples of custom transformations on messages include:

Custom rule-based transformations provide the following advantages:

The following considerations apply to custom rule-based transformations:

Custom Rule-Based Transformations and Action Contexts

You use the SET_RULE_TRANSFORM_FUNCTION procedure in the DBMS_STREAMS_ADM package to specify a custom rule-based transformation for a rule. This procedure modifies the action context of a rule to specify the transformation. A rule action context is optional information associated with a rule that is interpreted by the client of the rules engine after the rule evaluates to TRUE for a message. The client of the rules engine can be a user-created application or an internal feature of Oracle, such as Oracle Streams. The information in an action context is an object of type SYS.RE$NV_LIST, which consists of a list of name-value pairs.

A custom rule-based transformation in Oracle Streams always consists of the following name-value pair in an action context:

  • If the function is a one-to-one transformation function, then the name is STREAMS$_TRANSFORM_FUNCTION. If the function is a one-to-many transformation function, then the name is STREAMS$_ARRAY_TRANS_FUNCTION.

  • The value is an ANYDATA instance containing a PL/SQL function name specified as a VARCHAR2. This function performs the transformation.

You can display the existing custom rule-based transformations in a database by querying the DBA_STREAMS_TRANSFORM_FUNCTION data dictionary view.
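For example, a query such as the following lists each rule that has a custom transformation and the function that performs it:

SELECT rule_owner, rule_name, transform_function_name
  FROM dba_streams_transform_function;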

When a rule in a positive rule set evaluates to TRUE for a message in an Oracle Streams environment, and an action context that contains a name-value pair with the name STREAMS$_TRANSFORM_FUNCTION or STREAMS$_ARRAY_TRANS_FUNCTION is returned, the PL/SQL function is run, taking the message as an input parameter. Other names in an action context beginning with STREAMS$_ are used internally by Oracle and must not be directly added, modified, or removed. Oracle Streams ignores any name-value pair that does not begin with STREAMS$_ or APPLY$_.

When a rule evaluates to FALSE for a message in an Oracle Streams environment, the rule is not returned to the client, and any PL/SQL function appearing in a name-value pair in the action context is not run. Different rules can use the same or different transformations. For example, different transformations can be associated with different operation types, tables, or schemas for which messages are being captured, propagated, applied, or dequeued.

Required Privileges for Custom Rule-Based Transformations

The user who calls the transformation function must have EXECUTE privilege on the function. The following list describes which user calls the transformation function:

  • If a transformation is specified for a rule used by a capture process, then the capture user for the capture process calls the transformation function.

  • If a transformation is specified for a rule used by a synchronous capture, then the capture user for the synchronous capture calls the transformation function.

  • If a transformation is specified for a rule used by a propagation, then the owner of the source queue for the propagation calls the transformation function.

  • If a transformation is specified on a rule used by an apply process, then the apply user for the apply process calls the transformation function.

  • If a transformation is specified on a rule used by a messaging client, then the user who invokes the messaging client calls the transformation function.
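For instance, if a capture user needs to run a transformation function owned by the Streams administrator, the required privilege could be granted as follows. The function and user names here are hypothetical:

GRANT EXECUTE ON strmadmin.number_to_varchar2 TO capture_user;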

Rule-Based Transformations and Oracle Streams Clients

The following sections provide more information about rule-based transformations and Oracle Streams clients:

The information in this section applies to both declarative and custom rule-based transformations.

Rule-Based Transformations and Capture Processes

For a transformation to be performed during capture by a capture process, a rule that is associated with a rule-based transformation in the positive rule set for the capture process must evaluate to TRUE for a particular change found in the redo log.

If the transformation is a declarative rule-based transformation, then Oracle transforms the captured LCR internally when the rule in a positive rule set evaluates to TRUE for the message. If the transformation is a custom rule-based transformation, then an action context containing a name-value pair with the name STREAMS$_TRANSFORM_FUNCTION or STREAMS$_ARRAY_TRANS_FUNCTION is returned to the capture process when the rule in a positive rule set evaluates to TRUE for the captured LCR.

The capture process completes the following steps to perform a rule-based transformation:

  1. Formats the change in the redo log into an LCR.

  2. Converts the LCR into an ANYDATA object.

  3. Transforms the LCR. If the transformation is a declarative rule-based transformation, then Oracle transforms the ANYDATA object internally based on the specifications of the declarative transformation. If the transformation is a custom rule-based transformation, then the capture user for the capture process runs the PL/SQL function in the name-value pair to transform the ANYDATA object.

  4. Enqueues one or more transformed ANYDATA objects into the queue associated with the capture process, or discards the LCR if an array that contains zero elements is returned by the transformation function.

All actions are performed by the capture user for the capture process. Figure 6-1 shows a transformation during capture by a capture process.

Figure 6-1 Transformation During Capture by a Capture Process

Description of Figure 6-1 follows
Description of "Figure 6-1 Transformation During Capture by a Capture Process"

For example, if an LCR is transformed during capture by a capture process, then the transformed LCR is enqueued into the queue used by the capture process. Therefore, if such a captured LCR is propagated from the dbs1.example.com database to the dbs2.example.com and the dbs3.example.com databases, then the queues at dbs2.example.com and dbs3.example.com will contain the transformed LCR after propagation.

The advantages of performing transformations during capture by a capture process are the following:

  • Security can be improved if the transformation removes or changes private information, because this private information does not appear in the source queue and is not propagated to any destination queue.

  • Space consumption can be reduced, depending on the type of transformation performed. For example, a transformation that reduces the amount of data results in less data to enqueue, propagate, and apply.

  • Transformation overhead is reduced when there are multiple destinations for a transformed LCR, because the transformation is performed only once at the source, not at multiple destinations.

  • A capture process transformation can transform a single message into multiple messages.

The possible disadvantages of performing transformations during capture by a capture process are the following:


Note:

A rule-based transformation cannot be used with a capture process to modify or remove a column of a data type that is not supported by Oracle Streams.

Rule-Based Transformation Errors During Capture by a Capture Process

If an error occurs when the transformation is run during capture by a capture process, then the error is returned to the capture process. The behavior of the capture process depends on the type of transformation being performed and the type of error encountered. The following capture process behaviors are possible:

  • If the transformation is a declarative rule-based transformation, and the capture process can ignore the error, then the capture process performs the transformation and captures the change. For example, if a capture process tries to perform a DELETE_COLUMN declarative rule-based transformation, and the column specified for deletion does not exist in the row LCR, then the capture process captures the change and continues to run.

  • If the transformation is a declarative rule-based transformation, and the capture process cannot ignore the error, then the change is not captured, and the capture process becomes disabled. For example, if a capture process tries to perform an ADD_COLUMN declarative rule-based transformation, and the column specified for addition already exists in the row LCR, then the change is not captured, and the capture process becomes disabled.

  • Whenever an error is encountered in a custom rule-based transformation, the change is not captured, and the capture process becomes disabled.

If the capture process becomes disabled, then you must either change or remove the rule-based transformation to avoid the error before the capture process can be enabled.
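As a sketch, a failing custom transformation can be removed by setting the transformation function for the rule to NULL, after which the capture process can be restarted. The rule and capture process names are assumed for illustration:

BEGIN
  -- Remove the custom rule-based transformation from the rule
  DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION(
    rule_name          => 'strmadmin.customers_rule',
    transform_function => NULL);
  -- Re-enable the disabled capture process
  DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'strm01_capture');
END;
/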

Rule-Based Transformations and Synchronous Captures

For a transformation to be performed during capture by a synchronous capture, a rule that is associated with a rule-based transformation in the positive rule set for the synchronous capture must evaluate to TRUE for a particular DML change made to a table.

If the transformation is a declarative rule-based transformation, then Oracle transforms the persistent LCR internally when the rule in a positive rule set evaluates to TRUE for the message. If the transformation is a custom rule-based transformation, then an action context containing a name-value pair with the name STREAMS$_TRANSFORM_FUNCTION is returned to the synchronous capture when the rule in a positive rule set evaluates to TRUE for the persistent LCR.

The synchronous capture completes the following steps to perform a rule-based transformation:

  1. Formats the DML change into a row LCR.

  2. Converts the row LCR into an ANYDATA object.

  3. Transforms the LCR. If the transformation is a declarative rule-based transformation, then Oracle transforms the ANYDATA object internally based on the specifications of the declarative transformation. If the transformation is a custom rule-based transformation, then the capture user for the synchronous capture runs the PL/SQL function in the name-value pair to transform the ANYDATA object.

  4. Enqueues the transformed ANYDATA object into the queue associated with the synchronous capture.

All actions are performed by the capture user for the synchronous capture. Figure 6-2 shows a transformation during capture.

Figure 6-2 Transformation During Capture by a Synchronous Capture

Description of Figure 6-2 follows
Description of "Figure 6-2 Transformation During Capture by a Synchronous Capture"

For example, if a row LCR is transformed during capture by a synchronous capture, then the transformed row LCR is enqueued into the queue used by the synchronous capture. Therefore, if such a captured LCR is propagated from the dbs1.example.com database to the dbs2.example.com and the dbs3.example.com databases, then the queues at dbs2.example.com and dbs3.example.com will contain the transformed row LCR after propagation.

The advantages of performing transformations during capture by a synchronous capture are the following:

  • Security can be improved if the transformation removes or changes private information, because this private information does not appear in the source queue and is not propagated to any destination queue.

  • Space consumption can be reduced, depending on the type of transformation performed. For example, a transformation that reduces the amount of data results in less data to enqueue, propagate, and apply.

  • Transformation overhead is reduced when there are multiple destinations for a transformed row LCR, because the transformation is performed only once at the source, not at multiple destinations.

The possible disadvantages of performing transformations during capture by a synchronous capture are the following:

  • The transformation overhead occurs in the source database.

  • All sites receive the transformed LCR.


Note:

A rule-based transformation cannot be used with a synchronous capture to modify or remove a column of a data type that is not supported by Oracle Streams.

Rule-Based Transformations and Errors During Capture by a Synchronous Capture

If an error occurs when the transformation is run during capture by a synchronous capture, then the error is returned to the synchronous capture. The behavior of the synchronous capture depends on the type of transformation being performed and the type of error encountered. The following synchronous capture behaviors are possible:

  • If the transformation is a declarative rule-based transformation, and the synchronous capture can ignore the error, then the synchronous capture performs the transformation and captures the change. For example, if a synchronous capture tries to perform a DELETE_COLUMN declarative rule-based transformation, and the column specified for deletion does not exist in the row LCR, then the synchronous capture captures the change.

  • If the transformation is a declarative rule-based transformation, and the synchronous capture cannot ignore the error, then the change is not captured, and the DML operation aborts. For example, if a synchronous capture tries to perform an ADD_COLUMN declarative rule-based transformation, and the column specified for addition already exists in the row LCR, then the change is not captured, and the DML aborts.

  • Whenever an error is encountered in a custom rule-based transformation, the change is not captured, and the DML aborts.

If the DML aborts because of a rule-based transformation, then you must either change or remove the rule-based transformation to perform the DML operation.

Rule-Based Transformations and Propagations

For a transformation to be performed during propagation, a rule that is associated with a rule-based transformation in the positive rule set for the propagation must evaluate to TRUE for a message in the source queue for the propagation. This message can be a captured LCR, buffered LCR, buffered user message, persistent LCR, and persistent user message.

If the transformation is a declarative rule-based transformation, then Oracle transforms the message internally when the rule in a positive rule set evaluates to TRUE for the message. If the transformation is a custom rule-based transformation, then an action context containing a name-value pair with the name STREAMS$_TRANSFORM_FUNCTION is returned to the propagation when the rule in a positive rule set evaluates to TRUE for the message.

The propagation completes the following steps to perform a rule-based transformation:

  1. Starts dequeuing the message from the source queue.

  2. Transforms the message. If the transformation is a declarative rule-based transformation, then Oracle transforms the message internally based on the specifications of the declarative transformation. If the transformation is a custom rule-based transformation, then the source queue owner runs the PL/SQL function in the name-value pair to transform the message.

  3. Completes dequeuing the transformed message.

  4. Propagates the transformed message to the destination queue.

Figure 6-3 shows a transformation during propagation.

Figure 6-3 Transformation During Propagation

Description of Figure 6-3 follows
Description of "Figure 6-3 Transformation During Propagation"

For example, suppose you use a rule-based transformation for a propagation that propagates messages from the dbs1.example.com database to the dbs2.example.com database, but you do not use a rule-based transformation for a propagation that propagates messages from the dbs1.example.com database to the dbs3.example.com database.

In this case, a message in the queue at dbs1.example.com can be transformed before it is propagated to dbs2.example.com, but the same message can remain in its original form when it is propagated to dbs3.example.com. In this case, after propagation, the queue at dbs2.example.com contains the transformed message, and the queue at dbs3.example.com contains the original message.

The advantages of performing transformations during propagation are the following:

  • Security can be improved if the transformation removes or changes private information before messages are propagated.

  • Some destination queues can receive a transformed message, while other destination queues can receive the original message.

  • Different destinations can receive different variations of the same transformed message.

The possible disadvantages of performing transformations during propagation are the following:

  • Once a message is transformed, any database to which it is propagated after the first propagation receives the transformed message. For example, if dbs2.example.com propagates the message to dbs4.example.com, then dbs4.example.com receives the transformed message.

  • When the first propagation in a directed network performs the transformation, and a local capture process captured the message, the transformation overhead occurs on the source database. However, if the capture process is a downstream capture process, then this overhead occurs at the downstream database, not at the source database.

  • When the first propagation in a directed network performs the transformation, and a synchronous capture captured the message, the transformation overhead occurs on the source database.

  • The same transformation can be done multiple times on a message when different propagations send the message to multiple destination databases.

Rule-Based Transformation Errors During Propagation

If an error occurs during the transformation, then the message that caused the error is not dequeued or propagated, and the error is returned to the propagation. Before the message can be propagated, you must change or remove the rule-based transformation to avoid the error.

Rule-Based Transformations and an Apply Process

For a transformation to be performed during apply, a rule that is associated with a rule-based transformation in the positive rule set for the apply process must evaluate to TRUE for a message in the queue for the apply process. This message can be a captured LCR, a persistent LCR, or a persistent user message.

If the transformation is a declarative rule-based transformation, then Oracle transforms the message internally when the rule in a positive rule set evaluates to TRUE for the message. If the transformation is a custom rule-based transformation, then an action context containing a name-value pair with the name STREAMS$_TRANSFORM_FUNCTION is returned to the apply process when the rule in a positive rule set evaluates to TRUE for the message.

The apply process completes the following steps to perform a rule-based transformation:

  1. Starts to dequeue the message from the queue.

  2. Transforms the message. If the transformation is a declarative rule-based transformation, then Oracle transforms the message internally based on the specifications of the declarative transformation. If the transformation is a custom rule-based transformation, then the apply user runs the PL/SQL function in the name-value pair to transform the message.

  3. Completes dequeuing the transformed message.

  4. Applies the transformed message, which can entail changing database objects at the destination database or sending the transformed message to an apply handler.

All actions are performed by the apply user.

Figure 6-4 shows a transformation during apply.

Figure 6-4 Transformation During Apply

Description of Figure 6-4 follows
Description of "Figure 6-4 Transformation During Apply"

For example, suppose a message is propagated from the dbs1.example.com database to the dbs2.example.com database in its original form. When the apply process dequeues the message at dbs2.example.com, the message is transformed.

The possible advantages of performing transformations during apply are the following:

  • Any database to which the message is propagated after the first propagation can receive the message in its original form. For example, if dbs2.example.com propagates the message to dbs4.example.com, then dbs4.example.com can receive the original message.

  • The transformation overhead does not occur on the source database when the source and destination database are different.

The possible disadvantages of performing transformations during apply are the following:

  • Security might be a concern if the messages contain private information, because all databases to which the messages are propagated receive the original messages.

  • The same transformation can be done multiple times when multiple destination databases need the same transformation.


Note:

Before modifying one or more rules for an apply process, you should stop the apply process.
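A minimal sketch of this sequence, assuming an apply process named strm01_apply, is:

BEGIN
  DBMS_APPLY_ADM.STOP_APPLY(apply_name => 'strm01_apply');
  -- ... modify the rules used by the apply process here ...
  DBMS_APPLY_ADM.START_APPLY(apply_name => 'strm01_apply');
END;
/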

Rule-Based Transformation Errors During Apply Process Dequeue

If an error occurs when the transformation function is run during apply process dequeue, then the message that caused the error is not dequeued, the transaction containing the message is not applied, the error is returned to the apply process, and the apply process is disabled. Before the apply process can be enabled, you must change or remove the rule-based transformation to avoid the error.

Apply Errors on Transformed Messages

If an apply error occurs for a transaction in which some of the messages have been transformed by a rule-based transformation, then the transformed messages are moved to the error queue with all of the other messages in the transaction. If you use the EXECUTE_ERROR procedure in the DBMS_APPLY_ADM package to reexecute a transaction in the error queue that contains transformed messages, then the transformation is not performed on the messages again because the apply process rule set containing the rule is not evaluated again.
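For example, after finding the transaction identifier in the LOCAL_TRANSACTION_ID column of the DBA_APPLY_ERROR view, the transaction can be reexecuted as follows. The transaction identifier shown is hypothetical:

BEGIN
  DBMS_APPLY_ADM.EXECUTE_ERROR(
    local_transaction_id => '1.17.2485',  -- hypothetical id from DBA_APPLY_ERROR
    execute_as_user      => FALSE);       -- run as the original apply user
END;
/

Because the apply process rule set is not evaluated again, the previously transformed messages in the transaction are applied as they appear in the error queue.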

Rule-Based Transformations and a Messaging Client

For a transformation to be performed during dequeue by a messaging client, a rule that is associated with a rule-based transformation in the positive rule set for the messaging client must evaluate to TRUE for a message in the queue for the messaging client.

If the transformation is a declarative rule-based transformation, then Oracle transforms the message internally when the rule in a positive rule set evaluates to TRUE for the message. If the transformation is a custom rule-based transformation, then an action context containing a name-value pair with the name STREAMS$_TRANSFORM_FUNCTION is returned to the messaging client when the rule in a positive rule set evaluates to TRUE for the message.

The messaging client completes the following steps to perform a rule-based transformation:

  1. Starts to dequeue the message from the queue.

  2. Transforms the message. If the transformation is a declarative rule-based transformation, then the message must be a persistent LCR, and Oracle transforms the row LCR internally based on the specifications of the declarative transformation. If the transformation is a custom rule-based transformation, then the message can be a persistent LCR or a persistent user message. The user who invokes the messaging client runs the PL/SQL function in the name-value pair to transform the message during dequeue.

  3. Completes dequeuing the transformed message.

All actions are performed by the user who invokes the messaging client.

Figure 6-5 shows a transformation during messaging client dequeue.

Figure 6-5 Transformation During Messaging Client Dequeue

Description of Figure 6-5 follows
Description of "Figure 6-5 Transformation During Messaging Client Dequeue"

For example, suppose a message is propagated from the dbs1.example.com database to the dbs2.example.com database in its original form. When the messaging client dequeues the message at dbs2.example.com, the message is transformed.

One possible advantage of performing transformations during dequeue in a messaging environment is that any database to which the message is propagated after the first propagation can receive the message in its original form. For example, if dbs2.example.com propagates the message to dbs4.example.com, then dbs4.example.com can receive the original message.

The possible disadvantages of performing transformations during dequeue in a messaging environment are the following:

  • Security might be a concern if the messages contain private information, because all databases to which the messages are propagated receive the original messages.

  • The same transformation can be done multiple times when multiple destination databases need the same transformation.

Rule-Based Transformation Errors During Messaging Client Dequeue

If an error occurs when the transformation function is run during messaging client dequeue, then the message that caused the error is not dequeued, and the error is returned to the messaging client. Before the message can be dequeued by the messaging client, you must change or remove the rule-based transformation to avoid the error.

Multiple Rule-Based Transformations

You can transform a message during capture, propagation, apply, or dequeue, or during any combination of capture, propagation, apply, and dequeue. For example, if you want to hide sensitive data from all recipients, then you can transform a message during capture. If some recipients require additional custom transformations, then you can transform the previously transformed message during propagation, apply, or dequeue.

Transformation Ordering

In addition to declarative rule-based transformations and custom rule-based transformations, a row migration is an internal transformation that takes place when a subset rule evaluates to TRUE. If all three types of transformations are specified for a single rule, then Oracle Database performs the transformations in the following order when the rule evaluates to TRUE:

  1. Row migration

  2. Declarative rule-based transformation

  3. Custom rule-based transformation

Declarative Rule-Based Transformation Ordering

If more than one declarative rule-based transformation is specified for a single rule, then Oracle must perform the transformations in a particular order. You can use the default ordering for declarative transformations, or you can specify the order.

This section contains the following topics:

Default Declarative Transformation Ordering

By default, Oracle Database performs declarative transformations in the following order when the rule evaluates to TRUE:

  1. Keep columns

  2. Delete column

  3. Rename column

  4. Add column

  5. Rename table

  6. Rename schema

The results of a declarative transformation are used in each subsequent declarative transformation. For example, suppose the following declarative transformations are specified for a single rule:

  • Delete column address

  • Add column address

Assuming column address exists in a row LCR, both declarative transformations should be performed in this case because column address is deleted from the row LCR before column address is added back to the row LCR. The following table shows the transformation ordering for this example.

Step Number  Transformation Type  Transformation Details                Transformation Performed?
1            Keep columns         -                                     -
2            Delete column        Delete column address from row LCR    Yes
3            Rename column        -                                     -
4            Add column           Add column address to row LCR         Yes
5            Rename table         -                                     -
6            Rename schema        -                                     -
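The two transformations in this example could be declared as follows. This sketch assumes the rule strmadmin.customers_rule and the table john.customers; the replacement value 'UNKNOWN' is hypothetical:

BEGIN
  -- Step 2 in the default ordering: delete the column
  DBMS_STREAMS_ADM.DELETE_COLUMN(
    rule_name   => 'strmadmin.customers_rule',
    table_name  => 'john.customers',
    column_name => 'address');
  -- Step 4 in the default ordering: add the column back
  DBMS_STREAMS_ADM.ADD_COLUMN(
    rule_name    => 'strmadmin.customers_rule',
    table_name   => 'john.customers',
    column_name  => 'address',
    column_value => ANYDATA.CONVERTVARCHAR2('UNKNOWN'));
END;
/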

Another scenario might rename a table and then rename a schema. For example, suppose the following declarative transformations are specified for a single rule:

  • Rename table john.customers to sue.clients

  • Rename schema sue to mary

Notice that the rename table transformation also renames the schema for the table. In this case, both transformations should be performed and, after both transformations, the table name becomes mary.clients. The following table shows the transformation ordering for this example.

Step Number  Transformation Type  Transformation Details                       Transformation Performed?
1            Keep columns         -                                            -
2            Delete column        -                                            -
3            Rename column        -                                            -
4            Add column           -                                            -
5            Rename table         Rename table john.customers to sue.clients   Yes
6            Rename schema        Rename schema sue to mary                    Yes

Consider a similar scenario in which the following declarative transformations are specified for a single rule:

  • Rename table john.customers to sue.clients

  • Rename schema john to mary

In this case, the first transformation is performed, but the second one is not. After the first transformation, the table name is sue.clients. The second transformation is not performed because the schema of the table is now sue, not john. The following table shows the transformation ordering for this example.

Step Number  Transformation Type  Transformation Details                       Transformation Performed?
1            Keep columns         -                                            -
2            Delete column        -                                            -
3            Rename column        -                                            -
4            Add column           -                                            -
5            Rename table         Rename table john.customers to sue.clients   Yes
6            Rename schema        Rename schema john to mary                    No

The rename schema transformation is not performed, but it does not result in an error. In this case, the row LCR is transformed by the rename table transformation, and a row LCR with the table name sue.clients is returned.

User-Specified Declarative Transformation Ordering

If you do not want to use the default declarative rule-based transformation ordering for a particular rule, then you can specify step numbers for each declarative transformation specified for the rule. If you specify a step number for one or more declarative transformations for a particular rule, then the declarative transformations for the rule behave in the following way:

  • Declarative transformations are performed in order of increasing step number.

  • The default step number for a declarative transformation is 0 (zero). A declarative transformation uses this default if no step number is specified for it explicitly.

  • If two or more declarative transformations have the same step number, then these declarative transformations follow the default ordering described in "Default Declarative Transformation Ordering".

For example, you can reverse the default ordering for declarative transformations by specifying the following step numbers for transformations associated with a particular rule:

  • Keep columns with step number 6

  • Delete column with step number 5

  • Rename column with step number 4

  • Add column with step number 3

  • Rename table with step number 2

  • Rename schema with step number 1

With this ordering specified, rename schema transformations are performed first, and delete column transformations are performed last.
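For instance, the ordering in the earlier rename example could be reversed by giving the rename schema transformation a lower step number than the rename table transformation. The rule name here is assumed for illustration:

BEGIN
  -- Runs first (step 1): rename the schema
  DBMS_STREAMS_ADM.RENAME_SCHEMA(
    rule_name        => 'strmadmin.customers_rule',
    from_schema_name => 'john',
    to_schema_name   => 'mary',
    step_number      => 1);
  -- Runs second (step 2): rename the table
  DBMS_STREAMS_ADM.RENAME_TABLE(
    rule_name       => 'strmadmin.customers_rule',
    from_table_name => 'mary.customers',
    to_table_name   => 'mary.clients',
    step_number     => 2);
END;
/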

Considerations for Rule-Based Transformations

The following considerations apply to both declarative rule-based transformations and custom rule-based transformations:


See Also:

Oracle Streams Advanced Queuing User's Guide and Oracle Database PL/SQL Packages and Types Reference for more information about the DBMS_TRANSFORM package


11 Advanced Rule Concepts

The following topics contain information about rules.

The Components of a Rule

A rule is a database object that enables a client to perform an action when an event occurs and a condition is satisfied. A rule consists of the following components:

Each rule is specified as a condition that is similar to the condition in the WHERE clause of a SQL query. You can group related rules together into rule sets. A single rule can be in one rule set, multiple rule sets, or no rule sets.

Rule sets are evaluated by a rules engine, which is a built-in part of Oracle. Both user-created applications and Oracle features, such as Oracle Streams, can be clients of the rules engine.


Note:

A rule must be in a rule set for it to be evaluated.

Rule Condition

A rule condition combines one or more expressions and conditions and returns a Boolean value, which is a value of TRUE, FALSE, or NULL (unknown). An expression is a combination of one or more values and operators that evaluate to a value. A value can be data in a table, data in variables, or data returned by a SQL function or a PL/SQL function. For example, the following expression includes only a single value:

salary

The following expression includes two values (salary and .1) and an operator (*):

salary * .1

The following condition consists of two expressions (salary and 3800) and a condition (=):

salary = 3800

This logical condition evaluates to TRUE for a given row when the salary column is 3800. Here, the value is data in the salary column of a table.

A single rule condition can include more than one condition combined with the AND, OR, and NOT logical conditions to form a compound condition. A logical condition combines the results of two component conditions to produce a single result based on them, or it inverts the result of a single condition. For example, consider the following compound condition:

salary = 3800 OR job_title = 'Programmer' 

This rule condition contains two conditions joined by the OR logical condition. If either condition evaluates to TRUE, then the rule condition evaluates to TRUE. If the logical condition were AND instead of OR, then both conditions must evaluate to TRUE for the entire rule condition to evaluate to TRUE.
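For illustration only (this is not part of Oracle Streams), the behavior of this compound condition can be modeled in a few lines of Python; the rows shown are hypothetical data:

```python
# Model of the compound rule condition:
#   salary = 3800 OR job_title = 'Programmer'
def rule_condition(row):
    return row["salary"] == 3800 or row["job_title"] == "Programmer"

rows = [
    {"salary": 3800, "job_title": "Analyst"},     # first condition TRUE
    {"salary": 9000, "job_title": "Programmer"},  # second condition TRUE
    {"salary": 9000, "job_title": "Analyst"},     # neither condition TRUE
]
results = [rule_condition(r) for r in rows]  # [True, True, False]
```

With AND in place of OR, only a row satisfying both component conditions would evaluate to TRUE.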

Variables in Rule Conditions

Rule conditions can contain variables. When you use variables in rule conditions, precede each variable with a colon (:). The following is an example of a variable used in a rule condition:

:x = 55

Variables let you refer to data that is not stored in a table. A variable can also improve performance by replacing a commonly occurring expression. Performance can improve because, instead of evaluating the same expression multiple times, the variable is evaluated once.

A rule condition can also contain an evaluation of a call to a subprogram. Such a condition is evaluated in the same way as other conditions. That is, it evaluates to a value of TRUE, FALSE, or NULL (unknown). The following is an example of a condition that contains a call to a simple function named is_manager that determines whether an employee is a manager:

is_manager(employee_id) = 'Y'

Here, the value of employee_id is determined by data in a table where employee_id is a column.

You can use user-defined types for variables. Therefore, variables can have attributes. When a variable has attributes, each attribute contains partial data for the variable. In rule conditions, you specify attributes using dot notation. For example, the following condition evaluates to TRUE if the value of attribute z in variable y is 9:

:y.z = 9
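As an illustrative sketch only, the dot notation for variable attributes can be modeled in Python; the YType class here is a hypothetical stand-in for a user-defined type with an attribute z:

```python
from dataclasses import dataclass

# Hypothetical user-defined type for the variable :y, with attribute z
@dataclass
class YType:
    z: int

def condition(y):
    # models the rule condition  :y.z = 9
    return y.z == 9

t1 = condition(YType(z=9))  # True: attribute z is 9
t2 = condition(YType(z=3))  # False: attribute z is not 9
```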

Note:

A rule cannot have a NULL (or empty) rule condition.


See Also:


Simple Rule Conditions

A simple rule condition is a condition that has one of the following forms:

  • simple_rule_expression condition constant

  • constant condition simple_rule_expression

  • constant condition constant

Simple Rule Expressions

In a simple rule condition, a simple_rule_expression is one of the following:

  • Table column.

  • Variable.

  • Variable attribute.

  • Method result, where the method takes either no arguments or constant arguments, and the method result can be returned by the variable method function so that the expression is one of the data types supported for simple rules. Such methods include LCR member subprograms that meet these requirements, such as GET_TAG, GET_VALUE, GET_COMPATIBLE, GET_EXTRA_ATTRIBUTE, and so on.

For table columns, variables, variable attributes, and method results, the following data types can be used in simple rule conditions:

  • VARCHAR2

  • NVARCHAR2

  • NUMBER

  • DATE

  • BINARY_FLOAT

  • BINARY_DOUBLE

  • TIMESTAMP

  • TIMESTAMP WITH TIME ZONE

  • TIMESTAMP WITH LOCAL TIME ZONE

  • RAW

  • CHAR

Use of other data types in expressions results in nonsimple rule conditions.

Conditions

In a simple rule condition, a condition is one of the following:

  • <=

  • <

  • =

  • >

  • >=

  • !=

  • IS NULL

  • IS NOT NULL

Use of other conditions results in nonsimple rule conditions.

Constants

A constant is a fixed value. A constant can be:

  • A number, such as 12 or 5.4

  • A character, such as x or $

  • A character string, such as 'this is a string'

Examples of Simple Rule Conditions

The following conditions are simple rule conditions, assuming the data types used in expressions are supported in simple rule conditions:

  • tab1.col = 5

  • tab2.col != 5

  • :v1 > 'aaa'

  • :v2.a1 < 10.01

  • :v3.m() = 10

  • :v4 IS NOT NULL

  • 1 = 1

  • 'abc' > 'AB'

  • :date_var < to_date('04-01-2004, 14:20:17', 'mm-dd-yyyy, hh24:mi:ss')

  • :adt_var.ts_attribute >= to_timestamp('04-01-2004, 14:20:17 PST', 'mm-dd-yyyy, hh24:mi:ss TZR')

  • :my_var.my_to_upper('abc') = 'ABC'

Rules with simple rule conditions are called simple rules. You can combine two or more simple conditions with the logical conditions AND and OR for a rule, and the rule remains simple. For example, rules with the following conditions are simple rules:

  • tab1.col = 5 AND :v1 > 'aaa'

  • tab1.col = 5 OR :v1 > 'aaa'

However, using the NOT logical condition in a rule condition causes the rule to be nonsimple.
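Purely as an illustration (the rules engine performs this classification internally), the simple/nonsimple distinction can be sketched with a toy classifier over a parsed condition tree; the tuple representation used here is hypothetical:

```python
SIMPLE_OPERATORS = {"<=", "<", "=", ">", ">=", "!=", "IS NULL", "IS NOT NULL"}

def is_simple(cond):
    """Toy classifier for a parsed rule condition.

    A leaf is a tuple (operator, ...). AND/OR nodes stay simple when
    all of their children are simple; NOT always makes the rule
    nonsimple, as does any operator outside SIMPLE_OPERATORS.
    """
    kind = cond[0]
    if kind in ("AND", "OR"):
        return all(is_simple(c) for c in cond[1:])
    if kind == "NOT":
        return False
    return kind in SIMPLE_OPERATORS

# tab1.col = 5 AND :v1 > 'aaa'  -> remains simple
c1 = ("AND", ("=", "tab1.col", 5), (">", ":v1", "aaa"))
# NOT (tab1.col = 5)            -> nonsimple
c2 = ("NOT", ("=", "tab1.col", 5))
```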

Benefits of Simple Rules

Simple rules are important for the following reasons:

  • Simple rules are indexed by the rules engine internally.

  • Simple rules can be evaluated without executing SQL.

  • Simple rules can be evaluated with partial data.

When a client uses the DBMS_RULE.EVALUATE procedure to evaluate an event, the client can specify that only simple rules should be evaluated by specifying TRUE for the simple_rules_only parameter.


See Also:


Rule Evaluation Context

An evaluation context is a database object that defines external data that can be referenced in rule conditions. The external data can exist as variables, table data, or both. The following analogy might be helpful: If the rule condition were the WHERE clause in a SQL query, then the external data in the evaluation context would be the tables and bind variables referenced in the FROM clause of the query. That is, the expressions in the rule condition should reference the tables, table aliases, and variables in the evaluation context to make a valid WHERE clause.

A rule evaluation context provides the necessary information for interpreting and evaluating the rule conditions that reference external data. For example, if a rule refers to a variable, then the information in the rule evaluation context must contain the variable type. Or, if a rule refers to a table alias, then the information in the evaluation context must define the table alias.

The objects referenced by a rule are determined by the rule evaluation context associated with it. The rule owner must have the necessary privileges to access these objects, such as SELECT privilege on tables, EXECUTE privilege on types, and so on. The rule condition is resolved in the schema that owns the evaluation context.

For example, consider a rule evaluation context named hr_evaluation_context that contains the following information:

  • Table alias dep corresponds to the hr.departments table.

  • Variables loc_id1 and loc_id2 are both of type NUMBER.

The hr_evaluation_context rule evaluation context provides the necessary information for evaluating the following rule condition:

dep.location_id IN (:loc_id1, :loc_id2)

In this case, the rule condition evaluates to TRUE for a row in the hr.departments table if that row has a value in the location_id column that corresponds to either of the values passed in by the loc_id1 or loc_id2 variables. The rule cannot be interpreted or evaluated properly without the information in the hr_evaluation_context rule evaluation context. Also, notice that dot notation is used to specify the column location_id in the dep table alias.
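For illustration only, this evaluation context and rule condition can be modeled in Python; the alias mapping and the department rows below are hypothetical:

```python
# Toy model of hr_evaluation_context: the alias "dep" maps to the
# hr.departments table, and :loc_id1/:loc_id2 are NUMBER variables.
evaluation_context = {
    "table_aliases": {"dep": "hr.departments"},
    "variable_types": {"loc_id1": "NUMBER", "loc_id2": "NUMBER"},
}

def condition(dep_row, loc_id1, loc_id2):
    # models:  dep.location_id IN (:loc_id1, :loc_id2)
    return dep_row["location_id"] in (loc_id1, loc_id2)

# Hypothetical hr.departments rows
rows = [{"location_id": 1700}, {"location_id": 1800}, {"location_id": 2400}]
matches = [r for r in rows if condition(r, 1700, 2400)]
# -> rows with location_id 1700 and 2400
```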


Note:

Views are not supported as base tables in evaluation contexts.

Explicit and Implicit Variables

The value of a variable referenced in a rule condition can be explicitly specified when the rule is evaluated, or the value of a variable can be implicitly available given the event.

Explicit variables are supplied by the caller at evaluation time. These values are specified by the variable_values parameter when the DBMS_RULE.EVALUATE procedure is run.

Implicit variables are not given a value supplied by the caller at evaluation time. The value of an implicit variable is obtained by calling the variable value function. You define this function when you specify the variable_types list during the creation of an evaluation context using the CREATE_EVALUATION_CONTEXT procedure in the DBMS_RULE_ADM package. If the value for an implicit variable is specified during evaluation, then the specified value overrides the value returned by the variable value function.

Specifically, the variable_types list is of type SYS.RE$VARIABLE_TYPE_LIST, which is a list of variables of type SYS.RE$VARIABLE_TYPE. Within each instance of SYS.RE$VARIABLE_TYPE in the list, the function used to determine the value of an implicit variable is specified as the variable_value_function attribute.

Whether variables are explicit or implicit is the choice of the designer of the application using the rules engine. The following are reasons for using an implicit variable:

  • The caller of the DBMS_RULE.EVALUATE procedure does not need to know anything about the variable, which can reduce the complexity of the application using the rules engine. For example, a variable can call a function that returns a value based on the data being evaluated.

  • The caller might not have EXECUTE privileges on the variable value function.

  • The caller of the DBMS_RULE.EVALUATE procedure does not know the variable value based on the event, which can improve security if the variable value contains confidential information.

  • The variable will be used infrequently, and the variable's value always can be derived if necessary. Making such variables implicit means that the caller of the DBMS_RULE.EVALUATE procedure does not need to specify many uncommon variables.

For example, in the following rule condition, the values of variable x and variable y could be specified explicitly, but the value of the variable max could be returned by running the max function:

:x = 4 AND :y < :max

Alternatively, variable x and y could be implicit variables, and variable max could be an explicit variable. So, there is no syntactic difference between explicit and implicit variables in the rule condition. You can determine whether a variable is explicit or implicit by querying the DBA_EVALUATION_CONTEXT_VARS data dictionary view. For explicit variables, the VARIABLE_VALUE_FUNCTION field is NULL. For implicit variables, this field contains the name of the function called by the implicit variable.
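The override behavior described above can be sketched as follows; this is an illustrative model, not the rules engine's implementation, and the max_value_function shown is hypothetical:

```python
def max_value_function(event):
    # Models an implicit variable's variable value function: the value
    # is derived from the event rather than supplied by the caller.
    return event["limit"]

def resolve_variable(name, event, variable_values, value_functions):
    # A caller-supplied value (an explicit variable, or an override for
    # an implicit variable) wins; otherwise the variable value function
    # associated with the implicit variable is called.
    if name in variable_values:
        return variable_values[name]
    return value_functions[name](event)

event = {"limit": 10}
funcs = {"max": max_value_function}

# Implicit: no caller-supplied value, so the function is consulted.
v1 = resolve_variable("max", event, {}, funcs)           # 10
# A value specified during evaluation overrides the function.
v2 = resolve_variable("max", event, {"max": 99}, funcs)  # 99
```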


See Also:


Evaluation Context Association with Rule Sets and Rules

To be evaluated, each rule must be associated with an evaluation context or must be part of a rule set that is associated with an evaluation context. A single evaluation context can be associated with multiple rules or rule sets. The following list describes which evaluation context is used when a rule is evaluated:

  • If an evaluation context is associated with a rule, then it is used for the rule whenever the rule is evaluated, and any evaluation context associated with the rule set being evaluated is ignored.

  • If a rule does not have an evaluation context, but an evaluation context was specified for the rule when it was added to a rule set using the ADD_RULE procedure in the DBMS_RULE_ADM package, then the evaluation context specified in the ADD_RULE procedure is used for the rule when the rule set is evaluated.

  • If no rule evaluation context is associated with a rule and none was specified by the ADD_RULE procedure, then the evaluation context of the rule set is used for the rule when the rule set is evaluated.
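The precedence in the list above can be sketched as a simple resolution function (an illustrative model only; the names are hypothetical):

```python
def effective_evaluation_context(rule_ctx, add_rule_ctx, rule_set_ctx):
    """Models the precedence above: the rule's own evaluation context
    wins, then the context specified in ADD_RULE, then the rule set's.
    A rule with no context at any level cannot be evaluated."""
    if rule_ctx is not None:
        return rule_ctx
    if add_rule_ctx is not None:
        return add_rule_ctx
    if rule_set_ctx is not None:
        return rule_set_ctx
    raise ValueError("rule has no evaluation context")  # models the error

a = effective_evaluation_context("rule_ctx", "add_rule_ctx", "set_ctx")
b = effective_evaluation_context(None, "add_rule_ctx", "set_ctx")
c = effective_evaluation_context(None, None, "set_ctx")
```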


Note:

If a rule does not have an evaluation context, and you try to add it to a rule set that does not have an evaluation context, then an error is raised, unless you specify an evaluation context when you run the ADD_RULE procedure.

Evaluation Function

You have the option of creating an evaluation function to be run with a rule evaluation context. You can use an evaluation function for the following reasons:

  • You want to bypass the rules engine and instead evaluate events using the evaluation function.

  • You want to filter events so that some events are evaluated by the evaluation function and other events are evaluated by the rules engine.

You associate a function with a rule evaluation context by specifying the function name for the evaluation_function parameter when you create the rule evaluation context with the CREATE_EVALUATION_CONTEXT procedure in the DBMS_RULE_ADM package. The rules engine invokes the evaluation function during the evaluation of any rule set that uses the evaluation context.

The DBMS_RULE.EVALUATE procedure is overloaded. The evaluation function must have each parameter in one of the DBMS_RULE.EVALUATE procedures, and the type of each parameter must be the same as the type of the corresponding parameter in the DBMS_RULE.EVALUATE procedure, but the names of the parameters can be different.

An evaluation function has the following return values:

  • DBMS_RULE_ADM.EVALUATION_SUCCESS: The user-specified evaluation function completed the rule set evaluation successfully. The rules engine returns the results of the evaluation obtained by the evaluation function to the rules engine client using the DBMS_RULE.EVALUATE procedure.

  • DBMS_RULE_ADM.EVALUATION_CONTINUE: The rules engine evaluates the rule set as if there were no evaluation function. The evaluation function is not used, and any results returned by the evaluation function are ignored.

  • DBMS_RULE_ADM.EVALUATION_FAILURE: The user-specified evaluation function failed. Rule set evaluation stops, and an error is raised.

If you always want to bypass the rules engine, then the evaluation function should return either EVALUATION_SUCCESS or EVALUATION_FAILURE. However, if you want to filter events so that some events are evaluated by the evaluation function and other events are evaluated by the rules engine, then the evaluation function can return all three return values, and it returns EVALUATION_CONTINUE when the rules engine should be used for evaluation.

If you specify an evaluation function for an evaluation context, then the evaluation function is run during evaluation when the evaluation context is used by a rule set or rule.
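The filtering behavior described above can be sketched as follows; this Python model is illustrative only, and the fast_path attribute of the event is hypothetical:

```python
# Models the three return values of an evaluation function.
EVALUATION_SUCCESS, EVALUATION_CONTINUE, EVALUATION_FAILURE = range(3)

def filtering_evaluation_function(event):
    # A hypothetical filter: handle "fast path" events itself, and hand
    # everything else back to the rules engine.
    if event.get("fast_path"):
        return EVALUATION_SUCCESS   # results produced by this function
    return EVALUATION_CONTINUE      # let the rules engine evaluate

def evaluate(event):
    status = filtering_evaluation_function(event)
    if status == EVALUATION_SUCCESS:
        return "evaluation function results"
    if status == EVALUATION_CONTINUE:
        return "rules engine results"
    raise RuntimeError("evaluation function failed")  # EVALUATION_FAILURE

r1 = evaluate({"fast_path": True})   # handled by the evaluation function
r2 = evaluate({"fast_path": False})  # handled by the rules engine
```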


See Also:

Oracle Database PL/SQL Packages and Types Reference for more information about the evaluation function specified in the DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT procedure and for more information about the overloaded DBMS_RULE.EVALUATE procedure

Rule Action Context

An action context contains optional information associated with a rule that is interpreted by the client of the rules engine when the rule is evaluated for an event. The client of the rules engine can be a user-created application or an internal feature of Oracle, such as Oracle Streams. Each rule has only one action context. The information in an action context is of type SYS.RE$NV_LIST, which is a type that contains an array of name-value pairs.

The rule action context information provides a context for the action taken by a client of the rules engine when a rule evaluates to TRUE or MAYBE. The rules engine does not interpret the action context. Instead, it returns the action context, and a client of the rules engine can interpret the action context information.

For example, suppose an event is defined as the addition of a new employee to a company. If the employee information is stored in the hr.employees table, then the event occurs whenever a row is inserted into this table. The company wants to specify that several actions are taken when a new employee is added, but the actions depend on which department the employee joins. One of these actions is that the employee is registered for a course relating to the department.

In this scenario, the company can create a rule for each department with an appropriate action context. Here, an action context returned when a rule evaluates to TRUE specifies the number of a course that an employee should take. Here are parts of the rule conditions and the action contexts for three departments:

Rule Name       Part of the Rule Condition    Action Context Name-Value Pair
rule_dep_10     department_id = 10            course_number, 1057
rule_dep_20     department_id = 20            course_number, 1215
rule_dep_30     department_id = 30            NULL

These action contexts return the following instructions to the client application:

  • The action context for the rule_dep_10 rule instructs the client application to enroll the new employee in course number 1057.

  • The action context for the rule_dep_20 rule instructs the client application to enroll the new employee in course number 1215.

  • The NULL action context for the rule_dep_30 rule instructs the client application not to enroll the new employee in any course.

Each action context can contain zero or more name-value pairs. If an action context contains more than one name-value pair, then each name in the list must be unique. In this example, the client application to which the rules engine returns the action context registers the new employee in the course with the returned course number. The client application does not register the employee for a course if a NULL action context is returned or if the action context does not contain a course number.

If multiple clients use the same rule, or if you want an action context to return more than one name-value pair, then you can list more than one name-value pair in an action context. For example, suppose the company also adds a new employee to a department electronic mailing list. In this case, the action context for the rule_dep_10 rule might contain two name-value pairs:

Name             Value
course_number    1057
dist_list        admin_list

The following are considerations for names in name-value pairs:

  • If different applications use the same action context, then use different names or prefixes of names to avoid naming conflicts.

  • Do not use $ and # in names because they can cause conflicts with Oracle-supplied action context names.

You add a name-value pair to an action context using the ADD_PAIR member procedure of the RE$NV_LIST type. You remove a name-value pair from an action context using the REMOVE_PAIR member procedure of the RE$NV_LIST type. If you want to modify an existing name-value pair in an action context, then you should first remove it using the REMOVE_PAIR member procedure and then add an appropriate name-value pair using the ADD_PAIR member procedure.
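As a conceptual illustration only, an action context and its ADD_PAIR/REMOVE_PAIR operations can be modeled in Python; the NVList class is a hypothetical stand-in for the SYS.RE$NV_LIST type:

```python
class NVList:
    """Toy model of SYS.RE$NV_LIST: name-value pairs with unique names."""

    def __init__(self):
        self.pairs = {}

    def add_pair(self, name, value):
        # models the ADD_PAIR member procedure; names must be unique
        if name in self.pairs:
            raise ValueError("names in an action context must be unique")
        self.pairs[name] = value

    def remove_pair(self, name):
        # models the REMOVE_PAIR member procedure
        del self.pairs[name]

ac = NVList()
ac.add_pair("course_number", 1057)
ac.add_pair("dist_list", "admin_list")
# To modify an existing pair, first remove it, then add the new value:
ac.remove_pair("course_number")
ac.add_pair("course_number", 1215)
```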


Note:

Oracle Streams uses action contexts for custom rule-based transformations and, when subset rules are specified, for internal transformations that might be required on LCRs containing UPDATE operations. Oracle Streams also uses action contexts to specify a destination queue into which an apply process enqueues messages that satisfy the rule. In addition, Oracle Streams uses action contexts to specify whether a message that satisfies an apply process rule is executed by the apply process.

Rule Set Evaluation

The rules engine evaluates rule sets against an event. An event is an occurrence that is defined by the client of the rules engine. The client initiates evaluation of an event by calling the DBMS_RULE.EVALUATE procedure. This procedure enables the client to send some information about the event to the rules engine for evaluation against a rule set. The event itself can have more information than the information that the client sends to the rules engine.

The following information is specified by the client when it calls the DBMS_RULE.EVALUATE procedure:

The client can also send other information about how to evaluate an event against the rule set using the DBMS_RULE.EVALUATE procedure. For example, the caller can specify whether evaluation must stop as soon as the first TRUE rule or the first MAYBE rule (if there are no TRUE rules) is found.

If the client wants all of the rules that evaluate to TRUE or MAYBE returned to it, then the client can specify whether evaluation results should be sent back in a complete list of the rules that evaluated to TRUE or MAYBE, or evaluation results should be sent back iteratively. When evaluation results are sent iteratively to the client, the client can retrieve each rule that evaluated to TRUE or MAYBE one by one using the GET_NEXT_HIT function in the DBMS_RULE package.

The rules engine uses the rules in the specified rule set for evaluation and returns the results to the client. The rules engine returns rules using two OUT parameters in the EVALUATE procedure. This procedure is overloaded and the two OUT parameters are different in each version of the procedure:

Rule Set Evaluation Process

Figure 11-1 shows the rule set evaluation process:

  1. A client-defined event occurs.

  2. The client initiates evaluation of a rule set by sending information about an event to the rules engine using the DBMS_RULE.EVALUATE procedure.

  3. The rules engine evaluates the rule set for the event using the relevant evaluation context. The client specifies both the rule set and the evaluation context in the call to the DBMS_RULE.EVALUATE procedure. Only rules that are in the specified rule set, and use the specified evaluation context, are used for evaluation.

  4. The rules engine obtains the results of the evaluation. Each rule evaluates to either TRUE, FALSE, or NULL (unknown).

  5. The rules engine returns rules that evaluated to TRUE to the client, either in a complete list or one by one. Each returned rule is returned with its entire action context, which can contain information or can be NULL.

  6. The client performs actions based on the results returned by the rules engine. The rules engine does not perform actions based on rule evaluations.
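The steps above can be sketched as a toy evaluation loop; this is an illustrative model only (not the rules engine), reusing the hypothetical department rules from the action context example:

```python
def evaluate_rule_set(rules, event):
    """Toy model of the flow above: evaluate each rule in the set
    against the event and return the TRUE rules with their action
    contexts (step 5). Acting on the results (step 6) is left entirely
    to the client."""
    hits = []
    for name, condition, action_context in rules:
        if condition(event):
            hits.append((name, action_context))
    return hits

rules = [
    ("rule_dep_10", lambda e: e["department_id"] == 10, {"course_number": 1057}),
    ("rule_dep_20", lambda e: e["department_id"] == 20, {"course_number": 1215}),
    ("rule_dep_30", lambda e: e["department_id"] == 30, None),
]
hits = evaluate_rule_set(rules, {"department_id": 20})
# -> [("rule_dep_20", {"course_number": 1215})]
```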

Figure 11-1 Rule Set Evaluation

Description of Figure 11-1 follows
Description of "Figure 11-1 Rule Set Evaluation"


See Also:


Partial Evaluation

Partial evaluation occurs when the DBMS_RULE.EVALUATE procedure is run without data for all the tables and variables in the specified evaluation context. During partial evaluation, some rules can reference columns, variables, or attributes that are unavailable, while some other rules can reference only available data.

For example, consider a scenario where only the following data is available during evaluation:

  • Column tab1.col = 7

  • Attribute v1.a1 = 'ABC'

The following rules are used for evaluation:

  • Rule R1 has the following condition:

    (tab1.col = 5)
    
  • Rule R2 has the following condition:

    (:v1.a2 > 'aaa')
    
  • Rule R3 has the following condition:

    (:v1.a1 = 'ABC') OR (:v2 = 5)
    
  • Rule R4 has the following condition:

    (:v1.a1 = UPPER('abc'))
    

Given this scenario, R1 and R4 reference available data, R2 references unavailable data, and R3 references available data and unavailable data.

Partial evaluation always evaluates only simple conditions within a rule. If the rule condition has parts that are not simple, then the rule might or might not be evaluated completely, depending on the extent to which data is available. If a rule is not completely evaluated, then it can be returned as a MAYBE rule.

Given the rules in this scenario, R1 and the first part of R3 are evaluated, but R2 and R4 are not evaluated. The following results are returned to the client:

  • R1 evaluates to FALSE, and so is not returned.

  • R2 is returned as MAYBE because information about attribute v1.a2 is not available.

  • R3 is returned as TRUE because R3 is a simple rule and the value of v1.a1 matches the first part of the rule condition.

  • R4 is returned as MAYBE because the rule condition is not simple. The client must supply the value of variable v1 for this rule to evaluate to TRUE or FALSE.
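The scenario above can be reproduced with a small three-valued model; this Python sketch is illustrative only and does not reflect the rules engine's internal algorithm:

```python
TRUE, FALSE, MAYBE = "TRUE", "FALSE", "MAYBE"

def leaf(name, predicate, available):
    # A simple condition can be evaluated only if its data is available.
    if name not in available:
        return MAYBE
    return TRUE if predicate(available[name]) else FALSE

def or3(a, b):
    # Three-valued OR: one TRUE operand suffices; otherwise an unknown
    # (MAYBE) operand keeps the result unknown.
    if TRUE in (a, b):
        return TRUE
    if MAYBE in (a, b):
        return MAYBE
    return FALSE

available = {"tab1.col": 7, "v1.a1": "ABC"}  # the data in the scenario

r1 = leaf("tab1.col", lambda v: v == 5, available)    # FALSE
r2 = leaf("v1.a2", lambda v: v > "aaa", available)    # MAYBE: unavailable
r3 = or3(leaf("v1.a1", lambda v: v == "ABC", available),
         leaf("v2", lambda v: v == 5, available))     # TRUE
r4 = MAYBE  # UPPER('abc') makes the condition nonsimple, so it is not
            # partially evaluated and is returned as MAYBE
```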

Database Objects and Privileges Related to Rules

You can create the following types of database objects directly using the DBMS_RULE_ADM package:

You can create rules and rule sets indirectly using the DBMS_STREAMS_ADM package. You control the privileges for these database objects using the following procedures in the DBMS_RULE_ADM package:

To allow a user to create rule sets, rules, and evaluation contexts in the user's own schema, grant the user the following system privileges:

These privileges, and the privileges discussed in the following sections, can be granted to the user directly or through a role.

This section contains these topics:


Note:

When you grant a privilege on "ANY" object (for example, ALTER_ANY_RULE), and the initialization parameter O7_DICTIONARY_ACCESSIBILITY is set to FALSE, you give the user access to that type of object in all schemas except the SYS schema. By default, the initialization parameter O7_DICTIONARY_ACCESSIBILITY is set to FALSE.

If you want to grant access to an object in the SYS schema, then you can grant object privileges explicitly on the object. Alternatively, you can set the O7_DICTIONARY_ACCESSIBILITY initialization parameter to TRUE. Then privileges granted on "ANY" object will allow access to any schema, including SYS.



See Also:


Privileges for Creating Database Objects Related to Rules

To create an evaluation context, rule, or rule set in a schema, a user must meet at least one of the following conditions:

  • The schema must be the user's own schema, and the user must be granted the create system privilege for the type of database object being created. For example, to create a rule set in the user's own schema, a user must be granted the CREATE_RULE_SET_OBJ system privilege.

  • The user must be granted the create any system privilege for the type of database object being created. For example, to create an evaluation context in any schema, a user must be granted the CREATE_ANY_EVALUATION_CONTEXT system privilege.


Note:

When creating a rule with an evaluation context, the rule owner must have privileges on all objects accessed by the evaluation context.

Privileges for Altering Database Objects Related to Rules

To alter an evaluation context, rule, or rule set, a user must meet at least one of the following conditions:

  • The user must own the database object.

  • The user must be granted the alter object privilege for the database object if it is in another user's schema. For example, to alter a rule set in another user's schema, a user must be granted the ALTER_ON_RULE_SET object privilege on the rule set.

  • The user must be granted the alter any system privilege for the database object. For example, to alter a rule in any schema, a user must be granted the ALTER_ANY_RULE system privilege.

Privileges for Dropping Database Objects Related to Rules

To drop an evaluation context, rule, or rule set, a user must meet at least one of the following conditions:

  • The user must own the database object.

  • The user must be granted the drop any system privilege for the database object. For example, to drop a rule set in any schema, a user must be granted the DROP_ANY_RULE_SET system privilege.

Privileges for Placing Rules in a Rule Set

This section describes the privileges required to place a rule in a rule set. The user must meet at least one of the following conditions for the rule:

  • The user must own the rule.

  • The user must be granted the execute object privilege on the rule if the rule is in another user's schema. For example, to place a rule named depts in the hr schema in a rule set, a user must be granted the EXECUTE_ON_RULE privilege for the hr.depts rule.

  • The user must be granted the execute any system privilege for rules. For example, to place any rule in a rule set, a user must be granted the EXECUTE_ANY_RULE system privilege.

The user also must meet at least one of the following conditions for the rule set:

  • The user must own the rule set.

  • The user must be granted the alter object privilege on the rule set if the rule set is in another user's schema. For example, to place a rule in the human_resources rule set in the hr schema, a user must be granted the ALTER_ON_RULE_SET privilege for the hr.human_resources rule set.

  • The user must be granted the alter any system privilege for rule sets. For example, to place a rule in any rule set, a user must be granted the ALTER_ANY_RULE_SET system privilege.

In addition, the rule owner must have privileges on all objects referenced by the rule. These privileges are important when the rule does not have an evaluation context associated with it.

Privileges for Evaluating a Rule Set

To evaluate a rule set, a user must meet at least one of the following conditions:

  • The user must own the rule set.

  • The user must be granted the execute object privilege on the rule set if it is in another user's schema. For example, to evaluate a rule set named human_resources in the hr schema, a user must be granted the EXECUTE_ON_RULE_SET privilege for the hr.human_resources rule set.

  • The user must be granted the execute any system privilege for rule sets. For example, to evaluate any rule set, a user must be granted the EXECUTE_ANY_RULE_SET system privilege.

Granting EXECUTE object privilege on a rule set requires that the grantor have the EXECUTE privilege specified WITH GRANT OPTION on all rules currently in the rule set.

Privileges for Using an Evaluation Context

To use an evaluation context in a rule or a rule set, the user who owns the rule or rule set must meet at least one of the following conditions for the evaluation context:

  • The user must own the evaluation context.

  • The user must be granted the EXECUTE_ON_EVALUATION_CONTEXT privilege on the evaluation context, if it is in another user's schema.

  • The user must be granted the EXECUTE_ANY_EVALUATION_CONTEXT system privilege for evaluation contexts.

Evaluation Contexts Used in Oracle Streams

The following sections describe the system-created evaluation contexts used in Oracle Streams.

Evaluation Context for Global, Schema, Table, and Subset Rules

When you create global, schema, table, and subset rules, the system-created rule sets and rules use a built-in evaluation context in the SYS schema named STREAMS$_EVALUATION_CONTEXT. PUBLIC is granted the EXECUTE privilege on this evaluation context. Global, schema, table, and subset rules can be used by capture processes, synchronous captures, propagations, apply processes, and messaging clients.

During Oracle installation, the following statement creates the Oracle Streams evaluation context:

DECLARE
  vt  SYS.RE$VARIABLE_TYPE_LIST;
BEGIN
  vt := SYS.RE$VARIABLE_TYPE_LIST(
    SYS.RE$VARIABLE_TYPE('DML', 'SYS.LCR$_ROW_RECORD', 
       'SYS.DBMS_STREAMS_INTERNAL.ROW_VARIABLE_VALUE_FUNCTION',
       'SYS.DBMS_STREAMS_INTERNAL.ROW_FAST_EVALUATION_FUNCTION'),
    SYS.RE$VARIABLE_TYPE('DDL', 'SYS.LCR$_DDL_RECORD',
       'SYS.DBMS_STREAMS_INTERNAL.DDL_VARIABLE_VALUE_FUNCTION',
       'SYS.DBMS_STREAMS_INTERNAL.DDL_FAST_EVALUATION_FUNCTION'),
    SYS.RE$VARIABLE_TYPE(NULL, 'SYS.ANYDATA', 
       NULL,
       'SYS.DBMS_STREAMS_INTERNAL.ANYDATA_FAST_EVAL_FUNCTION'));
  DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT(
    evaluation_context_name => 'SYS.STREAMS$_EVALUATION_CONTEXT',
    variable_types          => vt,
    evaluation_function     =>
                       'SYS.DBMS_STREAMS_INTERNAL.EVALUATION_CONTEXT_FUNCTION');
END;
/

This statement includes references to the following internal functions in the SYS.DBMS_STREAMS_INTERNAL package:

  • ROW_VARIABLE_VALUE_FUNCTION

  • DDL_VARIABLE_VALUE_FUNCTION

  • EVALUATION_CONTEXT_FUNCTION

  • ROW_FAST_EVALUATION_FUNCTION

  • DDL_FAST_EVALUATION_FUNCTION

  • ANYDATA_FAST_EVAL_FUNCTION


Caution:

Information about these internal functions is provided for reference purposes only. You should never run any of these functions directly.

The ROW_VARIABLE_VALUE_FUNCTION converts an ANYDATA payload, which encapsulates a SYS.LCR$_ROW_RECORD instance, into a SYS.LCR$_ROW_RECORD instance before evaluating rules on the data.

The DDL_VARIABLE_VALUE_FUNCTION converts an ANYDATA payload, which encapsulates a SYS.LCR$_DDL_RECORD instance, into a SYS.LCR$_DDL_RECORD instance before evaluating rules on the data.

The EVALUATION_CONTEXT_FUNCTION is specified as an evaluation_function in the call to the CREATE_EVALUATION_CONTEXT procedure. This function supplements normal rule evaluation for captured LCRs. A capture process enqueues row LCRs and DDL LCRs into its queue, and this function enables it to enqueue other internal messages into the queue, such as commits, rollbacks, and data dictionary changes. The information enqueued by capture processes is also used during rule evaluation for a propagation or apply process. Synchronous captures do not use the EVALUATION_CONTEXT_FUNCTION.

ROW_FAST_EVALUATION_FUNCTION improves performance by optimizing access to the following LCR$_ROW_RECORD member functions during rule evaluation:

  • GET_OBJECT_OWNER

  • GET_OBJECT_NAME

  • IS_NULL_TAG

  • GET_SOURCE_DATABASE_NAME

  • GET_COMMAND_TYPE

DDL_FAST_EVALUATION_FUNCTION improves performance by optimizing access to the following LCR$_DDL_RECORD member functions during rule evaluation if the condition is <, <=, =, >=, or > and the other operand is a constant:

  • GET_OBJECT_OWNER

  • GET_OBJECT_NAME

  • IS_NULL_TAG

  • GET_SOURCE_DATABASE_NAME

  • GET_COMMAND_TYPE

  • GET_BASE_TABLE_NAME

  • GET_BASE_TABLE_OWNER

ANYDATA_FAST_EVAL_FUNCTION improves performance by optimizing access to values inside an ANYDATA object.

Rules created using the DBMS_STREAMS_ADM package use ROW_FAST_EVALUATION_FUNCTION or DDL_FAST_EVALUATION_FUNCTION, except for subset rules created using the ADD_SUBSET_RULES or ADD_SUBSET_PROPAGATION_RULES procedure.




Evaluation Contexts for Message Rules

When you use either the ADD_MESSAGE_RULE procedure or the ADD_MESSAGE_PROPAGATION_RULE procedure to create a message rule, the message rule uses a user-defined message type that you specify when you create the rule. Such a system-created message rule uses a system-created evaluation context with a system-generated name, and a different evaluation context is created for each message type used to create message rules. The evaluation context is created in the schema that owns the rule, and only the user who owns the evaluation context is granted the EXECUTE privilege on it.

The evaluation context for this type of message rule contains a variable that is the same type as the message type. The name of this variable is in the form VAR$_number, where number is a system-generated number. For example, if you specify strmadmin.region_pri_msg as the message type when you create a message rule, then the system-created evaluation context has a variable of this type, and the variable is used in the rule condition. Assume that the following statement created the strmadmin.region_pri_msg type:

CREATE TYPE strmadmin.region_pri_msg AS OBJECT(
  region         VARCHAR2(100),
  priority       NUMBER,
  message        VARCHAR2(3000))
/

When you create a message rule using this type, you can specify the following rule condition:

:msg.region = 'EUROPE' AND :msg.priority = '1'

The system-created message rule replaces :msg in the rule condition you specify with the name of the variable. The following is an example of a message rule condition that might result:

:VAR$_52.region = 'EUROPE' AND  :VAR$_52.priority = '1'

In this case, VAR$_52 is the variable name, the type of the VAR$_52 variable is strmadmin.region_pri_msg, and the evaluation context for the rule contains this variable.
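For example, a statement similar to the following creates such a message rule for a messaging client (the messaging client and queue names are illustrative):

BEGIN
  DBMS_STREAMS_ADM.ADD_MESSAGE_RULE(
    message_type   => 'strmadmin.region_pri_msg',
    rule_condition => ':msg.region = ''EUROPE'' AND :msg.priority = ''1''',
    streams_type   => 'dequeue',  -- 'dequeue' creates rules for a messaging client
    streams_name   => 'msg_client',
    queue_name     => 'strmadmin.streams_queue');
END;
/

In the generated rule, :msg is replaced with the system-generated variable name, as described previously.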

The message rule itself has an evaluation context. A statement similar to the following creates an evaluation context for a message rule:

DECLARE
  vt  SYS.RE$VARIABLE_TYPE_LIST;
BEGIN
  vt := SYS.RE$VARIABLE_TYPE_LIST(
    SYS.RE$VARIABLE_TYPE('VAR$_52', 'STRMADMIN.REGION_PRI_MSG', 
       'SYS.DBMS_STREAMS_INTERNAL.MSG_VARIABLE_VALUE_FUNCTION', NULL));
  DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT(
    evaluation_context_name => 'STRMADMIN.EVAL_CTX$_99',
    variable_types          => vt,
    evaluation_function     => NULL);
END;
/

The name of the evaluation context is in the form EVAL_CTX$_number, where number is a system-generated number. In this example, the name of the evaluation context is EVAL_CTX$_99.

This statement also includes a reference to the MSG_VARIABLE_VALUE_FUNCTION internal function in the SYS.DBMS_STREAMS_INTERNAL package. This function converts an ANYDATA payload, which encapsulates a message instance, into an instance of the same type as the variable before evaluating rules on the data. For example, if the variable type is strmadmin.region_pri_msg, then the MSG_VARIABLE_VALUE_FUNCTION converts the message payload from an ANYDATA payload to a strmadmin.region_pri_msg payload.

If you create rules for different message types, then Oracle creates a different evaluation context for each message type. If you create a rule with the same message type as an existing rule, then the new rule uses the evaluation context for the existing rule. When you use the ADD_MESSAGE_RULE or ADD_MESSAGE_PROPAGATION_RULE procedure to create a rule set for a messaging client or apply process, the new rule set does not have an evaluation context.

Oracle Streams and Event Contexts

In Oracle Streams, capture processes, synchronous captures, and messaging clients do not use event contexts, but propagations and apply processes do. The following types of messages can be staged in a queue: captured LCRs, buffered LCRs, buffered user messages, persistent LCRs, and persistent user messages. When a message is staged in a queue, a propagation or apply process can send the message, along with an event context, to the rules engine for evaluation. An event context always has the following name-value pair: AQ$_MESSAGE as the name and the message as the value.

If you create a custom evaluation context, then you can create propagation and apply process rules that refer to Oracle Streams events using implicit variables. The variable value function for each implicit variable can check for event contexts with the name AQ$_MESSAGE. If an event context with this name is found, then the variable value function returns a value based on a message. You can also pass the event context to an evaluation function and a variable method function.




Oracle Streams and Action Contexts

The following sections describe the purposes of action contexts in Oracle Streams and the importance of ensuring that only one rule in a rule set can evaluate to TRUE for a particular rule condition.

Purposes of Action Contexts in Oracle Streams

In Oracle Streams, an action context serves the purposes described in the following sections.

A different name-value pair can exist in the action context of a rule for each of these purposes. If an action context for a rule contains more than one of these name-value pairs, then the actions specified or described by the name-value pairs are performed in the following order:

  1. Perform subset transformation.

  2. Display information about declarative rule-based transformation.

  3. Perform custom rule-based transformation.

  4. Follow execution directive and perform execution if directed to do so (apply only).

  5. Enqueue into a destination queue (apply only).


    Note:

    The actions specified in the action context for a rule are performed only if the rule is in the positive rule set for a capture process, synchronous capture, propagation, apply process, or messaging client. If a rule is in a negative rule set, then these Oracle Streams clients ignore the action context of the rule.

Internal LCR Transformations in Subset Rules

When you use subset rules, an update operation can be converted into an insert or delete operation when it is captured, propagated, applied, or dequeued. This automatic conversion is called row migration and is performed by an internal transformation specified in the action context when the subset rule evaluates to TRUE. The name-value pair for a subset transformation has STREAMS$_ROW_SUBSET for the name and either INSERT or DELETE for the value.
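For example, a statement similar to the following creates subset rules whose action contexts include this internal transformation (the apply process and queue names are illustrative):

BEGIN
  DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
    table_name    => 'hr.employees',
    dml_condition => 'department_id = 50',  -- rows in the subset
    streams_type  => 'apply',
    streams_name  => 'apply_emp',
    queue_name    => 'strmadmin.streams_queue');
END;
/

This procedure creates separate insert, update, and delete rules for the subset, and row migration is performed automatically when an update moves a row into or out of the subset.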




Information About Declarative Rule-Based Transformations

A declarative rule-based transformation is an internal modification of a row LCR that results when a rule evaluates to TRUE. The name-value pair for a declarative rule-based transformation has STREAMS$_INTERNAL_TRANFORM for the name and the name of a data dictionary view that provides additional information about the transformation for the value.

The name-value pair added for a declarative rule-based transformation is for information purposes only. These name-value pairs are not used by Oracle Streams clients. However, the declarative rule-based transformations described in an action context are performed internally before any custom rule-based transformations specified in the same action context.

Custom Rule-Based Transformations

A custom rule-based transformation is any modification made by a user-defined function to a message when a rule evaluates to TRUE. The name-value pair for a custom rule-based transformation has STREAMS$_TRANSFORM_FUNCTION for the name and the name of the transformation function for the value.

Execution Directives for Messages During Apply

The SET_EXECUTE procedure in the DBMS_APPLY_ADM package specifies whether a message that satisfies the specified rule is executed by an apply process. The name-value pair for an execution directive has APPLY$_EXECUTE for the name and NO for the value if the apply process should not execute the message. If a message that satisfies a rule should be executed by an apply process, then this name-value pair is not present in the action context of the rule.

Enqueue Destinations for Messages During Apply

The SET_ENQUEUE_DESTINATION procedure in the DBMS_APPLY_ADM package sets the queue where a message that satisfies the specified rule is enqueued automatically by an apply process. The name-value pair for an enqueue destination has APPLY$_ENQUEUE for the name and the name of the destination queue for the value.
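For example, statements similar to the following set an execution directive and an enqueue destination for a rule (the rule and queue names are illustrative):

BEGIN
  -- Direct the apply process not to execute messages that satisfy the rule
  DBMS_APPLY_ADM.SET_EXECUTE(
    rule_name => 'strmadmin.hr_dml_rule',
    execute   => FALSE);
  -- Enqueue messages that satisfy the rule into a destination queue
  DBMS_APPLY_ADM.SET_ENQUEUE_DESTINATION(
    rule_name              => 'strmadmin.hr_dml_rule',
    destination_queue_name => 'strmadmin.hr_queue');
END;
/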

Ensure That Only One Rule Can Evaluate to TRUE for a Particular Rule Condition

If you use a non-NULL action context for one or more rules in a positive rule set, then ensure that only one rule can evaluate to TRUE for a particular rule condition. If more than one rule evaluates to TRUE for a particular condition, then only one of the rules is returned, which can lead to unpredictable results.

For example, suppose two rules evaluate to TRUE if an LCR contains a DML change to the hr.employees table. The first rule has a NULL action context. The second rule has an action context that specifies a custom rule-based transformation. If there is a DML change to the hr.employees table, then both rules evaluate to TRUE for the change, but only one rule is returned. In this case, the transformation might or might not occur, depending on which rule is returned.

You might want to ensure that only one rule in a positive rule set can evaluate to TRUE for any condition, regardless of whether any of the rules have a non-NULL action context. By following this guideline, you can avoid unpredictable results if, for example, a non-NULL action context is added to a rule in the future.

Action Context Considerations for Schema and Global Rules

If you use an action context for a custom rule-based transformation, enqueue destination, or execute directive with a schema rule or global rule, then the action specified by the action context is carried out on a message if the message causes the schema or global rule to evaluate to TRUE. For example, if a schema rule has an action context that specifies a custom rule-based transformation, then the transformation is performed on LCRs for the tables in the schema.

You might want to use an action context with a schema or global rule but exclude a subset of LCRs from the action performed by the action context. For example, if you want to perform a custom rule-based transformation on all of the tables in the hr schema except for the job_history table, then ensure that the transformation function returns the original LCR if the table is job_history.

If you want to set an enqueue destination or an execute directive for all of the tables in the hr schema except for the job_history table, then you can use a schema rule and add the following condition to it:

:dml.get_object_name() != 'JOB_HISTORY'

In this case, if you want LCRs for the job_history table to evaluate to TRUE, but you do not want to perform the enqueue or execute directive, then you can add a table rule for the table to a positive rule set. That is, the schema rule would have the enqueue destination or execute directive, but the table rule would not.
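For example, a statement similar to the following creates a schema rule with this condition added (the apply process and queue names are illustrative). In the and_condition parameter, specify :lcr rather than :dml or :ddl; Oracle converts :lcr to the appropriate variable in the generated rule:

BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name   => 'hr',
    streams_type  => 'apply',
    streams_name  => 'apply_hr',
    queue_name    => 'strmadmin.streams_queue',
    include_dml   => TRUE,
    include_ddl   => FALSE,
    and_condition => ':lcr.get_object_name() != ''JOB_HISTORY''');
END;
/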


See Also:

"System-Created Rules" for more information about schema and global rules

User-Created Rules, Rule Sets, and Evaluation Contexts

The DBMS_STREAMS_ADM package generates system-created rules and rule sets, and it can specify an Oracle-supplied evaluation context for rules and rule sets or generate system-created evaluation contexts. If you must create rules, rule sets, or evaluation contexts that cannot be created using the DBMS_STREAMS_ADM package, then you can use the DBMS_RULE_ADM package to create them.

Use the DBMS_RULE_ADM package to create rules with conditions that cannot be generated by the DBMS_STREAMS_ADM package, such as complex rule conditions, and to create custom evaluation contexts for rules and rule sets.

You can create a rule set using the DBMS_RULE_ADM package, and you can associate it with a capture process, synchronous capture, propagation, apply process, or messaging client. Such a rule set can be a positive rule set or negative rule set for an Oracle Streams client, and a rule set can be a positive rule set for one Oracle Streams client and a negative rule set for another.


User-Created Rules and Rule Sets

The following sections describe some of the types of rules and rule sets that you can create using the DBMS_RULE_ADM package.

Rule Conditions for Specific Types of Operations

In some cases, you might want to capture, propagate, apply, or dequeue only changes that contain specific types of operations. For example, you might want to apply changes containing only insert operations for a particular table, but not other operations, such as update and delete.

Suppose you want to specify a rule condition that evaluates to TRUE only for INSERT operations on the hr.employees table. You can accomplish this by specifying the INSERT command type in the rule condition:

:dml.get_command_type() = 'INSERT' AND :dml.get_object_owner() = 'HR' 
AND :dml.get_object_name() = 'EMPLOYEES' AND :dml.is_null_tag() = 'Y'

Similarly, suppose you want to specify a rule condition that evaluates to TRUE for all DML operations on the hr.departments table, except DELETE operations. You can accomplish this by specifying the following rule condition:

:dml.get_object_owner() = 'HR' AND :dml.get_object_name() = 'DEPARTMENTS' AND
:dml.is_null_tag() = 'Y' AND (:dml.get_command_type() = 'INSERT' OR
:dml.get_command_type() = 'UPDATE')

This rule condition evaluates to TRUE for INSERT and UPDATE operations on the hr.departments table, but not for DELETE operations. Because the hr.departments table does not include any LOB columns, you do not need to specify the LOB command types for DML operations (LOB ERASE, LOB WRITE, and LOB TRIM), but these command types should be specified in such a rule condition for a table that contains one or more LOB columns.
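For example, assuming a hypothetical hr.resumes table that contains a LOB column, a rule condition similar to the following also covers the LOB command types:

:dml.get_object_owner() = 'HR' AND :dml.get_object_name() = 'RESUMES' AND
:dml.is_null_tag() = 'Y' AND (:dml.get_command_type() = 'INSERT' OR
:dml.get_command_type() = 'UPDATE' OR :dml.get_command_type() = 'LOB WRITE' OR
:dml.get_command_type() = 'LOB ERASE' OR :dml.get_command_type() = 'LOB TRIM')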

The following rule condition accomplishes the same behavior for the hr.departments table. That is, the following rule condition evaluates to TRUE for all DML operations on the hr.departments table, except DELETE operations:

:dml.get_object_owner() = 'HR' AND :dml.get_object_name() = 'DEPARTMENTS' AND
:dml.is_null_tag() = 'Y' AND :dml.get_command_type() != 'DELETE'

The example rule conditions described previously in this section are all simple rule conditions. However, when you add custom conditions to system-created rule conditions, the entire condition might not be a simple rule condition, and nonsimple rules might not evaluate efficiently. In general, you should use simple rule conditions whenever possible to improve rule evaluation performance. Rule conditions created using the DBMS_STREAMS_ADM package, without custom conditions added, are always simple.

Rule Conditions that Instruct Oracle Streams Clients to Discard Unsupported LCRs

You can use the following functions in rule conditions to instruct an Oracle Streams client to discard LCRs that encapsulate unsupported changes:

  • The GET_COMPATIBLE member function for LCRs. This function returns the minimal database compatibility required to support an LCR.

  • The COMPATIBLE_9_2 function, COMPATIBLE_10_1 function, COMPATIBLE_10_2 function, COMPATIBLE_11_1 function, COMPATIBLE_11_2 function, and MAX_COMPATIBLE function in the DBMS_STREAMS package. These functions return constant values that correspond to 9.2.0, 10.1.0, 10.2.0, 11.1.0, 11.2.0, and maximum compatibility in a database, respectively. You control the compatibility of an Oracle database using the COMPATIBLE initialization parameter.

For example, consider the following rule:

BEGIN
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name => 'strmadmin.dml_compat_9_2',
    condition => ':dml.GET_COMPATIBLE() > DBMS_STREAMS.COMPATIBLE_9_2()');
END;
/

If this rule is in the negative rule set for an Oracle Streams client, such as a capture process, a propagation, or an apply process, then the Oracle Streams client discards any row LCR that is not compatible with Oracle9i Database Release 2 (9.2).

The following is an example that is more appropriate for a positive rule set:

BEGIN
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name => 'strmadmin.dml_compat_9_2',
    condition => ':dml.GET_COMPATIBLE() <= DBMS_STREAMS.COMPATIBLE_10_1()');
END;
/

If this rule is in the positive rule set for an Oracle Streams client, then the Oracle Streams client discards any row LCR that is not compatible with Oracle Database 10g Release 1 or earlier. That is, the Oracle Streams client processes any row LCR that is compatible with Oracle9i Database Release 2 (9.2) or Oracle Database 10g Release 1 (10.1) and satisfies the other rules in its rule sets, but it discards any row LCR that is not compatible with these releases.

You can add the following rule to a positive rule set to discard row LCRs that are not supported by Oracle Streams in your current release of Oracle Database:

BEGIN
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name => 'strmadmin.dml_compat_max',
    condition => ':dml.GET_COMPATIBLE() < DBMS_STREAMS.MAX_COMPATIBLE()');
END;
/

The MAX_COMPATIBLE function always returns the maximum compatibility, which is greater than the compatibility constants returned by the DBMS_STREAMS package. Therefore, when you use this function in rule conditions, the rule conditions do not need to be changed when you upgrade to a later release of Oracle Database. Newly supported changes in a later release will automatically be captured and LCRs containing newly supported changes will not be discarded.

The rules in the previous examples evaluate efficiently. If you use schema rules or global rules created by the DBMS_STREAMS_ADM package to capture, propagate, apply, or dequeue LCRs, then you can use rules such as these to discard LCRs that are not supported by a particular database.


Note:

  • You can determine which database objects in a database are not supported by Oracle Streams by querying the DBA_STREAMS_UNSUPPORTED and DBA_STREAMS_COLUMNS data dictionary views.

  • Instead of using the DBMS_RULE_ADM package to create rules with GET_COMPATIBLE conditions, you can use one of the procedures in the DBMS_STREAMS_ADM package to create such rules by specifying the GET_COMPATIBLE condition in the AND_CONDITION parameter.

  • DDL LCRs always return DBMS_STREAMS.COMPATIBLE_9_2.
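For example, a query similar to the following lists the database objects in the hr schema that are not supported by Oracle Streams, along with the reason each object is unsupported:

SELECT TABLE_NAME, REASON
  FROM DBA_STREAMS_UNSUPPORTED
  WHERE OWNER = 'HR';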


Complex Rule Conditions

Complex rule conditions are rule conditions that do not meet the requirements for simple rule conditions described in "Simple Rule Conditions". In an Oracle Streams environment, the DBMS_STREAMS_ADM package creates rules with simple rule conditions only, assuming no custom conditions are added to the system-created rules.

Table 5-3 describes the types of system-created rule conditions that you can create with the DBMS_STREAMS_ADM package. If you must create rules with complex conditions, then you can use the DBMS_RULE_ADM package.

There is a wide range of complex rule conditions. The following sections contain some examples of complex rule conditions.


Note:

  • Complex rule conditions can degrade rule evaluation performance.

  • In rule conditions, if you specify the name of a database, then ensure that you include the full database name, including the domain name.


Rule Conditions Using the NOT Logical Condition to Exclude Objects

You can use the NOT logical condition to exclude certain changes from being captured, propagated, applied, or dequeued in an Oracle Streams environment.

For example, suppose you want to specify rule conditions that evaluate to TRUE for all DML and DDL changes to all database objects in the hr schema, except for changes to the hr.regions table. You can use the NOT logical condition to accomplish this with two rules: one for DML changes and one for DDL changes. Here are the rule conditions for these rules:

(:dml.get_object_owner() = 'HR' AND NOT :dml.get_object_name() = 'REGIONS')
AND :dml.is_null_tag() = 'Y'

((:ddl.get_object_owner() = 'HR' OR :ddl.get_base_table_owner() = 'HR')
AND NOT :ddl.get_object_name() = 'REGIONS') AND :ddl.is_null_tag() = 'Y'

Notice that object names, such as HR and REGIONS, are specified in all uppercase characters in these examples. For rules to evaluate properly, the case of the characters in object names, such as tables and users, must match the case of the characters in the data dictionary. Therefore, if no case was specified for an object when the object was created, then specify the object name in all uppercase in rule conditions. If a particular case was specified with double quotation marks when the object was created, then specify the object name in that same case in rule conditions. The object name itself, however, is never enclosed in double quotation marks in rule conditions.

For example, if the REGIONS table in the HR schema was actually created as "Regions", then specify Regions in rule conditions that involve this table, as in the following example:

:dml.get_object_name() = 'Regions'

You can use the Oracle Streams evaluation context when you create these rules using the DBMS_RULE_ADM package. The following example creates a rule set to hold the complex rules, creates rules with the previous conditions, and adds the rules to the rule set:

BEGIN
  -- Create the rule set
  DBMS_RULE_ADM.CREATE_RULE_SET(
    rule_set_name       => 'strmadmin.complex_rules',
    evaluation_context  => 'SYS.STREAMS$_EVALUATION_CONTEXT');
  -- Create the complex rules
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name  => 'strmadmin.hr_not_regions_dml',
    condition  => ' (:dml.get_object_owner() = ''HR'' AND NOT ' ||
                  ' :dml.get_object_name() = ''REGIONS'') AND ' ||
                  ' :dml.is_null_tag() = ''Y'' ');
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name  => 'strmadmin.hr_not_regions_ddl',
    condition  => ' ((:ddl.get_object_owner() = ''HR'' OR ' ||
                  ' :ddl.get_base_table_owner() = ''HR'') AND NOT ' ||
                  ' :ddl.get_object_name() = ''REGIONS'') AND ' ||
                  ' :ddl.is_null_tag() = ''Y'' ');
  --  Add the rules to the rule set
  DBMS_RULE_ADM.ADD_RULE(
    rule_name      => 'strmadmin.hr_not_regions_dml', 
    rule_set_name  => 'strmadmin.complex_rules');
  DBMS_RULE_ADM.ADD_RULE(
    rule_name      => 'strmadmin.hr_not_regions_ddl', 
    rule_set_name  => 'strmadmin.complex_rules');
END;
/

In this case, the rules inherit the Oracle Streams evaluation context from the rule set.


Note:

In most cases, you can avoid using complex rules with the NOT logical condition by using the DBMS_STREAMS_ADM package to add rules to the negative rule set for an Oracle Streams client.

Rule Conditions Using the LIKE Condition

You can use the LIKE condition to create complex rules that evaluate to TRUE when a condition in the rule matches a specified pattern. For example, suppose you want to specify rule conditions that evaluate to TRUE for all DML and DDL changes to all database objects in the hr schema that begin with the pattern JOB. You can use the LIKE condition to accomplish this with two rules: one for DML changes and one for DDL changes. Here are the rule conditions for these rules:

(:dml.get_object_owner() = 'HR' AND :dml.get_object_name() LIKE 'JOB%')
AND :dml.is_null_tag() = 'Y'

((:ddl.get_object_owner() = 'HR' OR :ddl.get_base_table_owner() = 'HR')
AND :ddl.get_object_name() LIKE 'JOB%') AND :ddl.is_null_tag() = 'Y'

Rule Conditions with Undefined Variables that Evaluate to NULL

During evaluation, an implicit variable in a rule condition is undefined if the variable value function for the variable returns NULL. An explicit variable without any attributes in a rule condition is undefined if the client does not send the value of the variable to the rules engine when it runs the DBMS_RULE.EVALUATE procedure.

Regarding variables with attributes, a variable is undefined if the client does not send the value of the variable, or of any of its attributes, to the rules engine when it runs the DBMS_RULE.EVALUATE procedure. For example, if variable x has attributes a and b, then the variable is undefined if the client sends neither the value of x nor the value of either attribute. However, if the client sends the value of at least one attribute, for example a but not b, then the variable is defined.

An undefined variable in a rule condition evaluates to NULL for Oracle Streams clients of the rules engine, which include capture processes, synchronous captures, propagations, apply processes, and messaging clients. In contrast, for non-Oracle Streams clients of the rules engine, an undefined variable in a rule condition can cause the rules engine to return maybe_rules to the client. When a rule set is evaluated, maybe_rules are rules that might evaluate to TRUE given more information.

The number of maybe_rules returned to Oracle Streams clients is reduced by treating each undefined variable as NULL. Reducing the number of maybe_rules can improve performance if the reduction results in more efficient evaluation of a rule set when a message occurs. Rules that would result in maybe_rules for non-Oracle Streams clients can result in TRUE or FALSE rules for Oracle Streams clients, as the following examples illustrate.

Examples of Undefined Variables that Result in TRUE Rules for Oracle Streams Clients

Consider the following user-defined rule condition:

:m IS NULL

If the value of the variable m is undefined during evaluation, then a maybe rule results for non-Oracle Streams clients of the rules engine. However, for Oracle Streams clients, this condition evaluates to TRUE because the undefined variable m is treated as a NULL. You should avoid adding rules such as this to rule sets for Oracle Streams clients, because such rules will evaluate to TRUE for every message. So, for example, if the positive rule set for a capture process has such a rule, then the capture process might capture messages that you did not intend to capture.

Here is another user-specified rule condition that uses an Oracle Streams :dml variable:

:dml.get_object_owner() = 'HR' AND :m IS NULL

For Oracle Streams clients, if a message consists of a row change to a table in the hr schema, and the value of the variable m is not known during evaluation, then this condition evaluates to TRUE because the undefined variable m is treated as a NULL.

Examples of Undefined Variables that Result in FALSE Rules for Oracle Streams Clients

Consider the following user-defined rule condition:

:m = 5

If the value of the variable m is undefined during evaluation, then a maybe rule results for non-Oracle Streams clients of the rules engine. However, for Oracle Streams clients, this condition evaluates to FALSE because the undefined variable m is treated as a NULL.

Consider another user-specified rule condition that uses an Oracle Streams :dml variable:

:dml.get_object_owner() = 'HR' AND :m = 5

For Oracle Streams clients, if a message consists of a row change to a table in the hr schema, and the value of the variable m is not known during evaluation, then this condition evaluates to FALSE because the undefined variable m is treated as a NULL.

Variables as Function Parameters in Rule Conditions

Oracle recommends that you avoid using :dml and :ddl variables as function parameters for rule conditions. The following example uses the :dml variable as a parameter to a function named my_function:

my_function(:dml) = 'Y'

Rule conditions such as these can degrade rule evaluation performance and can result in the capture or propagation of extraneous Oracle Streams data dictionary information.

User-Created Evaluation Contexts

You can use a custom evaluation context in an Oracle Streams environment. Any user-defined evaluation context involving LCRs must include all the variables in SYS.STREAMS$_EVALUATION_CONTEXT. The type of each variable and its variable value function must be the same for each variable as the ones defined in SYS.STREAMS$_EVALUATION_CONTEXT. In addition, when creating the evaluation context using DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT, the SYS.DBMS_STREAMS_INTERNAL.EVALUATION_CONTEXT_FUNCTION must be specified for the evaluation_function parameter. You can alter an existing evaluation context using the DBMS_RULE_ADM.ALTER_EVALUATION_CONTEXT procedure.

You can find information about an evaluation context in the following data dictionary views:

  • ALL_EVALUATION_CONTEXTS

  • ALL_EVALUATION_CONTEXT_TABLES

  • ALL_EVALUATION_CONTEXT_VARS

If necessary, you can use the information in these data dictionary views to build a new evaluation context based on the SYS.STREAMS$_EVALUATION_CONTEXT.
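For example, a query similar to the following lists the variables defined in the Oracle Streams evaluation context, along with their types and variable value functions:

SELECT VARIABLE_NAME, VARIABLE_TYPE, VARIABLE_VALUE_FUNCTION
  FROM ALL_EVALUATION_CONTEXT_VARS
  WHERE EVALUATION_CONTEXT_OWNER = 'SYS' AND
        EVALUATION_CONTEXT_NAME  = 'STREAMS$_EVALUATION_CONTEXT';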


Note:

Avoid using variable names with special characters, such as $ and #, to ensure that there are no conflicts with Oracle-supplied evaluation context variables.


See Also:

Oracle Database Reference for more information about these data dictionary views


Glossary

action context

Optional information associated with a rule that is interpreted by the client of the rules engine when the rule is evaluated for a message.

ANYDATA queue

A queue of type ANYDATA. These queues can stage messages of different types wrapped in an ANYDATA wrapper.

See Also: typed queue

applied SCN

A system change number (SCN) relating to a capture process that corresponds to the most recent message dequeued by an apply process that applies changes captured by the capture process.

apply forwarding

A directed network in which messages being forwarded at an intermediate database are first processed by an apply process. These messages are then recaptured by a capture process at the intermediate database and forwarded.

See Also: queue forwarding

apply handler

A collection of SQL statements or a user-defined procedure used by an apply process for customized processing of messages. Apply handlers include statement DML handlers, message handlers, procedure DML handlers, DDL handlers, precommit handlers, and error handlers.

apply process

An optional Oracle background process that dequeues messages from a specific queue and either applies each message directly, discards it, passes it as a parameter to an apply handler, or re-enqueues it. An apply process is an Oracle Streams client.

See Also: logical change record (LCR)

apply servers

A component of an apply process that includes one or more processes that apply LCRs to database objects as DML or DDL statements or pass the LCRs to their appropriate apply handlers. For user messages, the apply servers pass the messages to the message handler. Apply servers can also enqueue logical change record (LCR) and non-LCR messages specified by the DBMS_APPLY_ADM.SET_ENQUEUE_DESTINATION procedure. If an apply server encounters an error, then it tries to resolve the error with a user-specified error handler. If an apply server cannot resolve an error, then it places the entire transaction, including all of its LCRs, in the error queue.

See Also: logical change record (LCR)

apply user

The user in whose security domain an apply process dequeues messages that satisfy its rule sets, applies messages directly to database objects, runs custom rule-based transformations configured for apply process rules, and runs apply handlers configured for the apply process.

approximate commit system change number (approximate CSCN)

An SCN value based on the current SCN of the database when a transaction that has enqueued messages into a commit-time queue is committed.

archived-log downstream capture process

A downstream capture process that captures changes in archived redo log files copied from the source database to the downstream database.

barrier transaction

A DDL transaction or a transaction that includes a row logical change record (row LCR) for which an apply process cannot identify the table rows or the database object by using the destination database data dictionary and virtual dependency definitions.

buffered LCR

A logical change record (LCR) that is constructed explicitly by an application and enqueued into the buffered queue portion of an ANYDATA queue.

buffered queue

The portion of a queue that uses the Oracle Streams pool to store messages in memory and a queue table to store messages that have spilled from memory.

buffered user message

A non-LCR message of a user-defined type that is created explicitly by an application and enqueued into a buffered queue. A buffered user message can be enqueued into the buffered queue portion of an ANYDATA queue or a typed queue.

builder server

A component of a capture process that is a process that merges redo records from the preparer servers. These redo records either evaluated to TRUE during partial evaluation or were inconclusive during partial evaluation. The builder server preserves the system change number (SCN) order of these redo records and passes the merged redo records to the capture process.

capture database

The database running the capture process that captures changes made to the source database. The capture database and the source database are the same database when the capture process is a local capture process. The capture database and the source database are different when the capture process is a downstream capture process.

capture process

An optional Oracle background process that scans the database redo log to capture DML and DDL changes made to database objects. A capture process is an Oracle Streams client.

capture user

Either the user in whose security domain a capture process captures changes that satisfy its rule sets and runs custom rule-based transformations configured for capture process rules, or the user in whose security domain a synchronous capture captures changes that satisfy its rule set and runs custom rule-based transformations configured for synchronous capture rules.

captured LCR

A logical change record (LCR) that was captured implicitly by a capture process and enqueued into the buffered queue portion of an ANYDATA queue.

See Also: user message

captured SCN

The system change number (SCN) that corresponds to the most recent change scanned in the redo log by a capture process.

change cycling

Sending a change back to the database where it originated. Typically, change cycling should be avoided in an information sharing environment by using tags and by using the LCR member function GET_SOURCE_DATABASE_NAME in rule conditions.

See Also: logical change record (LCR)

change handler

A special type of statement DML handler that tracks table changes and was created by either the DBMS_STREAMS_ADM.MAINTAIN_CHANGE_TABLE procedure or the DBMS_APPLY_ADM.SET_CHANGE_HANDLER procedure.

checkpoint

Information about the current state of a capture process that is stored persistently in the data dictionary of the database running the capture process.

checkpoint interval

A regular interval at which a capture process attempts to record a checkpoint.

checkpoint retention time

The amount of time that a capture process retains checkpoints before purging them automatically.

column list

A list of columns for which an update conflict handler is called when an update conflict occurs for one or more of the columns in the list.

See Also: conflict resolution

commit-time queue

A queue in which messages are ordered by their approximate commit system change number (approximate CSCN) values.

conditional log group

A supplemental log group that logs the before images of all specified columns only if at least one of the columns in the supplemental log group is modified.

See Also: unconditional log group

conflict

A mismatch between the old values in an LCR and the expected data in a table. Conflicts are detected by an apply process when it attempts to apply an LCR. Conflicts typically result when two different databases that are sharing data in a table modify the same row in the table at nearly the same time.

See Also: logical change record (LCR)

conflict resolution

Handling a conflict to avoid an apply error. Either prebuilt update conflict handlers or custom conflict handlers can resolve conflicts.

consumption

The process of dequeuing a message from a queue.

coordinator process

A component of an apply process that is an Oracle background process that gets transactions from the reader server and passes them to apply servers.

custom apply

An apply process passes an LCR as a parameter to a user procedure for processing. The user procedure can process the LCR in a customized way.

See Also: logical change record (LCR)

custom rule-based transformation

A rule-based transformation that requires a user-defined PL/SQL function to perform the transformation.

See Also: declarative rule-based transformation

database supplemental logging

The type of supplemental logging that can apply to the primary key, foreign key, and unique key columns in an entire database.

DDL handler

An apply handler that uses a PL/SQL procedure to process DDL LCRs.

See Also: DDL logical change record (DDL LCR)

DDL logical change record (DDL LCR)

A logical change record (LCR) that describes a data definition language (DDL) change.

declarative rule-based transformation

A rule-based transformation that covers one of a common set of transformation scenarios for row LCRs. Declarative rule-based transformations are run internally without using PL/SQL.

See Also: row logical change record (row LCR) and custom rule-based transformation

dequeue

To retrieve a message from a queue.

destination database

A database where messages are consumed. Messages can be consumed when they are dequeued implicitly from a queue by a propagation or apply process, or messages can be consumed when they are dequeued explicitly by an application, a messaging client, or a user.

See Also: consumption

destination queue

The queue that receives the messages propagated by a propagation from a source queue.

direct apply

An apply process applies an LCR without running a user procedure.

See Also: logical change record (LCR)

directed network

A network in which propagated messages pass through one or more intermediate databases before arriving at a destination database.

DML handler

An apply handler that processes row LCRs.

See Also: row logical change record (row LCR)

downstream capture process

A capture process that runs on a database other than its source database.

downstream database

The database on which a downstream capture process runs.

enqueue

To place a message in a queue.

error handler

An apply handler that uses a PL/SQL procedure to try to resolve apply errors. An error handler is invoked only when a row logical change record (row LCR) raises an apply process error. Such an error might result from a conflict if no conflict handler is specified or if the update conflict handler cannot resolve the conflict.

evaluation context

A database object that defines external data that can be referenced in rule conditions. The external data can exist as variables, table data, or both.

exception queue

Messages are transferred to an exception queue if they cannot be retrieved and processed for some reason.

explicit capture

The messages are enqueued into a queue by an application or a user.

explicit consumption

The messages in a queue are dequeued either by a messaging client when it is invoked by a user or application or by an application or user directly.

expression

A combination of one or more values and operators that evaluate to a value.

file

In the context of a file group, a reference to a file stored on hard disk. A file is composed of a file name, a directory object, and a file type. The directory object references the directory in which the file is stored on hard disk.

file group

A collection of versions.

file group repository

A collection of all of the file groups in a database.

first SCN

The lowest system change number (SCN) in the redo log from which a capture process can capture changes.

global rule

A rule that is relevant either to an entire database or an entire queue.

heterogeneous information sharing

Sharing information between Oracle and non-Oracle databases.

high-watermark

The system change number (SCN) beyond which no messages have been applied by an apply process.

See Also: low-watermark

ignore SCN

The system change number (SCN) specified for a table below which changes cannot be applied by an apply process.

implicit capture

The messages are captured automatically by a capture process or by synchronous capture and enqueued into a queue.

implicit consumption

The messages in a queue are dequeued automatically by an apply process.

instantiation

The process of preparing database objects for instantiation at a source database, optionally copying the database objects from a source database to a destination database, and setting the instantiation SCN for each instantiated database object.

instantiation SCN

The system change number (SCN) for a table which specifies that only changes that were committed after the SCN at the source database are applied by an apply process.

LCR

See logical change record (LCR).

LOB assembly

An option for DML handlers and error handlers that assembles multiple row LCRs resulting from a change to a single row with LOB columns into a single row LCR. LOB assembly simplifies processing of row LCRs with LOB columns in DML handlers and error handlers.

local capture process

A capture process that runs on its source database.

logical change record (LCR)

A message with a specific format that describes a database change.

See Also: row logical change record (row LCR) and DDL logical change record (DDL LCR)

LogMiner data dictionary

A separate data dictionary used by a capture process to determine the details of a change that it is capturing. The LogMiner data dictionary is necessary because the primary data dictionary of the source database might not be synchronized with the redo data being scanned by a capture process.

low-watermark

The system change number (SCN) up to which all messages have been applied by an apply process.

See Also: high-watermark

maximum checkpoint SCN

The system change number (SCN) that corresponds to the last checkpoint interval recorded by a capture process.

message

A unit of shared information in an Oracle Streams environment.

message handler

An apply handler that uses a PL/SQL procedure to process persistent user messages.

See Also: logical change record (LCR)

message rule

A rule that is relevant only for a user message of a specific message type.

messaging client

An optional Oracle Streams client that dequeues persistent LCRs or persistent user messages when it is invoked by an application or a user.

negative rule set

A rule set for an Oracle Streams client that results in the Oracle Streams client discarding a message when a rule in the rule set evaluates to TRUE for the message. The negative rule set for an Oracle Streams client always is evaluated before the positive rule set.

nonpersistent queue

Nonpersistent queues store messages in memory. They are generally used to provide an asynchronous mechanism to send notifications to all users that are currently connected. Nonpersistent queues were deprecated in Oracle Database 10g Release 2. Oracle recommends that you use buffered messaging instead.

nontransactional queue

A queue in which each message is its own transaction.

See Also: transactional queue

object dependency

A virtual dependency definition that defines a parent-child relationship between two objects at a destination database.

oldest SCN

For a running apply process, the earliest system change number (SCN) of the transactions currently being dequeued and applied. For a stopped apply process, the oldest SCN is the earliest SCN of the transactions that were being applied when the apply process was stopped.

Oracle Streams client

A mechanism that performs work in an Oracle Streams environment and is a client of the rules engine (when the mechanism is associated with one or more rule sets). The following are Oracle Streams clients: capture process, propagation, apply process, and messaging client.

Oracle Streams data dictionary

A separate data dictionary used by propagations and apply processes to keep track of the database objects from a particular source database.

Oracle Streams pool

A portion of memory in the System Global Area (SGA) that is used by Oracle Streams. The Oracle Streams pool stores buffered queue messages in memory, and it provides memory for capture processes and apply processes.

Oracle Streams topology

A representation of the databases in an Oracle Streams environment, the Oracle Streams components configured in these databases, and the flow of messages between these components.

persistent LCR

A logical change record (LCR) that is enqueued into the persistent queue portion of an ANYDATA queue. A persistent LCR can be enqueued in one of the following ways:

persistent queue

The portion of a queue that only stores messages on hard disk in a queue table, not in memory.

persistent user message

A non-LCR message of a user-defined type that is enqueued into a persistent queue. A persistent user message can be enqueued in one of the following ways:

A persistent user message can be enqueued into the persistent queue portion of an ANYDATA queue or a typed queue.

positive rule set

A rule set for an Oracle Streams client that results in the Oracle Streams client performing its task for a message when a rule in the rule set evaluates to TRUE for the message. The negative rule set for an Oracle Streams client always is evaluated before the positive rule set.

precommit handler

An apply handler that can receive the commit information for a transaction and use a PL/SQL procedure to process the commit information in any customized way.

prepared table

A table that has been prepared for instantiation.

preparer server

A component of a capture process that scans a region defined by the reader server and performs prefiltering of changes found in the redo log. A preparer server is a process, and multiple preparer servers can run in parallel. Prefiltering entails sending partial information about changes, such as the schema and object name for a change, to the rules engine for evaluation, and receiving the results of the evaluation.

procedure DML handler

An apply handler that uses a PL/SQL procedure to process row LCRs.

See Also: row logical change record (row LCR)

propagation

An optional Oracle Streams client that uses an Oracle Scheduler job to send messages from a source queue to a destination queue.

propagation job

An Oracle Scheduler job used by a propagation to propagate messages.

propagation schedule

A schedule that specifies how often a propagation job propagates messages.

queue

The abstract storage unit used by a messaging system to store messages.

queue forwarding

A directed network in which the messages being forwarded at an intermediate database are the messages received by the intermediate database, so that the source database for a message is the database where the message originated.

See Also: apply forwarding

queue table

A database table where queues are stored. Each queue table contains a default exception queue.

reader server

  1. A component of a capture process that is a process that reads the redo log and divides the redo log into regions.

  2. A component of an apply process that dequeues messages. The reader server is a process that computes dependencies between LCRs and assembles messages into transactions. The reader server then returns the assembled transactions to the coordinator process, which assigns them to idle apply servers.

See Also: logical change record (LCR)

real-time downstream capture process

A downstream capture process that can capture changes made at the source database before the changes are archived in an archived redo log file.

required checkpoint SCN

The system change number (SCN) that corresponds to the lowest checkpoint interval for which a capture process requires redo data.

replication

The process of sharing database objects and data at multiple databases.

resolution column

The column used to identify a prebuilt update conflict handler.

See Also: conflict resolution

row logical change record (row LCR)

A logical change record (LCR) that describes a change to the data in a single row or a change to a single LONG, LONG RAW, or LOB column in a row that results from a data manipulation language (DML) statement or a piecewise operation. One DML statement can result in multiple row LCRs.

row migration

An automatic conversion performed by an internal rule-based transformation when a subset rule evaluates to TRUE in which an UPDATE operation might be converted into an INSERT or DELETE operation.

rule

A database object that enables a client to perform an action when an event occurs and a condition is satisfied.

rule-based transformation

Any modification to a message that is performed when a rule in a positive rule set evaluates to TRUE for the message.

rule condition

A component of a rule which combines one or more expressions and conditions and returns a Boolean value, which is a value of TRUE, FALSE, or NULL (unknown).

rule set

A group of rules.

rules engine

A built-in part of Oracle that evaluates rule sets.

schema rule

A rule that is relevant only to a particular schema.

secure queue

A queue for which Oracle Streams Advanced Queuing (AQ) agents must be associated explicitly with one or more database users who can perform queue operations, such as enqueue and dequeue.

source database

The database where changes captured by a capture process are generated in a redo log, or the database where a synchronous capture that generated LCRs is configured.

source queue

The queue from which a propagation propagates messages to a destination queue.

start SCN

The system change number (SCN) from which a capture process begins to capture changes.

statement DML handler

An apply handler that uses one or more SQL statements to process row LCRs.

See Also: row logical change record (row LCR)

subset rule

A rule that is relevant only to a subset of the rows in a particular table.

supplemental log group

A group of columns in a table that is supplementally logged.

See Also: supplemental logging

supplemental logging

Additional column data placed in a redo log whenever an operation is performed. A capture process captures this additional information and places it in LCRs, and the additional information might be needed for an apply process to apply LCRs properly at a destination database.

See Also: logical change record (LCR)

synchronous capture

An optional Oracle Streams client that uses an internal mechanism to capture DML changes made to tables immediately after the changes are made.

system-created rule

A rule with a system-generated name that was created using the DBMS_STREAMS_ADM package.

table rule

A rule that is relevant only to a particular table.

table supplemental logging

The type of supplemental logging that applies to columns in a particular table.

tablespace repository

A collection of the tablespace sets in a file group.

tag

Data of RAW data type that appears in each redo entry and LCR. You can use tags to modify the behavior of Oracle Streams clients and to track LCRs. Tags can also be used to prevent change cycling.

See Also: logical change record (LCR)

topology

See Oracle Streams topology.

transaction control directive

A special type of row LCR captured by a capture process or synchronous capture that contains transaction control statements, such as COMMIT and ROLLBACK.

See Also: row logical change record (row LCR)

transactional queue

A queue in which messages can be grouped into a set that is applied as one transaction.

See Also: nontransactional queue

typed queue

A queue that can stage messages of one specific type only.

See Also: ANYDATA queue


unconditional log group

A supplemental log group that logs the before images of specified columns when the table is changed, regardless of whether the change affected any of the specified columns.

See Also: conditional log group

user message

A non-LCR message of a user-defined type. A user message can be a buffered user message or a persistent user message.

See Also: logical change record (LCR)

value dependency

A virtual dependency definition that defines a table constraint, such as a unique key, or a relationship between the columns of two or more tables.

version

A collection of related files.

virtual dependency definition

A description of a dependency that is used by an apply process to detect dependencies between transactions being applied at a destination database.


Part VI

Oracle Streams Information Provisioning

This part describes information provisioning with Oracle Streams. This part contains the following chapters:


Preface

Oracle Streams Concepts and Administration describes the features and functionality of Oracle Streams. This document contains conceptual information about Oracle Streams, along with information about managing an Oracle Streams environment. In addition, this document contains detailed examples that configure an Oracle Streams capture and apply environment and a rule-based application.

This Preface contains these topics:

Audience

Oracle Streams Concepts and Administration is intended for database administrators who create and maintain Oracle Streams environments. These administrators perform one or more of the following tasks:

To use this document, you must be familiar with relational database concepts, SQL, distributed database administration, Advanced Queuing concepts, PL/SQL, and the operating systems under which you run an Oracle Streams environment.

Documentation Accessibility

For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.

Access to Oracle Support

Oracle customers have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.

Related Documents

For more information, see these Oracle resources:

Many of the examples in this book use the sample schemas of the sample database, which is installed by default when you install Oracle Database. Refer to Oracle Database Sample Schemas for information about how these schemas were created and how you can use them yourself.

Conventions

The following text conventions are used in this document:

ConventionMeaning
boldfaceBoldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.
italicItalic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.
monospaceMonospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.


7 Advanced Capture Process Concepts

Capturing information with Oracle Streams means creating a message that contains the information and enqueuing the message into a queue. The captured information can describe a database change, or it can be any other type of information.

The following topics contain conceptual information about capturing information with Oracle Streams:

Multiple Capture Processes in a Single Database

If you run multiple capture processes in a single database, increase the size of the System Global Area (SGA) for each instance. Use the SGA_MAX_SIZE initialization parameter to increase the SGA size. Also, if the size of the Oracle Streams pool is not managed automatically in the database, then increase the size of the Oracle Streams pool by 10 MB for each parallelism value of each capture process. For example, if two capture processes run in a database, and the parallelism parameter is set to 4 for one of them and 1 for the other, then increase the Oracle Streams pool by 50 MB (10 MB × (4 + 1)).
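Assuming the Oracle Streams pool is managed manually, you might check its current size and then increase it along the lines of the following sketch; the 256M target is an illustrative value, not a recommendation:

```sql
-- Check the current Oracle Streams pool size (manual management)
SELECT name, value/1024/1024 AS size_mb
  FROM V$PARAMETER
 WHERE name = 'streams_pool_size';

-- Increase the pool to an assumed new total of 256 MB
ALTER SYSTEM SET STREAMS_POOL_SIZE = 256M;
```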

Also, Oracle recommends that each ANYDATA queue used by a capture process, propagation, or apply process store captured LCRs from at most one capture process from a particular source database. Therefore, use a separate queue for each capture process that captures changes originating at a particular source database, and make sure each queue has its own queue table. Also, do not propagate messages from two or more capture processes with the same source database to the same queue.


Note:

The size of the Oracle Streams pool is managed automatically if the MEMORY_TARGET, MEMORY_MAX_TARGET, or SGA_TARGET initialization parameter is set to a nonzero value.


See Also:


Capture Process Checkpoints

A checkpoint is information about the current state of a capture process that is stored persistently in the data dictionary of the database running the capture process. A capture process tries to record a checkpoint at regular intervals called checkpoint intervals.

Required Checkpoint SCN

The system change number (SCN) that corresponds to the lowest checkpoint for which a capture process requires redo data is the required checkpoint SCN. The redo log file that contains the required checkpoint SCN, and all subsequent redo log files, must be available to the capture process. If a capture process is stopped and restarted, then it starts scanning the redo log from the SCN that corresponds to its required checkpoint SCN. The required checkpoint SCN is important for recovery if a database stops unexpectedly. Also, if the first SCN is reset for a capture process, then it must be set to a value that is less than or equal to the required checkpoint SCN for the capture process. You can determine the required checkpoint SCN for a capture process by querying the REQUIRED_CHECKPOINT_SCN column in the DBA_CAPTURE data dictionary view.
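For example, the following query, run as an administrative user, displays the required checkpoint SCN for each capture process in the database:

```sql
SELECT CAPTURE_NAME, REQUIRED_CHECKPOINT_SCN
  FROM DBA_CAPTURE;
```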

Maximum Checkpoint SCN

The SCN that corresponds to the last checkpoint recorded by a capture process is the maximum checkpoint SCN. If you create a capture process that captures changes from a source database, and other capture processes already exist which capture changes from the same source database, then the maximum checkpoint SCNs of the existing capture processes can help you to decide whether the new capture process should create a LogMiner data dictionary or share one of the existing LogMiner data dictionaries. You can determine the maximum checkpoint SCN for a capture process by querying the MAX_CHECKPOINT_SCN column in the DBA_CAPTURE data dictionary view.

Checkpoint Retention Time

The checkpoint retention time is the amount of time, in number of days, that a capture process retains checkpoints before purging them automatically. A capture process periodically computes the age of a checkpoint by subtracting the NEXT_TIME of the archived redo log file that corresponds to the checkpoint from the FIRST_TIME of the archived redo log file containing the required checkpoint SCN for the capture process. If the resulting value is greater than the checkpoint retention time, then the capture process automatically purges the checkpoint by advancing its first SCN value. Otherwise, the checkpoint is retained. The DBA_REGISTERED_ARCHIVED_LOG view displays the FIRST_TIME and NEXT_TIME for archived redo log files, and the REQUIRED_CHECKPOINT_SCN column in the DBA_CAPTURE view displays the required checkpoint SCN for a capture process.

Figure 7-1 shows an example of a checkpoint being purged when the checkpoint retention time is set to 20 days.

Figure 7-1 Checkpoint Retention Time Set to 20 Days

Description of Figure 7-1 follows
Description of "Figure 7-1 Checkpoint Retention Time Set to 20 Days"

In Figure 7-1, with the checkpoint retention time set to 20 days, the checkpoint at SCN 435250 is purged because it is 21 days old, while the checkpoint at SCN 479315 is retained because it is 8 days old.

Whenever the first SCN is reset for a capture process, the capture process purges information about archived redo log files before the new first SCN from its LogMiner data dictionary. After this information is purged, the archived redo log files remain on the hard disk, but the files are not needed by the capture process. The PURGEABLE column in the DBA_REGISTERED_ARCHIVED_LOG view displays YES for the archived redo log files that are no longer needed. These files can be removed from disk or moved to another location without affecting the capture process.
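To see which registered archived redo log files are no longer needed, you can query the DBA_REGISTERED_ARCHIVED_LOG view; for example:

```sql
-- Archived redo log files that a capture process no longer needs
SELECT CONSUMER_NAME, NAME, FIRST_TIME, NEXT_TIME, PURGEABLE
  FROM DBA_REGISTERED_ARCHIVED_LOG
 WHERE PURGEABLE = 'YES';
```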

If you create a capture process using the CREATE_CAPTURE procedure in the DBMS_CAPTURE_ADM package, then you can specify the checkpoint retention time, in days, using the checkpoint_retention_time parameter. The default checkpoint retention time is 60 days if the checkpoint_retention_time parameter is not specified in the CREATE_CAPTURE procedure, or if you use the DBMS_STREAMS_ADM package to create the capture process. The CHECKPOINT_RETENTION_TIME column in the DBA_CAPTURE view displays the current checkpoint retention time for a capture process.

You can change the checkpoint retention time for a capture process by specifying a new time period in the ALTER_CAPTURE procedure in the DBMS_CAPTURE_ADM package. If you do not want checkpoints for a capture process to be purged automatically, then specify DBMS_CAPTURE_ADM.INFINITE for the checkpoint_retention_time parameter in CREATE_CAPTURE or ALTER_CAPTURE.
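For example, assuming a capture process named strm01_capture (a hypothetical name), the following call sets its checkpoint retention time to 30 days; substitute DBMS_CAPTURE_ADM.INFINITE for the value 30 to disable automatic purging:

```sql
BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name              => 'strm01_capture', -- hypothetical capture process
    checkpoint_retention_time => 30);              -- retain checkpoints for 30 days
END;
/
```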


Note:

To specify a checkpoint retention time for a capture process, the compatibility level of the database running the capture process must be 10.2.0 or higher. If the compatibility level is lower than 10.2.0 for a database, then the checkpoint retention time for all capture processes running on the database is infinite.

A New First SCN Value and Purged LogMiner Data Dictionary Information

If you reset the first SCN value for an existing capture process, or if the first SCN is reset automatically when checkpoints are purged, then Oracle automatically purges LogMiner data dictionary information before the new first SCN setting. If the start SCN for a capture process corresponds to redo information that has been purged, then Oracle Database automatically resets the start SCN to the same value as the first SCN. However, if the start SCN is higher than the new first SCN setting, then the start SCN remains unchanged.

Figure 7-2 shows how Oracle automatically purges LogMiner data dictionary information prior to a new first SCN setting, and how the start SCN is not changed if it is higher than the new first SCN setting.

Figure 7-2 Start SCN Higher than Reset First SCN

Description of Figure 7-2 follows
Description of "Figure 7-2 Start SCN Higher than Reset First SCN"

Given this example, if the first SCN is reset again to a value higher than the start SCN value for a capture process, then the start SCN no longer corresponds to existing information in the LogMiner data dictionary. Figure 7-3 shows how Oracle Database resets the start SCN automatically if it is lower than a new first SCN setting.

Figure 7-3 Start SCN Lower than Reset First SCN

Description of Figure 7-3 follows
Description of "Figure 7-3 Start SCN Lower than Reset First SCN"

As you can see, the first SCN and start SCN for a capture process can continually increase over time, and, as the first SCN moves forward, it might no longer correspond to an SCN established by the DBMS_CAPTURE_ADM.BUILD procedure.


See Also:


ARCHIVELOG Mode and a Capture Process

The following list describes how different types of capture processes read the redo data:

You can query the REQUIRED_CHECKPOINT_SCN column in the DBA_CAPTURE data dictionary view to determine the required checkpoint SCN for a capture process. When the capture process is restarted, it scans the redo log from the required checkpoint SCN forward. Therefore, the redo log file that includes the required checkpoint SCN, and all subsequent redo log files, must be available to the capture process.
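For example:

```sql
COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A15
COLUMN REQUIRED_CHECKPOINT_SCN HEADING 'Required|Checkpoint SCN' FORMAT 99999999999

SELECT CAPTURE_NAME, REQUIRED_CHECKPOINT_SCN
  FROM DBA_CAPTURE;
```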

You must keep an archived redo log file available until you are certain that no capture process will need that file. The first SCN for a capture process can be reset to a higher value, but it cannot be reset to a lower value. Therefore, a capture process will never need the redo log files that contain information before its first SCN. Query the DBA_LOGMNR_PURGED_LOG data dictionary view to determine which archived redo log files will never be needed by any capture process.
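For example, the following query lists the archived redo log files that will never be needed by any capture process:

```sql
SELECT FILE_NAME
  FROM DBA_LOGMNR_PURGED_LOG;
```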

When a local capture process falls behind, there is a seamless transition from reading an online redo log to reading an archived redo log, and, when a local capture process catches up, there is a seamless transition from reading an archived redo log to reading an online redo log. Similarly, when a real-time downstream capture process falls behind, there is a seamless transition from reading the standby redo log to reading an archived redo log, and, when a real-time downstream capture process catches up, there is a seamless transition from reading an archived redo log to reading the standby redo log.


Note:

At a downstream database in a downstream capture configuration, log files from a remote source database should be kept separate from local database log files. In addition, if the downstream database contains log files from multiple source databases, then the log files from each source database should be kept separate from each other.


See Also:


Capture Process Creation

You can create a capture process using a procedure in the DBMS_STREAMS_ADM package or the DBMS_CAPTURE_ADM package. Using a procedure in the DBMS_STREAMS_ADM package to create a capture process is simpler because the procedure automatically uses defaults for some configuration options. In addition, when you use a procedure in the DBMS_STREAMS_ADM package, a rule set is created for the capture process, and rules can be added to the rule set automatically. The rule set is a positive rule set if the inclusion_rule parameter is set to TRUE (the default) in the procedure, or it is a negative rule set if the inclusion_rule parameter is set to FALSE in the procedure.
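For example, a call such as the following creates a capture process named strm01_capture (if it does not already exist), along with a positive rule set containing rules that capture DML changes to the hr.employees table (the capture process and queue names are illustrative):

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'hr.employees',
    streams_type   => 'capture',
    streams_name   => 'strm01_capture',
    queue_name     => 'strmadmin.streams_queue',
    include_dml    => TRUE,
    include_ddl    => FALSE,
    inclusion_rule => TRUE);
END;
/
```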

Alternatively, using the CREATE_CAPTURE procedure in the DBMS_CAPTURE_ADM package to create a capture process is more flexible, and you can create one or more rule sets and rules for the capture process either before or after it is created. You can use the procedures in the DBMS_STREAMS_ADM package or the DBMS_RULE_ADM package to add rules to a rule set for the capture process. To create a capture process at a downstream database, you must use the DBMS_CAPTURE_ADM package.
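For example, a minimal CREATE_CAPTURE call might look like the following, where the queue, capture process, and rule set names are illustrative and the rule set is assumed to exist already:

```sql
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name    => 'strmadmin.streams_queue',
    capture_name  => 'strm02_capture',
    rule_set_name => 'strmadmin.strm01_rule_set');
END;
/
```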

When you create a capture process using a procedure in the DBMS_STREAMS_ADM package and generate one or more rules in the positive rule set for the capture process, the objects for which changes are captured are prepared for instantiation automatically, unless it is a downstream capture process and there is no database link from the downstream database to the source database.

When you create a capture process using the CREATE_CAPTURE procedure in the DBMS_CAPTURE_ADM package, you should prepare for instantiation any objects for which you plan to capture changes. Prepare these objects for instantiation as soon as possible after capture process creation. You can prepare objects for instantiation using one of the following procedures in the DBMS_CAPTURE_ADM package:

These procedures can also enable supplemental logging for the key columns or for all columns in the table or tables prepared for instantiation.
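For example, the following call prepares the hr.employees table for instantiation and enables supplemental logging for its key columns (specify 'all' instead of 'keys' to log all columns):

```sql
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name           => 'hr.employees',
    supplemental_logging => 'keys');
END;
/
```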


Note:

After creating a capture process, avoid changing the DBID or global name of the source database for the capture process. If you change either the DBID or global name of the source database, then the capture process must be dropped and re-created.

The LogMiner Data Dictionary for a Capture Process

A capture process requires a data dictionary that is separate from the primary data dictionary for the source database. This separate data dictionary is called a LogMiner data dictionary. There can be more than one LogMiner data dictionary for a particular source database. If there are multiple capture processes capturing changes from the source database, then two or more capture processes can share a LogMiner data dictionary, or each capture process can have its own LogMiner data dictionary. If the LogMiner data dictionary that is needed by a capture process does not exist, then the capture process populates it using information in the redo log when the capture process is started for the first time.

The DBMS_CAPTURE_ADM.BUILD procedure extracts data dictionary information to the redo log, and this procedure must be run at least once on the source database before any capture process configured to capture changes originating at the source database is started. The extracted data dictionary information in the redo log is consistent with the primary data dictionary at the time when the DBMS_CAPTURE_ADM.BUILD procedure is run. This procedure also identifies a valid first SCN value that you can use to create a capture process.

You can perform a build of data dictionary information in the redo log multiple times, and a particular build might or might not be used by a capture process to create a LogMiner data dictionary. The amount of information extracted to a redo log when you run the BUILD procedure depends on the number of database objects in the database. Typically, the BUILD procedure generates a large amount of redo data that a capture process must scan subsequently. Therefore, you should run the BUILD procedure only when necessary.

In most cases, if a build is required when a capture process is created using a procedure in the DBMS_STREAMS_ADM or DBMS_CAPTURE_ADM package, then the procedure runs the BUILD procedure automatically. However, the BUILD procedure is not run automatically during capture process creation in the following cases:

  • You use CREATE_CAPTURE and specify a non-NULL value for the first_scn parameter. In this case, the specified first SCN must correspond to a previous build.

  • You create a downstream capture process that does not use a database link. In this case, the procedure that creates the capture process at the downstream database cannot communicate with the source database to run the BUILD procedure automatically. Therefore, you must run the BUILD procedure manually on the source database and specify the first SCN that corresponds to the build during capture process creation.
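In these cases, you can run the BUILD procedure manually on the source database and record the first SCN that it returns:

```sql
SET SERVEROUTPUT ON
DECLARE
  scn NUMBER;
BEGIN
  -- Extract the data dictionary information to the redo log and
  -- return the corresponding valid first SCN
  DBMS_CAPTURE_ADM.BUILD(first_scn => scn);
  DBMS_OUTPUT.PUT_LINE('First SCN Value = ' || scn);
END;
/
```

You can then specify the returned value for the first_scn parameter when you create the capture process.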

A capture process requires a LogMiner data dictionary because the information in the primary data dictionary might not apply to the changes being captured from the redo log. These changes might have occurred minutes, hours, or even days before they are captured by a capture process. For example, consider the following scenario:

  1. A capture process is configured to capture changes to tables.

  2. A database administrator stops the capture process. When the capture process is stopped, it records the SCN of the change it was currently capturing.

  3. User applications continue to make changes to the tables while the capture process is stopped.

  4. The capture process is restarted three hours after it was stopped.

In this case, to ensure data consistency, the capture process must begin capturing changes in the redo log at the time when it was stopped. The capture process starts capturing changes at the SCN that it recorded when it was stopped.

The redo log contains raw data. It does not contain database object names and column names in tables. Instead, it uses object numbers and internal column numbers for database objects and columns, respectively. Therefore, when a change is captured, a capture process must reference a data dictionary to determine the details of the change.

Because a LogMiner data dictionary might be populated when a capture process is started for the first time, it might take some time to start capturing changes. The amount of time required depends on the number of database objects in the database. You can query the STATE column in the V$STREAMS_CAPTURE dynamic performance view to monitor the progress while a capture process is processing a data dictionary build.
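For example:

```sql
COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A15
COLUMN STATE HEADING 'State' FORMAT A40

SELECT CAPTURE_NAME, STATE
  FROM V$STREAMS_CAPTURE;
```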


See Also:


Scenario Illustrating Why a Capture Process Needs a LogMiner Data Dictionary

Consider a scenario in which a capture process has been configured to capture changes to table t1, which has columns a and b, and the following changes are made to this table at three different points in time:

Time 1: Insert values a=7 and b=15.

Time 2: Add column c.

Time 3: Drop column b.

If for some reason the capture process is capturing changes from an earlier time, then the primary data dictionary and the relevant version in the LogMiner data dictionary contain different information. Table 7-1 illustrates how the information in the LogMiner data dictionary is used when the current time is different than the change capturing time.

Table 7-1 Information About Table t1 in the Primary and LogMiner Data Dictionaries

Current Time  Change Capturing Time  Primary Data Dictionary                      LogMiner Data Dictionary
------------  ---------------------  -------------------------------------------  ---------------------------------------
1             1                      Table t1 has columns a and b.                Table t1 has columns a and b at time 1.
2             1                      Table t1 has columns a, b, and c at time 2.  Table t1 has columns a and b at time 1.
3             1                      Table t1 has columns a and c at time 3.      Table t1 has columns a and b at time 1.


Assume that the capture process captures the change resulting from the insert at time 1 when the actual time is time 3. If the capture process used the primary data dictionary, then it might assume that a value of 7 was inserted into column a and a value of 15 was inserted into column c, because those are the two columns for table t1 at time 3 in the primary data dictionary. However, a value of 15 actually was inserted into column b, not column c.

Because the capture process uses the LogMiner data dictionary, the error is avoided. The LogMiner data dictionary is synchronized with the capture process and continues to record that table t1 has columns a and b at time 1. So, the captured change specifies that a value of 15 was inserted into column b.

Multiple Capture Processes for the Same Source Database

If one or more capture processes are capturing changes made to a source database, and you want to create a capture process that captures changes to the same source database, then the new capture process can either create a LogMiner data dictionary or share one of the existing LogMiner data dictionaries with one or more other capture processes.

Whether a new LogMiner data dictionary is created for a new capture process depends on the setting for the first_scn parameter when you run CREATE_CAPTURE to create a capture process.

If multiple LogMiner data dictionaries exist, and you specify NULL for the first_scn parameter during capture process creation, then the new capture process automatically attempts to share the LogMiner data dictionary of one of the existing capture processes that has taken at least one checkpoint. You can view the maximum checkpoint SCN for all existing capture processes by querying the MAX_CHECKPOINT_SCN column in the DBA_CAPTURE data dictionary view. During capture process creation, if the first_scn parameter is NULL and the start_scn parameter is non-NULL, then an error is raised if the start_scn parameter setting is lower than all of the first SCN values for all existing capture processes.
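For example:

```sql
COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A15
COLUMN FIRST_SCN HEADING 'First SCN' FORMAT 99999999999
COLUMN MAX_CHECKPOINT_SCN HEADING 'Maximum|Checkpoint SCN' FORMAT 99999999999

SELECT CAPTURE_NAME, FIRST_SCN, MAX_CHECKPOINT_SCN
  FROM DBA_CAPTURE;
```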

If multiple LogMiner data dictionaries exist, and you specify a non-NULL value for the first_scn parameter during capture process creation, then the new capture process creates a new LogMiner data dictionary the first time it is started. In this case, before you create the new capture process, you must run the BUILD procedure in the DBMS_CAPTURE_ADM package on the source database. The BUILD procedure generates a corresponding valid first SCN value that you can specify when you create the new capture process.

You can find a first SCN generated by the BUILD procedure by running the following query:

COLUMN FIRST_CHANGE# HEADING 'First SCN' FORMAT 999999999
COLUMN NAME HEADING 'Log File Name' FORMAT A50

SELECT DISTINCT FIRST_CHANGE#, NAME FROM V$ARCHIVED_LOG
  WHERE DICTIONARY_BEGIN = 'YES';

This query can return more than one row if the BUILD procedure was run more than once.

The most important factor to consider when deciding whether a new capture process should share an existing LogMiner data dictionary or create one is the difference between the maximum checkpoint SCN values of the existing capture processes and the start SCN of the new capture process. If the new capture process shares a LogMiner data dictionary, then it must scan the redo log from the point of the maximum checkpoint SCN of the shared LogMiner data dictionary onward, even though the new capture process cannot capture changes before its first SCN. If the start SCN of the new capture process is much higher than the maximum checkpoint SCN of the existing capture process, then the new capture process must scan a large amount of redo data before it reaches its start SCN.

A capture process creates a new LogMiner data dictionary when the first_scn parameter is non-NULL during capture process creation. Follow these guidelines when you decide whether a new capture process should share an existing LogMiner data dictionary or create one:

  • If one or more maximum checkpoint SCN values are greater than the start SCN you want to specify, and if this start SCN is greater than the first SCN of one or more existing capture processes, then it might be better to share the LogMiner data dictionary of an existing capture process. In this case, you can assume there is a checkpoint SCN that is less than the start SCN and that the difference between this checkpoint SCN and the start SCN is small. The new capture process will begin scanning the redo log from this checkpoint SCN and will catch up to the start SCN quickly.

  • If no maximum checkpoint SCN is greater than the start SCN, and if the difference between the highest maximum checkpoint SCN and the start SCN is small, then it might be better to share the LogMiner data dictionary of an existing capture process. The new capture process will begin scanning the redo log from the highest maximum checkpoint SCN, but it will catch up to the start SCN quickly.

  • If no maximum checkpoint SCN is greater than the start SCN, and if the difference between the highest maximum checkpoint SCN and the start SCN is large, then it might take a long time for the capture process to catch up to the start SCN. In this case, it might be better for the new capture process to create a LogMiner data dictionary. It will take some time to create the new LogMiner data dictionary when the new capture process is first started, but the capture process can specify the same value for its first SCN and start SCN, and thereby avoid scanning a large amount of redo data unnecessarily.

Figure 7-4 illustrates these guidelines.

Figure 7-4 Deciding Whether to Share a LogMiner Data Dictionary

Description of Figure 7-4 follows
Description of "Figure 7-4 Deciding Whether to Share a LogMiner Data Dictionary"


Note:

  • If you create a capture process using one of the procedures in the DBMS_STREAMS_ADM package, then it is the same as specifying NULL for the first_scn and start_scn parameters in the CREATE_CAPTURE procedure.

  • You must prepare database objects for instantiation if a new capture process will capture changes made to these database objects. This requirement holds even if the new capture process shares a LogMiner data dictionary with one or more other capture processes for which these database objects have been prepared for instantiation.



See Also:


The Oracle Streams Data Dictionary

Propagations and apply processes use an Oracle Streams data dictionary to keep track of the database objects from a particular source database. An Oracle Streams data dictionary is populated whenever one or more database objects are prepared for instantiation at a source database. Specifically, when a database object is prepared for instantiation, it is recorded in the redo log. When a capture process scans the redo log, it uses this information to populate the local Oracle Streams data dictionary for the source database. For local capture, this Oracle Streams data dictionary is at the source database. For downstream capture, this Oracle Streams data dictionary is at the downstream database.

When you prepare a database object for instantiation, you are informing Oracle Streams that information about the database object is needed by propagations that propagate changes to the database object and apply processes that apply changes to the database object. Any database that propagates or applies these changes requires an Oracle Streams data dictionary for the source database where the changes originated.

After an object has been prepared for instantiation, the local Oracle Streams data dictionary is updated when a DDL statement on the object is processed by a capture process. In addition, an internal message containing information about this DDL statement is captured and placed in the queue for the capture process. Propagations can then propagate these internal messages to destination queues at databases.

An Oracle Streams data dictionary is multiversioned. If a database has multiple propagations and apply processes, then all of them use the same Oracle Streams data dictionary for a particular source database. A database can contain only one Oracle Streams data dictionary for a particular source database, but it can contain multiple Oracle Streams data dictionaries if it propagates or applies changes from multiple source databases.

Capture Process Rule Evaluation

A capture process evaluates changes it finds in the redo log against its positive and negative rule sets. The capture process evaluates a change against the negative rule set first. If one or more rules in the negative rule set evaluate to TRUE for the change, then the change is discarded, but if no rule in the negative rule set evaluates to TRUE for the change, then the change satisfies the negative rule set. When a change satisfies the negative rule set for a capture process, the capture process evaluates the change against its positive rule set. If one or more rules in the positive rule set evaluate to TRUE for the change, then the change satisfies the positive rule set, but if no rule in the positive rule set evaluates to TRUE for the change, then the change is discarded. If a capture process only has one rule set, then it evaluates changes against this one rule set only.

A running capture process completes the following series of actions to capture changes:

  1. Finds changes in the redo log.

  2. Performs prefiltering of the changes in the redo log. During this step, a capture process evaluates rules in its rule sets at a basic level to place changes found in the redo log into two categories: changes that should be converted into LCRs and changes that should not be converted into LCRs. Prefiltering is done in two phases. In the first phase, information that can be evaluated during prefiltering includes schema name, object name, and command type. If more information is needed to determine whether a change should be converted into an LCR, then information that can be evaluated during the second phase of prefiltering includes tag values and column values when appropriate.

    Prefiltering is a safe optimization done with incomplete information. This step identifies relevant changes to be processed subsequently, such that:

    • A capture process converts a change into an LCR if the change satisfies the capture process rule sets. In this case, proceed to Step 3.

    • A capture process does not convert a change into an LCR if the change does not satisfy the capture process rule sets.

    • Regarding MAYBE evaluations, the rule evaluation proceeds as follows:

      • If a change evaluates to MAYBE against both the positive and negative rule set for a capture process, then the capture process might not have enough information to determine whether the change will definitely satisfy both of its rule sets. In this case, further evaluation is necessary. Proceed to Step 3.

      • If the change evaluates to FALSE against the negative rule set and MAYBE against the positive rule set for the capture process, then the capture process might not have enough information to determine whether the change will definitely satisfy both of its rule sets. In this case, further evaluation is necessary. Proceed to Step 3.

      • If the change evaluates to MAYBE against the negative rule set and TRUE against the positive rule set for the capture process, then the capture process might not have enough information to determine whether the change will definitely satisfy both of its rule sets. In this case, further evaluation is necessary. Proceed to Step 3.

      • If the change evaluates to TRUE against the negative rule set and MAYBE against the positive rule set for the capture process, then the capture process discards the change.

      • If the change evaluates to MAYBE against the negative rule set and FALSE against the positive rule set for the capture process, then the capture process discards the change.

  3. Converts changes that satisfy, or might satisfy, the capture process rule sets into LCRs based on prefiltering.

  4. Performs LCR filtering. During this step, a capture process evaluates rules regarding information in each LCR to separate the LCRs into two categories: LCRs that should be enqueued and LCRs that should be discarded.

  5. Discards the LCRs that should not be enqueued because they did not satisfy the capture process rule sets.

  6. Enqueues the remaining captured LCRs into the queue associated with the capture process.

For example, suppose the following rule is defined in the positive rule set for a capture process: Capture changes to the hr.employees table where the department_id is 50. No other rules are defined for the capture process, and the parallelism parameter for the capture process is set to 1.
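A rule with this condition can be created with the ADD_SUBSET_RULES procedure in the DBMS_STREAMS_ADM package, for example (the capture process and queue names are illustrative):

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
    table_name    => 'hr.employees',
    dml_condition => 'department_id = 50',
    streams_type  => 'capture',
    streams_name  => 'strm01_capture',
    queue_name    => 'strmadmin.streams_queue');
END;
/
```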

Given this rule, suppose an UPDATE statement on the hr.employees table changes 50 rows in the table. The capture process performs the following series of actions for each row change:

  1. Finds the next change resulting from the UPDATE statement in the redo log.

  2. Determines that the change resulted from an UPDATE statement to the hr.employees table and must be captured. If the change was made to a different table, then the capture process ignores the change.

  3. Captures the change and converts it into an LCR.

  4. Filters the LCR to determine whether it involves a row where the department_id is 50.

  5. Either enqueues the LCR into the queue associated with the capture process if it involves a row where the department_id is 50, or discards the LCR if it involves a row where the department_id is not 50 or is missing.


    See Also:


Figure 7-5 illustrates capture process rule evaluation in a flowchart.

Figure 7-5 Flowchart Showing Capture Process Rule Evaluation

Description of Figure 7-5 follows
Description of "Figure 7-5 Flowchart Showing Capture Process Rule Evaluation"


26 Monitoring Oracle Streams Apply Processes

The following topics describe monitoring Oracle Streams apply processes:


Note:

The Oracle Streams tool in Oracle Enterprise Manager is also an excellent way to monitor an Oracle Streams environment. See Oracle Database 2 Day + Data Replication and Integration Guide and the online Help for the Oracle Streams tool for more information.

Determining the Queue, Rule Sets, and Status for Each Apply Process

You can determine the following information for each apply process in a database by running the query in this section:

To display this general information about each apply process in a database, run the following query:

COLUMN APPLY_NAME HEADING 'Apply|Process|Name' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Apply|Process|Queue' FORMAT A15
COLUMN RULE_SET_NAME HEADING 'Positive|Rule Set' FORMAT A15
COLUMN NEGATIVE_RULE_SET_NAME HEADING 'Negative|Rule Set' FORMAT A15
COLUMN STATUS HEADING 'Apply|Process|Status' FORMAT A15

SELECT APPLY_NAME, 
       QUEUE_NAME, 
       RULE_SET_NAME, 
       NEGATIVE_RULE_SET_NAME,
       STATUS
  FROM DBA_APPLY;

Your output looks similar to the following:

Apply           Apply                                           Apply
Process         Process         Positive        Negative        Process
Name            Queue           Rule Set        Rule Set        Status
--------------- --------------- --------------- --------------- ---------------
STRM01_APPLY    STREAMS_QUEUE   RULESET$_36                     ENABLED
APPLY_EMP       STREAMS_QUEUE   RULESET$_16                     DISABLED
APPLY           STREAMS_QUEUE   RULESET$_21     RULESET$_23     ENABLED

If the status of an apply process is ABORTED, then you can query the ERROR_NUMBER and ERROR_MESSAGE columns in the DBA_APPLY data dictionary view to determine the error. These columns are populated when an apply process aborts or when an apply process is disabled after reaching a limit. These columns are cleared when an apply process is restarted.
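For example:

```sql
COLUMN APPLY_NAME HEADING 'Apply|Process|Name' FORMAT A15
COLUMN ERROR_NUMBER HEADING 'Error|Number' FORMAT 99999999
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A40

SELECT APPLY_NAME, ERROR_NUMBER, ERROR_MESSAGE
  FROM DBA_APPLY
  WHERE STATUS = 'ABORTED';
```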


Note:

The ERROR_NUMBER and ERROR_MESSAGE columns in the DBA_APPLY data dictionary view are not related to the information in the DBA_APPLY_ERROR data dictionary view.


See Also:

"Checking for Apply Errors" to check for apply errors if the apply process status is ABORTED

Displaying General Information About Each Apply Process

You can display the following general information about each apply process in a database by running the query in this section:

To display this general information about each apply process in a database, run the following query:

COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A20
COLUMN APPLY_CAPTURED HEADING 'Applies Captured LCRs?' FORMAT A22
COLUMN APPLY_USER HEADING 'Apply User' FORMAT A20
 
SELECT APPLY_NAME, APPLY_CAPTURED, APPLY_USER
  FROM DBA_APPLY;

Your output looks similar to the following:

Apply Process Name   Applies Captured LCRs? Apply User         
-------------------- ---------------------- --------------------
STRM01_APPLY         YES                    STRMADMIN           
SYNC_APPLY           NO                     STRMADMIN           

Listing the Parameter Settings for Each Apply Process

The following query displays the current setting for each apply process parameter for each apply process in a database:

COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A15
COLUMN PARAMETER HEADING 'Parameter' FORMAT A30
COLUMN VALUE HEADING 'Value' FORMAT A22
COLUMN SET_BY_USER HEADING 'Set by|User?' FORMAT A10

SELECT APPLY_NAME,
       PARAMETER, 
       VALUE,
       SET_BY_USER  
  FROM DBA_APPLY_PARAMETERS;

Your output looks similar to the following:

Apply Process                                                         Set by
Name            Parameter                      Value                  User?
--------------- ------------------------------ ---------------------- ----------
APPLY$_DB_3     ALLOW_DUPLICATE_ROWS           N                      NO
APPLY$_DB_3     APPLY_SEQUENCE_NEXTVAL         N                      NO
APPLY$_DB_3     COMMIT_SERIALIZATION           DEPENDENT_TRANSACTIONS NO
APPLY$_DB_3     COMPARE_KEY_ONLY               N                      NO
APPLY$_DB_3     DISABLE_ON_ERROR               Y                      NO
APPLY$_DB_3     DISABLE_ON_LIMIT               N                      NO
APPLY$_DB_3     GROUPTRANSOPS                  250                    NO
APPLY$_DB_3     IGNORE_TRANSACTION                                    NO
APPLY$_DB_3     MAXIMUM_SCN                    INFINITE               NO
APPLY$_DB_3     MAX_SGA_SIZE                   INFINITE               NO
APPLY$_DB_3     PARALLELISM                    4                      NO
APPLY$_DB_3     PRESERVE_ENCRYPTION            Y                      NO
APPLY$_DB_3     RTRIM_ON_IMPLICIT_CONVERSION   Y                      NO
APPLY$_DB_3     STARTUP_SECONDS                0                      NO
APPLY$_DB_3     TIME_LIMIT                     INFINITE               NO
APPLY$_DB_3     TRACE_LEVEL                    0                      NO
APPLY$_DB_3     TRANSACTION_LIMIT              INFINITE               NO
APPLY$_DB_3     TXN_AGE_SPILL_THRESHOLD        900                    NO
APPLY$_DB_3     TXN_LCR_SPILL_THRESHOLD        10000                  NO
APPLY$_DB_3     WRITE_ALERT_LOG                Y                      NO

Note:

If the Set by User? column is NO for a parameter, then the parameter is set to its default value. If the Set by User? column is YES for a parameter, then the parameter was set by a user and might or might not be set to its default value.


See Also:


Displaying Information About Apply Handlers

This section contains instructions for displaying information about the apply handlers for apply processes.

This section contains these topics:

Displaying Information About DML Handlers

The following sections contain instructions for displaying information about DML handlers:

Displaying Information About All DML Handlers

You can display the following information about all of the DML handlers in a database, including all statement DML handlers and all procedure DML handlers:

  • The owner and name of the table for which the DML handler is set

  • The operation for which the DML handler is set

  • The name of the DML handler

  • The type of the DML handler, either statement or procedure

  • The name of the apply process that uses the DML handler

To display this information for each DML handler in a database, run the following query:

COLUMN OBJECT_OWNER HEADING 'Table|Owner' FORMAT A7
COLUMN OBJECT_NAME HEADING 'Table Name' FORMAT A11
COLUMN OPERATION_NAME HEADING 'Operation' FORMAT A9
COLUMN HANDLER HEADING 'DML Handler' FORMAT A13
COLUMN HANDLER_TYPE HEADING 'Handler|Type' FORMAT A9
COLUMN APPLY_NAME HEADING 'Apply|Process|Name' FORMAT A15
 
SELECT OBJECT_OWNER,
       OBJECT_NAME,
       OPERATION_NAME,
       NVL(USER_PROCEDURE,HANDLER_NAME) Handler,
       DECODE(HANDLER_TYPE,'PROCEDURE HANDLER','PROCEDURE','STMT HANDLER', 
              'STATEMENT','UNKNOWN') HANDLER_TYPE,
       APPLY_NAME
  FROM DBA_APPLY_DML_HANDLERS
  WHERE ERROR_HANDLER = 'N' AND
        APPLY_DATABASE_LINK IS NULL
  ORDER BY OBJECT_OWNER, OBJECT_NAME;

Your output looks similar to the following:

                                                      Apply
Table                                       Handler   Process
Owner   Table Name  Operation DML Handler   Type      Name
------- ----------- --------- ------------- --------- ---------------
HR      DEPARTMENTS UPDATE    "STRMADMIN"." PROCEDURE
                              SQL_GEN_DEP"
HR      JOBS        UPDATE    TRACK_JOBS    STATEMENT APPLY$_PROD_25
OE      ORDERS      INSERT    MODIFY_ORDERS STATEMENT APPLY$_PROD_25

Because Apply Process Name is NULL for the strmadmin.sql_gen_dep procedure DML handler, this handler is a general handler that runs for all of the local apply processes.

Displaying Information About Statement DML Handlers

The following sections contain queries that display information about the statement DML handlers in a database:

Displaying the Statement DML Handlers in a Database

You can display the following information about the statement DML handlers in a database:

  • The name of the statement DML handler

  • The comment for the statement DML handler

  • The time when the statement DML handler was created

  • The time when the statement DML handler was last modified

To display this information for each statement DML handler in a database, run the following query:

COLUMN HANDLER_NAME HEADING 'Handler Name' FORMAT A15
COLUMN HANDLER_COMMENT HEADING 'Comment' FORMAT A35
COLUMN CREATION_TIME HEADING 'Creation|Time' FORMAT A10
COLUMN MODIFICATION_TIME HEADING 'Last|Change|Time' FORMAT A10

SELECT HANDLER_NAME, 
       HANDLER_COMMENT, 
       CREATION_TIME, 
       MODIFICATION_TIME
  FROM DBA_STREAMS_STMT_HANDLERS
  ORDER BY HANDLER_NAME;

Your output looks similar to the following:

                                                               Last
                                                    Creation   Change
Handler Name    Comment                             Time       Time
--------------- ----------------------------------- ---------- ----------
MODIFY_ORDERS   Modifies inserts into the orders ta 12-MAR-09
                ble                                 07.59.56.9
                                                    46180 AM
 
TRACK_JOBS      Tracks updates to the jobs table    11-MAR-09
                                                    10.47.52.7
                                                    76489 AM

The MODIFICATION_TIME column, shown as Last Change Time in this output, is NULL when the handler has not been modified since it was created.

Displaying the Statement DML Handlers Used by Each Apply Process

When you specify a statement DML handler using the ADD_STMT_HANDLER procedure in the DBMS_APPLY_ADM package at a destination database, you can either specify that the handler runs for a specific apply process or that the handler is a general handler that runs for all apply processes in the database that apply changes locally. If a statement DML handler for an operation on a table is used by a specific apply process, and another statement DML handler is a general handler for the same operation on the same table, then both handlers are invoked when an apply process dequeues a row LCR with the operation on the table. Each statement DML handler receives the original row LCR, and the statement DML handlers can execute in any order.
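For example, a statement DML handler such as the track_jobs handler shown in the sample output could be associated with a specific apply process by a call similar to the following. The object, handler, and apply process names are illustrative; verify the ADD_STMT_HANDLER parameter names against the DBMS_APPLY_ADM documentation for your release:

```sql
BEGIN
  DBMS_APPLY_ADM.ADD_STMT_HANDLER(
    object_name    => 'hr.jobs',                -- table for which the handler is set
    operation_name => 'UPDATE',                 -- operation for which the handler is set
    handler_name   => 'strmadmin.track_jobs',   -- existing statement DML handler
    apply_name     => 'apply$_prod_25');        -- NULL instead makes this a general handler
END;
/
```

Specifying NULL for apply_name creates a general handler that runs for all apply processes in the database that apply changes locally.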

You can display the following information about the statement DML handlers used by the apply processes in the database:

  • The owner and name of the table for which the statement DML handler is set

  • The operation for which the statement DML handler is set

  • The name of the apply process that uses the statement DML handler

  • The name of the statement DML handler

To display this information for the statement DML handlers used by each apply process, run the following query:

COLUMN OBJECT_OWNER HEADING 'Table|Owner' FORMAT A10
COLUMN OBJECT_NAME HEADING 'Table Name' FORMAT A10
COLUMN OPERATION_NAME HEADING 'Operation' FORMAT A9
COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A15
COLUMN HANDLER_NAME HEADING 'Statement DML|Handler Name' FORMAT A30

SELECT OBJECT_OWNER, 
       OBJECT_NAME, 
       OPERATION_NAME, 
       APPLY_NAME,
       HANDLER_NAME
  FROM DBA_APPLY_DML_HANDLERS
  WHERE HANDLER_TYPE='STMT HANDLER'
  ORDER BY OBJECT_OWNER, OBJECT_NAME, OPERATION_NAME;

Your output looks similar to the following:

Table                           Apply Process   Statement DML
Owner      Table Name Operation Name            Handler Name
---------- ---------- --------- --------------- ------------------------------
HR         JOBS       UPDATE    APPLY$_PROD_25  TRACK_JOBS
OE         ORDERS     INSERT    APPLY$_PROD_25  MODIFY_ORDERS

When Apply Process Name is NULL for a statement DML handler, the handler is a general handler that runs for all of the local apply processes.

Displaying All of the Statements in Statement DML Handlers

The query in this section displays the following information about the statements in statement DML handlers in a database:

  • The name of the statement DML handler that includes each statement

  • The execution order of each statement

  • The text of each statement

To display this information, run the following query:

COLUMN HANDLER_NAME HEADING 'Statement|Handler' FORMAT A15
COLUMN EXECUTION_SEQUENCE HEADING 'Execution|Sequence' FORMAT 999999
COLUMN STATEMENT HEADING 'Statement' FORMAT A50

SET LONG  8000
SET PAGES 8000
SELECT HANDLER_NAME,
       EXECUTION_SEQUENCE,
       STATEMENT
  FROM DBA_STREAMS_STMTS
  ORDER BY HANDLER_NAME, EXECUTION_SEQUENCE;

Your output looks similar to the following:

Statement       Execution
Handler          Sequence Statement
--------------- --------- --------------------------------------------------
MODIFY_ORDERS           1 INSERT INTO oe.orders(
                                       order_id,
                                       order_date,
                                       order_mode,
                                       customer_id,
                                       order_status,
                                       order_total,
                                       sales_rep_id,
                                       promotion_id)
                                     VALUES(
                                       :new.order_id,
                                       :new.order_date,
                                       :new.order_mode,
                                       :new.customer_id,
                                       DECODE(:new.order_status, 1, 2, :new.
                          order_status),
                                       :new.order_total,
                                       :new.sales_rep_id,
                                       :new.promotion_id)
 
TRACK_JOBS             10 :lcr.execute TRUE
TRACK_JOBS             20 INSERT INTO hr.track_jobs(
                                       change_id,
                                       job_id,
                                       job_title,
                                       min_salary_old,
                                       min_salary_new,
                                       max_salary_old,
                                       max_salary_new,
                                       timestamp)
                                     VALUES(
                                       hr.track_jobs_seq.NEXTVAL,
                                       :new.job_id,
                                       :new.job_title,
                                       :old.min_salary,
                                       :new.min_salary,
                                       :old.max_salary,
                                       :new.max_salary,
                                       :source_time)

Displaying Information About Procedure DML Handlers

When you specify a local procedure DML handler using the SET_DML_HANDLER procedure in the DBMS_APPLY_ADM package at a destination database, you can either specify that the handler runs for a specific apply process or that the handler is a general handler that runs for all apply processes in the database that apply changes locally, when appropriate. A specific procedure DML handler takes precedence over a generic procedure DML handler. A DML handler is run for a specified operation on a specific table.
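A procedure DML handler such as the strmadmin.sql_gen_dep handler shown in the sample output below could be set with a call similar to the following sketch. The object and procedure names are illustrative; check the SET_DML_HANDLER parameter list in the DBMS_APPLY_ADM documentation for your release:

```sql
BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'hr.departments',           -- table for which the handler is set
    object_type    => 'TABLE',
    operation_name => 'UPDATE',                   -- operation for which the handler is set
    error_handler  => FALSE,                      -- procedure DML handler, not an error handler
    user_procedure => 'strmadmin.sql_gen_dep',
    apply_name     => NULL);                      -- NULL makes this a general handler
END;
/
```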

To display the procedure DML handler for each apply process in a database, run the following query:

COLUMN OBJECT_OWNER HEADING 'Table|Owner' FORMAT A11
COLUMN OBJECT_NAME HEADING 'Table Name' FORMAT A15
COLUMN OPERATION_NAME HEADING 'Operation' FORMAT A9
COLUMN USER_PROCEDURE HEADING 'Handler Procedure' FORMAT A25
COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A15

SELECT OBJECT_OWNER, 
       OBJECT_NAME, 
       OPERATION_NAME, 
       USER_PROCEDURE,
       APPLY_NAME
  FROM DBA_APPLY_DML_HANDLERS
  WHERE ERROR_HANDLER = 'N' AND
        USER_PROCEDURE IS NOT NULL
  ORDER BY OBJECT_OWNER, OBJECT_NAME;

Your output looks similar to the following:

Table                                                           Apply Process
Owner       Table Name      Operation Handler Procedure         Name
----------- --------------- --------- ------------------------- ---------------
HR          DEPARTMENTS     UPDATE    "STRMADMIN"."SQL_GEN_DEP"

Because Apply Process Name is NULL for the strmadmin.sql_gen_dep procedure DML handler, this handler is a general handler that runs for all of the local apply processes.

Displaying the DDL Handler for Each Apply Process

To display the DDL handler for each apply process in a database, run the following query:

COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A20
COLUMN DDL_HANDLER HEADING 'DDL Handler' FORMAT A40

SELECT APPLY_NAME, DDL_HANDLER FROM DBA_APPLY;

Your output looks similar to the following:

Apply Process Name   DDL Handler
-------------------- ----------------------------------------
STREP01_APPLY        "STRMADMIN"."HISTORY_DDL"

Displaying All of the Error Handlers for Local Apply Processes

When you specify a local error handler using the SET_DML_HANDLER procedure in the DBMS_APPLY_ADM package at a destination database, you can specify either that the handler runs for a specific apply process or that the handler is a general handler that runs for all apply processes in the database that apply changes locally when an error is raised by an apply process. A specific error handler takes precedence over a generic error handler. An error handler is run for a specified operation on a specific table.


To display the error handler for each apply process that applies changes locally in a database, run the following query:

COLUMN OBJECT_OWNER HEADING 'Table|Owner' FORMAT A5
COLUMN OBJECT_NAME HEADING 'Table Name' FORMAT A10
COLUMN OPERATION_NAME HEADING 'Operation' FORMAT A10
COLUMN USER_PROCEDURE HEADING 'Handler Procedure' FORMAT A30
COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A15

SELECT OBJECT_OWNER, 
       OBJECT_NAME, 
       OPERATION_NAME, 
       USER_PROCEDURE,
       APPLY_NAME 
  FROM DBA_APPLY_DML_HANDLERS
  WHERE ERROR_HANDLER = 'Y'
  ORDER BY OBJECT_OWNER, OBJECT_NAME;

Your output looks similar to the following:

Table                                                      Apply Process
Owner Table Name Operation  Handler Procedure              Name
----- ---------- ---------- ------------------------------ --------------
HR    REGIONS    INSERT     "STRMADMIN"."ERRORS_PKG"."REGI
                            ONS_PK_ERROR"

Apply Process Name is NULL for the strmadmin.errors_pkg.regions_pk_error error handler. Therefore, this handler is a general handler that runs for all of the local apply processes.

Displaying the Message Handler for Each Apply Process

To display each message handler in a database, run the following query:

COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A20
COLUMN MESSAGE_HANDLER HEADING 'Message Handler' FORMAT A20

SELECT APPLY_NAME, MESSAGE_HANDLER FROM DBA_APPLY
  WHERE MESSAGE_HANDLER IS NOT NULL;

Your output looks similar to the following:

Apply Process Name   Message Handler
-------------------- --------------------
STRM03_APPLY         "OE"."MES_HANDLER"

Displaying the Precommit Handler for Each Apply Process

You can display the following information about each precommit handler used by an apply process in a database:

  • The name of the apply process

  • The name of the precommit handler

  • Whether the apply process applies captured messages

To display this information for each precommit handler in the database, run the following query:

COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A15
COLUMN PRECOMMIT_HANDLER HEADING 'Precommit Handler' FORMAT A30
COLUMN APPLY_CAPTURED HEADING 'Applies Captured|Messages?' FORMAT A20

SELECT APPLY_NAME, PRECOMMIT_HANDLER, APPLY_CAPTURED
  FROM DBA_APPLY
  WHERE PRECOMMIT_HANDLER IS NOT NULL;

Your output looks similar to the following:

                                                    Applies Captured
Apply Process Name   Precommit Handler              Messages?
-------------------- ------------------------------ --------------------
STRM01_APPLY         "STRMADMIN"."HISTORY_COMMIT"   YES

Displaying Session Information About Each Apply Process

The query in this section displays the following session information about each session associated with an apply process in a database:

  • The apply process component that runs in the session

  • The session ID of the session

  • The session serial number

  • The operating system process number

  • The process name

To display this information for each apply process in a database, run the following query:

COLUMN ACTION HEADING 'Apply Process Component' FORMAT A30
COLUMN SID HEADING 'Session ID' FORMAT 99999
COLUMN SERIAL# HEADING 'Session|Serial|Number' FORMAT 99999999
COLUMN PROCESS HEADING 'Operating System|Process Number' FORMAT A17
COLUMN PROCESS_NAME HEADING 'Process|Names' FORMAT A7
 
SELECT /*+PARAM('_module_action_old_length',0)*/ ACTION,
       SID,
       SERIAL#,
       PROCESS,
       SUBSTR(PROGRAM,INSTR(PROGRAM,'(')+1,4) PROCESS_NAME
  FROM V$SESSION
  WHERE MODULE ='Streams' AND
        ACTION LIKE '%Apply%';

Your output looks similar to the following:

                                            Session
                                             Serial Operating System  Process
Apply Process Component        Session ID    Number Process Number    Names
------------------------------ ---------- --------- ----------------- -------
APPLY$_EMDBB_3 - Apply Coordin         17      3040 9863              AP01
ator
APPLY$_EMDBB_3 - Apply Server          58     52788 9869              AS02
APPLY$_EMDBB_3 - Apply Reader          63        21 9865              AS01
APPLY$_EMDBB_3 - Apply Server          64        37 9872              AS03
APPLY$_EMDBB_3 - Apply Server          67        22 9875              AS04
APPLY$_EMDBB_3 - Apply Server          69         1 9877              AS05

Displaying Information About the Reader Server for Each Apply Process

The reader server for an apply process dequeues messages from the queue. The reader server is a process that computes dependencies between LCRs and assembles messages into transactions. The reader server then returns the assembled transactions to the coordinator, which assigns them to idle apply servers.

The query in this section displays the following information about the reader server for each apply process:

  • The name of the apply process

  • Whether the apply process dequeues captured messages

  • The name of the reader server process

  • The state of the reader server

  • The total number of messages dequeued by the reader server

The information displayed by this query is valid only for an enabled apply process.

Run the following query to display this information for each apply process:

COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A15
COLUMN APPLY_CAPTURED HEADING 'Dequeues Captured|Messages?' FORMAT A17
COLUMN PROCESS_NAME HEADING 'Process|Name' FORMAT A7
COLUMN STATE HEADING 'State' FORMAT A17
COLUMN TOTAL_MESSAGES_DEQUEUED HEADING 'Total Messages|Dequeued' FORMAT 99999999

SELECT r.APPLY_NAME,
       ap.APPLY_CAPTURED,
       SUBSTR(s.PROGRAM,INSTR(s.PROGRAM,'(')+1,4) PROCESS_NAME,
       r.STATE,
       r.TOTAL_MESSAGES_DEQUEUED
       FROM V$STREAMS_APPLY_READER r, V$SESSION s, DBA_APPLY ap 
       WHERE r.SID = s.SID AND 
             r.SERIAL# = s.SERIAL# AND 
             r.APPLY_NAME = ap.APPLY_NAME;

Your output looks similar to the following:

Apply Process   Dequeues Captured Process                   Total Messages
Name            Messages?         Name    State                   Dequeued
--------------- ----------------- ------- ----------------- --------------
APPLY_SPOKE     YES               AS01    DEQUEUE MESSAGES              54

Monitoring Transactions and Messages Spilled by Each Apply Process

If the txn_lcr_spill_threshold apply process parameter is set to a value other than INFINITE, then an apply process can spill messages from memory to hard disk when the number of messages in a transaction exceeds the specified number.
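The threshold can be adjusted with the SET_PARAMETER procedure in the DBMS_APPLY_ADM package. The apply process name and value below are illustrative:

```sql
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'apply_hr',
    parameter  => 'txn_lcr_spill_threshold',
    value      => '5000');   -- spill transactions with more than 5000 messages
END;
/
```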

The first query in this section displays the following information about each transaction currently being applied for which the apply process has spilled messages:

  • The name of the apply process

  • The transaction ID of the transaction

  • The system change number (SCN) of the first message in the transaction

  • The number of messages spilled for the transaction

To display this information for each apply process in a database, run the following query:

COLUMN APPLY_NAME HEADING 'Apply Name' FORMAT A20
COLUMN 'Transaction ID' HEADING 'Transaction ID' FORMAT A15
COLUMN FIRST_SCN HEADING 'First SCN'   FORMAT 99999999
COLUMN MESSAGE_COUNT HEADING 'Message Count' FORMAT 99999999
 
SELECT APPLY_NAME,
       XIDUSN ||'.'|| 
       XIDSLT ||'.'||
       XIDSQN "Transaction ID",
       FIRST_SCN,
       MESSAGE_COUNT
  FROM DBA_APPLY_SPILL_TXN;

Your output looks similar to the following:

Apply Name           Transaction ID  First SCN Message Count
-------------------- --------------- --------- -------------
APPLY_HR             1.42.2277         2246944           100

The next query in this section displays the following information about the messages spilled by the apply processes in the local database:

  • The name of the apply process

  • The total number of messages spilled by the apply process

  • The elapsed time the apply process spent spilling messages, in seconds

To display this information for each apply process in a database, run the following query:

COLUMN APPLY_NAME HEADING 'Apply Name' FORMAT A15
COLUMN TOTAL_MESSAGES_SPILLED HEADING 'Total|Spilled Messages' FORMAT 99999999
COLUMN ELAPSED_SPILL_TIME HEADING 'Elapsed Time|Spilling Messages' FORMAT 99999999.99

SELECT APPLY_NAME,
       TOTAL_MESSAGES_SPILLED,
       (ELAPSED_SPILL_TIME/100) ELAPSED_SPILL_TIME
  FROM V$STREAMS_APPLY_READER;

Your output looks similar to the following:

                           Total      Elapsed Time
Apply Name      Spilled Messages Spilling Messages
--------------- ---------------- -----------------
APPLY_HR                     100              2.67

Note:

The elapsed time spilling messages is displayed in seconds. The V$STREAMS_APPLY_READER view displays elapsed time in centiseconds by default. A centisecond is one-hundredth of a second. The query in this section divides each elapsed time by one hundred to display the elapsed time in seconds.

Determining Capture to Dequeue Latency for a Message

The query in this section displays the following information about the last message dequeued by each apply process:

  • The name of the apply process

  • The latency, which is the number of seconds between when the message was created at the source database and when it was dequeued by the apply process

  • The time when the message was created

  • The time when the message was dequeued

  • The message number of the dequeued message

The information displayed by this query is valid only for an enabled apply process.

Run the following query to display this information for each apply process:

COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A17
COLUMN LATENCY HEADING 'Latency|in|Seconds' FORMAT 999999
COLUMN CREATION HEADING 'Message Creation' FORMAT A17
COLUMN LAST_DEQUEUE HEADING 'Last Dequeue Time' FORMAT A20
COLUMN DEQUEUED_MESSAGE_NUMBER HEADING 'Dequeued|Message Number' FORMAT 9999999999

SELECT APPLY_NAME,
     (DEQUEUE_TIME-DEQUEUED_MESSAGE_CREATE_TIME)*86400 LATENCY,
     TO_CHAR(DEQUEUED_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD/YY') CREATION,
     TO_CHAR(DEQUEUE_TIME,'HH24:MI:SS MM/DD/YY') LAST_DEQUEUE,
     DEQUEUED_MESSAGE_NUMBER  
  FROM V$STREAMS_APPLY_READER;

Your output looks similar to the following:

                  Latency
Apply Process          in                                              Dequeued
Name              Seconds Message Creation  Last Dequeue Time    Message Number
----------------- ------- ----------------- -------------------- --------------
APPLY$_STM1_14          1 15:22:15 06/13/05 15:22:16 06/13/05            502129

Displaying General Information About Each Coordinator Process

A coordinator process gets transactions from the reader server and passes these transactions to apply servers. The coordinator process name is APnn, where nn is a coordinator process number.

The query in this section displays the following information about the coordinator process for each apply process:

  • The name of the apply process

  • The name of the coordinator process

  • The session ID of the coordinator process

  • The session serial number of the coordinator process

  • The state of the coordinator process

The information displayed by this query is valid only for an enabled apply process.

Run the following query to display this information for each apply process:

COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A17
COLUMN PROCESS_NAME HEADING 'Coordinator|Process|Name' FORMAT A11
COLUMN SID HEADING 'Session|ID' FORMAT 9999
COLUMN SERIAL# HEADING 'Session|Serial|Number' FORMAT 9999
COLUMN STATE HEADING 'State' FORMAT A21

SELECT c.APPLY_NAME,
       SUBSTR(s.PROGRAM,INSTR(s.PROGRAM,'(')+1,4) PROCESS_NAME,
       c.SID,
       c.SERIAL#,
       c.STATE
       FROM V$STREAMS_APPLY_COORDINATOR c, V$SESSION s
       WHERE c.SID = s.SID AND
             c.SERIAL# = s.SERIAL#;

Your output looks similar to the following:

                  Coordinator         Session
Apply Process     Process     Session  Serial
Name              Name             ID  Number State
----------------- ----------- ------- ------- ---------------------
APPLY_SPOKE       AP01            944       5 IDLE

Displaying Information About Transactions Received and Applied

The query in this section displays the following information about the transactions received, applied, and being applied by each apply process:

  • The name of the apply process

  • The total number of transactions received by the coordinator process

  • The total number of transactions successfully applied

  • The number of apply errors

  • The number of transactions currently being applied

  • The number of complete transactions that the coordinator process has not yet assigned to an apply server

  • The total number of transactions ignored by the apply process

The information displayed by this query is valid only for an enabled apply process.

To display this information for each apply process in a database, run the following query:

COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A20
COLUMN TOTAL_RECEIVED HEADING 'Total|Trans|Received' FORMAT 99999999
COLUMN TOTAL_APPLIED HEADING 'Total|Trans|Applied' FORMAT 99999999
COLUMN TOTAL_ERRORS HEADING 'Total|Apply|Errors' FORMAT 9999
COLUMN BEING_APPLIED HEADING 'Total|Trans Being|Applied' FORMAT 99999999
COLUMN UNASSIGNED_COMPLETE_TXNS HEADING 'Total|Unassigned|Trans' FORMAT 99999999
COLUMN TOTAL_IGNORED HEADING 'Total|Trans|Ignored' FORMAT 99999999
 
SELECT APPLY_NAME,
       TOTAL_RECEIVED,
       TOTAL_APPLIED,
       TOTAL_ERRORS,
       (TOTAL_ASSIGNED - (TOTAL_ROLLBACKS + TOTAL_APPLIED)) BEING_APPLIED,
       UNASSIGNED_COMPLETE_TXNS,
       TOTAL_IGNORED 
       FROM V$STREAMS_APPLY_COORDINATOR;

Your output looks similar to the following:

                         Total     Total  Total       Total      Total     Total
                         Trans     Trans  Apply Trans Being Unassigned     Trans
Apply Process Name    Received   Applied Errors     Applied      Trans   Ignored
-------------------- --------- --------- ------ ----------- ---------- ---------
APPLY_FROM_MULT1            81        73      2           6          4         0
APPLY_FROM_MULT2           114        96      0          14          7         4

Determining the Capture to Apply Latency for a Message for Each Apply Process

This section contains two different queries that show the capture to apply latency for a particular message. That is, these queries show the amount of time between when the message was created at a source database and when the message was applied by an apply process. One query uses the V$STREAMS_APPLY_COORDINATOR dynamic performance view. The other uses the DBA_APPLY_PROGRESS static data dictionary view.

The two queries differ in the messages that they cover: the V$STREAMS_APPLY_COORDINATOR query shows the latency for a captured LCR or a persistent LCR, while the DBA_APPLY_PROGRESS query shows the latency for a captured LCR only.

Both queries display the following information about a message applied by each apply process:

  • The name of the apply process

  • The latency in seconds between message creation and apply

  • The time when the message was created

  • The time when the message was applied

  • The number of the applied message


Note:

These queries do not pertain to persistent user messages.

Example V$STREAMS_APPLY_COORDINATOR Query for Latency

Run the following query to display the capture to apply latency using the V$STREAMS_APPLY_COORDINATOR view for a captured LCR or a persistent LCR for each apply process:

COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A13
COLUMN 'Latency in Seconds' FORMAT 999999
COLUMN 'Message Creation' FORMAT A17
COLUMN 'Apply Time' FORMAT A17
COLUMN HWM_MESSAGE_NUMBER HEADING 'Applied|Message|Number' FORMAT 9999999999

SELECT APPLY_NAME,
     (HWM_TIME-HWM_MESSAGE_CREATE_TIME)*86400 "Latency in Seconds",
     TO_CHAR(HWM_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD/YY') 
        "Message Creation",
     TO_CHAR(HWM_TIME,'HH24:MI:SS MM/DD/YY') "Apply Time",
     HWM_MESSAGE_NUMBER  
  FROM V$STREAMS_APPLY_COORDINATOR;

Your output looks similar to the following:

Apply Process                                                            Message
Name          Latency in Seconds Message Creation  Apply Time             Number
------------- ------------------ ----------------- ----------------- -----------
APPLY$_DA_2                    2 13:00:10 07/14/10 13:00:12 07/14/10      672733

Example DBA_APPLY_PROGRESS Query for Latency

Run the following query to display the capture to apply latency using the DBA_APPLY_PROGRESS view for a captured LCR for each apply process:

COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A17
COLUMN 'Latency in Seconds' FORMAT 999999
COLUMN 'Message Creation' FORMAT A17
COLUMN 'Apply Time' FORMAT A17
COLUMN APPLIED_MESSAGE_NUMBER HEADING 'Applied|Message|Number' FORMAT 9999999999

SELECT APPLY_NAME,
     (APPLY_TIME-APPLIED_MESSAGE_CREATE_TIME)*86400 "Latency in Seconds",
     TO_CHAR(APPLIED_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD/YY') 
        "Message Creation",
     TO_CHAR(APPLY_TIME,'HH24:MI:SS MM/DD/YY') "Apply Time",
     APPLIED_MESSAGE_NUMBER  
  FROM DBA_APPLY_PROGRESS;

Your output looks similar to the following:

                                                                         Applied
Apply Process                                                            Message
Name              Latency in Seconds Message Creation  Apply Time         Number
----------------- ------------------ ----------------- ----------------- -------
APPLY$_STM1_14                    33 14:05:13 06/13/05 14:05:46 06/13/05  498215

Displaying Information About the Apply Servers for Each Apply Process

An apply process can use one or more apply servers that apply LCRs to database objects as DML statements or DDL statements or pass the LCRs to their appropriate handlers. For non-LCR messages, the apply servers pass the messages to the message handler. Each apply server is a process.

The query in this section displays the following information about the apply servers for each apply process:

  • The name of the apply process

  • The name of each apply server process

  • The state of each apply server

  • The total number of transactions assigned to each apply server

  • The total number of messages applied by each apply server

The information displayed by this query is valid only for an enabled apply process.

Run the following query to display information about the apply servers for each apply process:

COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A22
COLUMN PROCESS_NAME HEADING 'Process Name' FORMAT A12
COLUMN STATE HEADING 'State' FORMAT A17
COLUMN TOTAL_ASSIGNED HEADING 'Total|Transactions|Assigned' FORMAT 99999999
COLUMN TOTAL_MESSAGES_APPLIED HEADING 'Total|Messages|Applied' FORMAT 99999999

SELECT r.APPLY_NAME,
       SUBSTR(s.PROGRAM,INSTR(S.PROGRAM,'(')+1,4) PROCESS_NAME,
       r.STATE,
       r.TOTAL_ASSIGNED, 
       r.TOTAL_MESSAGES_APPLIED
  FROM V$STREAMS_APPLY_SERVER R, V$SESSION S 
  WHERE r.SID = s.SID AND 
        r.SERIAL# = s.SERIAL# 
  ORDER BY r.APPLY_NAME, r.SERVER_ID;

Your output looks similar to the following:

                                                             Total     Total
                                                      Transactions  Messages
Apply Process Name     Process Name State                 Assigned   Applied
---------------------- ------------ ----------------- ------------ ---------
APPLY$_DA_2            AS02         IDLE                      1012    109190
APPLY$_DA_2            AS03         IDLE                       996    107568
APPLY$_DA_2            AS04         IDLE                      1006    108648
APPLY$_DA_2            AS05         IDLE                       987    106596

Displaying Effective Apply Parallelism for an Apply Process

In some environments, an apply process might not use all of the apply servers available to it. For example, apply process parallelism can be set to five, but only three apply servers are ever used by the apply process. In this case, the effective apply parallelism is three.

The following query displays the effective apply parallelism for an apply process named apply:

SELECT COUNT(SERVER_ID) "Effective Parallelism"
  FROM V$STREAMS_APPLY_SERVER
  WHERE APPLY_NAME = 'APPLY' AND
        TOTAL_MESSAGES_APPLIED > 0;

Your output looks similar to the following:

Effective Parallelism
---------------------
                    2

This query returned two for the effective parallelism. If parallelism is set to three for the apply process named apply, then one apply server has not been used since the last time the apply process was started.

You can display the total number of messages applied by each apply server by running the following query:

COLUMN SERVER_ID HEADING 'Apply Server ID' FORMAT 99
COLUMN TOTAL_MESSAGES_APPLIED HEADING 'Total Messages Applied' FORMAT 999999

SELECT SERVER_ID, TOTAL_MESSAGES_APPLIED 
  FROM V$STREAMS_APPLY_SERVER
  WHERE APPLY_NAME = 'APPLY'
  ORDER BY SERVER_ID;

Your output looks similar to the following:

Apply Server ID Total Messages Applied
--------------- ----------------------
              1                   2141
              2                    276
              3                      0
              4                      0

In this case, apply servers 3 and 4 have not been used by the apply process since it was last started. If the parallelism setting for an apply process is much higher than the effective parallelism for the apply process, then consider lowering the parallelism setting. For example, if the parallelism setting is 6, but the effective parallelism for the apply process is 2, then consider lowering the setting.
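The parallelism setting can be lowered with the SET_PARAMETER procedure in the DBMS_APPLY_ADM package. The apply process name and value here are illustrative:

```sql
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'apply',
    parameter  => 'parallelism',
    value      => '2');   -- match the observed effective parallelism
END;
/
```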

Viewing Rules that Specify a Destination Queue on Apply

You can specify a destination queue for a rule using the SET_ENQUEUE_DESTINATION procedure in the DBMS_APPLY_ADM package. If an apply process has such a rule in its positive rule set, and a message satisfies the rule, then the apply process enqueues the message into the destination queue.
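For example, the destination queue setting shown in the sample output below could be created with a call similar to the following. The rule and queue names are illustrative:

```sql
BEGIN
  DBMS_APPLY_ADM.SET_ENQUEUE_DESTINATION(
    rule_name              => 'strmadmin.departments17',
    destination_queue_name => 'strmadmin.streams_queue');
END;
/
```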

To view destination queue settings for rules, run the following query:

COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A15
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A15
COLUMN DESTINATION_QUEUE_NAME HEADING 'Destination Queue' FORMAT A30

SELECT RULE_OWNER, RULE_NAME, DESTINATION_QUEUE_NAME
  FROM DBA_APPLY_ENQUEUE;

Your output looks similar to the following:

Rule Owner      Rule Name       Destination Queue
--------------- --------------- ------------------------------
STRMADMIN       DEPARTMENTS17   "STRMADMIN"."STREAMS_QUEUE"

Viewing Rules that Specify No Execution on Apply

You can specify an execution directive for a rule using the SET_EXECUTE procedure in the DBMS_APPLY_ADM package. An execution directive controls whether a message that satisfies the specified rule is executed by an apply process. If an apply process has a rule in its positive rule set with NO for its execution directive, and a message satisfies the rule, then the apply process does not execute the message and does not send the message to any apply handler.
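For example, the execution directive shown in the sample output below could be created with a call similar to the following. The rule name is illustrative:

```sql
BEGIN
  DBMS_APPLY_ADM.SET_EXECUTE(
    rule_name => 'strmadmin.departments18',
    execute   => FALSE);   -- messages satisfying this rule are not executed
END;
/
```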

To view each rule with NO for its execution directive, run the following query:

COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A20
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A20

SELECT RULE_OWNER, RULE_NAME
  FROM DBA_APPLY_EXECUTE
  WHERE EXECUTE_EVENT = 'NO';

Your output looks similar to the following:

Rule Owner           Rule Name
-------------------- --------------------
STRMADMIN            DEPARTMENTS18

Determining Which Apply Processes Use Combined Capture and Apply

A combined capture and apply environment is efficient because the capture process acts as the propagation sender that sends logical change records (LCRs) directly to the propagation receiver.

When an apply process uses combined capture and apply, the PROXY_SID, PROXY_SERIAL, PROXY_SPID, and CAPTURE_BYTES_RECEIVED columns in the V$STREAMS_APPLY_READER dynamic performance view are populated.

When an apply process does not use combined capture and apply, the PROXY_SID and PROXY_SERIAL columns are 0 (zero), and the PROXY_SPID and CAPTURE_BYTES_RECEIVED columns are not populated.

To determine whether an apply process uses combined capture and apply, run the following query:

COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A20
COLUMN PROXY_SID HEADING 'Propagation|Receiver|Session ID' FORMAT 99999999
COLUMN PROXY_SERIAL HEADING 'Propagation|ReceiverSerial|Number' FORMAT 99999999
COLUMN PROXY_SPID HEADING 'Propagation|Receiver|Process ID' FORMAT 99999999999
COLUMN CAPTURE_BYTES_RECEIVED HEADING 'Number of|Bytes Received' FORMAT 9999999999

SELECT APPLY_NAME,
       PROXY_SID,
       PROXY_SERIAL,
       PROXY_SPID,
       CAPTURE_BYTES_RECEIVED
   FROM V$STREAMS_APPLY_READER;

Your output looks similar to the following:

                     Propagation    Propagation Propagation
                        Receiver ReceiverSerial Receiver          Number of
Apply Process Name    Session ID         Number Process ID   Bytes Received
-------------------- ----------- -------------- ------------ --------------
APPLY_SPOKE1                 940              1 22636               4358614
APPLY_SPOKE2                 928              4 29154               4310581

This output indicates that the apply_spoke1 apply process uses combined capture and apply. Since it last started, this apply process has received 4358614 bytes from the capture process. The apply_spoke2 apply process also uses combined capture and apply. Since it last started, this apply process has received 4310581 bytes from the capture process.

Displaying the Substitute Key Columns Specified at a Destination Database

You can designate a substitute key at a destination database, which is a column or set of columns that Oracle uses to identify rows in a table during apply. Substitute key columns can serve as key columns for a table that has no primary key, or they can be used in place of a table's primary key when any apply process at the destination database processes the table.
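
Substitute key columns are specified with the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package. For example, a call similar to the following might have produced the hr.departments settings shown in the sample output later in this section:

```sql
BEGIN
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name => 'hr.departments',
    column_list => 'department_name,location_id');  -- comma-separated substitute key columns
END;
/
```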

To display all of the substitute key columns specified at a destination database, run the following query:

COLUMN OBJECT_OWNER HEADING 'Table Owner' FORMAT A20
COLUMN OBJECT_NAME HEADING 'Table Name' FORMAT A20
COLUMN COLUMN_NAME HEADING 'Substitute Key Name' FORMAT A20
COLUMN APPLY_DATABASE_LINK HEADING 'Database Link|for Remote|Apply' FORMAT A15

SELECT OBJECT_OWNER, OBJECT_NAME, COLUMN_NAME, APPLY_DATABASE_LINK 
  FROM DBA_APPLY_KEY_COLUMNS
  ORDER BY APPLY_DATABASE_LINK, OBJECT_OWNER, OBJECT_NAME;

Your output looks similar to the following:

                                                               Database Link
                                                               for Remote
Table Owner          Table Name           Substitute Key Name  Apply
-------------------- -------------------- -------------------- ---------------
HR                   DEPARTMENTS          DEPARTMENT_NAME
HR                   DEPARTMENTS          LOCATION_ID
HR                   EMPLOYEES            FIRST_NAME
HR                   EMPLOYEES            LAST_NAME
HR                   EMPLOYEES            HIRE_DATE

Note:

This query shows the database link in the last column if the substitute key columns are for a remote non-Oracle database. The last column is NULL if a substitute key column is specified for the local destination database.

Monitoring Virtual Dependency Definitions

The following sections contain queries that display information about virtual dependency definitions in a database:


See Also:

"Apply Processes and Dependencies" for more information about virtual dependency definitions

Displaying Value Dependencies

To display the value dependencies in a database, run the following query:

COLUMN DEPENDENCY_NAME HEADING 'Dependency Name' FORMAT A25
COLUMN OBJECT_OWNER HEADING 'Object Owner' FORMAT A15
COLUMN OBJECT_NAME HEADING 'Object Name' FORMAT A20
COLUMN COLUMN_NAME HEADING 'Column Name' FORMAT A15

SELECT DEPENDENCY_NAME, 
       OBJECT_OWNER, 
       OBJECT_NAME, 
       COLUMN_NAME 
  FROM DBA_APPLY_VALUE_DEPENDENCIES;

Your output should look similar to the following:

Dependency Name           Object Owner    Object Name          Column Name
------------------------- --------------- -------------------- ---------------
ORDER_ID_FOREIGN_KEY      OE              ORDERS               ORDER_ID
ORDER_ID_FOREIGN_KEY      OE              ORDER_ITEMS          ORDER_ID
KEY_53_FOREIGN_KEY        US_DESIGNS      ALL_DESIGNS_SUMMARY  KEY_53
KEY_53_FOREIGN_KEY        US_DESIGNS      DESIGN_53            KEY_53

This output shows the following value dependencies:

  • The order_id_foreign_key value dependency describes a dependency between the order_id column in the oe.orders table and the order_id column in the oe.order_items table.

  • The key_53_foreign_key value dependency describes a dependency between the key_53 column in the us_designs.all_designs_summary table and the key_53 column in the us_designs.design_53 table.
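
A value dependency such as order_id_foreign_key is created with the SET_VALUE_DEPENDENCY procedure in the DBMS_APPLY_ADM package. The following sketch shows calls that could produce the first dependency in the sample output:

```sql
BEGIN
  -- Relate the order_id column in oe.orders to the dependency
  DBMS_APPLY_ADM.SET_VALUE_DEPENDENCY(
    dependency_name => 'order_id_foreign_key',
    object_name     => 'oe.orders',
    attribute_list  => 'order_id');
  -- Relate the order_id column in oe.order_items to the same dependency
  DBMS_APPLY_ADM.SET_VALUE_DEPENDENCY(
    dependency_name => 'order_id_foreign_key',
    object_name     => 'oe.order_items',
    attribute_list  => 'order_id');
END;
/
```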

Displaying Object Dependencies

To display the object dependencies in a database, run the following query:

COLUMN OBJECT_OWNER HEADING 'Object Owner' FORMAT A15
COLUMN OBJECT_NAME HEADING 'Object Name' FORMAT A15
COLUMN PARENT_OBJECT_OWNER HEADING 'Parent Object Owner' FORMAT A20
COLUMN PARENT_OBJECT_NAME HEADING 'Parent Object Name' FORMAT A20

SELECT OBJECT_OWNER, 
       OBJECT_NAME, 
       PARENT_OBJECT_OWNER, 
       PARENT_OBJECT_NAME 
  FROM DBA_APPLY_OBJECT_DEPENDENCIES;

Your output should look similar to the following:

Object Owner    Object Name     Parent Object Owner  Parent Object Name
--------------- --------------- -------------------- --------------------
ORD             CUSTOMERS       ORD                  SHIP_ORDERS
ORD             ORDERS          ORD                  SHIP_ORDERS
ORD             ORDER_ITEMS     ORD                  SHIP_ORDERS

This output shows an object dependency in which the ord.ship_orders table is a parent table to the following child tables:

  • ord.customers

  • ord.orders

  • ord.order_items
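
An object dependency such as this one is created with the CREATE_OBJECT_DEPENDENCY procedure in the DBMS_APPLY_ADM package. The following sketch shows a call that could create one of the parent-child relationships in the sample output:

```sql
BEGIN
  DBMS_APPLY_ADM.CREATE_OBJECT_DEPENDENCY(
    object_name        => 'ord.customers',    -- child table
    parent_object_name => 'ord.ship_orders'); -- parent table
END;
/
```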

Checking for Apply Errors

To check for apply errors, run the following query:

COLUMN APPLY_NAME HEADING 'Apply|Process|Name' FORMAT A11
COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A10
COLUMN LOCAL_TRANSACTION_ID HEADING 'Local|Transaction|ID' FORMAT A11
COLUMN ERROR_NUMBER HEADING 'Error Number' FORMAT 99999999
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A20
COLUMN MESSAGE_COUNT HEADING 'Messages in|Error|Transaction' FORMAT 99999999

SELECT APPLY_NAME, 
       SOURCE_DATABASE, 
       LOCAL_TRANSACTION_ID, 
       ERROR_NUMBER,
       ERROR_MESSAGE,
       MESSAGE_COUNT
  FROM DBA_APPLY_ERROR;

If there are any apply errors, then your output looks similar to the following:

Apply                  Local                                         Messages in
Process     Source     Transaction                                         Error
Name        Database   ID          Error Number Error Message        Transaction
----------- ---------- ----------- ------------ -------------------- -----------
APPLY$_DB_2 DB.EXAMPLE 13.16.334          26786 ORA-26786: A row wit           1
            .COM                                h key ("EMPLOYEE_ID"
                                                ) = (206) exists but
                                                 has conflicting col
                                                umn(s) "SALARY" in t
                                                able HR.EMPLOYEES
                                                ORA-01403: no data f
                                                ound
APPLY$_DB_2 DB.EXAMPLE 15.17.540          26786 ORA-26786: A row wit           1
            .COM                                h key ("EMPLOYEE_ID"
                                                ) = (206) exists but
                                                 has conflicting col
                                                umn(s) "SALARY" in t
                                                able HR.EMPLOYEES
                                                ORA-01403: no data f
                                                ound

If there are apply errors, then you can either try to reexecute the transactions that encountered the errors, or you can delete the transactions. If you want to reexecute a transaction that encountered an error, then first correct the condition that caused the transaction to raise an error.

If you want to delete a transaction that encountered an error, then you might need to resynchronize data manually if you are sharing data between multiple databases. Remember to set an appropriate session tag, if necessary, when you resynchronize data manually.
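
For example, to reexecute or delete the first transaction shown in the sample output, you could use calls similar to the following (the local transaction ID is taken from that output):

```sql
BEGIN
  -- Reexecute the transaction after correcting the error condition
  DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '13.16.334');
END;
/

BEGIN
  -- Alternatively, delete the error transaction
  DBMS_APPLY_ADM.DELETE_ERROR(local_transaction_id => '13.16.334');
END;
/
```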


Displaying Detailed Information About Apply Errors

This section contains SQL scripts that you can use to display detailed information about the error transactions in the error queue in a database. These scripts are designed to display information about LCRs, but you can extend them to display information about any non-LCR messages used in your environment as well.

To use these scripts, complete the following steps:

  1. Grant Explicit SELECT Privilege on the DBA_APPLY_ERROR View

  2. Create a Procedure that Prints the Value in an ANYDATA Object

  3. Create a Procedure that Prints a Specified LCR

  4. Create a Procedure that Prints All the LCRs in the Error Queue

  5. Create a Procedure that Prints All the Error LCRs for a Transaction


Note:

These scripts display only the first 253 characters for VARCHAR2 values in LCRs.

Step 1   Grant Explicit SELECT Privilege on the DBA_APPLY_ERROR View

The user who creates and runs the print_errors and print_transaction procedures described in the following sections must be granted explicit SELECT privilege on the DBA_APPLY_ERROR data dictionary view. This privilege cannot be granted through a role. Running the GRANT_ADMIN_PRIVILEGE procedure in the DBMS_STREAMS_AUTH package on a user grants this privilege to the user.

To grant this privilege to a user directly, complete the following steps:

  1. In SQL*Plus, connect as an administrative user who can grant privileges.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  2. Grant SELECT privilege on the DBA_APPLY_ERROR data dictionary view to the appropriate user. For example, to grant this privilege to the strmadmin user, run the following statement:

    GRANT SELECT ON DBA_APPLY_ERROR TO strmadmin;
    
  3. Grant EXECUTE privilege on the DBMS_APPLY_ADM package. For example, to grant this privilege to the strmadmin user, run the following statement:

    GRANT EXECUTE ON DBMS_APPLY_ADM TO strmadmin;
    
  4. Connect to the database as the user to whom you granted the privileges in Steps 2 and 3.

Step 2   Create a Procedure that Prints the Value in an ANYDATA Object

The following procedure prints the value in a specified ANYDATA object for some selected data types. Optionally, you can add more data types to this procedure.

CREATE OR REPLACE PROCEDURE print_any(data IN ANYDATA) IS
  tn  VARCHAR2(61);
  str VARCHAR2(4000);
  chr VARCHAR2(1000);
  num NUMBER;
  dat DATE;
  rw  RAW(4000);
  res NUMBER;
BEGIN
  IF data IS NULL THEN
    DBMS_OUTPUT.PUT_LINE('NULL value');
    RETURN;
  END IF;
  tn := data.GETTYPENAME();
  IF tn = 'SYS.VARCHAR2' THEN
    res := data.GETVARCHAR2(str);
    DBMS_OUTPUT.PUT_LINE(SUBSTR(str,0,253));
  ELSIF tn = 'SYS.CHAR' then
    res := data.GETCHAR(chr);
    DBMS_OUTPUT.PUT_LINE(SUBSTR(chr,0,253));
  ELSIF tn = 'SYS.VARCHAR' THEN
    res := data.GETVARCHAR(chr);
    DBMS_OUTPUT.PUT_LINE(chr);
  ELSIF tn = 'SYS.NUMBER' THEN
    res := data.GETNUMBER(num);
    DBMS_OUTPUT.PUT_LINE(num);
  ELSIF tn = 'SYS.DATE' THEN
    res := data.GETDATE(dat);
    DBMS_OUTPUT.PUT_LINE(dat);
  ELSIF tn = 'SYS.RAW' THEN
    res := data.GETRAW(rw);
    -- Print the RAW value as hexadecimal, truncated to 253 characters
    DBMS_OUTPUT.PUT_LINE(SUBSTR(RAWTOHEX(rw),1,253));
  ELSIF tn = 'SYS.BLOB' THEN
    DBMS_OUTPUT.PUT_LINE('BLOB Found');
  ELSE
    DBMS_OUTPUT.PUT_LINE('typename is ' || tn);
  END IF;
END print_any;
/
Step 3   Create a Procedure that Prints a Specified LCR

The following procedure prints a specified LCR. It calls the print_any procedure created in "Create a Procedure that Prints the Value in an ANYDATA Object".

CREATE OR REPLACE PROCEDURE print_lcr(lcr IN ANYDATA) IS
  typenm    VARCHAR2(61);
  ddllcr    SYS.LCR$_DDL_RECORD;
  proclcr   SYS.LCR$_PROCEDURE_RECORD;
  rowlcr    SYS.LCR$_ROW_RECORD;
  res       NUMBER;
  newlist   SYS.LCR$_ROW_LIST;
  oldlist   SYS.LCR$_ROW_LIST;
  ddl_text  CLOB;
  ext_attr  ANYDATA;
BEGIN
  typenm := lcr.GETTYPENAME();
  DBMS_OUTPUT.PUT_LINE('type name: ' || typenm);
  IF (typenm = 'SYS.LCR$_DDL_RECORD') THEN
    res := lcr.GETOBJECT(ddllcr);
    DBMS_OUTPUT.PUT_LINE('source database: ' || 
                         ddllcr.GET_SOURCE_DATABASE_NAME);
    DBMS_OUTPUT.PUT_LINE('owner: ' || ddllcr.GET_OBJECT_OWNER);
    DBMS_OUTPUT.PUT_LINE('object: ' || ddllcr.GET_OBJECT_NAME);
    DBMS_OUTPUT.PUT_LINE('is tag null: ' || ddllcr.IS_NULL_TAG);
    DBMS_LOB.CREATETEMPORARY(ddl_text, TRUE);
    ddllcr.GET_DDL_TEXT(ddl_text);
    DBMS_OUTPUT.PUT_LINE('ddl: ' || ddl_text);    
    -- Print extra attributes in DDL LCR
    ext_attr := ddllcr.GET_EXTRA_ATTRIBUTE('serial#');
      IF (ext_attr IS NOT NULL) THEN
        DBMS_OUTPUT.PUT_LINE('serial#: ' || ext_attr.ACCESSNUMBER());
      END IF;
    ext_attr := ddllcr.GET_EXTRA_ATTRIBUTE('session#');
      IF (ext_attr IS NOT NULL) THEN
        DBMS_OUTPUT.PUT_LINE('session#: ' || ext_attr.ACCESSNUMBER());
      END IF; 
    ext_attr := ddllcr.GET_EXTRA_ATTRIBUTE('thread#');
      IF (ext_attr IS NOT NULL) THEN
        DBMS_OUTPUT.PUT_LINE('thread#: ' || ext_attr.ACCESSNUMBER());
      END IF;   
    ext_attr := ddllcr.GET_EXTRA_ATTRIBUTE('tx_name');
      IF (ext_attr IS NOT NULL) THEN
        DBMS_OUTPUT.PUT_LINE('transaction name: ' || ext_attr.ACCESSVARCHAR2());
      END IF;
    ext_attr := ddllcr.GET_EXTRA_ATTRIBUTE('username');
      IF (ext_attr IS NOT NULL) THEN
        DBMS_OUTPUT.PUT_LINE('username: ' || ext_attr.ACCESSVARCHAR2());
      END IF;      
    DBMS_LOB.FREETEMPORARY(ddl_text);
  ELSIF (typenm = 'SYS.LCR$_ROW_RECORD') THEN
    res := lcr.GETOBJECT(rowlcr);
    DBMS_OUTPUT.PUT_LINE('source database: ' || 
                         rowlcr.GET_SOURCE_DATABASE_NAME);
    DBMS_OUTPUT.PUT_LINE('owner: ' || rowlcr.GET_OBJECT_OWNER);
    DBMS_OUTPUT.PUT_LINE('object: ' || rowlcr.GET_OBJECT_NAME);
    DBMS_OUTPUT.PUT_LINE('is tag null: ' || rowlcr.IS_NULL_TAG); 
    DBMS_OUTPUT.PUT_LINE('command_type: ' || rowlcr.GET_COMMAND_TYPE); 
    oldlist := rowlcr.GET_VALUES('old');
    FOR i IN 1..oldlist.COUNT LOOP
      IF oldlist(i) IS NOT NULL THEN
        DBMS_OUTPUT.PUT_LINE('old(' || i || '): ' || oldlist(i).column_name);
        print_any(oldlist(i).data);
      END IF;
    END LOOP;
    newlist := rowlcr.GET_VALUES('new', 'n');
    FOR i in 1..newlist.count LOOP
      IF newlist(i) IS NOT NULL THEN
        DBMS_OUTPUT.PUT_LINE('new(' || i || '): ' || newlist(i).column_name);
        print_any(newlist(i).data);
      END IF;
    END LOOP;
    -- Print extra attributes in row LCR
    ext_attr := rowlcr.GET_EXTRA_ATTRIBUTE('row_id');
      IF (ext_attr IS NOT NULL) THEN
        DBMS_OUTPUT.PUT_LINE('row_id: ' || ext_attr.ACCESSUROWID());
      END IF;
    ext_attr := rowlcr.GET_EXTRA_ATTRIBUTE('serial#');
      IF (ext_attr IS NOT NULL) THEN
        DBMS_OUTPUT.PUT_LINE('serial#: ' || ext_attr.ACCESSNUMBER());
      END IF;
    ext_attr := rowlcr.GET_EXTRA_ATTRIBUTE('session#');
      IF (ext_attr IS NOT NULL) THEN
        DBMS_OUTPUT.PUT_LINE('session#: ' || ext_attr.ACCESSNUMBER());
      END IF; 
    ext_attr := rowlcr.GET_EXTRA_ATTRIBUTE('thread#');
      IF (ext_attr IS NOT NULL) THEN
        DBMS_OUTPUT.PUT_LINE('thread#: ' || ext_attr.ACCESSNUMBER());
      END IF;   
    ext_attr := rowlcr.GET_EXTRA_ATTRIBUTE('tx_name');
      IF (ext_attr IS NOT NULL) THEN
        DBMS_OUTPUT.PUT_LINE('transaction name: ' || ext_attr.ACCESSVARCHAR2());
      END IF;
    ext_attr := rowlcr.GET_EXTRA_ATTRIBUTE('username');
      IF (ext_attr IS NOT NULL) THEN
        DBMS_OUTPUT.PUT_LINE('username: ' || ext_attr.ACCESSVARCHAR2());
      END IF;          
  ELSE
    DBMS_OUTPUT.PUT_LINE('Non-LCR Message with type ' || typenm);
  END IF;
END print_lcr;
/
Step 4   Create a Procedure that Prints All the LCRs in the Error Queue

The following procedure prints all of the LCRs in all of the error queues. It calls the print_lcr procedure created in "Create a Procedure that Prints a Specified LCR".

CREATE OR REPLACE PROCEDURE print_errors IS
  CURSOR c IS
    SELECT LOCAL_TRANSACTION_ID,
           SOURCE_DATABASE,
           MESSAGE_NUMBER,
           MESSAGE_COUNT,
           ERROR_NUMBER,
           ERROR_MESSAGE
      FROM DBA_APPLY_ERROR
      ORDER BY SOURCE_DATABASE, SOURCE_COMMIT_SCN;
  i      NUMBER;
  txnid  VARCHAR2(30);
  source VARCHAR2(128);
  msgno  NUMBER;
  msgcnt NUMBER;
  errnum NUMBER := 0;
  errno  NUMBER;
  errmsg VARCHAR2(2000);
  lcr    ANYDATA;
  r      NUMBER;
BEGIN
  FOR r IN c LOOP
    errnum := errnum + 1;
    msgcnt := r.MESSAGE_COUNT;
    txnid  := r.LOCAL_TRANSACTION_ID;
    source := r.SOURCE_DATABASE;
    msgno  := r.MESSAGE_NUMBER;
    errno  := r.ERROR_NUMBER;
    errmsg := r.ERROR_MESSAGE;
DBMS_OUTPUT.PUT_LINE('*************************************************');
    DBMS_OUTPUT.PUT_LINE('----- ERROR #' || errnum);
    DBMS_OUTPUT.PUT_LINE('----- Local Transaction ID: ' || txnid);
    DBMS_OUTPUT.PUT_LINE('----- Source Database: ' || source);
    DBMS_OUTPUT.PUT_LINE('----Error in Message: '|| msgno);
    DBMS_OUTPUT.PUT_LINE('----Error Number: '||errno);
    DBMS_OUTPUT.PUT_LINE('----Message Text: '||errmsg);
    FOR i IN 1..msgcnt LOOP
      DBMS_OUTPUT.PUT_LINE('--message: ' || i);
        lcr := DBMS_APPLY_ADM.GET_ERROR_MESSAGE(i, txnid);
        print_lcr(lcr);
    END LOOP;
  END LOOP;
END print_errors;
/

To run this procedure after you create it, enter the following:

SET SERVEROUTPUT ON SIZE 1000000

EXEC print_errors
Step 5   Create a Procedure that Prints All the Error LCRs for a Transaction

The following procedure prints all the LCRs in the error queue for a particular transaction. It calls the print_lcr procedure created in "Create a Procedure that Prints a Specified LCR".

CREATE OR REPLACE PROCEDURE print_transaction(ltxnid IN VARCHAR2) IS
  i      NUMBER;
  txnid  VARCHAR2(30);
  source VARCHAR2(128);
  msgno  NUMBER;
  msgcnt NUMBER;
  errno  NUMBER;
  errmsg VARCHAR2(2000);
  lcr    ANYDATA;
BEGIN
  SELECT LOCAL_TRANSACTION_ID,
         SOURCE_DATABASE,
         MESSAGE_NUMBER,
         MESSAGE_COUNT,
         ERROR_NUMBER,
         ERROR_MESSAGE
      INTO txnid, source, msgno, msgcnt, errno, errmsg
      FROM DBA_APPLY_ERROR
      WHERE LOCAL_TRANSACTION_ID =  ltxnid;
  DBMS_OUTPUT.PUT_LINE('----- Local Transaction ID: ' || txnid);
  DBMS_OUTPUT.PUT_LINE('----- Source Database: ' || source);
  DBMS_OUTPUT.PUT_LINE('----Error in Message: '|| msgno);
  DBMS_OUTPUT.PUT_LINE('----Error Number: '||errno);
  DBMS_OUTPUT.PUT_LINE('----Message Text: '||errmsg);
  FOR i IN 1..msgcnt LOOP
  DBMS_OUTPUT.PUT_LINE('--message: ' || i);
    lcr := DBMS_APPLY_ADM.GET_ERROR_MESSAGE(i, txnid); -- gets the LCR
    print_lcr(lcr);
  END LOOP;
END print_transaction;
/

To run this procedure after you create it, pass to it the local transaction identifier of an error transaction. For example, if the local transaction identifier is 1.17.2485, then enter the following:

SET SERVEROUTPUT ON SIZE 1000000

EXEC print_transaction('1.17.2485')

Index

A  B  C  D  E  F  G  H  I  K  L  M  N  O  P  Q  R  S  T  U  V  W  X 

A

action contexts, 11.1.3
name-value pairs
adding, 18.2.2.3, 19.2.1, 19.2.2
altering, 18.2.2.2
removing, 18.2.2.4, 19.2.2
querying, 19.2.1
system-created rules, 11.6
ADD_COLUMN procedure, 6.1, 19.1.1.2
ADD_GLOBAL_RULES procedure, 5.3.1
ADD_PAIR member procedure, 18.2.2.2, 18.2.2.3, 19.2.1, 19.2.2
ADD_RULE procedure, 11.1.2.2, 18.1.2
ADD_SCHEMA_PROPAGATION_RULES procedure, 5.3.2
ADD_SUBSCRIBER procedure, 16.1.1
ADD_SUBSET_PROPAGATION_RULES procedure
row migration, 5.3.4.2
ADD_SUBSET_RULES procedure, 5.3, 5.3.4
row migration, 5.3.4.2
ADD_TABLE_RULES procedure, 5.3.3
alert log
Oracle Streams entries, 30.4
alerts, 30.1
ALTER_APPLY procedure
removing a rule set, 17.3.4
removing the DDL handler, 17.7.3
setting an apply user, 17.5
setting the DDL handler, 17.7.2
setting the message handler, 17.8.1
setting the precommit handler, 17.9.2
specifying a rule set, 17.3.1
unsetting the message handler, 17.8.2
unsetting the precommit handler, 17.9.3
ALTER_CAPTURE procedure
removing a rule set, 15.1.3.4
setting a capture user, 15.1.5
setting the first SCN, 15.1.8, 15.1.9
specifying a rule set, 15.1.3.1, 15.2.1.1
specifying database link use, 15.1.10
ALTER_PROPAGATION procedure
removing the rule set, 16.2.7
specifying the rule set, 16.2.4
ALTER_PROPAGATION_SCHEDULE procedure, 16.2.3
ALTER_RULE procedure, 18.2.2
ALTER_SYNC_CAPTURE procedure
setting a capture user, 15.2.2
ANALYZE_CURRENT_PERFORMANCE procedure, 23.5
ANALYZE_CURRENT_STATISTICS procedure, 23.2
ANYDATA data type
queues, 3.2.1
monitoring, 25.1
removing, 16.1.3
wrapper for messages, 3.2.1
applications
upgrading
using Streams, D.3
applied SCN, 2.5.7.1, 10.7, 24.1.12
apply forwarding, 3.3.4.1
apply process, 4, 10
applied SCN, 10.7
apply forwarding, 3.3.4.1
apply handlers, 4.2.4, 10.3.5
Java stored procedures, 4.2.4.6
apply servers, 4.2.10
states, 4.2.10.3
troubleshooting, 33.10
apply user, 4.2.1
privileges, 33.7
secure queues, 8.1.2
setting, 17.5
architecture, 4.2.10
combined capture and apply, 12.1
query to determine, 26.16
conflict handlers, 10.3.5
conflict resolution, 10.3.4
constraints, 10.3.1
contention, 33.8, 33.9
coordinator process, 4.2.10
states, 4.2.10.2
creating, 10.1
data type conversion, 4.2.7
data types, B.5.1
data types applied, 4.2.6
automatic conversion, 4.2.7
DDL changes, 10.4
containing DML changes, 10.4.3
CREATE TABLE AS SELECT, 10.4.2
current schema, B.5.5
data structures, B.5.4
derived values in DML, 10.4.3.1
DML triggers, 10.4.3.2
ignored, B.5.3
system-generated names, 10.4.1
DDL handlers, 4.2.4, 4.2.4.3
managing, 17.7
monitoring, 26.4.2
dependencies, 10.2
barrier transactions, 10.2.6
troubleshooting, 33.9
virtual dependency definitions, 10.2.5, 17.15, 26.18
DML changes, 10.3
DML handlers, 4.2.4, 4.2.4.1, 26.4.1
change handlers, 20
dropping, 17.16
enqueuing messages, 17.10
monitoring, 26.14
error handlers
managing, 17.12
monitoring, 26.4.3
error queue, 4.2.14
monitoring, 26.19, 26.20
high-watermark, 10.7
ignore SCN, 10.5
index-organized tables, B.5.1
instantiation SCN, 10.5
interoperation with capture processes, B.1.6, B.5.7
key columns, 10.3.1
low-watermark, 10.7
managing, 17
message handlers, 4.2.4, 4.2.4.4
managing, 17.8
monitoring, 26.4.4
messages
captured LCRs, 4.2.3
persistent LCRs, 4.2.3
user messages, 4.2.3
monitoring, 26
apply handlers, 26.4
compatible columns, 29.3.3.1
latency, 26.8, 26.11
transactions, 26.10
non-LCR messages, 4.2.4.4
oldest SCN, 10.6
options, 4.2.4
Oracle Label Security (OLS), B.5.6
Oracle Real Application Clusters, A.1.6
parallelism, 26.13
parameters, 4.2.12
commit_serialization, 33.9
parallelism, 33.9
preserve_encryption, A.2.6
setting, 17.4
txn_lcr_spill_threshold, 26.7
performance, 33.10
persistent status, 4.2.13
precommit handlers, 4.2.4.5
managing, 17.9
monitoring, 26.4.5
reader server, 4.2.10
states, 4.2.10.1
RESTRICTED SESSION, 4.2.9
row subsetting, 5.3
rule sets
removing, 17.3.4
specifying, 17.3.1
rules, 4.2.2, 5.1
adding, 17.3.2
removing, 17.3.3
session information, 26.5
specifying execution, 17.11
monitoring, 26.15
spilled messages, 26.7
SQL generation, 4.2.8
starting, 17.1
stopping, 17.2
substitute key columns, 10.3.2
removing, 17.14.2
setting, 17.14.1
tables, 10.3
apply handlers, 10.3.5
column discrepancies, 10.3.3
trace files, 30.4.3
transformations
rule-based, 6.3.4
transparent data encryption, A.2.6
triggers
firing property, 10.8.1
ON SCHEMA clause, 10.8.2
troubleshooting, 33
checking apply handlers, 33.5
checking message type, 33.3
checking status, 33.1
error queue, 33.11
approximate CSCN, 8.3.2
AQ_TM_PROCESSES initialization parameter
Streams apply process, 33.6
ARCHIVELOG mode
capture process, 7.4
Recovery Manager, A.4.2
ATTACH_TABLESPACES procedure, 36.1

B

buffered messaging, 3.2.2.1
buffered queues, 3.2.2
monitoring, 25.2
apply processes, 25.2.8
capture processes, 25.2.2
propagations, 25.2.3, 25.2.4, 25.2.5, 25.2.6, 25.2.7
transparent data encryption, A.2.4
BUILD procedure, 2.5.7.2, 7.5.1
troubleshooting, 31.1.1

C

capture
explicit, 2.7
capture process, 2, 2.5, 2.6, 7
applied SCN, 2.5.7.1, 24.1.12
architecture, 2.5.9
ARCHIVELOG mode, 7.4
automatically filtered changes, 29.3.1.1
builder server, 2.5.9
capture user, 2.5.10
secure queues, 8.1.2
setting, 15.1.5
captured LCRs, 4.1.1
captured SCN, 2.5.7.1
changes captured
DDL changes, B.1.2.3
DML changes, 2.5.4
NOLOGGING keyword, B.1.2.5
UNRECOVERABLE clause for SQL*Loader, B.1.2.6
UNRECOVERABLE SQL keyword, B.1.2.5
checkpoints, 7.2
managing retention time, 15.1.6
maximum checkpoint SCN, 7.2.2
required checkpoint SCN, 7.2.1, 7.4
retention time, 7.2.3
combined capture and apply, 12.1
query to determine, 24.1.16
creating, 7.5
data type restrictions, B.1.2.1
data types captured, 2.5.3
DBID, 7.5
downstream capture, 2.5.6
advantages, 2.5.6.2.4
database link, 2.5.6.2.5, 15.1.10
monitoring, 24.1.6
monitoring remote access, 29.1.2
operational requirements, 2.5.6.2.6
dropping, 15.1.11
fast recovery area, A.4.2
first SCN, 2.5.7.2
setting, 15.1.8, 15.1.9
global name, 7.5
index-organized tables, B.1.2.1, B.1.2.2
interoperation with apply processes, B.1.6, B.5.7
latency
capture to apply, 26.11
redo log scanning, 24.1.13
local capture, 2.5.6
advantages, 2.5.6.1.2
LogMiner, 7.1
data dictionary, 7.5.1
multiple sessions, 7.1
managing, 15.1
maximum checkpoint SCN, 7.2.2, 7.5.1.2
monitoring, 24.1
applied SCN, 24.1.12
compatible tables, 29.3.1.1
downstream capture, 24.1.6
elapsed time, 24.1.5
last redo entry, 24.1.10
latency, 24.1.13, 24.1.14, 26.11
message creation time, 24.1.4
old log files, 24.1.9
registered log files, 24.1.7, 24.1.9
required log files, 24.1.8
rule evaluations, 24.1.15
state change time, 24.1.4
online redefinition, B.1.2.4
Oracle Label Security (OLS), B.1.5
Oracle Real Application Clusters, A.1.1
parameters, 2.5.12
parallelism, 2.5.12
set_autofiltered_table_ddl, 29.3.1.1
setting, 15.1.4
time_limit, 2.5.12
PAUSED FOR FLOW CONTROL state, 2.5.11
persistent status, 2.5.13
preparer servers, 2.5.9
reader server, 2.5.9
Recovery Manager, A.4.2
fast recovery area, 31.1.6
redo logs, 2.5.1
adding manually, 15.1.7
missing files, 31.1.6
redo transport services, 2.5.6
required checkpoint SCN, 7.2.1
RESTRICTED SESSION, 2.5.8
rule evaluation, 7.7
rule sets
removing, 15.1.3.4
specifying, 15.1.3.1
rules, 2.5.2, 5.1
adding, 15.1.3.2
removing, 15.1.3.3
session information, 24.1.2
SGA_MAX_SIZE initialization parameter, 7.1
start SCN, 2.5.7.2, 2.5.7.2.4
starting, 15.1.1
states, 2.5.11
stopping, 15.1.2
supplemental logging, 2.5.5
switching to, 15.5
SYS schema, 2.5.2
SYSTEM schema, 2.5.2
table type restrictions, B.1.2.2
trace files, 30.4.1
transformations
rule-based, 6.3.1
transparent data encryption, A.2.1
troubleshooting, 31, 31.1
checking progress, 31.1.5
checking status, 31.1.2
creation, 31.1.1
capture user
capture process, 2.5.10
synchronous capture, 2.6.6
captured LCRs, 4.1.2.1
captured SCN, 2.5.7.1
change handlers, 4.2.4.1.1, 20
about, 20.1
change tables, 20.1
maintaining, 20.4.3
monitoring, 20.5.1
configuration options, 20.2.1.1
configuring, 20.3
KEEP_COLUMNS transformations, 20.2.1.5
managing, 20.4
monitoring, 20.5
preparing for, 20.2
prerequisites, 20.2.2
replication, 20.2.1.8
setting, 20.4.1
unsetting, 20.4.1
using existing components, 20.4.2
character sets
migrating
using Streams, D.3
checkpoints, 7.2
retention time, 7.2.3
managing, 15.1.6
CLONE_TABLESPACES procedure, 36.1
combined capture and apply, 12.1
apply process
query to determine, 26.16
capture process
query to determine, 24.1.16
propagation
query to determine, 25.3.6, 25.3.7
stream paths, 23.3.1
topology, 23.3.1
COMPATIBLE_10_1 function, 11.7.1.2
COMPATIBLE_10_2 function, 11.7.1.2
COMPATIBLE_11_1 function, 11.7.1.2
COMPATIBLE_11_2 function, 11.7.1.2
COMPATIBLE_9_2 function, 11.7.1.2
conditions
rules, 11.1.1
configuration report script
Oracle Streams, 30.2
conflict resolution
conflict handlers
interaction with apply handlers, 10.3.5
CREATE TABLE statement
AS SELECT
apply process, 10.4.2
CREATE_APPLY procedure, 10.1
CREATE_CAPTURE procedure, 7.5
CREATE_RULE procedure, 18.2.1
CREATE_RULE_SET procedure, 18.1.1

D

data types
applied, 4.2.6
automatic conversion, 4.2.7
database maintenance
using Streams, D
assumptions, D.1.2
capture database, D.1.1
instantiation, D.2.5
job slaves, D.1.3
logical dependencies, D.2.3.2
PL/SQL package subprograms, D.1.3
user-created applications, D.2.3
user-defined types, D.2.2
DBA_APPLY view, 26.1, 26.2, 26.4.2, 26.4.4, 26.6, 26.12, 33.1, 33.3
DBA_APPLY_CHANGE_HANDLERS view, 20.4.1, 20.5.2
DBA_APPLY_DML_HANDLERS view, 26.4.1, 26.4.3
DBA_APPLY_ENQUEUE view, 26.14
DBA_APPLY_ERROR view, 26.19, 26.20
DBA_APPLY_EXECUTE view, 26.15
DBA_APPLY_KEY_COLUMNS view, 26.17
DBA_APPLY_PARAMETERS view, 26.3
DBA_APPLY_PROGRESS view, 26.11
DBA_APPLY_SPILL_TXN view, 26.7
DBA_CAPTURE view, 24.1.1, 24.1.6, 24.1.7, 24.1.8, 24.1.9, 24.1.12, 31.1.2
DBA_CAPTURE_EXTRA_ATTRIBUTES view, 24.3
DBA_CAPTURE_PARAMETERS view, 24.1.11
DBA_EVALUATION_CONTEXT_TABLES view, 27.6
DBA_EVALUATION_CONTEXT_VARS view, 27.7
DBA_FILE_GROUP_EXPORT_INFO view, 37.2.3
DBA_FILE_GROUP_FILES view, 37.1.3
DBA_FILE_GROUP_TABLES view, 37.2.2
DBA_FILE_GROUP_TABLESPACES view, 37.2.1
DBA_FILE_GROUP_VERSIONS view, 37.1.2
DBA_FILE_GROUPS view, 37.1.1
DBA_LOG_GROUPS view, 24.1.18.1
DBA_LOGMNR_PURGED_LOG view, 2.5.7.2.3, 7.4
DBA_PROPAGATION view, 25.2.3, 25.2.4, 25.2.5, 25.2.6, 25.3.1, 25.3.2, 25.3.3, 25.3.4, 25.3.5, 32.1, 32.2
DBA_QUEUE_SCHEDULES view, 25.3.4, 25.3.5
DBA_QUEUE_TABLES view, 25.1.1
DBA_QUEUES view, 25.1.1
DBA_REGISTERED_ARCHIVED_LOG view, 24.1.7, 24.1.8, 24.1.9
DBA_RULE_SET_RULES view, 27.8, 27.9
DBA_RULE_SETS view, 27.5
DBA_RULES view, 27.8, 27.9, 27.10
DBA_STREAMS_ADD_COLUMN view, 28.2.1
DBA_STREAMS_COLUMNS view, 29.3.2, 29.3.3.1
DBA_STREAMS_NEWLY_SUPPORTED view, 29.3.1.2
DBA_STREAMS_RENAME_TABLE view, 28.2.2
DBA_STREAMS_RULES view, 27.3, 34.1
DBA_STREAMS_TABLE_RULES view, 24.2.2
DBA_STREAMS_TP_COMPONENT view, 23.2.1, 23.6.1.2
DBA_STREAMS_TP_COMPONENT_LINK view, 23.2.1, 23.6.1.3
DBA_STREAMS_TP_COMPONENT_STAT view, 23.2.1, 23.6.2.2
DBA_STREAMS_TP_DATABASE view, 23.2.1, 23.6.1.1
DBA_STREAMS_TP_PATH_BOTTLENECK view, 23.2.1, 23.6.2.1
DBA_STREAMS_TP_PATH_STAT view, 23.2.1, 23.6.2.4
DBA_STREAMS_TRANSFORM_FUNCTION view, 28.3
DBA_STREAMS_TRANSFORMATIONS view, 28.1, 28.2
DBA_STREAMS_UNSUPPORTED view, 29.3.1.1
DBA_SYNC_CAPTURE view, 24.2.1
DBA_SYNC_CAPTURE_TABLES view, 24.2.2
DBID (database identifier)
capture process, 7.5
DBMS_APPLY_ADM package, 14.1.1, 17
DBMS_CAPTURE_ADM package, 14.1.2, 15.1
DBMS_COMPARISON package, 14.1.3
DBMS_PROPAGATION_ADM package, 14.1.4, 16
starting a propagation, 16.2.1
stopping a propagation, 16.2.2
DBMS_RULE package, 11.2, 14.1.5
DBMS_RULE_ADM package, 14.1.6, 18, 18.1
DBMS_STREAMS package, 14.1.7
DBMS_STREAMS_ADM package, 5.3, 14.1.8, 15.1, 16, 17
creating a capture process, 7.5
creating an apply process, 10.1
DBMS_STREAMS_ADVISOR_ADM package, 14.1.9, 23.2
gathering information, 23.5
DBMS_STREAMS_AUTH package, 14.1.10
DBMS_STREAMS_HANDLER_ADM package, 14.1.11
DBMS_STREAMS_MESSAGING package, 14.1.12
DBMS_STREAMS_TABLESPACE_ADM package, 14.1.13, 36.1
information provisioning, 35.2.3
platform conversion, 35.2.3.4
DDL handlers, 4.2.4, 4.2.4.3
creating, 17.7.1
managing, 17.7
monitoring, 26.4.2
removing, 17.7.3
setting, 17.7.2
DELETE_ALL_ERRORS procedure, 17.13.2.2
DELETE_COLUMN procedure, 6.1
DELETE_ERROR procedure, 4.2.14, 17.13.2.1
dependencies
apply processes, 10.2
queues, 8.3.1
dequeue high-watermark, 8.3.2
destination queue, 3.1
DETACH_TABLESPACES procedure, 36.1
direct path load
capture processes, B.1.2.6
directed networks, 3.3.4
apply forwarding, 3.3.4.1
queue forwarding, 3.3.4.1
DISABLE_DB_ACCESS procedure, 16.1.2
DML handlers, 4.2.4, 4.2.4.1, 10.3.5
change handlers, 4.2.4.1.1, 20
managing, 17.6
monitoring, 26.4.1
procedure DML handlers, 4.2.4.1.2
managing, 17.6.2
monitoring, 26.4.1.3
statement DML handlers, 4.2.4.1.1
managing, 17.6.1
monitoring, 26.4.1.2
unsetting, 17.6.2.3
documentation
Oracle Streams, 1.5
DROP_APPLY procedure, 17.16
DROP_CAPTURE procedure, 15.1.11, 15.2.3
DROP_PROPAGATION procedure, 16.2.8
DROP_RULE procedure, 18.2.4
DROP_RULE_SET procedure, 18.1.4

E

ENABLE_DB_ACCESS procedure, 16.1.1
error handlers, 10.3.5
creating, 17.12.1
managing, 17.12
monitoring, 26.4.3
setting, 17.12.2
unsetting, 17.12.3
error queue, 4.2.14
apply process, 33.11
deleting errors, 17.13.2
executing errors, 17.13.1
monitoring, 26.19, 26.20
transparent data encryption, A.2.6
EVALUATE procedure, 11.2
evaluation contexts, 11.1.2
association with rule sets, 11.1.2.2
association with rules, 11.1.2.2
evaluation function, 11.1.2.3
object privileges
granting, 18.3.2
revoking, 18.3.4
system privileges
granting, 18.3.1
revoking, 18.3.3
user-created, 11.7, 11.7.2
variables, 11.1.2.1
event contexts
system-created rules, 11.5
EXECUTE member procedure, 17.7.1, 17.12.1
EXECUTE_ALL_ERRORS procedure, 17.13.1.2
EXECUTE_ERROR procedure, 4.2.14, 17.13.1.1
explicit capture, 1.2.1, 2.7
features, 2.7.2
message types, 2.7.1
transparent data encryption, A.2.3
explicit consumption
dequeue, 4.4
Export
database maintenance, D.2.5
database upgrade, E.2.2
Oracle Streams, 21.1

F

fast recovery area
capture processes, A.4.2
archived redo log files, 31.1.6
file group repositories, 35.2.3.1
monitoring, 37, 37.1
using, 36.2
first SCN, 2.5.7.2
flashback data archive
Oracle Streams, A.3
flow control, 2.5.11

G

GET_BASE_TABLE_NAME member function, 17.7.1
GET_COMMAND_TYPE member function, 17.7.1, 17.12.1, 17.13.1.1.2, 26.20
GET_COMPATIBLE member function, 11.7.1.2
GET_DDL_TEXT member function, 26.20
GET_ERROR_MESSAGE function, 26.20
GET_INFORMATION function, 17.12.1
GET_NEXT_HIT function, 11.2
GET_OBJECT_NAME member function, 17.12.1, 17.13.1.1.2, 19.2.1, 26.20
GET_OBJECT_OWNER member function, 17.13.1.1.2, 19.2.1, 26.20
GET_SCN member function, 17.7.1
GET_SOURCE_DATABASE_NAME member function, 17.7.1, 26.20
GET_STREAMS_NAME function, 17.12.1
GET_TAG member function, 17.7.1
GET_TRANSACTION_ID member function, 17.7.1
GET_VALUE member function, 17.13.1.1.2
LCRs, 19.2.1
GET_VALUES member function, 17.12.1, 26.20
global name
capture process, 7.5
GLOBAL_NAME view, 24.1.6, 32.1
GRANT_OBJECT_PRIVILEGE procedure, 11.3
GRANT_SYSTEM_PRIVILEGE procedure, 11.3
grids
information provisioning, 35

H

health check script
Oracle Streams, 30.2
high availability
Streams, 13
advantages, 13.2.1
apply, 13.3.2.4
best practices, 13.3
capture, 13.3.2.1
database links, 13.3.2.2
propagation, 13.3.2.3
high-watermark, 10.7

I

ignore SCN, 10.5
implicit capture, 1.2.1
managing, 15
switching mechanisms, 15.4, 15.5
Import
database maintenance, D.2.5
database upgrade, E.2.2
Oracle Streams, 21.1
INCLUDE_EXTRA_ATTRIBUTE procedure, 2.2.1.3, 15.3
index-organized tables
apply process, B.5.1
capture process, B.1.2.1, B.1.2.2
synchronous capture, 2.6.5, B.2.3.1
in-flight transactions, 31.1.1
information provisioning, 35
bulk provisioning, 35.2
Data Pump, 35.2.1
DBMS_STREAMS_TABLESPACE_ADM package, 35.2.3
file group repositories, 35.2.3.1
incremental provisioning, 35.3
on-demand information access, 35.4
RMAN
transportable tablespace from backup, 35.2.2
tablespace repositories, 35.2.3.2
using, 36
initialization parameters
AQ_TM_PROCESSES
Streams apply process, 33.6
instantiation
example
RMAN CONVERT DATABASE, D.3.3.3
RMAN DUPLICATE, D.3.3.2, E.3.3.2
export/import, D.2.5, E.2.2
in Streams, 2.4
RMAN CONVERT DATABASE, D.2.5
RMAN DUPLICATE, D.2.5, E.2.2
instantiation SCN, 10.5
interoperability
compatibility, 29.3
Streams, 29.3.1.2
IS_NULL_TAG member function, 26.20
IS_TRIGGER_FIRE_ONCE function, 10.8.1

K

KEEP_COLUMNS procedure, 6.1

L

LCRs. See logical change records
logical change records (LCRs), 2.2.1, 17.12.1
DDL LCRs, 2.2.1.2
current_schema, B.5.5
rules, 5.3.3.1.2, 5.3.6.1.2
extra attributes, 2.2.1.3
managing, 15.3
monitoring, 24.3
getting information about, 17.7.1, 19.2.1, 26.20
compatibility, 11.7.1.2
row LCRs, 2.2.1.1
rules, 5.3.1.1
XML schema, C
LogMiner
capture process, 7.1
multiple sessions, 7.1
low-watermark, 10.7

M

MAINTAIN_CHANGE_TABLE procedure
examples, 20.3
preparing for, 20.2
prerequisites, 20.2.2
MAINTAIN_GLOBAL procedure, 35.3
MAINTAIN_SCHEMAS procedure, 35.3
MAINTAIN_SIMPLE_TTS procedure, 35.3
MAINTAIN_TABLES procedure, 35.3
MAINTAIN_TTS procedure, 35.3
MAX_COMPATIBLE function, 11.7.1.2
maximum checkpoint SCN, 7.5.1.2
merge streams, 30.3
MERGE_STREAMS_JOB procedure, 30.3
message handlers, 4.2.4, 4.2.4.4
managing, 17.8
monitoring, 26.4.4
setting, 17.8.1
unsetting, 17.8.2
messages
captured, 4.1.2.1
captured LCRs, 4.1.1
dequeue, 4.1.1
enqueue, 2.7
persistent LCRs, 4.1.1, 4.1.2.2
persistent user messages, 4.1.2.4
propagation, 3.3
user messages, 4.1.1, 4.1.2.4
messaging, 3.2.1
buffered messaging, 3.2.2.1
dequeue, 4.4
transparent data encryption, A.2.8
enqueue, 2.7
messaging client, 4.3
messaging client user
secure queues, 8.1.2
transformations
rule-based, 6.3.5
messaging clients
transparent data encryption, A.2.7, A.2.8
migrating
to different character set
using Streams, D.3
to different operating system
using Streams, D.3
monitoring
ANYDATA data type queues, 25.1
message consumers, 25.1.4
viewing event contents, 25.1.5
apply process, 26
apply handlers, 26.4
compatible columns, 29.3.3.1
DDL handlers, 26.4.2
error handlers, 26.4.3
error queue, 26.19, 26.20
message handlers, 26.4.4
capture process, 24.1
applied SCN, 24.1.12
compatible tables, 29.3.1.1
elapsed time, 24.1.5
latency, 24.1.13, 24.1.14, 26.11
message creation time, 24.1.4
rule evaluations, 24.1.15
state change time, 24.1.4
compatibility, 29.3
DML handlers, 26.4.1
file group repositories, 37
Oracle Streams, 22
performance, 23.2, 29.4
propagation jobs, 25.3
propagations, 25, 25.3
queues, 25
rule-based transformations, 28
rules, 27
supplemental logging, 24.1.18
synchronous capture, 24.2
compatible columns, 29.3.2
latency, 26.11
tablespace repositories, 37
multi-version data dictionary
missing, 34.1.5

N

NOLOGGING mode
capture process, B.1.2.5

O

oldest SCN, 10.6
ON SCHEMA clause
of CREATE TRIGGER
apply process, 10.8.2
online redefinition
capture process, B.1.2.4
synchronous capture, B.2.3.3
operating systems
migrating
using Streams, D.3
ORA-01291 error, 31.1.6
ORA-01403 error, 33.11.2.2
ORA-06550 error, 33.7
ORA-23605 error, 33.11.2.3
ORA-23607 error, 33.11.2.4
ORA-24031 error, 33.11.2.5
ORA-24093 error, 32.3.1
ORA-25224 error, 32.3.2
ORA-26666 error, 33.1
ORA-26678 error, 31.1.8
ORA-26687 error, 33.11.2.6
ORA-26688 error, 33.11.2.7
ORA-26689 error, 33.11.2.8
Oracle Data Pump
information provisioning, 35.2.1
Oracle Enterprise Manager
Streams tool, 14.3
Oracle Label Security (OLS)
apply processes, B.5.6
capture processes, B.1.5
synchronous captures, B.2.5
Oracle Real Application Clusters
interoperation with Oracle Streams, A.1, A.1.1, A.1.2, A.1.4, A.1.6
queues, A.1.4
Oracle Scheduler
propagation jobs, 9.1
Oracle Streams
administrator
monitoring, 29.1.1
alert log, 30.4
alerts, 30.1
apply process, 4, 10
capture process, 2, 2.5, 2.6, 7
compatibility, 11.7.1.2, 29.3
data dictionary, 9.2, 10.9, 22
database maintenance, D
directed networks, 3.3.4
documentation roadmap, 1.5
Export utility, 21.1
flashback data archive, A.3
health check script, 30.2
high availability, 13
Import utility, 21.1
information provisioning, 35.3
instantiation, 2.4
interoperability, 29.3.1.2
interoperation with Oracle Real Application Clusters, A.1
interoperation with Transparent Data Encryption, A.2
logical change records (LCRs), 2.2.1
XML schema, C
LogMiner data dictionary, 7.5.1
messaging, 3.2.1
messaging clients, 4.3
monitoring, 22
overview, 1.1
packages, 14.1
propagation, 3, 9
Oracle Real Application Clusters, A.1.4
queues, 8
Oracle Real Application Clusters, A.1.4
rules, 5
action context, 11.6
evaluation context, 5.3, 11.4
event context, 11.5
subset rules, 5.3, 5.3.4
system-created, 5.3
staging, 3
Oracle Real Application Clusters, A.1.4
Streams data dictionary, 7.6
Streams pool
monitoring, 29.2
Streams tool, 14.3
supplemental logging, 2.5.5
synchronous capture, 2.6
tags, 1.2.7
topology, 23.1
trace files, 30.4
transformations
rule-based, 6
transparent data encryption, A.2.4
troubleshooting, 30, 31, 32, 33, 34
upgrading online, D, E
user messages, 3.1
Oracle Streams Performance Advisor, 23.2
gathering information, 23.5
Streams components, 23.2.2
viewing statistics, 23.6.2
bottleneck components, 23.6.2.1
component-level, 23.6.2.2
latency, 23.6.2.2
rates, 23.6.2.2
session-level, 23.6.2.3
stream paths, 23.6.2.4

P

patches
applying
using Streams, D.3
performance
Oracle Streams Performance Advisor, 23.2
gathering information, 23.5
Streams components, 23.2.2
viewing statistics, 23.6.2
persistent LCRs, 4.1.2.2
persistent user messages, 4.1.2.4
POST_INSTANTIATION_SETUP procedure, 35.3
PRE_INSTANTIATION_SETUP procedure, 35.3
precommit handlers, 4.2.4.5
creating, 17.9.1
managing, 17.9
monitoring, 26.4.5
setting, 17.9.2
unsetting, 17.9.3
prepare SCN, 2.5.7.2.4
privileges
Oracle Streams administrator
monitoring, 29.1.1
rules, 11.3
procedure DML handlers, 4.2.4.1.2
creating, 17.6.2.1
managing, 17.6.2
monitoring, 26.4.1.3
setting, 17.6.2.2
SQL generation, 17.6.2.1
unsetting, 17.6.2.3
propagation
combined capture and apply, 12.1
query to determine, 25.3.6, 25.3.7
propagation receivers, 25.3.7
propagation senders, 25.3.6
propagation jobs, 9.1
altering, 16.2.3
managing, 16.2
monitoring, 25.3
Oracle Scheduler, 9.1
RESTRICTED SESSION, 9.1.2
scheduling, 9.1.1
trace files, 30.4.2
troubleshooting, 32
propagations, 3, 3.3, 9
binary files, 9.3
buffered queues, 3.2.2
destination queue, 3.1
directed networks, 3.3.4
dropping, 16.2.8
ensured delivery, 3.3.3
managing, 16.2
monitoring, 25, 25.3
queue-to-queue, 3.3.2, 25.3.1
Oracle Real Application Clusters, A.1.4
propagation job, 9.1
schedule, 16.2.3.1
rule sets
removing, 16.2.7
specifying, 16.2.4
rules, 3.3.1, 5.1
adding, 16.2.5
removing, 16.2.6
session information, 25.3.8
source queue, 3.1
starting, 16.2.1
stopping, 16.2.2
transformations
rule-based, 6.3.3
transparent data encryption, A.2.5
troubleshooting, 32
checking queues, 32.1
checking status, 32.2
security, 32.3

Q

queue forwarding, 3.3.4.1
queues
ANYDATA, 3.2.1
removing, 16.1.3
browsing, 8.3.1.2
buffered, 3.2.2
commit-time, 8.3
dependencies, 8.3.1
dequeue high-watermark, 8.3.2
monitoring, 25
nontransactional, 8.2
Oracle Real Application Clusters, A.1.4
queue tables, 3.2.2
triggers, 3.2.2
secure, 8.1
disabling user access, 16.1.2
enabling user access, 16.1.1
users, 8.1.2
synchronous capture, 2.6.2
transactional, 8.2
typed, 3.2.1
queue-to-queue propagations, 3.3.2, 25.3.1
schedule, 16.2.3.1

R

RE$NV_LIST type, 11.2
ADD_PAIR member procedure, 18.2.2.2, 18.2.2.3, 19.2.1, 19.2.2
REMOVE_PAIR member procedure, 18.2.2.2, 18.2.2.4, 19.2.2, 19.2.3
Recovery Manager
capture processes
archived redo log files, A.4.2
fast recovery area, 31.1.6
CONVERT DATABASE command
Streams instantiation, D.2.5, D.3.3.3
DUPLICATE command
Streams instantiation, D.2.5, D.3.3.2, E.2.2, E.3.3.2
information provisioning, 35.2.2
redo logs
capture process, 2.5.1
REMOVE_PAIR member procedure, 18.2.2.2, 18.2.2.4, 19.2.2, 19.2.3
REMOVE_QUEUE procedure, 16.1.3
REMOVE_RULE procedure, 15.1.3.3, 16.2.6, 17.3.3, 18.1.3
RENAME_COLUMN procedure, 6.1, 19.1.2
RENAME_SCHEMA procedure, 6.1
RENAME_TABLE procedure, 6.1, 19.1.1.1, 19.1.3
replication
split and merge, 30.3
required checkpoint SCN, 7.4
RESTRICTED SESSION system privilege
apply processes, 4.2.9
capture processes, 2.5.8
propagation jobs, 9.1.2
REVOKE_OBJECT_PRIVILEGE procedure, 11.3
REVOKE_SYSTEM_PRIVILEGE procedure, 11.3
RMAN. See Recovery Manager
row migration, 5.3.4.2
rule sets, 11.1
adding rules to, 18.1.2
creating, 18.1.1
dropping, 18.1.4
evaluation, 11.2
partial, 11.2.2
negative, 5.2
object privileges
granting, 18.3.2
revoking, 18.3.4
positive, 5.2
removing rules from, 18.1.3
system privileges
granting, 18.3.1
revoking, 18.3.3
rule-based transformations, 6
custom, 6.2
action contexts, 6.2.1
altering, 19.2.2
creating, 19.2.1
managing, 19.2
monitoring, 28.3
privileges, 6.2.2
removing, 19.2.3
declarative, 6.1
adding, 19.1.1
managing, 19.1
monitoring, 28.2
removing, 19.1.3
step number, 6.4.1
troubleshooting, 34.2
managing, 19
monitoring, 28
ordering, 6.4
rules, 11
action contexts, 11.1.3
adding name-value pairs, 18.2.2.2, 18.2.2.3, 19.2.1, 19.2.2
altering, 18.2.2.2
removing name-value pairs, 18.2.2.4, 19.2.2, 19.2.3
transformations, 6.2.1
ADD_RULE procedure, 11.1.2.2
altering, 18.2.2
apply process, 4.2.2, 5.1
capture process, 2.5.2, 5.1
components, 11.1
creating, 18.2.1
DBMS_RULE package, 11.2
DBMS_RULE_ADM package, 18
dropping, 18.2.4
EVALUATE procedure, 11.2
evaluation, 11.2
capture process, 7.7
iterators, 11.2
partial, 11.2.2
evaluation contexts, 11.1.2
evaluation function, 11.1.2.3
user-created, 11.7.2
variables, 11.1.2.1
event context, 11.2
explicit variables, 11.1.2.1
implicit variables, 11.1.2.1
iterative results, 11.2
managing, 18.1
MAYBE rules, 11.2
monitoring, 27
object privileges
granting, 18.3.2
revoking, 18.3.4
partial evaluation, 11.2.2
privileges, 11.3
managing, 18.3
propagations, 3.3.1, 5.1
rule conditions, 5.3.3, 5.3.4, 11.1.1
complex, 11.7.1.3
explicit variables, 11.1.2.1
finding patterns in, 27.10
implicit variables, 11.1.2.1
Streams compatibility, 11.7.1.2
types of operations, 11.7.1.1
undefined variables, 11.7.1.4
using NOT, 11.7.1.3.1
variables, 5.3.1.1
rule_hits, 11.2
simple rules, 11.1.1.2
subset, 5.3.4
querying for action context of, 19.2.1
querying for names of, 19.2.1
synchronous capture, 2.6.3
system privileges
granting, 18.3.1
revoking, 18.3.3
system-created, 5, 5.3
action context, 11.6
and_condition parameter, 5.3.7
DDL rules, 5.3.3.1.2, 5.3.6.1.2
DML rules, 5.3.1.1
evaluation context, 5.3, 11.4
event context, 11.5
global, 5.3.1
modifying, 18.2.3
row migration, 5.3.4.2
schema, 5.3.2
STREAMS$_EVALUATION_CONTEXT, 5.3, 11.4
subset, 5.3, 5.3.4
table, 5.3.3
troubleshooting, 34
TRUE rules, 11.2
user-created, 11.7
variables, 11.1.2.1

S

scripts
Oracle Streams, 30.2
secure queues, 8.1
disabling user access, 16.1.2
enabling user access, 16.1.1
propagation, 32.3
Streams clients
users, 8.1.2
SET_CHANGE_HANDLER procedure, 20.4.1
SET_DML_HANDLER procedure, 4.2.4.1.1, 4.2.4.1.2
setting a DML handler, 17.6.2.2
setting an error handler, 17.12.2
unsetting a DML handler, 17.6.2.3
unsetting an error handler, 17.12.3
SET_ENQUEUE_DESTINATION procedure, 17.10
SET_EXECUTE procedure, 17.11
SET_KEY_COLUMNS procedure, 10.3.2
removing substitute key columns, 17.14.2
setting substitute key columns, 17.14.1
SET_PARAMETER procedure
apply process, 17.4, 33.9
capture process, 15.1.4
SET_RULE_TRANSFORM_FUNCTION procedure, 19.2
SET_TRIGGER_FIRING_PROPERTY procedure, 10.8.1
SET_VALUE member procedure, 17.13.1.1.2, 19.2.1
SET_VALUES member procedure, 17.12.1
SGA_MAX_SIZE initialization parameter, 7.1
source queue, 3.1
split streams, 30.3
SPLIT_STREAMS procedure, 30.3
SQL generation, 4.2.8
character sets, 4.2.8.4
data types supported, 4.2.8.3
examples, 4.2.8.5, 17.6.2.1
formats, 4.2.8.2
interfaces, 4.2.8.1
procedure DML handlers, 17.6.2.1
SQL*Loader
capture processes, B.1.2.6
staging, 3
approximate CSCN, 8.3.2
buffered queues, 3.2.2
monitoring, 25.2
management, 16
messages, 4.1.1
secure queues, 8.1
disabling user access, 16.1.2
enabling user access, 16.1.1
start SCN, 2.5.7.2
START_APPLY procedure, 17.1
START_CAPTURE procedure, 15.1.1
START_PROPAGATION procedure, 16.2.1
statement DML handlers, 4.2.4.1.1
adding statements to, 17.6.1.2
creating, 17.6.1.1
dropping, 17.6.1.6
managing, 17.6.1
modifying, 17.6.1.3
monitoring, 26.4.1.2
removing from apply process, 17.6.1.5
removing statements from, 17.6.1.4
Statspack
Oracle Streams, 29.4
STOP_APPLY procedure, 17.2
STOP_CAPTURE procedure, 15.1.2
STOP_PROPAGATION procedure, 16.2.2
stream paths, 23.3
combined capture and apply, 23.3.1
statistics, 23.6.2.4
Streams data dictionary, 7.6, 9.2, 10.9
Streams pool
monitoring, 29.2
Streams. See Oracle Streams
Streams tool, 14.3
Streams topology
DBMS_STREAMS_ADVISOR_ADM package
gathering information, 23.5
STREAMS$_EVALUATION_CONTEXT, 5.3, 11.4
STREAMS$_TRANSFORM_FUNCTION, 6.2.1
supplemental logging, 2.5.5
conditional log groups, 2.5.5
DBA_LOG_GROUPS view, 24.1.18.1
monitoring, 24.1.18
unconditional log groups, 2.5.5
synchronous capture, 2.6
capture user, 2.6.6
setting, 15.2.2
changes captured
DML changes, 2.6.5
data type restrictions, B.2.3.1
data types captured, 2.6.4
dropping, 15.2.3
index-organized tables, 2.6.5, B.2.3.1
latency
capture to apply, 26.11
managing, 15.2
monitoring, 24.2
compatible columns, 29.3.2
online redefinition, B.2.3.3
Oracle Label Security (OLS), B.2.5
Oracle Real Application Clusters, A.1.2
queues, 2.6.2
rule sets
specifying, 15.2.1.1
rules, 2.6.3
adding, 15.2.1.2
modifying, 2.6.3, 18.2.2.1
switching to, 15.4
SYS schema, 2.6.3
SYSTEM schema, 2.6.3
table type restrictions, B.2.3.2
transformations
rule-based, 6.3.2
transparent data encryption, A.2.2
SYS.AnyData. See ANYDATA data type
system change numbers (SCN)
applied SCN for a capture process, 2.5.7.1, 24.1.12
applied SCN for an apply process, 10.7
captured SCN for a capture process, 2.5.7.1
first SCN for a capture process, 2.5.7.2
maximum checkpoint SCN for a capture process, 7.2.2
oldest SCN for an apply process, 10.6
required checkpoint SCN for a capture process, 7.2.1
start SCN for a capture process, 2.5.7.2
system-generated names
apply process, 10.4.1

T

tables
index-organized
capture process, B.1.2.2
tablespace repositories, 35.2.3.2
creating, 36.1.1
monitoring, 37, 37.2
using, 36.1
with shared file system, 36.1.2
without shared file system, 36.1.3
tags, 1.2.7
topology
component IDs, 23.3
DBMS_STREAMS_ADVISOR_ADM package
gathering information, 23.5
gathering information, 23.5
Oracle Streams, 23.1
stream paths, 23.3
combined capture and apply, 23.3.1
statistics, 23.6.2.4
viewing, 23.6.1.3
viewing, 23.6.1
trace files
Oracle Streams, 30.4
transformations
custom rule-based, 6.2
action context, 6.2.1
altering, 19.2.2
creating, 19.2.1
monitoring, 28.3
removing, 19.2.3
STREAMS$_TRANSFORM_FUNCTION, 6.2.1
troubleshooting, 34.3
declarative rule-based, 6.1
monitoring, 28.2
troubleshooting, 34.2
Oracle Streams, 6
rule-based, 6
apply process, 6.3.4
capture process, 6.3.1
errors during apply, 6.3.4.1
errors during capture, 6.3.1.1
errors during dequeue, 6.3.5.1
errors during propagation, 6.3.3.1
managing, 19
messaging client, 6.3.5
monitoring, 28
multiple, 6.3.6
propagations, 6.3.3
synchronous capture, 6.3.2
Transparent Data Encryption
interoperation with Oracle Streams, A.2
transparent data encryption
apply processes, A.2.6
buffered queues, A.2.4
capture processes, A.2.1
dequeue, A.2.8
error queue, A.2.6
explicit capture, A.2.3
messaging clients, A.2.7, A.2.8
propagations, A.2.5
synchronous capture, A.2.2
triggers
firing property, 10.8.1
queue tables, 3.2.2
system triggers
on SCHEMA, 10.8.2
troubleshooting
alerts, 30.1
apply process, 33
checking apply handlers, 33.5
checking message type, 33.3
checking status, 33.1
error queue, 33.11
performance, 33.10
capture process, 31, 31.1
checking progress, 31.1.5
checking status, 31.1.2
creation, 31.1.1
custom rule-based transformations, 34.3
missing multi-version data dictionary, 34.1.5
Oracle Streams, 30, 31, 32, 33, 34
propagation jobs, 32
propagations, 32
checking queues, 32.1
checking status, 32.2
security, 32.3
rules, 34

U

UNRECOVERABLE clause
SQL*Loader
capture process, B.1.2.6
UNRECOVERABLE SQL keyword
capture process, B.1.2.5
upgrading
online using Streams, D, E
assumptions, E.1.2
capture database, E.1.1
instantiation, E.2.2
job queue processes, E.1.3
PL/SQL package subprograms, E.1.3
user-defined types, E.2.1
user messages, 3.1, 4.1.2.4
UTL_SPADV package, 14.1.14

V

V$ARCHIVE_DEST view, 24.1.6
V$ARCHIVED_LOG view, 2.5.7.2
V$BUFFERED_PUBLISHERS view, 25.2.2
V$BUFFERED_QUEUES view, 25.2.1, 25.2.6, 25.2.8
V$BUFFERED_SUBSCRIBERS view, 25.2.6, 25.2.8
V$DATABASE view
supplemental logging, 24.1.18.2
V$PROPAGATION_RECEIVER view, 25.2.7, 25.3.7
V$PROPAGATION_SENDER view, 25.2.3, 25.2.4, 25.2.5, 25.3.6
V$RULE view, 27.14
V$RULE_SET view, 27.12, 27.13
V$RULE_SET_AGGREGATE_STATS view, 27.11
V$SESSION view, 24.1.2, 24.1.3, 25.3.8, 26.5, 26.6, 26.8, 26.9, 26.12
V$STREAMS_APPLY_COORDINATOR view, 4.2.10.2, 26.9, 26.10, 26.11
V$STREAMS_APPLY_READER view, 4.2.10.1, 26.6, 26.7, 26.8, 26.16
V$STREAMS_APPLY_SERVER view, 4.2.10.3, 26.12, 26.13, 33.10
V$STREAMS_CAPTURE view, 2.5.11, 24.1.3, 24.1.4, 24.1.5, 24.1.10, 24.1.13, 24.1.14, 24.1.15, 24.1.16, 31.1.5
V$STREAMS_POOL_ADVICE view, 29.2
virtual dependency definitions, 10.2.5
object dependencies, 10.2.5.2
managing, 17.15.2
monitoring, 26.18.2
value dependencies, 10.2.5.1
managing, 17.15.1
monitoring, 26.18.1

W

wallets
Oracle Streams, A.2.4

X

XML Schema
for LCRs, C

8 Advanced Queue Concepts

The following topics contain conceptual information about staging messages in queues and propagating messages from one queue to another:

Secure Queues

Transactional and Nontransactional Queues

Commit-Time Queues

Secure Queues

Secure queues are queues for which Oracle Streams Advanced Queuing (AQ) agents must be associated explicitly with one or more database users who can perform queue operations, such as enqueue and dequeue. The owner of a secure queue can perform all queue operations on the queue, but other users cannot perform queue operations on a secure queue, unless they are configured as secure queue users. In Oracle Streams, you can use secure queues to ensure that only the appropriate users and Oracle Streams clients enqueue messages and dequeue messages.

Secure Queues and the SET_UP_QUEUE Procedure

All ANYDATA queues created using the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package are secure queues. When you use the SET_UP_QUEUE procedure to create a queue, any user specified by the queue_user parameter is configured as a secure queue user of the queue automatically, if possible. The queue user is also granted ENQUEUE and DEQUEUE privileges on the queue. To enqueue messages and dequeue messages, a queue user must also have EXECUTE privilege on the DBMS_STREAMS_MESSAGING package or the DBMS_AQ package. The SET_UP_QUEUE procedure does not grant either of these privileges. Also, a message cannot be enqueued unless a subscriber who can dequeue the message is configured.

To configure a queue user as a secure queue user, the SET_UP_QUEUE procedure creates an Oracle Streams AQ agent with the same name as the user name, if one does not already exist. The user must use this agent to perform queue operations on the queue. If an agent with this name already exists and is associated with the queue user only, then the existing agent is used. SET_UP_QUEUE then runs the ENABLE_DB_ACCESS procedure in the DBMS_AQADM package, specifying the agent and the user.
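As an illustrative sketch of the behavior just described (the queue, queue table, and user names here are hypothetical), creating a secure queue and configuring a queue user might look like this:

```sql
BEGIN
  -- Creates a secure ANYDATA queue. The queue_user (hr) is configured
  -- as a secure queue user automatically: an AQ agent named HR is
  -- created if one does not exist, and ENABLE_DB_ACCESS is run for it.
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.streams_queue_table',
    queue_name  => 'strmadmin.streams_queue',
    queue_user  => 'hr');
END;
/

-- SET_UP_QUEUE grants ENQUEUE and DEQUEUE on the queue, but not
-- EXECUTE on the messaging packages, so grant one separately:
GRANT EXECUTE ON DBMS_STREAMS_MESSAGING TO hr;
```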

If you use the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package to create a secure queue, and you want a user who is not the queue owner and who was not specified by the queue_user parameter to perform operations on the queue, then you can configure the user as a secure queue user of the queue manually. Alternatively, you can run the SET_UP_QUEUE procedure again and specify a different queue_user for the queue. In this case, SET_UP_QUEUE skips queue creation, but it configures the user specified by queue_user as a secure queue user of the queue.

If you create an ANYDATA queue using the DBMS_AQADM package, then you use the secure parameter when you run the CREATE_QUEUE_TABLE procedure to specify whether the queue is secure or not. The queue is secure if you specify TRUE for the secure parameter when you run this procedure. When you use the DBMS_AQADM package to create a secure queue, and you want to allow users to perform queue operations on the secure queue, you must configure these secure queue users manually.
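A sketch of the DBMS_AQADM approach (object and user names are hypothetical): the secure parameter makes the queue table secure, and secure queue users must then be configured manually.

```sql
BEGIN
  -- Create a secure queue table and a queue that uses it.
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'strmadmin.aq_queue_table',
    queue_payload_type => 'SYS.ANYDATA',
    multiple_consumers => TRUE,
    secure             => TRUE);
  DBMS_AQADM.CREATE_QUEUE(
    queue_name  => 'strmadmin.aq_queue',
    queue_table => 'strmadmin.aq_queue_table');
  DBMS_AQADM.START_QUEUE(queue_name => 'strmadmin.aq_queue');

  -- DBMS_AQADM does not configure secure queue users automatically:
  -- associate an AQ agent with the database user manually.
  DBMS_AQADM.CREATE_AQ_AGENT(agent_name => 'hr');
  DBMS_AQADM.ENABLE_DB_ACCESS(agent_name => 'hr', db_username => 'hr');
END;
/
```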

Secure Queues and Oracle Streams Clients

When you create a capture process or an apply process, an Oracle Streams AQ agent of the secure queue associated with the Oracle Streams process is configured automatically, and the user who runs the Oracle Streams process is specified as a secure queue user for this queue automatically. Therefore, a capture process is configured to enqueue into its secure queue automatically, and an apply process is configured to dequeue from its secure queue automatically. In either case, the Oracle Streams AQ agent has the same name as the Oracle Streams client.

For a capture process, the user specified as the capture_user is the user who runs the capture process. For an apply process, the user specified as the apply_user is the user who runs the apply process. If no capture_user or apply_user is specified, then the user who invokes the procedure that creates the Oracle Streams process is the user who runs the Oracle Streams process.

When you create a synchronous capture, an Oracle Streams AQ agent of the secure queue with the same name as the synchronous capture is associated with the user specified as the capture_user. If no capture_user is specified, then the user who invokes the procedure that creates the synchronous capture is the capture_user. The capture_user is specified as a secure queue user for this queue automatically. Therefore, the synchronous capture can enqueue into its secure queue automatically.

If you change the capture_user for a capture process or synchronous capture or the apply_user for an apply process, then the specified capture_user or apply_user is configured as a secure queue user of the queue used by the Oracle Streams client. However, the old capture user or apply user remains configured as a secure queue user of the queue. To remove the old user, run the DISABLE_DB_ACCESS procedure in the DBMS_AQADM package, specifying the old user and the relevant Oracle Streams AQ agent. You might also want to drop the agent if it is no longer needed. You can view the Oracle Streams AQ agents and their associated users by querying the DBA_AQ_AGENT_PRIVS data dictionary view.
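For example, after changing an apply user from oe to hr, the old user can be removed as a secure queue user along the following lines (the agent name apply_emp is hypothetical):

```sql
BEGIN
  -- Remove the old apply user's access through the client's agent.
  DBMS_AQADM.DISABLE_DB_ACCESS(
    agent_name  => 'apply_emp',
    db_username => 'oe');
  -- Optionally drop the agent if it is no longer needed.
  DBMS_AQADM.DROP_AQ_AGENT(agent_name => 'apply_emp');
END;
/

-- View the remaining agent/user associations:
SELECT agent_name, db_username FROM DBA_AQ_AGENT_PRIVS;
```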

When you create a messaging client, an Oracle Streams AQ agent of the secure queue with the same name as the messaging client is associated with the user who runs the procedure that creates the messaging client. This messaging client user is specified as a secure queue user for this queue automatically. Therefore, this user can use the messaging client to dequeue messages from the queue.

A capture process, a synchronous capture, an apply process, or a messaging client can be associated with only one user. However, one user can be associated with multiple Oracle Streams clients, including multiple capture processes, synchronous captures, apply processes, and messaging clients. For example, an apply process cannot have both hr and oe as apply users, but hr can be the apply user for multiple apply processes.

If you drop a capture process, synchronous capture, apply process, or messaging client, then the users who were configured as secure queue users for these Oracle Streams clients remain secure queue users of the queue. To remove these users as secure queue users, run the DISABLE_DB_ACCESS procedure in the DBMS_AQADM package for each user. You might also want to drop the agent if it is no longer needed.


Note:

No configuration is necessary for propagations and secure queues. Therefore, when a propagation is dropped, no additional steps are necessary to remove secure queue users from the propagation's queues.

Transactional and Nontransactional Queues

A transactional queue is a queue in which messages can be grouped into a set that is applied as one transaction. That is, an apply process performs a COMMIT after it applies all the messages in the group. A nontransactional queue is one in which each message is its own transaction. That is, an apply process performs a COMMIT after each message it applies. In either case, the messages can be LCRs or user messages.

The SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package always creates a transactional queue. The difference between transactional and nontransactional queues is important only for messages that were enqueued by an application, a synchronous capture, or an apply process. An apply process always applies captured LCRs in transactions that preserve the transactions executed at the source database.

Table 8-1 shows apply process behavior for each type of message and each type of queue.

Table 8-1 Apply Process Behavior for Transactional and Nontransactional Queues

Message Type | Transactional Queue | Nontransactional Queue
Captured LCRs | Apply process preserves the original transaction. | Apply process preserves the original transaction.
Persistent LCRs or Persistent User Messages | Apply process applies a user-specified group of messages as one transaction. | Apply process applies each message in its own transaction.


When it is important to preserve the transactions executed at the source database, use transactional queues to store the messages. Ensure that LCRs captured by synchronous captures are stored in transactional queues.


Commit-Time Queues

You can control the order in which messages in a persistent queue are browsed or dequeued. Message ordering in a queue is determined by its queue table, and you can specify message ordering for a queue table during queue table creation. Specifically, the sort_list parameter in the DBMS_AQADM.CREATE_QUEUE_TABLE procedure determines how messages are ordered. Each message in a commit-time queue is ordered by an approximate commit system change number (approximate CSCN), which is obtained when the transaction that enqueued each message commits.

Commit-time ordering is specified for a queue table, and queues that use the queue table are called commit-time queues. When commit_time is specified for the sort_list parameter in the DBMS_AQADM.CREATE_QUEUE_TABLE procedure, the resulting queue table uses commit-time ordering.

For Oracle Database 10g Release 2 and later, the default sort_list setting for queue tables created by the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package is commit_time. For releases before Oracle Database 10g Release 2, the default is enq_time, which is described in the section that follows. When the queue_table parameter in the SET_UP_QUEUE procedure specifies an existing queue table, message ordering in the queue created by SET_UP_QUEUE is determined by the existing queue table.
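The commit-time ordering described above can also be requested explicitly when a queue table is created with DBMS_AQADM; the queue table name below is hypothetical:

```sql
BEGIN
  -- Queues that use this queue table are commit-time queues:
  -- messages are ordered by approximate CSCN.
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'strmadmin.commit_time_qt',
    queue_payload_type => 'SYS.ANYDATA',
    multiple_consumers => TRUE,
    sort_list          => 'COMMIT_TIME');
END;
/
```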


Note:

A synchronous capture always enqueues into a commit-time queue to ensure that transactions are ordered properly.

When to Use Commit-Time Queues

A user or application can share information by enqueuing messages into a queue in an Oracle database. The enqueued messages can be shared within a single database or propagated to other databases, and the messages can be LCRs or user messages. For example, messages can be enqueued when an application-specific event occurs or when a trigger is fired for a database change. Also, in a heterogeneous environment, an application can enqueue messages that originated at a non-Oracle database into a queue at an Oracle database.

Other than commit_time, the settings for the sort_list parameter in the CREATE_QUEUE_TABLE procedure are priority and enq_time. The priority setting orders messages by the priority specified during enqueue, highest priority to lowest priority. The enq_time setting orders messages by the time when they were enqueued, oldest to newest.
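These alternative orderings can be sketched as follows. The queue table names are illustrative; each call creates a queue table whose queues use the named ordering, and the ordering cannot be changed after creation.

```sql
-- Sketch only: strmadmin, priority_qt, and enq_time_qt are
-- illustrative names.
BEGIN
  -- Messages ordered by enqueue priority, highest to lowest
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'strmadmin.priority_qt',
    queue_payload_type => 'SYS.ANYDATA',
    sort_list          => 'PRIORITY');

  -- Messages ordered by enqueue time, oldest to newest
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'strmadmin.enq_time_qt',
    queue_payload_type => 'SYS.ANYDATA',
    sort_list          => 'ENQ_TIME');
END;
/
```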

Commit-time queues are useful when an environment must support either of the following requirements for concurrent enqueues of messages:

Transactional dependency ordering during dequeue

Consistent browse of the messages in a queue

Commit-time queues support these requirements. Neither priority nor enqueue time ordering supports them: both settings allow transactional dependency violations, because messages are dequeued independent of the original dependencies, and both allow inconsistent browses of the messages in a queue, because multiple browses performed without any dequeue operations between them can return different sets of messages.

Transactional Dependency Ordering During Dequeue

A transactional dependency occurs when one database transaction requires that another database transaction commit before it can commit successfully. Messages that contain information about database transactions can be enqueued. For example, a database trigger can fire to enqueue messages. Figure 8-1 shows how enqueue time ordering does not support transactional dependency ordering during dequeue of such messages.

Figure 8-1 Transactional Dependency Violation During Dequeue


Figure 8-1 shows how transactional dependency ordering can be violated with enqueue time ordering. The transaction that enqueued message e2 was committed before the transaction that enqueued messages e1 and e3 was committed, and the update in message e3 depends on the insert in message e2. So, the correct dequeue order that supports transactional dependencies is e2, e1, e3. However, with enqueue time ordering, e3 can be dequeued before e2. Therefore, when e3 is dequeued, an error results when an application attempts to apply the change in e3 to the hr.employees table. Also, after all three messages are dequeued, a row in the hr.employees table contains the wrong information because the change in e3 was not executed.

Consistent Browse of Messages in a Queue

Figure 8-2 shows how enqueue time ordering does not support consistent browse of messages in a queue.

Figure 8-2 Inconsistent Browse of Messages in a Queue


Figure 8-2 shows that a client browsing messages in a queue is not guaranteed a definite order with enqueue time ordering. Sessions 1 and 2 are concurrent sessions that are enqueuing messages. Session 3 shows two sets of client browses that return the three enqueued messages in different orders. If the client requires deterministic ordering of messages, then the client might fail. For example, the client might perform a browse to initiate a program state, and a subsequent dequeue might return messages in a different order than expected.

How Commit-Time Queues Work

The commit system change number (CSCN) for a message that is enqueued into a queue is not known until Oracle Database writes the redo record for the commit of the transaction that includes the message to the redo log. The CSCN cannot be recorded when the message is enqueued. Commit-time queues use the current SCN of the database when a transaction is committed as the approximate CSCN for all of the messages in the transaction. The order of messages in a commit-time queue is based on the approximate CSCN of the transaction that enqueued the messages.

In a commit-time queue, messages in a transaction are not visible to dequeue and browse operations until a deterministic order for the messages can be established using the approximate CSCN. When multiple transactions are enqueuing messages concurrently into the same commit-time queue, two or more transactions can commit at nearly the same time, and the commit intervals for these transactions can overlap. In this case, the messages in these transactions are not visible until all of the transactions have committed. At that time, the order of the messages can be determined using the approximate CSCN of each transaction. Dependencies are maintained by using the approximate CSCN for messages rather than the enqueue time. Read consistency for browses is maintained by ensuring that only messages with a fully determined order are visible.

A commit-time queue always maintains transactional dependency ordering for messages that are based on database transactions. However, applications and users can enqueue messages that are not based on database transactions. For these messages, if dependencies exist between transactions, then the application or user must ensure that transactions are committed in the correct order and that the commit intervals of the dependent transactions do not overlap.

The approximate CSCNs of transactions recorded by a commit-time queue might not reflect the actual commit order of these transactions. For example, transaction 1 and transaction 2 can commit at nearly the same time after enqueuing their messages. The approximate CSCN for transaction 1 can be lower than the approximate CSCN for transaction 2, but transaction 1 can take more time to complete the commit than transaction 2. In this case, the actual CSCN for transaction 2 is lower than the actual CSCN for transaction 1.


Note:

The sort_list parameter in CREATE_QUEUE_TABLE can be set to the following:
priority, commit_time

In this case, ordering is done by priority first and commit time second. Therefore, this setting does not ensure transactional dependency ordering and browse read consistency for messages with different priorities. However, transactional dependency ordering and browse read consistency are ensured for messages with the same priority.



See Also:

Oracle Streams Replication Administrator's Guide for information about creating a commit-time queue

4xaB 6tbD)VxcF9vdH#I4yeJ+YtfL3iִygN;yhPC5ziRK6ujTSVzkՀ;;PKI<PK&AOEBPS/img/strms066.gifdCGIF89a岲@@@???٫000 ```߰pppPPP///rrr___OOOooo333;999vvvwww;;;,,,XXX***)))777xxxuuu---888nnnȦbbb+++ 444(((666111JJJgggaaayyyUUU|||~~~HHH222EEEFFFˢʘ&&&MMM III§<<<555lll:::RRRfffeeetttSSSdddcccqqq'''mmm>>>NNNQQQBBBý]]]}}}LLLkkkGGGWWW^^^...!,@A k0*~;0p>h1be1rʔ#f9#Se-RӟCV]StեYt=iEi綝:oΕ[n^{+8Ɠ#_>9ңSwn=;׻ر?})6_?14}t`F)8 E߃Rxaqh!x(ء#Ҙ.c |>Q BxID}HɤN.Ye=i_PNTfiX*%c S&lj妜py_c͙g矀*蠄j衈&袌6裐F*餔Vj饘f馜v駠*ꨤjꩨjꀪ꫰*무jkݪ뮼[kk&첲v4F+V[fv~+kឫk`<,0٧$@:@2 K`\wV Ё 0+܁o 8sAl 2dA e`*Ap`}N=TD=L+]1h3u56n[xw$F$ӵC'3 NqZMqY~g8]'޵2 y$A+Lް.;Ix}1h@A#g0@(ӷp $LA1< h| +/+#rOL<=o?޵yF+CENvamZ^z3ka> r *ciV!'kCW.zX2%rA9r;ԀU/3b":1,?dY z sA3C"Npw3i`cY͎G/GM8.:Pvҋh( r @2щkI2j͕@7Q 0%I-L l4pP4(5nЌZ=@a&b8'e|o9 8>9xY@&Pmjg2 AodcRIN fՀ6) @`CMi$%Ku!mpQbNwj,}@ j &Ԣ`NQ1:A1WXͪlh2UUWժX:,)#Ӕ8%M8RT@44Rr@KFMPc 72@t3@JJ% %f7Ub,=\$p4MibȌ&89˒%Flb/K82hZ~N[˼:3j Iմ};1;ՑEϔhi!63e|םzĚ%_+/lq#c'ћM X1W3AL imm^ eH('FS*ڨܰL:aڙUx+5 +⤤ٖYN6?iĠ6BC3a&5%0j6ʶmmАl[1mm;H@eޙ4oaOx}5f;B•pf{TxndDž͛TD>}So}XdI}7~5|2~KO[Ͼ}+~A#D8%І&8?R?OϿwju0:ɀw~egV 6upR4ov1Ft.C;#Bqf8 K` HuD@ f6:`2/3E)S2 LD2%bp@<-Cx@@SX/ihHs9s3nElE|H6<„:cG&5P@ sD7a@-00m@ d=` @شW.\Q3SqC[w4ƁtW'eWj%:Gh1')x+3ɰ*(}oȈ/ )p H1Ur#\r, h7(9h϶-a&Cf>_PP5P^M`))` JDq1MCWQkU+! 2#YpbyVp)3 pMGFZp LPI POBIF N-sUaI>]6=t7CWƃ:;WQ _ %@}rJX( )p)'pR&~Ē.ٗ0)* i-NRYYG8( "O `^iƓ@,FDRJhy?6K AhEsG}2`ghٜp (jPi(zy$'zj*z)yDڹ yiy)iC 8( `}2Ry8S#H?sW39,8D?QL-5RL6iuδL)=DfY  .j/ 3~.ԩ '@1$.0 S `0 өz30'`)=ɗ~ ,^0S㩤扞)' )Vjr0 ZJ5,H5qhq46u76mrHs5QSAs:W#o`åbʀ' 0@$@p p@j&@j. p "`;~ { G[ rhV{-)*[MKj8iٙH)rp *jSJ+"k! $xЩ$Z)K^ԙ[۵j'$y~ʦ9iɀ Gj@ ` SJ 0S~8,QDSj!zzpWj$ +/"಑}v*/I k9H PW0i@ɀX ЬU*I Srz3k$Z["Z˜Z[J;l˿7cТY)47SP` ɰ-C, Yq @E.49@BZнCp% I=#2⇲J#` l֌5;I =! 
H@Bɠ6BP߻VSNlXn(wlb=0iȯpȕ)tNv{ʯ"Xüقb~dޏ{WP s䱍b*B3lP^m_ W 80(LP1^(alp艓 ` %pj`^Y -0N( 7lb=pMjP"f H- omفث % pny [A+(/o%u@5ĊIY}?_D(~A 3 pFjS-= Kɰu@;,,~8q7E=7߅0eN`|V1}?_ݡpr8 {IRO}D@  -1`O(`?_؟ӟ" 3@1H@ ܘY p'I0WN.PVA .dC%NXE FG!E$YI)UTqAhYM9kH7 %Zh2,Y I dYUYWaŦle̙[ծeV FaSw =A"$ !L4Q[ĉ/v#cȑ%O\e̙5o<˘2&]6Dۤ`:iرeϦ]1 `7D&^qw'g\bÝO^z R\z2ŏ'_y O{_]?}߰4- 0 2 20p΢B[*  *H*+ iCk\Bs\HC* YlAhP 8hȂ4%0`@H4ZEe@FeL4͂iPHSGR A @ R- P @,ك d8Q 8 73Ք8ԠDTf.Ѐ %HP,h[KK?vNEvPՠ 4`TCuTvU@15Xt,tK 4h  t_-`7 QJ]]'M7c#݂Y-YB'`@-;`]Vs-eW-COo㏳@~(0PS+4l V: Kx Bs̢Jkx. A#I$ȸɶn3;oo"*7p{(LE3qűeOs3t|TX 8p9@l~ u;M77.aUhN'3OH< 1WhV|E8ZTp~Z^ud`hv{nn# lWp(Sæ%:eHdd (2T  #IH}YWA@}_T&#l`]NL,g@Mb4^-Dc!%LW\u[ʸz~QICPm!|2М8- @+D!(#/*iR41J֐@r@hCld%j%a,IPgZJTRDXJXȕ?lyK\Re/} Ų"(e1yLd&(DDӤf5yMlfSf7MpVCCpSdg;QbĄ'"Vr!!@yy'bA-2PӟH@G8P6I?`ْO&B=d&7))dʳU Tܒb kj 0+e]2NɁt@0f?a R-V9t 2Mxї iGQzƿ.^M:4 1JTP5.}O$ 0K R,$!+&JF@1_7!mr <)-d xrÝ+QxլHA Xf&2Ue`ӣ Ь*^z dlB|ߡ va| 9-;$9>)>;dZB|e@ƀMm('/0&Të|KK BP̜[e ⭵ (0ZDHi'46ȏ/leN1Q5+2K*ٱ2^lOwfoƁ}~w{ i}yAyq/ޯY󖏳:sP F)BS~]eX/s$`]zz"&ewK XzVvF\T-OmRS?^ ]{l es|_edkd`+r&e̓`{C*]f<ycd3SSyl1]+@_x\[Ú-Yk7<@R- *sI* a=H2*ۀs=[s-þUB+6k[K/Xskl֓? 4p*3)t 1 |3/䀒ސ$ <9ýZ胵 #d* S྅qp2*/=9(B˴D; K83as3:X)%#Eڀ+'m+EERC/=LZA;I,{"TC hƄ682g)LLMAL(lL|L̥KдM<٤L`ȖL˔MxɔN.,N,ԧLL,|Ռhr',OE TC}A= =TFD5EmTIŏG-HTLJ%,TOM5/TR5P-2-UU=S]UW5V}UYXU[-Z ^U2V_=a(_V,eEfemփxV^VhuVi5jkVliւVnVo WpVqUr5rEsUteu-@|}=~5͎W}ׄ؅%؆EXU؈e؉uX،؍؎Xؐؑؒ-V.MY`EU֘mكxY^YYY^-P( Ρ5ȢE=M(`ڃpZ5mڠڂڨZ UZZژڰڱ۲%۳5[@۵U۶e۷u۸ՠ] %ֽۘc \[] -0Em\ex\ɍ\ʝ\\ƭ\\%ءR ~ U]euم]eݤ5-ڠ5EڹM[[[[}Zu[=^M^]^^^^^^%߯%u\eu҅EUѭ_-Ӎ@eHݘx]U`ڥ n࡭@9i`i抐 "J1rI.,ix@j( 0d^y;s~jjDn$*$90A(7%kkĶ P.(x*Il^fӇ `젞๒l>!YB>횐B }HkZFmkΦi0Y$| Kb!„VDVyVi9*FHі7>QCSI#zn=VS.xd0jy}S P - zI珒o]dLAɳC1Epa&1bl e90Qxj*};ݖ0xq 9f9EB0TI?S~ `y.*Zy-$WP8tHړlU$Xo֞=y Tр;Ir. sQ#chInҎ"b'90tآTn! 
fJpFמu*u&e`vb?0vd_vPvfpvhvvjvulv6vnv_E`=wN` `eu75tGv%`|?|}`` wY &x 6G/Ѓwx}xVxxR ^%y5y;=,yݖ]Gy_-~wwywuwwzwxxxxxzzzyyRy/w{e^{{_zGzWz_\gz{Ƈ|O|ʟG||y7}'TpUqK]}Lmx}ؗT٧?L}ݟT/!R ~TO~^~ȍ`~RuUT ~=~t@\q>g m *ÈmɼIWe,h „ 2l!Ĉ'Rh"ƌ7r(@d< JH&i8@Û raHJ-j(ҤJ2m!HOL`d t+ lA > 7G Al032*demA &ŁTAL4|!`2k9 #V%S$-"NcE%^|G[al|>%aD^ &'vE8TW0Sek HAPC"$4&mIЌo6d#Bƭ97q!4u]Hq5QOtI@K'bLM'N)dsN)TN%eiu!v ]曄Wũ2=A@8:,@ AU-*HeLlA87Ab1JfВp7gTH!7 ;Lk)~Z[+ ;RUq^$3w1v'uz)kA'3s mw"4kL,v &C<4E'q'd]dn@U]B|`Oa(wW`Cm`JPODnx5vM]^774J ~8+nZ`K>9SDUEPzVe9饛jDRxz:۞Pҹ N \([ P jp9;T eMPe<=kZİ/W2sӆ0H\pmR6*i&n NMvs?i/|x>K$_{( 6ҡO*pߩ B?l\2@K mZIHp8'wMo0P?/*%p 8@RP:9E@}*0v$!bw R$e 6=Z/JBؚ4"2j0 'eS\om[ƶ'd22 ,ۂMU$BE/` 0T`eK\E& AIu%WȮq35*Ik])p |lK(Z଼-QTr+a97`pƋ'NgOX AZiPC c@]1>+*,Cu&08XI缯<F7@,CPx^=57lkLEA5Ųؚ:倌TT5-a}r7ѵmQ ZS KWH1漴2mC-IRPmQJ+cӻ7ݍr_A.?83NH|TZ.S$8C.3<!p A!nuqţxX*׹LQ3>hOyʱ_\?W   L}܃`@t H?( ) x),`IA\IFP_%  D 2`J]yDҵ\^@2PY}`2_2~dlT ( b` $ T^D&\~tn~J Z *2l*C˥$\2a#Bb2'J]]@E '^\=2D\`!Dg >D_aǵ21"# bF$bA @ `*aP@& 5`c>ay"E\aH|ebH:=VAcD@# 6!1# dAZ J!bDѝ4D6fc+n&# #ecn!&@2\*@u$I!?z?2`\c2v\J$KP8IC[D"`%^@a'zG2@֡2V* [\ޣcNrNx#tʭe[`2fޛ,Et"H.@M$D~-$":D6cgD a1bDv"^bGd^nN&na_^kpV g1 dVdS*D*@!VI IN`uFQfnfn D@\@R A|4gDp]S 'hOPfBfugH樘y¡cގ}E9(B Zy-:b(mS'LU(\h֨.:!~J$u))&)B>)JR*ŎN Yh̝YZi2(*).gTi\)CDRD8\g)B)]JN3zjui#ӚFh&F<0iQSjiC*ֈ:EidrMO \(.D̾T%Q^-fn-v~-؆fHLMtX؀ Lkl-ז-ktAĕ~EU CL]VHLeTvڭZ 횸ִGܖp@d DþhP/o* /6>o消<ME /Io0 6= /€Z/tHu]ܟRJ %0/07 @@ynϪXlg0 0 CGU i|$[// ppH$Cp|AHM Z.lS0GOpgRؑЈoƅoBH.@10i @2$م@/h +'ջ\ȰnCD?tq#72Xtr\dH)-y^ij5Ɲ`Q9O>-pU=C6N7( &iEaȍqH@bHZEHatgxEȴ^4IA53E|I` Y I@oL~=N.Bl4tIyYR\OɿtѰ4IQ+nEHtYXSnNOnQyGn$ /U]TTLia]Tx5рuxkPdE]u\?DtlL]/Z(i_N`E_lX0r81A6wtTH@B@kgG= Ȕ(3G,~مD IccpZP&il7as;Bptvop@gk@p Ag.LP}Ї QH8v tt$YFE˘~wBwGHwAXt2i 10;ss FTq)hm8oSK;@w1X#I#Krx劳8Axtھ1nԺ$1y+ëst<IFJ{HIPL0w;zsJ[NA4{ q5DŽta_ȧӈ7G>;jzVwKVtXaoPXt6?K7;\Pӭs0kvL^ i\LAK- ;+ӈh].eKlyoXܖʏ7TLP<;u@R<O#DpFGtCK=R~d>f|sKFCWD],׾>SB~>wB lr'sOZ GO?W_?go?G?.<??{??@.r?2 *p`B 6tbD $1YƋ9vdH#A.)Yt@2+_ִ2L7yYqÌ}5ziR46-!SSmBe(jV@1jlX UlZȒUVm\1"xSn^35phP 0xT>@@N:e1ȁ^R.tM|<dNع矃H" 
de c*ZŌBtQo$i%:mg:Jj}iΈimӆPk )[k "k6:ɽ~{geZ4wQߞ,ϙzJeN^35Pu2 w 3$\p*\N BC+Jb[ D89W{ WZNJ×i_k7[Q$T)Aǹ/_'4%|K֧ ~@)2$~MFZ\2=& r@l`0L6pm d 27̡B>n& xVǟ,.cpS20w#P M2ls<DYPqMe'iIA$'y(j`sJ(&$(p&*#H 3BȐFDJیXs I"2d$!Ir$NFe Vl"$*|BQER2VFU  "| SuBF4v^ 5yi@2ޙF dX  x X35}gKe*sQA\5cYKݭ+{KH*%TCQKZ2,QFDzTFL_ZHʗU(Aҥr2I=ӒYK&%^9j"k8QpJS&j *A@2 N@M2Jր%OʔO,Od}: CY j. ɞjgPT7#0ӈp]&+Ь:YqֶjiC"Ԫe(UHo!PTIMi:,"V-~S8a+# R} ;I OI$4 7f, B3*V+MbD($./mI@[ wsۊfv0a'6 iE6FUq#ɭE|#k#^NIn=[WsÞ~MQg]gZ\ƾuG|!D-랄vVi8QI6ݐ o! D=.$;osdA58sy$m3 %poFfYJ}|. wwv\5CC0 3 9o}-mZ&f&(P;10PȠ/CsGJ=!w{Hk͜g bFeo(WL@(/I6׽549pS2ePGR &sJK`TyvC5̿uC^݌jUO $bj_GIZV.DbAɀ1 `5--s 8Xt,j`Z̸203BO@4F ~tKrBLWu/7(wu_tV%+J Jb 7,%:v1&7*xrBxsCfWz#uu[xze65!q3bv$dSw3qWuyWyW~/z~zs7~uLBo`$Kx}812Bg8x9l$x38+,,85!*BX! Y=xANx_Q ?nkMX 錇{Oo)88#:@}C"ur]N}~$@B l(%%XP(Za\be3LQH x+5% C\eZq7Bd)D6#;X8 YB4`lAp M,E&Y6@T6&ah|;،l8b&cp an1}-$lOp B!ؑMĮ0툹4+ 5(OYyHCعDXbN$jٖLhd4!ߙ."V#vGvذbQ*` ) v"3?uSf+ TBv♖ixxe0 `Qx5oR:eZ%ADzgUspL"ZR0ZF&y$yG68(NL$D:3F! 69wyc8 .®=.I9eZL2"+2yBZ$:whJv8a!%@aخ !#RƠL;,{ bv:ϲ(b/XM̄H&dOR&6`wgy0NiQ TPB[16{KB2YEDz+RΘ\e,©BG4b1|c#!-7˜sܹȧbO6@$Ĕ1fZB-|k|c{98ߊNҒ}k$_c C9=ҝ;;4d@Xrds#TbpO(zn\z8Ō /ɞ{%D4ǜ;{ z1d T޵S/~@d)yi{3MJ(Ure3annJ+.a𰱓W6ʡ+[6|0gGBVFyA 9 kEHH4_8QB@[l$D%ppPl;*$):P=ۮfC~pgA5\ 2 YD5A$N+b+eB`x1"y(XZ`-C 57Z&1x&#EtS̼l <-Q.ݙ,"tqIl81n-S;f< m}BPYT{tt-mUX`zD. 9㿬IP-PtuTt=;]PW4 ya`P:ǖו(%cЛ^pnS 'O?*nۤ45~anm6V9{ qฦLjS䖶xIQ%٤ Tx.9J |m9qLcAeF37:m+0Q5EMPMB *Jk+/Qb^XRmWVZ!T;5B#6^iV䑇+/c>N<3՛ ~"<=Iyӻ~L.g {G}c>KE<+oK<k_|1 |doK (`(Hh pg(H8 (~B~ PJ!(#H%h')h /1(3xAp,8.H=? 
AhG"h85QI(SOQHY)ava:@ehgikȆmoqaxc(wy{5Eyt(hzh6UȈz(Hhȉ(HH芁؄]H_臺苿{H'ňɘmǨ%(HhȊרȍxHh騎%qHH"wvx (iz   I99Yّ1 Yq ʰ В/ 4)6i8:<)5I@BɒD F9AyJ9% ,NPIRTٔOZY\S`ٕbWf9hY alypr Жvz9|9(`)IIi٘ i±y-iY)Aii)M I*qɚ雽 éќY)֙i9ٝܩ)Y♞ ٞYy $៦9YIʙpZEGY:Z :")(Jiɩ٢)+3 !ʢ8Z:*`>MXXhiII`sx$h 8K\&oU=q ?:*@/UrS ??8P@ T&P<0P 3K%X!P.aC` yX@=p>00J</DV?@tts-3@@el?T@A3(bJRA)Aet<*c+r `r 2; 8Ua@A\tF:X0%D`:fHL0 6 递7@@', 4B.)D w\=1D,|-1?$?rA8( ($Ĩ&"-C ÜWq)-$ zP.: 9 BN7 ?z|<Íh쒴t!}0 9܄(# `  t EL0(^\@?("4 2dA?Jj:OC &#B&/ :8I6R$:Tr R?X"*[?<\ P CЇ7H?x@$C W3'O(Xg#xxz@ D ȠIR - '@# pM ƛ!a E"FH#(k Q)kP"Z![ATE", ""s84,gc0@eOԈ:Q50,Y1j+Wa#ddah&r ,,bp)o,a0)? 5:palGl肁@ @ HGwz @ a0hΰ9Nr*@  nXΤ FI:H@^O -/2 N,,XF#%c0 8R"&<@A3A" nD!E@=~)m1 g@1+? X~ #@"l@ˈ< `h `@69&=I e1$PWDpBRX PZPhT8@n L рШ>0MTs _H8`PQi|bIaY) rFGE?8E9CQ[!Zh"h5ȢDd@#O ,0z`@HpRHd^3N@Q 9G")X q#x:9BrW9tM 6d`Rѵ<Г`X6@(k^q[x,~5((@ 4vgPEh &HAFI0Dj`D ð. Dø*1 6ʂZx2V $DPE !w0)kEP a @9>Ոw@A @ CE@ |X}3&"@BH! ` Ȫ sE1 $1" .&҈oF $$B 3!( R ԀOp8P \20JqFxca@&>»ыwû#;!RS\I 4X )yV!SbQ@~ 9X?z 8Ϲw@ЇN87&„cL;O!LX N_D/0Dh`֣`H-!xE= [`"V{B^˂i (htha!ݐX)F-0.09d$,d` 8>P@֨.$boř҃#? pN@ 3(B &n"P%%bP|x?fpX`AQk|)X'< ݁ z@r0}%x8APLSAp;A@{0p-X2I'q{G iqėwd'iL .#-pp )~+ W7~ a ymP #@ Q p7* q` x pF0((` % -Py'Gw$8 PPF@ 1 P0 `{.$@5EpX ` BF6 ` E e )?@p '@5pp_D 9(! ` 4a20s85X0w#10+`/ 7P fqp '`X(!b^3r`Ip$@V#P8x< @_&a&3 yQ0 W\X*0/e j@pr0wCH`%@/0F EБ^Yّ8F @r(8\6 $J` Av~؀'RC5pIP,,PL !Lh UBe1`P pSPX`sH~# G @@G -pqv0 ` ! , !$2`Py|P, 9P0 а!*`6 DPWJ*0r#P% BP#L'ր,# jp*3p1PPjPf'1s < 8 W%p90 3PWb@ 0%0'e.  F<V =*"01R p Q ,` 6DVP `A hP`5P H$s6O)z@ 3  1OV`y(P P Z`- Pi:) `5EʉН9ٍGİ'EYٟgz T#` ڐ?0I U{ 1W $X@0<'\GD3(GO XP9ACz7  p7`u!Uz6g@.0 ' .`I jV py ܠ EP2p 0@{@ e2\%8 Ea$)7GPN09{C~X*d+ % / 1 `~4ore]FC0Kb[0 mP_0 ;C"Q@)mů3<6X{hB۟@ LQ[!{)h`@L ` & P:H@K0 J fi J# J`E:`QJ`  _˓]([~ 8L>I0py @I`වPEy2Ы xp.p`yAz| 1`#t3 :1 4{ W@àC{ 5W*8 P-`_"pp2   %?0@q`{Z5%3KH@ P c@ ;D2q b + 3@0 !OB,R02@@ѵMO~e2FA|U!:` WǷh ɡR#Ġ (rי ,r 6g ad ~J @ O_&` A PHɷ[y). 
0CP  @= 00 @3m@% P23 E=@0'#zXp]O @P( "xU VxVP7`OpH@ `*<,|RO =P(R*p`5<QP P&3 >+pT5C  *0p#3<(OL57 ߠP BBi0R:2]QlP Q `3  A(`PNdbD ܠ0 p %?FP՗ep4b%m@r *P %1 ph ; "UdQqLp6 WP@%p`HCKgp / pK1 <#z;@|p"##P>4\,΃  E>,pw qDQ W. OР. *],`Kw#p?PM`!~,/Wb -<O1[0 !*Xp^P`.C ;"TqӉ`%&R' g% @&J: *@(.% 5`Nrm`YYPPi,ĈP!+a0ALO@L]pC]pw' H$_ qA03d؟ @ƏtL` __|eQJlO=A x?_,"803#)*a/# hXdBB!E$YҤYbK1] .P$ %PCI#$KekG"1߈ N%[,.Nede#4ߊsDf7l"g5`FzRh '2HU6.JUiԩUeHHRxCx ?C*⟝OeЄd7z c@gKbjYs1 H܊A:b"F3$@Pဃ+!sU8!&`1̈́B1NL!+,nnDu*B1I(a\BLj4UuVp2K "C/oâ+"@Q#5Ojszʐ!9%RnPp#.P"FOtz32 l#!p/ Q$*"'2'anxۈ֐J8ᇳ}U QQYcM``( ܭE#Y,RLFL",)W"xJ'*:2܅L>#&@'F$! L`'P !kLUO]V gDĂD<)ҳsVԍޫp!.jh#C`᜹GYfO2C:`2wF<^dn T 8Qyp;"L# `G_{S0[@tF %uo'>"F&`lS%Tˬ @=-N<OBT(Xs`BtE"HE,_B`S]+eЀ@, `@mQ;4#,8!y$FE0$ -B@"!c Dv8͠Ä*:EdF+?|4I @|BR\@u"V@b_<U,'˓|tD%F7Dx#,@NxO,#H7p!)$J`-)M`It.81HDH p] {3vJ{pA D}P" P]fY[BTZ';5sPtcM+<;4T5x"@T9E)Ys u$d,4TCoI`xЫ$~ʄ By0x\%Q&BnV3 =Ԁ FP@ x n.-  AԠ)@g>8A !2T.ےg; w0jp qPC 2XL`2тY@q !<( N2G[H( J4uˍz,?aҖ5O `xUK BZ`[L(rza#$.tHd!Dg`U  2Fx #)J%2n!֣rwy&R kԱ 0/q#HW4: 4`\6a&hde>2焍f&j<@{3\Iykܰ@Сh-H*㵬gΌd uOgRc8ĩ}} 5M[_9f Y.F>Sx-.,oXVXIx  18uT R`Aٹ"F jU-00~ O؂{f Pl$"GHbp@H< `$$ 5xK^%` LHa&`t yJb @>!@2O,S!@"4Lq_Z%r#XġTcQpdbFp]-U#T02Np[(cEP-Sx*Ђ,0N{"HpS0P) (J6L,VE FSH?=0$ H7S 3؃ȻO#Mz4HJC=2P0^XnPH9Q=(љRhLɀE, >z$0|eV#`p(HN(=hhaUkhF@Wp$8€ pP RF'&E!Vt8haH  P,xY ,`@Qk3`ZY2 .;BFB1P/ȄOHU!np /aTB$+BxZ0 PÛShY x# Cp "H@ 8.9 :IXg x9IQ)P(( `ŵK`4$9 JZ38vXt&Ys8  xȀ'^@ah<<Ӈ8m ml#F a0FM (qjrLm9V8k2+ h!L"PUD`O 2Is<{$Ax|˛zy0Diz $XzCR`;:2s pJjź+XtYdo`#8J@P`D*;H`DY$ a` 8" x8LCSGirpL7 ePn x" "/r;# |D<(K`xܼ\*y/؂ഀЁ  X9s`D;g N Nn)09hbȀx5l4`ln2( ;  x P;j@N}xRM o 2Y@(Mĵ*"XS(Љh e OP\NA6R$"M#aȯE+"R`RP%MRy$_$  ІBT2x!&< :(V RF@2(TV;TTT6!Z8}~V<p~8)8؃ A`лy!nH\( 8 \Ȅ\-3=8U?ث}DPA";݁0s<: 6vC>#HY!Prm@&XW*U E<כ ҂- 8Iz$ D#(TDȤXNESo@vN#؝J`:p7P؇jn OM8h#HU/!=P < X hk(+PLPZY <:p}x(#O 8Dh}B P (!8M4{%3B0xXg"0ptGs^DK\ Qn0&H5$5@8"hxd99`! 
4;Z&؀R Lx .SQ,[;؁UxeB#\p$(.dQD xX " 1؁ʘ#I2 + `pΓ@7o(8>ЃtA؂x: 'g *D(PXAXf sH=yj1m>pg1dD%Qyt z}fhVqW95P .i_#gINIBPg2.c4.3F6irh"IMinWWZ,'ЂIjC٭]].bi(ЁXO2UPx0 HPqG770-r&kpq'M{6X+#$ih/'v6+?s)9f/x>b\Krz3r&h7pg9;GtI4 ?Hsom7$QtgiGw4`L8nrLg( Vs愸e@'I8tE… H @9(P o8P HxXp%p U$Ak2(hCG(w|w}w~chn8U160a8Lx>Ą YD#d*~/ywx;ЅB=0+C#  LqȞ0腐\C@8ywV`0PyU0;(;'z/*O{=b x>1xWxm(7 /O\2Ex:{{/{W{>8fG870zu0z#@zEtSh|||{t&4}Ђ-]X%++&h~1,'p[`6(bP4?+X~+X~쇅0p g~'+WbX1X8qr _P`Qƌ7r#Ȑ"G, I3-*ҵ1XYs >81LH L-j(Ҥ!d5o*'(aZ+ذbAfò%1DU@8.޼!H+/7q D[1Ȑ)j%f&G T R- 1DΞe4"g/ ‡ex._0W﹁ۺSOsGn\rϟ;n<\qm^{ǫ:o>?-.x`~".:hB| ڃ )b! &t8bdU‰3$bX#Jb09v> 9"y$H7(p%Y%(9R>8eQ^Yf[20ېO}Bg9>+Așawh9&{`㜉҈0.Zc p1b*nJcYIBPb c )I+ ƒ닽f l (.*{+.{zQ.#j!nC03G)r//o0Ԑ@<{0 +0 ;0>x ]:FxB фT)Ab>UQZ* )|s^h;3$M̭y8^z&p4g馮`b8Y?: 801G*!S>+m|`TWs  E;Ա17@0q]X?`EH @ .˖ kW) x 4>b*> \4a)@ WBp@("#3G o?2L d8bKD4qnR#W"u RBp|f 8x6*1PT@ D|/sGS ( 1 .|@YF/p$)yGQm(4 JId+u`ڒ#)`GKj"Im@ T Dyx  a!37hEffęz$0P` k&3Sz0c uO ى&)4@Sȁ ? \B`f?:4.LD&u,i^ ST)0+3ŋLJfj U,AmjHj( SjOS Us\! {BH)`n}+\*Wޡt\׽~+`X{KO`2mk)H",c7֍XFXQђ=-jSX-lc+Ҷ-ns‚ -p+w%EeJbj!  V^MSv]*5"j(-oFK"bD( E>:^s]Q;" dd|__w+kB0 aCTO$Fo-+ 8a|!P`RZ4R9bAz(7&-.-b=vPLqREG+#K4HPv!م=(7d|D(`t3-A dpA&䁑D1?̂i0 A( 'A Y`A!%AaQ0#<(@bhKc# gT )@P$(G .oIRI2.B^QxC)(nh:A FhXP _q;0栏^6Ȉ3] T .(` [`T ,FPw!E@ qF̍! D# mWv D#XYԛ Ј F$"!v/` x1W0A{0`BЋT8 q 0Z ; _ R*,IO]I H2"x!FE@ &<@. p EPYp7&~m"{F.B>H?x\ $d  ª)y>|  2Y,A (6 >F=U [\u<\ `?| 3̀X@aDa(:tJA5C40t[Y?@ ?>4ÝG x `@A% [`AAXJDA>\©,' >,A)* $xBJGA(C$, 0A>H- (?B/A ZA9h'*?x0d?,e A,B.C0BB[! p (@ U1ɀ69,@aB\`C C TB C 8> 4؋Lb%?`ȀLAdD<|F?@AbD ȟ5 ?|0FFCP'`2ZJ3r8?Ђl6<4d:@F  ?Hy@0 .B#\6@?FxA&$hBG\@ fGG?$>h P;F?̀.9Z<F\0`)C @% u(dBl,@"߭B(48\ie pW:%Fp> ?4C[e\?e eB^!_* 4'@ $Y '8u"1cBd>,@F@hfiF'jk6flfmw(8#B.a@BgC-@r.?4sF'Hʁhl  @miB+$^yy6?g 4 '` $i•(CB-hX'ƃ¶\hw+¢,*-^_#ۍ-@0 &*A+h tAh3 x@d0 C/ (@c;##=ڣefiFk5co)D$!@0i"(@Fn )" u~:,&ԝ miDMN~0!@A0A;240D7Ȁ@ `AaV?_&C"[ Ѐ\ m_^_J*p0 h08Zcd? 
B.\| ,<]t Ѡ +a[F \h Ap@z `t ?~ĹlF?CP0$h&ѹ+v9 E  pBO*o˽@B(l`/p-iʱɜt]+T,CY&D0 HH8CD՝Cm-`D 7T@킨( ܂@7aD۽]]4@j@=F<C+` DwV 8@$?(A/Y j@+@qhjH-F@?eD @]7 I@@}@LhXp[F&iz2X@-%c1RV$d(2)gm*BCfD.9+txjnB/ I!(Ý*hZ'.|Yb3DF̂7'CY&X@#XCt  l@"s &37x(A1'4 p?A\Zb C/pDJX@9C ؂,@F F@4B#搜="'QD$ LF<@` Iۜ[HgHUV,#D @ 74/4kF"`3$,B`@,X@C4$h'Ħ'XΑhrC((1?X-Ao.&tZAD, B:,@aB-cv;΀.rIt&t"$ (tItQ31wD%-3.@zY@ Vk [$ uX!q_0T!j@#2DA7@ .B?<.D4,hK.0l7Z 4LFöx(DvF98XYY(8c(0? 2谧W?C& << )x{j (t b$CB (HA:@!\(\0dBC dy&nkk&4@:$A*BC`Ozád ,,l::?\@"5c%,$"\Rw(Cb@/"| $,A`' g1)4 HDc< |9̼aYI 0P@ķH-2| '(p@h;T7ýGͩ!7ط 4FܴoD 6~"@(/ $2,@ |^?, |SрEs >0@(4s*.@, (46L0c"b BYY9|v0_B 6tbD)R4BJDPD9dnF  5 qBs3(1K! #1"aa'4TtSC=ᢗ #[,MƜ`?YF" ⟋5(' F1,ӇHɐPHvR7grSb3#a +6TXHx߰H$@]q"-h`FT0a7|pY0RāhH,DV LJ#63>앐pA)@z'Av @Ч #'7@~0bXJ\@""z#X(r"$}"NI|Lh!$@0᬴1Bfފk0š @҈N,!OLhjƃf@EZVbЦV&"Zd2RM6lM7~ԩUBB @GarP阈"!4…b"<:`Ol#h#<af@"># Ph,៳H. va(hD |ؖbsa"JňvQf0AB p .J XH:`0B6@&Y)"hNHW=8@)"9(B4 u(@(˴r;U*[!Uy eQ 1 hTǙJ(;!6΂eEhaYo~][l&gZinA؆j)Y(4&Hg z#9ՒbQjHh!KK" +0oTx18!h&X/ԣ{ hjp%PBSX` OCWZ9#".;P#! ~ 8@2" ITD'N,xoD#)^YŅDыbXF3i-hƄq㏜a9*=V{ XGA摐 H`x`xABl mFe .zÄPMVPBhI2e9`3k]K?`DP0a-0|h?‰48Eՠ lA"Z^r/*\QB@kZ#dZ2` Kҵ2!:˕ Ti#]f9#~ځS/`Ju8b#{!/q@Pv,0ѕ8 g$Xxp?}s1GW)bts1Oԝ>u,FYֹFl莨̯ RVwFL̂L<*6x\q=}B;Q |3@% @jC2PÎXk+'<pB`))DF`͟p`TVx:9?G>!=.?4RF qn|%?.Јþ"(1  Rsʁ`,*P`.H֋?]$`b@ !>ڪX0~0 `a 16TTBT&6<t rPupڬ. !sT^,2P"6΁^ ,xD,`B6DB&Ĕ@! $n@O0 ń  B Xt@ a@ '1@<Ba&1ARD!f!* $zF@5P 07`Qq"jd 'V@DFD`B `&`,H N=a8T@ @T1*"+BbB0 aanFr)n Ha`(~0ro"+B"TrV)*}no*R+++Q""*BR-_8d)f)> yb" a&b)*  !^ UA.pff.."!! 销gr)*8vN!$=N &(VnArʀ$@0Z@DO(.r \X.JH@ t@P -nFE,Fl *P/g,XLd@LD4 s0b4+[X@P!LhTK"@>av 2$ b0 (@J't@ hZ .3?N`2e C d X,RP P`!At^L,d!P `xsx`F`*! H"Dlx`bBVSA  q!T$b @XQ `#  !8@ @*n` R! Ts@VO!4@F!! rn!!  [!&`F7gg(5(Be@p\T8b TENc`DM\ .0Zi*64!TT 6!rp@C,4!r[5!\Uf3^%߸*@5{zDJ!81GH !V{s2;B@DcA@@a0aZ<@=&d PU J$KS9SVj D` BX@ k j( [w\n? 
6!`@HOB#p9I!^ ރqb!W$4BP!`b@nn`PWuS&>!@ 2a .XVAl>Jo.E l ᮤNv(a j,+ B2~f$(,@#bT`9Fp@ѐ@NRYA`2!N8 ;XT4z2VAhH!!v |)*x JtdaP9M @ &&ag@{(@aHA&@2G!@Z`@!4 A2  Z4"zX.Z_ 8nNAK&A0H hxyI^@v  +πewiۻڀB[כۻ!vɊ:  1) 'X! [Sb@\v 2! !SY`@^Z5 ttNl 7 ԡ>{U' ,` ,@ƋcA!2 l!}N@`4HZL@z4vf@t@Z>A H\Km)$CtρZ @z`ާdi2 :($J@0\ Z }z{]`j`f=!0K!`Ag" ul! (f 0!)6)]f*3΁" i`~<]jU0@$ F@(Ձr``R~`|1B`qQFAvP^>@TW T 5@`h!rV0ӂ_z=@\^Hf2^5!z6 T#`0`b`A6dTXlH (AƘu<_{o("^7- ;PK 4WWPK&AOEBPS/img/strms022.gif GIF89aZ???@@@rrr///___oooOOO000***```GGGUUUppp999Ȱ PPPdddyyy!,Z@pH,Ȥrl:ШtJZجvzxL.tQ {N-P,J ;!]d;;";PmY y ;E~Oi2FB;.a< xҚ!B,`<;1蹗B ;rp;D\֏fttO-rPȁU&Ωc܄r8Kb&yv?#<ԐEV3L-#S0CLjQi/|(k;Q [u=rSaX1Vcg2!AVIsd"ܺ!kpker@Vƭy MLql({I{d%^LլӲfIS|)ЦU2vƳh_aݺiېY7WCC܈tު7_%ntK0V}KmWn?dt(: QÇ~L4ls?Q7gLC|$ qw$4W#<Ňpӌ|G4@0`d4}^$?e{^<h[@H:C$?%O0@ge| Pv`)oG3BaO+EaK UP>)4^"#np ‹dkd/<$Fe+B0"Rl ?;Y"ydb163+_Q> giB "׻ 8uM_-p!SGT?L+$/g&6pnCQͰY>#&**Vų&:2clitmKj'.b7@2@!gnܾ2|f0aJ`ܧ*}E| ( O]fds>$FĀ(G;`/N颾0 Z-A,@p(E(*@jt. SK2FvwvU84Մ* ԧsI0߰%\uYfV-Er q /]ԍ#Sp  W*̶4bVAJp&4|#:i,l)iHkۋcR$R>V#+G`Ej[, 纷. j-m]hwmzS@P&no*Zǵ|]/]Zdw`NmM!X.n+\_yuRU?qmMaG;hkv* aDyz[=*<!B y a> 2V ʾr2 b"#JX(d ,zv:Hq 93,3oIZOF/tI@eh4G|'8R3ikяnM="u4Ӻԗ*kWq5@'V#*5\S)נ.@6hGǗMzη ӋoO'dz8ϸ7{ g%W(OWrL(< gN.o8,@ЇNAWAg Pԧng< ַWH7sFW:m:oxwkw6Z({OuK!OЦ?]'>>!,@pH,Ȥrl:ШtJZجvzxL.wn8k"~OJ4C |S4D54 k 5k B5 4RVGD5k x 5 55طBkERΧD !i5"| p0! b"'ɽ$ḙ 0NqBFth0)I_*pMF2nFKI(āj BeJs fh-5*1A4MdkD`put<ʗLU#XXP\HmC\!z}kƏ0PfKZ!(h`AYD 8ԐP%~/ N5ؗi+d{ C8< Z zӨ@:uG3qc<1Dty˟O}@c@ q4XH4 Fy93D[/@fv ("c峌w]PTpA-f[4hd$)\k5x`  h(%8UfHd Dt0f A/r 0t&1&Dđ%YP`@x4Va%V@d(ʨ9f4`  8eNT*klB9t ;dyiVÓUF5G5HGAo KniiN5@<@Acc}P 9)B»Zk xTdd`t5 ALBL(Ԁ :KDDJ,q ~A(Bm~p>T&հg0c#P A EPT"'Wp&P;̘ SxvۚEuZep#MX\UK \Osi`t19٪Blj90nGwQz, J|@   Xz0|0 nE,޸1^K9;1P pckǁeB P_5@ p" H-yN`f(L'(E?.] 
l r6ibQC;Lt|Ag2\A $vRd"RPBW#Გ (Ff2#HS:7@c/Ÿ,,jJ azv- X@$yE摈&8N,̧3`zNu$I J\⮂H 01I`D[H`Z0B@L-1HJOp!'J=mR'uT : EIs-!nEk})j=l $4MC,TH u|m*C%LB<Di b# t3M't*M1~C8$b'8@:C!hR aSlv<N 2B>5q,ZhsDz$6i{!0t@pb?@r E8GDfqZͮv^7W0k_Vm.c[!z #D@9'E)13(hT.hZ,mLXU%&ipnG"Y!,H.9;fN$S|2/_{TXNպi E7ؕ#$x]&dB^%)1t +#%tx<}Q(n3`ꗃI&=0~ɔL80y8:a8Մ `<G| @VA+u;5rm E5EX Uol'D~όc?<}dxJ*W$\XP&4&8>#wlF~W lpƐчy> Pqq3<329h X#b9!3#X-R|dBW GH) CG A:zt")kC#t`6sqVH 5b6ht:zbh9cr -t4Äj~?0c-E"h glizq?wa#" SiA02x P|*r8s}8׀8 >v14myD@-8-6 1҆Pq'phYSw{sfX r8+ߠ u;p-" W 4H5~$xoh8!G14 C w l !P`  tÐTT &lC 0-ɏ2cgFh }ǔ|}EjxVnGHX (*_ДKćؖm(|artI v|:Y{8S@PBq. h /())V0yِa $R@ Rhp@ B!C` a#s]XH!N9Vf{w.C@z dCS`-W5x 2 wyyxXx{نȉ@x  m=7My(6QCІ(2¨ 8 !-`" PEz,jF sȠw@39*Ԡ ېyi aRz7Tz]7I7&4E`*I&"Hb ^ r馪@4&rm)yIyYxY{)Sy}yz꨷w{}R8zz.\9k``& **zjP`0#򃫹`Z*J~ @'hC4b04"˚yj- @2u! h1) jyߺPc©KI.q-ТkX2 ~';u9Pj"BL*P;ew׬h CT4$y7_ tj}J;}=?Ak~C[K gYzR@۴N;whرYVkX`b۵83kRT{P+70%׷~q/`W`+tiˠp-}{{ {Z{ˠ z"`wۺidkйd[PG$в {໚pۼ@K {oыڽ [{k 0վ1b#r ";>."7l\x;e|GrwmN==4LeB|X s@J3R@H!EM[2rl~^M 4={ LQQE] QlFC!e[ն<~mҀ5lNZ]ԌySfLtQH.Q$$fN%Dhrl}ڎ}S0[k]]m۝إqMoܳԽܛ}ݭܤa]cP-0R]}-`u[{Nn .  u.A;PKESPK&AOEBPS/img/strms047.gifKdGIF89a omm~}~衡///a``0--LII___Ɒϭ{y{OOOtrsqpqURSussފUSSߞSPQ๸YWX978fdegee㟝IFF>:;GDE@@@vtuOLNjhi~EAClijȟQOPӽ/+,FCD<89cbcpno}{|蚚ڳ,)*ZXYի\Y[957_]^wvvZWW<99][\YVVb_`{zz񬫫???󞝝TQR>;;000*();78WTUфighywxmkl˺=:;픒ʙoooXUVJGHXUUB?@懆# !,  H*\ȰÇ#JHŋ3jܘGB"ɓp ! 
|H͛7G}4ӄ˟@K„H*]ʔG>QPI*ՠ%i%4UJSӳh> iװX VCB߿r ԯ&.RqYZ.հ$r03`N KuX+Ƒp,IkVB蹶uB"1]AZm0R$CK/GmwpGá Lkr8yw -bG#B^S9U yמlwc5d\wUy aYHjbHkLx$pYIHU7EYSU46&&eW#M-bJ܏x!4]o*Q`AD70ax%NARGF Ap`MEHe$h|`— iW" 'Kڷ%+(z#W啒W_rZ%$ij9 9rJCj|l& qZbB#'&(gHBW籝JlV% yGkΆJ]mqjm+ O*5ʲn]k@ K0q+, >|a9p(F$"Ib $I.l$)b5W =4CO(=#PR0 ;]B"I$ZgJR3U5d]]bvEX5RaCw5-́rE'7iPHJ XB('ȶ}^kiIJ$.:F[F° è>f+MWnLECi K?g?+.)d-+(w{DF,=ZJ"jkz$!}(ܧGJ `C"@>Eb* TXAAQp!젚.@ImP ATaPPX[$A G8ш w+ %魈aS""!$B]" $wLAi.CTNE##4<]EY$:$(wR$Y?"}d %$0#*;b-%$dBYe]" gQV$<"W$K:  a!*Yz %d g#6"ŐBARN8i^Q=h )jO 'AY259 nHa*J"sd!RkTs#ب:4cH(QwqCz0$ԓ ^DV;6}%T R%Fҋ @ dADJPwm".ɚu {?q9M[uTN1Bv[%Ȯh+4kt"ԨlRm= ]aLB1Aޖc:R?V d8R74@N2)pvSpGJ6(D3FȊފDWPpŨ| yddT2D"n,e,KQ01oV" |Ťe]"JK01+I{AG%D4z1:$vaLF9?)ecGA VBFfRe=f"J,@iNކ"5 `JCF2T1qdtNְ6mu_&Y8w֨Y6znO+2*1 uqg|ʌY$ʑiAS"' _!Ֆ3S^|V#9m'RpѮZi* ܵNfLRgVOy9:\[1N 9IGT%y[AUTK0SԐ&. `K*T6>3&ZErmɺE۬vc=QWB5AokJk@[f@u\G{V1R5 M7X b)G)+n}jWGqUaWq]Q9_;@TGVigY[RV1LhA~!D$L4fMEvTA-ddZ'lWa+Ayv)t Z{!0JPHCĈ QŁEMGl~s!n2xb6+ROAfbrP䊻Eg!n$%Md6]hۃ[!%58qABTMܸLYl60d~ǎEqsc'摏އd7 u F: y!dH,r8*~Bv Ԑ ty}B#il7@5FVHBVhJwHq zKUC=PUHy+QDvLv?4==yTY\IUaRhlXF0HMypy@IA_[=KNaYCgC el٘9xy7Pi4(A~#  Yu#3%3a恚 e5ɚ ~PYi7o钮3ə_u4)+69ќ18~#69[u5)Eq6W.YWٙ7 s+cCC3Dp&sVH+3Y RXݩF$jIpJID$'z!#71FZ4Î̖qđ W~ C"gLJDI +i "*R/b,lz[r(ŨDuC@4ᤸDrD)Ys*Je:gZ*R.#j:g (wC)f% rzz*+Q,0Z)x^iA6OB9RbrKC9IQ:B:+A2$HA(?U%Rb%~zbwb %+JJښSB1 Ӻ%Z)1$jAlDүaeM#"ڢ!% <e [*ᰆޑk f*"`!ڲV2#%m0{KA*N'rYqRK%0r1Q]'! tx-a౴:;&.+$Q+q\Qs~zH¶P2Q&M'"V;+[#1aԹu&*BɉlY{KAjQH~a˞{6A}a z[[۽J; Ž۾uѣQ MꜳJrۿ&h7;\mى4[d*\l; ̢:{\t51ڱޛ,Ta"l+; "ÝA! F,B9-l=\0|“pX IK;\ZMEP}L4Z,{_W[ۘM|JO K^rPX %&۸dq|ZKkV1,ɟ.HJ:wlE'PL!Ȋ|3 ąh,[>a6%8cC Mj&lƗ|T@ 8؜ڌ+ @'6iIzl|YS"QKlƌL{z^1dH\ܳW[RGT͜1^d 8P"dH5\7WivS'Ld=U[dciis,+`)@`)M0f*b5d,ϧLMBL\NE/Mqa-2Nq̜Fi+Bq=|0CI-Kp~_!} UJ_ԖRKdycmw0Hbh-ǘ5ڌ M v}׮-$ @Dx p q P P 0  fXTRl49T&D%U;a 9ى@|E ,}%iPͮڼuͦͼ iH s <SP0 1 ҅=&fSݶ|UqEu+ñ9ʁVa1QG4mjE],] A@A 3o .@4`N=sq !.M#p'nPs!2q1n8\g. RVNb0D-v0A P@ 3o)0!S S>~q )0!>t"V ;顆q51t` dTڧ ^ Q@  3o 痀@.c`q <0 vN}> P݀Д$^OڞɊ ? 
_ `@ O9~ >..03 p ýz.`yPnHAQLssS$=PТ w=ip`^ `e> r?;lnt2(1Ҟ33?oձLod1kV*lLOmHײ\yA i/ۥ韦]nG!E<oJ0tf xLIX /curdQ OINБVE(П^%,dF]bȿH&Y:v+Hd0E~or|0H&(pe P2/2+^AtѨD e2Ot7\\ժk!\D KP^&C۬ꖨŜyAfvTrg YTЇ$B !&f0` /XcdLH9H*=mb "BdDm-JA@HCDQSbxE,JQgP] `*yDF#{F8*fJQiZ*N`P-B 9p'< (t@pkn8$-$hA4Q%JGnvQ EY²7vxDb1y*T]̉Arg*%x)L{j&*Mpz$g9yNtUxǚ9vPH(h@fpoHHK 5Q*H&XRTC.a YbLxq`EM@OC"Rt$$%IIKb ўAʊ*kwڛ' DRjTtOTrLǦ`Ҫ\biT4-@E SmH bx6'U*Im`&kID)XV Z փ7`!m@ #ы^!:16G(%,$~XOTapCw5?Z uXl"չ3{ JUH HXc*hYO&d]d#NnUکʹJEY?$ 9Gn9ƃ86gg(q]Y?;HKBaG MAG9dT ?ΏD٩I/rRƖ{&8S r&5614! 9'`r]HO~YEmα !00 貈/@i>{1 >2+$PK(05TS5Vs5X5s9T@? !gdAD&bѿq[y APG bzP@@7,&A+™k0㽄r0_@)ˆ$ @@7l̂ȱD4 x$FH^"2#C2%!t8IDJ|<яq yoI! pR0hJ\EVlENBB J٢*.MFb,FcB+ZR00{ iT-=OP5+ ZҥW*O` 8ÅGtȦ 34]6`:dD˄f6SVA \4O0"+֐Fֆֆ E0 t݈*&,M8Uh:@U P}"A`b Q)0Q02`S\7XeG?4[\Hi=JhXj} 8i5PEa(<׽j%\C՞xX%lZl -8O(T&І,BڈE-H?&`ۘE]eHP[]d刎 %]d*-0祫ꤲ]}4\AR\˓[yP*\G(ՃPzϵFj08[J/H`Q]Ȅ8\`MѼQ}HU& 6MS2U-d{&YUy((+;TfSH0 @3\ʄ9 `XʉlQ325`M5KS@f`+nf<ܚUY_]Ȉ +_aP 8kè-bl&HDE,,8x*bKNbVK2\&iAGQSc-˒23^ $фE E4߼f߈4bC~L4(xGb8(pJNP'ifj,6-e6f  pgq&`i!BEׄEur`.dǜʭ"})`؂~f/'cIV@(^ g Xgig3~@B_f0xJ~JK€<104 PSxNtjhlњ4hXH6aieQx0M(rм{( 8ie(N X[5 Hh 8MX3jfzpNX2jveFc І_+k >E؋Кe& Pn脃_ c֊b?fLCIlmEPWAZ) V~M [_PU^h6[UHi34@l^ 2H/8S>o^Xk?u@rWhlnJ=f^ f^ Hyظ PLEA@V $XjxoIEyYMKÔ&pnso^m20/Ш,qO4|هcXRR}U#u]9fV'q!O9i@٩~W*bвɲ*DVm"KKtPPY>Zd(…\=0 LJBr"TnguU ʕj-wFI#&nO<+"DDQ*H:[{ 0 A ҂5 +c%|Ӵmt$RI0,G5yAhEǁ6"ȋr2Hlz,#{w,̽ёx$PwV0V{]>)b#F b34Jr e>9?xoȇ杜)0-ap0Ej8c癋t!}tOKb0(o&({ ? =ήK=۝d] ~;EG{ d*]X1>YŬ?VG֚  g3 wBe3g` "xD^D kXL3QVѕ$āVA:), EBX#b 9mt|_v-9I ]X ;|M]xB,̂+Ţ]ȈLH_-i8PЩ޼`+E)PED$`Pqz5&Ă- rB`ADB(0_$ 3-& Š$8HP7(\ݞo1A3xD d&kవ !# Bna"`EQFը/18@=AAB58B-bk )bALC8;M*Y_LX0*޺A ,@8lťC!`@4@-B"A-J 5`1!22 3&DI 6tAX @"@$@8 7`c`d:^:Jb;;cmqܣCP)C-dB&?lӲcJ۳,2 U QÉYVzaAa,⫮2$E( ka+C>[ْٺLڢ ,*P,EkE VU!蒨#b @.2@bv%E>"rjg6ef2D1 RoL,Tl0cیJٸBhz6iD2p,N=_Q_M[Ěo&Ts<@<0iF%u$]_rNpRU`ph0c&`Z{B%! 
zd׊/_0p`nm.qm (:va/n!;ֱ1hq"oMnc{Ƞ&!C("s"#B"Kr1RrkXDb$ ?t@(3X l/N1/^m%,ߥ,?&Cp%,hjC8ȘcӰ+4N35o2T:FrFg4K&=G-#>G=2-%O(P%'e63fMH+˳{t[,4pqaMc?#'2_Uq,V=_3D 5R'R/5S#I4IG5[tOF@3BA8_0IIG2KOU5HSH/cUvj|VH4q{~*Z۫}7T7@oFR-ۀsE3C`ߖF@ dCPFpD`7iKEO*P^QR7*7DBb-3 g>coQ\e6x5WC E, 8 ~#j7ĕC[`#SCۯCPXd7dcyy wCVKWW`I;[i|ӊ(D̹ikAAu2 J߸Gz y'3>AN-oW/::j__?`zDL "yzzThN : xK;xL[Lch{;^#v<G>kWFC7K>~{wnFoog\oA ~>XBz^/4> q˾A&,?ׄ 0?GO?WO|-i(^w(tČ hMջ~S&j??G,(H ¢!&E`beQ%#AC6`)EqKC-2!!Rˈ$ $‚0z@AA%ujTSVzSfc*DX_ڦeَM+Yt(dSK±kHAH+J俽}Yc 1 gŞ~ ZZhܥkۖ6}խS \^ߒ6XHAJ %ڤƣtR9MRt :b( $"1+6w$Z^:JLPW$%z)b* հ:b1ˤP†ɤ,k/1I12M r0 +OU%+B&C3@3J4ج *L&LjHY(٥4C@+$ Cm>eC~yxv񳡴3kUV"I>1% JG~?baL؏(iTZoUcEyؑ2䪒kY*0[ DM,$ ⒠dy[tG3jV﫭 ʂҰ"J*)j5K_SؒFV̤'ݙQ wZqhRH*;r{nM䒤 ڥ  v'C P4+EC%(8Zթ&AȖ*t6SVu .Aۅts˵y@QY27%{q (,|Z& 8r*rGhDQС j.q*3YWDIdsa7GEYjSzE)Qӡ;yG? YH>eECl >9P@.i^q| (IYJST*YJWcR̸:F&e|ڇH*)&m8>M$_\RQd-R= {~QivDLf ed2yLә4;"RS(ɣ:-`_<' 3499z.9YQ<4>bMm{!#T$?SPZTJrL SC_U]iAaFX .v#@chbi D$BdĦ&@dAHњ b[AM˰Da> l+ո?ng*$"0p<]S/deF,f!JҦb-6)6ѓ 3 IjS2 kmX>_)J:FZv>xˣLGf΍a1v҃P`2|bN9>3A)=:l=1Ee/8H̸)@3j]Aj$J XfaG[ {B$-KT>qi EE(#r9ⰶ!^o_7lB$R:zvw(pdDŨӪ2®h8vD @8/.a66Y'=SZz@\ti`(aY$>Q;aWI_RZܝT ;LSs?[yD;"AYֹG\8v ؎֖ȣ%/jsASJYݫi]m`RC+}W `^低E "-$8Ͼt:䤋"(#Wtԝ4 x!BDʹQӤ7M/ {}#؞,:a`A+ RQS_[E@ֳ$p8#?qp@RN;P#P'+/T3 .!"Bo0n..!R/AƬ.ܼC0 P  P   - ̯NnR ޏ0O0!N`*2@#Q'+/3Q7;*. Zp""ya/\RNo, -1*Q1A.Ζ2MaF1~p/.X@dG5)ڦQ+Q37Or/|c0sPk1P0J ! xa jŦ2%SR%W%[%_&cR&g&k&o's%;i Ӏ@/NNp^U)}0)Q ӂ$ *M O,' CAJ@OK``   -"!R !+۾+,,rrDq*(6da 1$QS"A$Xrs !VAB3w3,㰠V 4рsf+25bF99TFhA 7m./-P\lH5"9Ҩb^TA *,r 9` P l`m*t(S?3J2" ONEW4J A44]`"T$r9CF3:3odCS>:LA ,N.``x@`<hFaHSO=DT4PP5QQ5R'R/LtSͳ p}C괢tn4?U4VUP PUQQ!UR)RW5MӐ+G=Q3TZSFZ;tIOVVUWõW5SGU]sBU3^M^ITHt_5WiaWS\CƠX!H=YI:\sbGb]bM!cYȠBP< dER d 0@Ơt BZ8a=)`y?<p r =䧂Cս"%%K叉Ǐfݜ- B jy־ Hנ ` 4!~ֱ߳޽(% *"~A ~` ` j85@4/ K` At,``4 `|J*^58A5caCAt3OB!R!!Mc`?B 4;0… :|1ĉ ?O#;z2ȑ gIJ Qd̙ aNBrС-JGAѥLgP [b$V61fJ5"=LD 7"&225E$EK+NC]DjN 8P)`D`ZN&)gE,q&aj$llʆ5A6BX1b򾛗r1!' 
% >"x!oC>ww|G!p"ܽ7{o)rʘY(ZH" :7%H (&Ƣz-(2j$H7hxDiے  "Jd=Fɏ7Xb3Nfd$+) ]o$1$qPM#;*\.'Qډpބ$J"#$ )EYoC,LgUP _n*R A C* CJ)&kB$Cy٧" 6Y)RCꫴ:mJ| b~PZ.⮊F4 ڻ'$鈨py-.$oSj`򩻏 oHE8 jFGp@)2Ƌ<06l|7L/Cs"' xV+Ls;c,I?.m4b,ː@ vA?Est"w_\2$smwp)x՞}6?n8ʏ#N1$uvJy"7z馟訯zʉN{h{2ᮻOC||O_z_g}n}/ _l/kE.,`Hg*\J~7A?p$,jַ!l0:[c } Z9^Bþ$fBb3 m!EL$pL|FŕHmaTYnšho\9q/em+cG.d}k Hq$td96?҄|S 2F1:DF6Ar#:N:$ 'FM06rIeU S9JRZr"|d)F&E)L4se̴^Le$1=Ғݤ`9C2J _T*Nm&*'- ĉP6ن|ϊA#A=?OF!֌D%rР%T-юD+1*xurTl[(h"U=!#-H>7 xLJB2y\G&^HM1SM!&[(2cx-@WA꽪J\MDʯkA_QkUj)?B˗ِ#xSbؘ T^GMiU l`S>xm۵\VUQMH` RlkG8ıJ|D2`V'_!ȸVpiIMUVzvB]mʄ,eVkG϶ͻoX@GK\ifBVF7nQu(Ve -޾7|T RFȮ(\L0RaX>4/v`N{xL?H8vǀnJ$zjp%81rng=Ƶz,#dԐ2-Y˄2z(!]Y=\q0| $N|% #YHKIJ89JZ7 [10awQ :8gS4~aWHUS[0.rC`7 xznpųuzR.]^l Vvwv a Y;o?RA qm[#h%9&5uR(qQ.pX`uj68;H~UAdJȆ;tU1ol3i ~]7ak;ג/@^(,(u8Uъ+uHhȋ苿4=q(,ňj˨Ȉ &E-+0E  `@ ĨE *Bh, nH UpȎAIhh+?-(A? АU Ҹ!(4)2ٓ5 `XIY +  @1HE::+`Uy АD/I,'Ytɸ 2@,@,yȖwɖlو)i C(1! Gy )fYВs ByyAI??  i *a x9U0)Dn@A UnБE@ُʎ J(`DEʞg,j)ʆP顫;PK; PdKdPK&AOEBPS/img/strms039.gif8@GIF87aX?**?***UU?UUU????**?*******?*******U*U?*U*U*U**?*****?*****?*****?***UU?UUUU*U*?U*U*U*UUUU?UUUUUUUU?UUUUU?UUUUU?UUUUU?UUU?**?***UU?UUU?????**?***UU?UUU?????**?***UU?UUU?ժժ?ժժժ???**?***UU?UUU?????***UUU,X H*\ȰÇ#JHŋ3jȱǏ CIɓ(S\ɲ˗0cl 0@O@˗/70_8`A&TaC!FT/_|$/_>~o`~I(QD%J(QD%J(q` _>~|(QD%J/_a?~'0_$J(QD%J(QD%J80A~O_|G0~7| 'QD%J(_˧| Ǐ|د?}?$J(QD%J(QD%J80|?~+O? 'P/,h „ 2l!Ĉ o` '0? |_>%J(QD%J(QD%̗/_>}'P>O_~$J(QD% O_~O` /_O߿|%J(QD%J(QD%J`ۗ߿|/߿| 'P_|%J(QD/߿|/|o|'0@}I(QD%J(QD%J(q`>?(>o@~/_>$XA .dC7__>#O~'>˗OD%J(QD%J(QD/@/?˷o | 7_>%J(QD˗o|˧OG_| /߾ <0… :|1ĉ+Z1ƍ;z2ȑ$K<2ʕ,[| 3̙4kڼ3Ν<{ 4(B~˗o@~/_>~OP!ܗ/~/_}B *#|_>}o?}ϟ|˗/>BY˗|۷o?~˗/|/~ *TЍ_>_} O@*|_#O` ̗OPBnԷ_>_}˗/_'P`> H*\ȰÇ#J,o߿|?˗/>#/߾'N8qĉ'N8qĉ'0~ /_/?7qĉ'N8 ?} ܧO`|O߿~o`|&N8qĉ'N8qĉ'N|/_|㗏@/| 70_'N8qĉ˗/?W0_| 䗏ĉ'N8qĉ'N8qĉ//߿}_|8qĉM4D ߿~o߿~_>}_>~ H*\ȰÇ#JHŋ3jd> O ϟ~ <0… :|1D7_|/_>~> O@ DPB >QD-^ĘQF=~RH%MDRJ-]SLѤI&M4iҤIS#|hҤIs`?}ѤI3f?}ѤI3"|ѤI@~IfL~IfD~&M4&M1&M/_~4iҜ/_>4iҌ/_>4iҌO@}ѤIS&|ѤI3&|ѤI3"?}G&͘G&͘G&͈OM4a˗M4c˗M4#g_|hҤ%? 
a h;rXbL#$)!@NIr@#&qn , v6D(FQSbxE,fqk#ihC Qc$cx0邍pMu@)/0HfEO/tfQd$QQDш)PZIPF]ld)MyJSv$^*]JFqS2)h1 as 4pAC,R|dI&tpD&@ S$<T$ؗzx."Pg= 4JsiQg?O/2$hAC6= :Ζ-SD8`92(씆pX1 3hI AAD^DGҠBd%8`=,|n$xDUSjUzUfժ8SԠ0LЌPt@N^$T|AЃW)w7D=LQN#jyGr5ލ6y|sKxNl>e c&X&t\vpvP\!9x9y6q9T7}؎dK|qct+F c80ph2w+B 4A ;9oSQn AѮtkqyUi{ iQo^  x H,.:/[;< fI xr1H'@1 9\ܿ<Ł/58/(*BQһ(?A3U)@D?(H0V:<&pB .P=PMP]PmP}PUBP P  -P P Q P@=QMQF,PQPuQeQr R!R"-R#=R$MR%]R&mR#LR)5ܜR+R,һG0R/R0 I1-S3=S0%S4]S6m*uS9S:8S}TICTJTLSKTNT4zTQU7USR=UTMSNKXHxUXђX= \m]YU_uU`Uq`UXE` def=V0VxidVeVfVgVh- iVpVoqst#"u5vExUyebuz{}~WqHV8 %X`X(؄=XU- X]XmXX}X،XXX XXXX xUb՘V]Yٜٚ՛w|pV-W-Z=ڟeڠuڡMZ|ZUڦ]ZZ}ڬڭ5ٔEٕUY%ٓ۱۲eY-Y [m[}۴۵[[[ۼہ5֙ٞ5[_Ee\a\Zʥڮܯ\̭\ܩ\~]]{%]\ س}]ݻ]][ݺݿ]݀;PKPK&AOEBPS/img/strms055.gif XGIF89a}???ߟ///___oooدOOO999@@@rrr``` 000ӐPPPppp,,,!!!UUU***>>>;;;111蝝ZZZGGG!,}E)$'̪-!DD(ļ)%  ۈ 6+ÇؐaĶy + >( !\ Q @ɔ, %]w ``h"GbCX̩ի&l@blZ1DJL hPZ'Rk KSxes"HpEˡ @mD SJq0U %d>%b5 fei"s_Ck82wEkT5 oUx lǫ7 ET3R~nDD|*C#2 { 0V2D8m2 ':ADb:B~3>t;<S Ǣ#*PPCRY % r`ʒ5 4t@!7YH[#mPf, I"&@@{BW_s !|v 05ZG ɉ#P)*&`Sbzp@bA.Z$Д2}']@Tl@DR g!St@Pi`@. @AMYiZ@ dmٖz+1E4"xt  wrt+:BZp(|[Y5ʤH0J g @gq2-O6D/]ԁ.1Xh%P2AprI(ьN dYu&a'Kj ߂H<8Z(_s3MS bA J^&!(r #ppσ p@=Ы+iO@KAL qD }Gį(z}u[dlt-[l*)9dic?'֪!VzS >$" 8@D  ANoLDA(xpF8B$4!!PXP``(Ұ6t! oP)b AU<~_-j2j8E T"!X+rQ(1f"Ϙ-~QYc86UfH zJ(1ot#!GF1"%HJ>Ё"( >xq"1lRVؤ,"򕅸%-syZҗ,9Laʒ4&.)b mA,?: aÔ4 U~]ҞVp4Aucm5`S('ⵄ4a!1SFx%@t .{% 0W]ؚ%1m_e0X (N}[ǑJX+5 'E1nLC%T,ؤ $\=n2eg=,"FzEl9-M^u)~S;a870\aciC8^6b.>A2@ P@SLNHv9 `AG444U,<M86%Ԥ h4|bJsI 6>FluZb.Qr b59uN$؎J̌т#I[aknz @ ΨGw[cR7ӛ[iga\cf ,#TZ 正%@,y0@1*,ɰU5N#"#~e/G3>%D' ChyO6}tiʏY݅ɔ)Oĉ{D`R5:@1UEnHOՃ;#NX[= @2W>U|(P,w7"PI{\; "uteϋ`sKKv4] 53GP}!q}'R~xhiN2+5s_nt0j~KhN/^Ge 0w1kRd*lQaP!qxe&'jdvRb6@~T8^ g-pQSA' e^,X>H @/}'ppnD0akA`gZMX L3R(QS& &ysPe ^xJe6|iC b`7=M&i6gGkl&kLjp,=⇛Mbh7fyS0%teoV5 3pAp@y0X XOd񦊣rr%wPpVR +F73IsN$:Ç ID"0# rzer67*A8D=bls/"zkywD< WQTy`&/`nґx x4SA I 98 d4= 0PX(8% tDQ1r$H}`}09}2! 
~8a\ya=ie53Q*S:ܢQP@bP"&^zNHT mbFO @.炟ɐ@J`,*KZ:y:2~#ZJp#U| #bp89 cJCC@ Z|4e zE ֢\P.=S$U`P' &:::bAJ38{`r:@ "CV/%'ko" ybsʛ @ꪇ&>R6harrD5XM7Q"ݳ1}rI( gAPg^iɚ;:8Z5g9OBwbbpiS4(Mb 7}ٍZbkƹ배h4Em쬉i`Vq?,( ,Kam+h|6K嵛gzAnl@0[AK$ Ll^?KKj/<\GJQ޺µm19TU'o|*m\tlf9Btk7`5}< F@'p L\|Wf.31/ (X.r'Y. upv"LMZ^ 4 2Έ`҃E rўk(ű.Sc-'S537vmhD؊bRO?S,#/W帱e&g1b"ňWfz2e.'S'ң:r ,rp{؄@G#:-Է$x, ma9MRo}ѽ=qW[{3E,17#:gr1kscM('9Nd(!ؼ82ta(ƉޒQŌBm w3<-}F(`5bp]51#i%d=mad*B,b7r+OA S o]Q*`"S܁>6 8 D^Gsbȿ'怶n| 8߳څ$"a2KJ 7!a1,m-!h^s(oYos269|dM^dC=,T8(V+ޏP̏NmJ6Hibሂ'! n*j9p ^5%$}-=g'&]Kdq'8:k;P9{5m. ̾ 5@ )Ճ07VB<=/S?Oh9- ]d^(bieT+6-"B@!-upVd6" ] p@kd*]_4awQ:Tёz %y}YM!QWYhI*NP]&y| J3B !,AkFz^bf%eU8˨cΨ`^m]%T _#CKN#!I di,AkZ lHiD"g:g@m9N*+B&EmE&:Rf P$ (TY٨bb #+p Tbn:RUA4)!5 0h@T oS1R.k@#uEMk6XZ+8 ;DIe4+ 42S v^!6>"P@ En힔m&r%YJe[uMe,@7@w&Y/+nBD$5A诼2ewvH pqg,_E3ӊaADWK,CDF 8O:%ʂ#G"+X1؀!/aC@d8 bPl6' h~E_X-Ȋ`ƔvUfMD`hM`c*a#A*!  ;3 -BFGo'Q"cش'ܕ }$T)%kW(B Ñ.{ef@ \M*:mL tSa*kx-GNMW`c=l0$NΈȁSEP9qDRQ`|`a&d^eJc( EPhHBЂVg~ @TA@$H%kErPDBjU+HFD.WDω ,lAᩊeIM @(@aV\V!#G'AIJҖ- RkM*HeJx9Ic'i>L 1%Y$Q]*޸B)0Dc}0!H\b}m}^FX_1VC6雷\Ej+㩅jDKeA\8㣞)֥AAMж X1@V[ 'l*UWEU6ʰ² \E ]:)3BlM$]hQ8@Kp&B^+LK^Fa+Uq:~ֹbdiD56Auf,*X W~N_aHa?{+R5MЏO[Ͼ@?w\O9.9Wv?8ڟh'X7vri w ؀~j4x|ׁy.7!gwV?0;fF{8:<؃>@ gpl1؄Nh_ OP wX H )FH/'Sy $jtCzpQwqZzXq8}Xh懄~HurnHMH1yx4H4h2Sx @i 7xCЊ>auXo~1 Dݳ5F.$p;@ Q5lwJ،#7gY&J " MB.\lGGewɈ8-茊xױ xK|Qqh9pdLA3s x'OPbV8[EN-ZOG8o+rr#3hm8 Ff T[ 6 !pnH  382 09QI3Zq&Rj3Io,B0$CRC9C/C 8W?iFqAqB@M)%e>1 sR%b3)ij[3hjh! ADƁAadD'I${}YK0WI)g+DTQ\;(a&d[hi G}A3IQyV)D*JT I ӚAS(OI褕`Bm 2B;Vd0QPqP:Qq9`2W$$- 0@\CnCAgBx1_BhС8aLLaTi_x(x=| F0`&+T/2%+#^G4=Y$8")W )s 1 d +"2PZ(PK/斞 w ^aHF,P(HS e2-c0$88 F3 ѧq,MB@[F3P&8u@ɢ-g! 2T&2BhU`@ )x.T.j 3Yf_HuAp x P]U6ѯI`刀a{rn82: XM bc"[,: !cz"ԭ2 ZP#@% Xa`2.z&;,*T%#{" [))[蔩M0{3 Q|<81loh,l˚Kac!z(7Ѣ/N 6Z' ' /rUP";6ꙗ ] i{Zck[bnekjQ2a'z f-j9K ;)Ixa PPˑyLrzz3 tA5EZLoېtۼ9Mx KC4`e_4ؤ;ٿԋjt!֞1A&|! 
{7zE LH8y_5 O|Bq!pXÇp91uy(<$6WǗfCyLŗU&VC\fܚLŋW Jp` IbEpvQF|Z\ȠEiȎ +Ho9 qHe77SȕȠH威FK%Cm_3zʠE= v!ʈiX<aɲ@˶L9z&hgan92vLu$LlQ{ȩ Z_|A2=,CB.\ЉPψ@2Xj4M=)Qb"< '], HY6 >EыFs'*ngAB:nF4ƂN q29A/ċX++aI&iMm<ѤX5 |u'djB Ryoyu qp`^jp2|j +*}x̎˶,)2/b2P!ର"fvdfeɆP\1s} `Y=K!˻IF(W)'  +1 7bҋi \G ȜV.Z<=PGm3S VO|(a(c-^e%& `+̿6=gQG<"+.w ͐M߾ѠU*xE[/#,e")P>&rUǁUڄGnrl zU& !!P&Zb&+. _b+,4!!hcm}t"<) J/[! "X;NM.HTj )W,+ Ta.g> `RTQC6Os@AƂkb" +=r<`T$ i}%.c ʔ/kTPI%2ҙbN 9S(U(N*")3B)&ZxGћ`+g$!}fI*gYqkNcK6$QgFU44 d8-FUڔ1wI`)~0*ҭ޵첉zT\8,F%SQϐ\ 3n⡺ˆ=ļj; ?0-~($R%c2R/}3>,#?mW\z`fFH J1AVPj.bL qc +et=F ==ָ gim1`=uckn:oqWMd S/pכzӟ4mMeٯhpϢ'{uQrJ{@  ED  ECCDDED  E Eר쒵D A1lDiZ,ѳS(4@ "& HXC2-#M.exӱETvrY;ScE6v8c\@5-NW 2WR\5p+4\'^DގRvRH~~y$ K4gv`#n8HXn)4Uy9PC9I%Wo=8/(3E<,;A艩Wl'pڑh~5>pKLwdOHAY$]0E +1Q悼ZQ2 F+$Wb-ޏ@[JȚ!j7v3f ?0"5JQ(܅e#3q dh4]1I8/DãF˭\ R#8~oC#.Yw$'I!m"Hxc;ULo45h#$R4JP+k9 R d̚Ve0[YfV`˴ qyK "c@~\hdfPjck\۸ $เL-0;XlK*nInY8-7Eآ:bX@< a(@ ]S! 8rv7 U Qb#'JM-ڱ1\qh'8HZ"M}jZlP;ìz#"XX9E tbCWH+묨$Hfo9K(  Pc?t_[[`B'pUXտYD`c&Gֲ$רI@8'Xu-ԑH@ê:'Bu8ZG%~5u*cR6z!)¿[j%9djщw%SlQComd.*JX!2:L]u bv ɃXH&PFZI!X;oi[ĨnpX)Ljʄ2eD^2\.c9]W՜;t,VpOL1`PpcוZD(` Mn A @Qkhj0 WY.o 3Pԅ0ePMUY֢ j\׽VDfI3JLm Yk6s Gxı-Um:uaNJ>Rk~;oz8MzSRw)\IDݰ_Զnqh{oN-x/@ I/}iKc`gYM8oFT"*=-5RmU* 'M^.`X]`Mw*R> n:hm4/M%4bB5 ^]%AxVnJB* Ch1NC=:8h0 :P+NHԆ1^ӛ=> 5x y=:P! ^ HW (08/ر7>SD⤀䕙ua7ˆ:xr_O./%c,4xMlf95;rcW:gPTo^'*t 䒅 ?v봣*%0=]r. 
{=9<' <ērx=(sw)sRs:b=O_H찟$@P[ gK1-S` ~ @Dl'!"q=0YQ(,QBl#' /8 ]S[Ʒ Q"Ce>,Y^q8 " XDQ:6j|#,#<871m87bG+GyKz095 dg:HHa&+IM#Cr"%ÿWxRRHeeC%}L% 8D`H'Z` s%(3\Zi/5WN0A@xX$kfNRxXsV*̤HmS4-ɊU ]*Fc8 26P,^I@4ʹ~âa2 #k@Hz'̤!9)JrBh3SP0UF X$jNMJU̠7՗<ZDiSѽj#$&V$W2ר,Y1+H({ߞHv]GOSIgZ L:gPo'lc%;Љ#(8 AԋUXϺVע1*DAwD.uSfNćN+~֏Gڍ7̄Hsw\M$Zv%WR{wC G ?V@$M+ThcMaBӞ>5<()Uqa!6 A[cwpuqE֬Uҭ KQm @\~kxh:ϫݏ2'*v 8<3eѶSׇ{g 0on&o'>r?nlRo@D{}Wp G.G% pkLMwpONh5ȁC?% QadEh-t$A}l(Gb"" s}yf'Y LVZ;bv5vc9f|G~I]xNb X O<>Ȉ2 (Yx&xfA!>:Yh\6|; d$,s=b:Za\\i/`G@1ɤ1{ (8xZ-U 8:x84%Dl*G(8)V ,VQb nE Wu옑e(<x!9M3E&*03Uד ~XCtdH?7Y{8Xf|GUr0fKog Pe^@fhIjim9oy0lYkyw Hɔ]E] ^YKŗ?2甓)M`I`ə陌𔳇@"i{T(gqhH@sHi9iY05Wy8d{.X5>y_ n?7#ih1@)xs4@xfE7Ҟ2)1ꡚBa)z4]8?R0%.h!Nb /Q8))cBvzSE' "1ŧQjS& }җ%m9١?9VzU53[KpXv{Fk#DW'p4⧱k밶 {34 EطKMKsY[۷y/CLK>+EL8Ҋk5q 0.4zD R41JG7` ][Xzn{[t PƀS8i#a<+ƦĽpC@8PJBNEP(Q1Q]c8鿧;fSր?1kdkl|E4/Sk퇷?š,zK۴u75 7,PV́O3GhR1,*W3L\iFQ̑<$=2Os1'AaK/ws>ˉ\Ȍ̥ol^ks7wC0êPpɠʢ<ʤ\ʦ|1Ȱ <˴\˶|˸˹ɋPw#SFkL\|ȼ{M#`wl<|؜̼ kQЬv1͋\@|S>b #jg߀7BB\d~a5#ֵЪٻG´0⫗vLnAjc,Ihm3 |rA/ \ KܸMn?B$0gJ ~*U", xYTm V(UxC1me<VTa$5P$F(A"h?\Y2A~=ơZA [p\ Tmǁ,-1*)E,>&!ܭ P2!)9Jn3N^#PA~,\J C[V$h\kV~oF0o uN^";Mp}Nxn"PW> p $鈃DzNŹlQ^C0~ "P,^C!@ @aDA%C{1n\*H@gaȱ  -rVώPe4'p~KaJXH )SP~%#Pdv6U(p)`-2OONk3_66p(R6`R-P ?|(MG' @BO%Z_ UV^@*5-pPjh<"Pw12VCVB5%hP۾W4~7AYɜ7'T#O40/L)@$p0hOOy;PK^X XPK&AOEBPS/img/strms012.gifz*GIF89a^YWX878URSOLMɄGDEzxy憅>:;䵴hefLHIwuvjhiUSTommɦ0,-OMNGEF肁LIIKHI_\]\YZMJL/+,b``967SPQ<89ǽQOPZXYDABLJKJGHb_`.*+ZWWTQR@@@ussgee```0-- օ000ppp>;;*()vstPPPؐ뜛Ṹ/,,cbcqpqFCDկπmklpnodab][\ƍѬùxvw:78rpptqrWTUecd{zzé<99̗kijXUUYVW}~qopϢigh# !,^ H*\ȰÇ#JHŋ3jȱGcɓ(Sv†K*cTʜI&A*Wlse΂TIcF1i&&Әa -:f鿪Fҳ׈T`fs&Lp KU8aP2Jdˊ)ifʍ{ d7HBV &<' u7;-LAMBDd G $y8@trGR9Xa 4^2#76d ӂ+a cÛq0 hC옠aw3 .G!]\໎xFp#m$dAnX)o^,ho ]hC'HAfrnGŀL΄a(ZS&%#19o'&eaJRjk V0"Lт``@ P H6 fK)Px\%ryPktV R0g` b+ r(hy})-P֗ʋ]h3+ uy]jD-lӠߚ+82P6!1,JězfL@@iAy9iĿr^TY/-VB抦zspPV;؇H!m"7M6 )lBTN[*ݷS( (Xnź{{.$)k~Puռ~IZ2–dŤP3 iu{ ٞp~ Lfipq'DBy+1h)MRTd'8:x$ނ r],]u IF bXbjab; 0j@jL<йrH.liH!]!F1,(n FvFM,mH?y 70teﱇ(pB&4&-vLa 
9&*Sd!Pes4*iJx85eZ:ɳ71xbpfn*H[s4e0r:SWcM!eItU/D>$z:MVOWX:&P88mXap*7[s9t=Ԣ^s1sX!;SKi_xEf<5C6i'1cY)Aq(y1 A7t}(178ȏᎣ!>]V^kx(8-A5-AY:AFs'jAWWW a`&Fb`q]GkWt}QXw).0i2& XP1yw7Bjx(lW@MO0y(if Ywy7M!:|7W1Y{G{u{9?AGܸA9i09}ϗT)?aRyc&ceQqQa t7C~'LJI'DiIeZ<5ؙ1 r9vpop1AaN: MJu|B&*,3GS7V/GHn@$|!XLx)pB "|IRhK$0P;pImm۴1 Qz nu-o3O p{G `f5RVQo  _kG `Epcp{pбq+6/"`p9^[! b[E`Z| [@>/PźIP p ;Z $p:[]|!â p{ мR@ Sjg@z+ !Ģ # Ы Kp J,5k0zz {;{5Y¿37jض}~bçj,`|l>PxJ%~J:g87iJJ-~K72L6 O0  p;-` pI2 eIcӃ.̇%x36ȩ+5|s\Ǐ PP@ ` PȍH< `2@Q ~JGʅBa8dxgտ;ðdzlǶ˺˾ LƌȐL` +c?uxȣLl?#xDZ\KzAnp,tη˻˿\ǜ̌,kXB襡>Jc芭Ȋئ1(]2α,Ǥ`,$` ,< ҩ+zDƨGHZǣYҸlH,M= pQ=$]U}˼,-6U{0 ίԇ P˛PIs@0`` 8p 0 6 +e]טԲ Pp88k3  8[Pj,0r@  p`  ݡ ˰l1:ѯf^;@˶mܞ@  `,ͽT 0 8@ __.;Pp.aj?el=] 6 "O` BE>l׳?Pa0 ` ü3UʾK[,fN^S] ]P`?? >l?7P>o^80 P BqО BCLziIU 0@iƒ~(T"YMI䈏CB0JZYM9u$&AXR M/rK⳦QNj3~YRǟ?i "T!D-bXB,y2ñ"pysfU{dRR+e ,YljAeWۺ3N2N83AlÇ'Vq۸sISP5&P*Wo'\֩_ QU,CXN˚kvS%^uX}yկy3`?ĺ,XXv0žL]*daaP=: LX"kAAUQŘ]ЀP !t*F VFуG @Er̩Q R(Ҫ{01K1A(A2ל7kT ē;P䴏 A#3eP$Љѩd2Lσ6 QӨEOCԯ(U[,TXSm 17H7Tt W{Y)4N%`#0gLZ_i&7Ȩ)30&3]3]uc Øu^"yLj`ԛ ;j#7wb1+x*ӘRpS٠َfπ 3@yKWvGɍid* ;[q3 {77afˬ3|fcܰ3kTl~Œ3+SSNZnx70YUK6iC3Gi"*kbcߚ,8Y6F6 1ȸ ,Ht7s`c _/t 1%~/1&C+&4~2;*_ ?}X ivi ه mY|;T҇/Psة/y_{`.*dwdH&0x1g 4zl%2'BM@g $N6UIx|\vɜdre*qJj ,&K&T.W˚Z0i"L\3`2kLu1ә;$(CQṅJX7MpS$g9 LU HNxSg=yO|擞ň\H*lF%hA zP&T & ʀ2ThE Zt3hG=сb8pRT+eiK]RxCyS-;|Sz.jQzT&UKejSTF5_ԻzUfU[jWU~%rPzVUkek[VUs+ZH{ ^Syla9$7R \&2jbӨ.z6X( Z\4!L&fPX! ex `ɪo hԔQMHBd7MB8%/xa)B(IF&rmMF&l í]Fw)E 4÷VnMt I\Pꗿ-f-! ](RN9<& . 
lRHL0h `+ SPp,%wop0]7 T@&X4mi``mzPQ MR/8% [!)a,m4 .1f>J4aQhڶGF`2Ha5ѭ w̗)K9H rbL]N{:4)fhXf3mm0ȱH++S [˘̫SD[͔X)k N4%ڛE%Ce˱(iVso[4f.Nkmv;ć?pքkv7Lh(qk.MibkI ?PQV@cC qr9y026KM{uL8|ɼgs]c'{~vC;v]s{>TLz8iwޢ x'~W|ğǤ2|-yg^=y)O'}T^zJVg=5z*VG}E{@½moްyX_/We1k51B ĬC>XB2CRDCQ îwD@D *3\N*OR QV܉T&dYԪ5CXJX$2DTD1^qC8DŬZ8Ex yEEDXE[$F1CQBFf\/H2@MF1cP!GV6(&`S\ǯ“8(;Ge|/a9\Dn\F_|<2v/`DQa},TcD8gĪ*Ǒ,;Ł~ƬJǛؾ0 zlĉ{B5hpōd;}%=lDP:Z*٫E`':īl$ʫE̘BŤ*@Č)Fc*9G:L H͖\LBǴ**2'<$:F|,ȃSJi<K|EJ˜F(HQGK=`z&̃䪈 NKK>K̉{T㪷NP,LR$ ܪ$`ժ 4KN\PjP{$P ߠ Ʃ`6/PM*|*hHCѯ L;sΨ iQ6pҰQPL%1,EL!P1M'|Q*- >5b3-]$&L%IMқ;%mѶTD&T$Q E5:5(L? &K͉JS7aK S>@a >i?S]Q%8LILO]OEL Q՛HUA2PFMX-V N*IeP]UU:d*VTDWFQ5@;`*nf gptTX%Vv;ozWpm{-pāݬkO4U E=>XPWU-GC]|Mݪ؀TԪLX]X|uY5Yq}ׁeZFZ0+:L-ي|ٍ]VDUĻTuZvRLE ӵZu۩MQ۳-S Ӽ1ƱoŕU[dۓ=\)y\4=[w)[`6eʻ%\\݉ ]<\-+0ϩԭCՑe]OMYW]o5ݿڥXeʔ]ܯ(9ZhLޥM`)]נ|{$'(8ŭю5hpb:TDC)ߑ=p׬^ү]^\& Z̪;^71SM !FjW =^/WҬt/8E.-ĉ`7_ի|e CtGDU¤l #6bQ|m I<#^ +CM(h`ݛ^ ƌdUdD @X☂QP@?CJN. M&4:FFRLdJ䚐C)@ )0YY /@)be[JbF\V@VN:_.*.P/P`Nf.qerSx~ xd)g^]fUM.OPqJa SWb.@h/qOJÿldS`i>Fy6i&efdjtyNIFhNVe4QLeF)8gNjR~kp&MfdHK~>襦*dO\h:gJ~l.SFgfn闞/P䱖fJ0p^HkwyF ~sNQƄWgyfPƿ/Y)xec [ eHn|dn/`enV6REefLV+_PHoT ;PK *z*PK&AOEBPS/img/strms049.gifJGIF89aa???999rrr,,,@@@fffپ///YYY___oooOOO ȅ֠000pppPPP```333񳳳GGG***dddUUUԌ666lllMMM IIILLLyyyכbbb!,af@? £ǏѦݡ޿A 4 H*\ȰÇ#JHŋ3jȱ- MlH *&[ؔr$ 6D"@ղH$5$E cQ&eVf:I2;Q EeZt`3U!JSR'Q HͶ e\B* 4:b40pX9mbPsֵQ&o2$7ռMZ$ -X#`PE\3$>-46XƎC6˙ ɽࣅXq7S+V܂XҸhJ=;HZ .qX,i`mPdu9u]! @Rw*Gy{|eԷ}5!t P hAvdWt1S] 6E݆ ډXՙ]H1EiT5eS$Іfg2p6hg5 pL$jTZXewoBV*2lM5]W c} !\6YhaEjgH兯r+P+`遝*f֩5IF:DeL)HJT$PXwbukH)E* pjrW,ˊqF /§)\oފ$Ƭ y~2@+_ Ѯz `(Y, zrpBƔ R7Pl8̳>JtH;U2(|BUwVg "5rtv' 669 gFD~:+ guj7Ғ;P2nn9㖍 wo08 A-|+&0ϛX􃄈qdS|! poI×o觯>9=%ogDL:` P̠7A " `TGC(V0 gHP @ sK&:M @8C!ZX!*f` B$گ. 
/p 1D1x̣ 8:+C#PQ##qD:$ F5Q|&HMz2(7'HKdSJ9+">B2% CL2+D#MЀf:SLgh ɈeVD4k &)Lb <3pLBܲ0l@ A(CKHޠ !/꓄ u?*P T*ys4KXAs0dfrg2 8KL`+)LBiR4&4MCi6Y؀iʀ ݦД6LuNB2*| s`T=jRӹ5t!j#tP@IqRL0+'X@-eW`,Lh2P_TP2hvL7pDW(5/e[Qk;~ J{15@X-Ck[ضpk?H !SPm p245AȠ>" 8&@1 l!;elԬrt\-/W2@uoV`ڗ/_ د6 lY+ f LPe"kdNbepmqsG^m p@zE]0E)Bvf'b@ cYLvrF*e pr61Hu*Nj TY/VZ6#me+9-LaBŶ&leEwю.s#iȾg\pzճB:R. ~1O$s\~M ⠿h,Җtq+D2 rzlR e6 ֗~eLk ,m]/0Y7ܠ^g P$9fxuZ;F7ٶwN Cn.^xM_.FaqH{U::Ho#- #ԚҴἁL˚M2VѕjWu6Յ 7-n=Jg5NWuJu:B])wNVS,49wX( `/?33K^}!g<'Ё >.:CLxlz e(Po\%\z( 덯Gvӧ>{} %(:0;~߄қ7i1~OgG{|pgzwpH׀DX>GRX[iXg} Joׂ/$}|&zOX'}0p${5H.kxB8bTkD'eS\؅^`b8dXadՄ~Ml؆npr8TrPHx|ce "BeB8XxXoׄ+Dw($VO{؇Xz-D@d`؊h*|cǀ0vw iQw1"xȃ FqȘ HF{z;}=.dxvn PkhB(K蠏hgh瀐80Аטh"Hb0H I0p0 3Y5iz'ǒ0  E2ɑ I.hx򴎾%)ACY2FHɎNipe{PǗ0xy"5T yzGS9D8) 0'iAٙfp'Ii@))`pі-9MFPpix Hqiq4E5r{fЎ\OFFYOq[y0Hi@i*IzBI›Ȕ0DYpLdOyLĝItI JJ-Yx%9q)ԟ"EǝEjOy- Z jG:iO|lpɣAJO(P#V8pWg~+* 8D7ʥ*qk1l:k:QڧcFXO 0ڔqLpLx56*yj*ّ.  8jC)?Nz0jYЦj&8tk7*rftaɞ4a?JRI*<|ʓ* T-yΚ) Hy@Oߙ)Zwf ]i꤅PÊډwPԮQFA P:') lZ Z9Yf ˉ%wY{Y'J0; kp ` Z \ 9YyY0l n; pkRkQxow+`&PIM];z{Qgj3+2KǸ+|K~PW)_K4ع銹{y[|(:;[{kL%;˼ lۍEP ;[{1۾۾O@M {eۿ+{k/ \ Cl\"| S<,-ܼ.2/|:&|(\/Kr$|.LNPLQR|X þJCLׁ+ēzE W+iyf4j\l n܄pL7HJv,xǐd~̽|Njk4,qKHǖǩ|oJɊP x||6Ȥ|ɬʭ'X9ː,ˏLȺ\L|˜\l̡ܛlEL˛ȟl˨[،<̦|ć>JEF6kLͬ+q5ΑlM^]ȎIC,ڌH:ͩJlBm%s- MZ%mB/q˹--/Ԉx+]ӿJ(=&#M;.1OgJ2}tN ?sD]ȜJY4ȚK":h޷ށh[\–+ ﭐ>J~9ZO0t p* &N纴Rs ^92;ڡPzc{#d>׮fPP s xe7ҺWK~zJYKq0MCLV?DԏM4z[iDŽ$**.vjU^^uexwOD w6>CƭI36&*"B},ʮtꥲDL#JJ3oʧP7F]z w++OUj/PSES7utGwus%V0ҪwF`WFS%D٪sZRK"r:D5LDɩ% %eԬuu X2]vYP"Ӕ*pbTqUqeQYxlIutH\hPlHnpߒz YvLJ;~6J~9N(':!M߶g6te*_vMO"l8l2q<`8b*6;r8de dfee%f%%f fӿ֝ )%f ff @ %L.!B<QFD2!pく7rǑ;D)Cc D PL5WT䈨ţH%-\ʴӧP=B%Jӊ]0+*̀4W.03sm)p4AZKuz lqq"&db"\J| IZfjK9!dԤI$ɼVc8eQF 52ry2/ThСNuSSEcX1Sח~)H4ؽ=3G:)! 
(s5/?$N&Q[8 q<0RdQQp݇J'$>]U-{L}pW9#|qg_OAⓟŗێ~>#S^##_woDmG@2#AMpBȝ*!L''H(A(L W0I^HPDj2$@ H"1y9v&:) {D ꉬIlDK,\ ct^5/Fhth*jŠ 14f0`xG"| HHѸ1p H-Ozx(H>BR&AUN:吘H$(FdQ R,K{4Y W]D~Ij򘠨%n\j*łىF,8̦6nzS7ǙI1 Ҕ"qQ4+FMNagOp}KL@3J0<D'JъZͨF-xKV窖2/t(DA)qLqbd Ҩ̴iD 4T5MJwyD3b⣬R *e 9TE-aFV:8%J*UUBJ:hEW[ ȱ"5xH^JuTn\!qN=ªs-C] sab䀳uYjQUbXPe]*Wc?囲*U5-i3 (j#63+ %$n$ƗkH\d%&n"@Rh~J)`/u-1[ʊ]2G4_-DŽ^F @fL,"ʘVk>P&[q|FUR{CMLlXFz`@@X5而+ 894Wcdq?`<"břaa˟[LT| \YϞ0@Y)$R /W 63;W}i5xC"[)pr2P X#ڲ>Up(KBENƥ֘*F:5vϩ)QU3Nd D1'qeJvP଺d^geU${\-a@ AXXA c%oOpČeeXl yGjd^`;7?Ú+vmcGwȮH>R B lvIAS|c]{*Zh /9bAZ0w/.`'9Wֻq3WEMpn_|3IW/S?P{/N?2Ex@竆DLӏNAqK|!ȿWUX xyW$D 8 @E؁ "!#h׵H'PB >tD:ET`<8Df}}}TDSAA R@0B:xA*0hBJ0D^PJ0SzPzRJ`WYJ\Z]cQzeUjڥdIdL{zZ x}*J JJ0j5| jLJZpA0ک *@j:dK t{?0d;D 4;yd :ŊAAj ʬj:ϪٺЭ*\5;B7誮pʮ㮔*C`fZFƳ;#08۰ : 9k;;=;t:PGJTLI(;-3+/K7k1K '8гMcuK먂 *IKO QS+j5 Za۪e;c+gKkqЫLٲ``٣Ԣ!ķ|{ `+Jd2Iɸ⸏v; ; d۸NcRkcһۨzgۢKPċK^n'x{؛ڻ۽֋+{r{&{ћ]ٛOhR.x{;鷓 g#!u+ > k qSk˻I' *վ!lIhۥY) ||}/ݯKkvXqMI4=]}}}3_7& ܭ'蹟I՚-1X{Ém+}" ' ^ߔ{eӖ!~+ L=aM' !2 $"K5n&7㱤<~/6JJz AN&CN  Qn#SN@!OZ( `nV S?hUF2 B ;/IG.*]x異P$e#*LEye.0^萞TV~P4^ g^>nI = 5nLf.F  qh:jNϞ1N~NDu힔8l$-N.ԡ!øhɆd ֎}. , /n ?z  _RRzN0MBL~<;=:@|םp.~e ~HWwWw6޷IFK?&MLݸmC?^{˒ yU[/Kb03S|7orBt  C{]|O8iZ=`ȉ*x\+MOul<]yaM7eCq-Փ _o/DH/q0_Ͼ}&| B D}(!sXve7U %XV#=R$;PeV\_CXF{$Z.ꕨ,j 4RLg!U@s.)i䑭Z3:x4> 2X2d.Ef ])J6Q4T~chdֹȖtih*)R(InF1q2~"-z6jbDi-ɜ&y?^"iA"Zg]J4hvܡ$բ+p.>WGrfڙ5w jt%+ٕÊ㶫^ h{&kA HP<~ƀFj;q7N㯥Ha->$Pb hN}{fN.Ii⫶0M VBF =$퍋Et[eywDڲ'Z}۫fL\:/^FM`6H:K i61e2oo?=kxp[ۘϸ7/2(ۈ$h;-8aV7Ǎ^DrowE/vWnyHIwyV%eg~Hwjg-^~@Nn= [{uLtM_6!$>{{jX.(?x 56@NLckk1 PBvkQoegvWS -# 8 8 4r\ 7p?+(<)(+w &ppR*1Ps*;p%a eql؆w{pXqtHWkW( H*"* @7p d8@/}Ȋh| h x jxsQei2&Lqjxp 1 _8`(9 ep&1 * !d`HPr&@LQXH,8RCr8pX vX1h1@1` "`èp{XĨkȏ ɨe-&h IԈ 59e@< H* 1( xX8 }8`_Hy(Z"$sW/%#*@e 3IxP& 8  @o#PY#IJYx7)XXX&9eЈ{mIɈdpY0umh|YXdH8ЎɊ_j &(^ HRh/QU i/`pXIyU癞)Vh |)`--_ٟvwoٕ8y*4pu ʠ$vUZ5 "m$\&J+(.֢2J4j6ğ;y=j? 
|$CvE hG2KvMgO 3m\ڥ^_ڥbDXyX8Z/pr:tZvFnz|PR~c]e P 왒U/ ,g`FG pfzUf+3 ,e JU9 z@ګOz ˪;իoJ 124rk^)0劓(o p :P*k*)mAA0Pz|!q -8 bZx() R [jZ8 ;V p1e0)fpPIZ9 72  K X7'9+W0=f DI;f@2!OT0kk['m;q8a0`Q0o-; K1cKR" Z  PFRȸb7j K#* #?|’,1OvhϡHϨ(H|)vs9!:Y˴k M*jѥ|)/!ܫ%'ҵ C~2M4 6>8':ӂnFDͣI}J5Ll*S?բ!Z}O^^}`4"jYF8ehM\ֿ&wqyjmv׋p{Ksט v Ӎ@؅mv2ԏ֑*َ}/Dҁz =ג]ִ٨9٬}+ۣ=R۶)\-@'ܭMܭiXjn=Sph/ڽ-H U)FMj%^M  Iȓ>^aa=.Qٛ #9HL9Gc[ HG @ЈNN"Ý ^])˨:<[L3!19~dѪpML+ZDL ?~yY(; Z#ըT 1`,ԭkζK )P2 з!6'k#)&2a{֚tݽaRz݌ \)0:!}@粂ݏ# `]=Vf<@@KaRqY! -✽7~3;?CQGa̖GT٦ͫ~osqvQz}Q? p#x;a1:qɡP@H!qp}0_aa " S g ^!!O!򫭌5G+`XZJN&N>} MSPO _skR?.֑$Os^(+Hcn2 u_䤟/~&v균"/.ŏB/g]'/3>ڟ&ؿ54ثwϒy_n$ O99oeO&ddfee HHJ۶ڻ朿ީ 4nW5ǰÇHQ?p+jȱ#C ѣɓ. ˗@I!852ϟ@?JtfУv~ꉴS2JJlJ=1ʵ+TÊ,]˶رp[?ť۸L -'Mj#d&Ĕ9,jPɠmrkYhnyа^[:|U[a!w{pX[Js|ʭd<:+ vtk.=GWBW)b^~Sg Ho  Fha(>ۅ'MX!qǜrhI֍H"*vb/GF3c{>VcyCDG>IPo2I@Xf\v`jѕaeHiPOznG`x|1fzgeyɨ2@G9-$8i^*X`Cޒ0A֧ʀ vad za K#@򪯱lPf 2A+,rʰŪZFr-.,!:0َ*,># )KSylϧ% : @q%zf`ţ@ Llg0$(x2pI8"tf? o/ ^[!2q:c&Fd 2@._ ]L]7- 1@efF t 2^6hmR G7L\m_5["e:YNmu]=cS4(e4p ;!\AAqF]~eRD +9QRtHQx4dH[AҒ™V4Ft4/=HePԦhg-ir꧜ Eժ*P'DQH kt} 0epk뗾*Ѱz@j0ix`K:m a?5'Ĵ4)Ȳxb ٬gP(M˝TlgK=D^U`em'j%NP\PUǣfsek@@ w=YU` gB+eͯ~u_wi=|`sH ;{`4xG|$NQ00gL8αwx(fKLl ,G.cd4yP&œ2(W$;PK:1FJJPK&AOEBPS/img/strms026.gif,yGIF89a ZWWZWXǑ񬫫>;;uss0--LIIgee?;<䭬1-.vstֻLIJhef# !, 'dihlp Dmx|SpH,ȤrltJZ,zxLfB˰n\|l~u~x#}]|'{zdvEs&"(&ի   " # wk FPA0`@*HQF 6 }pWHȲ^<4pp#4 LˣHjAAE6ٹKhIÊu "b†^QK.^pE<(a*" (tkl__] 8@ ur7M80<Ҭā 721K-b).1Dm;^ :Գ^((ҫ_Ͼr? cC, (X RN߃v 'afXޅit~(bs!h"o%h)b]-(cR1h#K5ި>9.=(,Aid(hALBA8䔙hETfyX!`FbXi&"\Rl)'HxQAyG)1n袌YAF*餔Vj饘f馜v駠^hh@Ωꪬ꫰*무ڪKwBeĎa& Hg6(@k6Qvķ7+nhG;Ңm;DPJZ'  DQ&F/k,`*˄#F 0 7%pD@T{)W DJqQ`2Q<3D\)\hLą47[ / 1zp#9$ Ud@E}=00v4U}vS#Bc+;ZU$4:_ _ cN0?uF^ԑɋZxl~4=<#X#D`پ9V ļ1|,#)=$ 1<8/c 6D`bu(7.t%3\# ܥrM<(C5a6D߶KCKRyR#7xPļ1]MUNsE\5UMnD<7MNsD9ENC;=φSC=3xAf @JЂMBІ:ԠdטCъZͨF7Z[S,EB*C:)Jҕ⩥.LcʦL6)r,S)P$ԡZЀQل(IifJ< U-= Q*JhMi—ʦz`TdSX=0ִiLqr+t'iOq׾FHڤ pNJQ]Jl7x0TceD!&G.jh@i,}i+6&/x!-ζ?@6Ds(@ Axe%P*9j@^*P' AB2P@r KgD7;"Ű. š>ZV 6CX磶 4'B8X *V¹xg!l`\ sLi">G\}. 
y<2[:b90yb} FAq, `F^6TEp`@" e9iӌ:jBp@f:r$/ĶQF' TѦ[ˊmf!᱀' `RCPeA G=Q$8K@Ze} Z@l26GIz[m/C\%1L}kn/avYǤ[m=:N902^ Z.evܑ7͆"i͊vl@{G_(^Z,*C4!c(  el `M!wZ!gf E$ !doy1 ,p Wz&1~Z[Ptw%Lkݕo5.ر{,X]H`AؑI%+ H}lW+_ȌGöC(@4PWԅI.,n>P_FC%qCylZK"RSH& QPV8gdI6Q~DGH~ ِ$CxzH6'$0a xQGsc5]Uyuy҂1bv qz mjh$P`W%d&d53f j+@^GC/e ΄ox&0*vLppo1DoVPywlL`nC{i0aT_- `J3Fu346055f<#q95(4V!3oD G`2p&Vr}#V+#~B`dBa0ty !'g!88Oo66`dVqs )8t\gGfAĠns\CF/CQ>hg$RC@w fw: B#x.4CHF54 1hB+qa4#G"qFmkBviAi$y2d5gg !MT`sg-TɖD0)8!$8X>-YRGvlLx}gAtc $|AwQ=(ַIqx#6YE naBbfD89n!1@81m*XoQMrspPTsT4yE 8tFwD=aTP49|@8u-  1:yezso$H[v3T; 07#4JP:fc[8IHT_2Zk9 ЄF/%JKCdOV)7i}`y=K{ P'7T; y~rm&]S 6v`†Fn67HmjOH*8`Rl_ vW!uB0R ;@a<  hcg@Ww5qar2]haZSrf_-$!>fao zcgDeEs7y7K8ܰIs#9 qsŤb礘~coC'`tJ6(TITPە;>8'Ÿ J*%\զ@ SCEn!1j0QKzc>w+l{ !+)aҺI*+w/}|A3Iny4 u)0ĂO`D<s{ep<`tddns ٚ[D qF%#V_lcÆ5b6JXsG% ɢ!IW~d6p/G6KxƩ* d-]F@;wK54ֵzYf -h !xhKFn: {.VT>^$H !($OR`M5-<(@ujrUu%i!%mE^| bR  &(<|FRP"\2l`$e|Ɣ5 ,ʗ!OFX̋pY6lȫFR,W@PÍEdLϤDlViXVvUWUUЖfM}maVR#m`khfV'gk&-=|&!^tN.];Ama=Em$4pLNPR;=-$fuZ\>ymp?O^CB h >68Cݰ[c6Q` q[DI,  @K;u>o-/en; A3=S0aV! >Rѳ>fX!gJ7xHc~+B.N1G>81dK3n$TJVhu ^6Ofzh4sf6Ѕ %M"2D{~dT7}qLߐ01 59˽/GOFG1&  68%D.Lg뢐>8 HJJL_."CoQYo"[W_?"acSU.ie/"gooq!s]_nkOm.uw!y{!O!ā}H_!YըO! ?_?:}<וV-?To ]VSϿѯS?AOe׏"D ,zdQPl}K5U7k F'YP ^y±<ӵ}}?!1+"%Te|RʳjM-m˷l6e5*=T7zfا`XL6 :FJ]Lb@frvZQ,zJnUr6¾Ll~0Rhl"/;ft$a0)S_Mp0( | (lT`{Ya\Xx$ D[̩?_U0c c/!C$p q$T8P ?6)PC6 `AC>5 y`C2e)0w +@4M4<0{ZR1,Y*pve@ʒ 7v $݂AS OʝK-Cg ´{D6:2.vu2OH OlR9e^6@Ѐ|}@4 2kr3DZr'6r|@)&,@h|PbsN"OAŀlasˁ0l1]\ U- !pMit!NB6'nDɄ`ԪNq6l)]1Р`?Q&hҝI1!K"/ ':`&V|rM ZrR*]J$@(c;fB%ʭACikb"!( 5isSXpʤ'=p/\r"TxF8ObW(&;H(4E`DȘb0,$:F 2@]q7t(J4 $tNJ7 9&h>`J=pΐReG>2fJyS&r.x\'ɵe2[&7 W]q5m(!S&GqzBxV- lGSư?R>p _24 =Nd xDy[ UlB Zh%4$o%p oͩ;Qx n.M84Ã72LAF7aFV-7RuH ľ/!@tM̷zw-;cF*43Wi%!NA 3 ~@s I\up"sy#5=;KF7$ݣ%| +;sM0_͈߸-,qf">8NثMhJarA]i0gK~ȁ뻀D8']3(: cEcܒ @ϑRO{J#u9(wz>? 
<@@!AAbBFG@DJHDd@G~ud|5i$6Ld5]5{G5F5L;T8(Vg |y( Agz (*<>( ( gJ}Em΍ hǔ@ ZMmȃB@zh}-ATFtQ2d MMu]OMhprs- Qi@&{V.i N MHKEVeE\ژh ڮ |{9(J)S&exT}DN *8Ԉ^ iHHQ?E(UD8I8P4ZG0C8D3 Ȳz̳}hf:l4tIkI@JSJHM4NN'OK/tE_)DgLOR?POGc 0-{WuXXuYXFKYu[[uYWswvu\]u^Ud__v` D`a#v_kf&vc;`cKd#[ecadvg#R~C 5=6`T"36Ki_hmPkkSlno&np^j{oGqkprs?r t\uusts6q;wv;wlyz yg7 \omwnnwt7\7HwrxswwYFv?4%jGxiwp7p,8YL8mxktbfqv7wshO~_SJ87&ZUʇSs;HWj%3 98\BDO4y,@FKf_7lO ^Lg 9ٕy{B8v F%6}{y| X+9fùf#/d;CzKGݝPk"KzCW#'zMrC9jzXz:Czmz::'<>{ 5;:M#G{&UqO2d938.& ;esB;դsVN{&;;,d@F0IY|, 2{-J( ̪AɑR4BM]o{|fST nx{ /OG5| }cǹc<#}ңS`ǻqn, uRkzs+=;} O~@/ŀ+#}{f= E S}S#z l|^ޯ@sɪ+R@Rs~|| 0}ɉ0[,Kw~A9黻o)q*q'={; 84'#AYKcπ{{3{s?o:gK;PKE,,PK&AOEBPS/img/strms057.gifMlGIF89a}忿???ߟrrr///___ŏ믯oooOOO@@@999𠠠000 ```PPPpppYYY,,,uuu:::111 yyyUUUBBB|||֦!,}B!AA!#  #٪%&' BĿ '&%p}㥉 0P HoSzjȱ>n7 @:=0zPʜIsfef6,]f7CH*U *P`)*DC>`.mJB"T۰@Dtʶ}P0@wexP@&)P^8V ' `ظN[gAyhsR {lu;%PvmՀ!XMC dk[ Ó,BHC ѳOC!Ӏ'ay0;dz@~߸' !1ހ% }Y6蚂Apʕ]Җa`סid"~#VLi#`2{/҆f!DS#c&=` @Ad3ApVvK "1,R#|pUO&?&nz98&Me[,R)}e0R@DP8hZ!R(lp@wΔ|NIaXJ VBthZPU l lr]jG2'!Wģ jk&L,l?д2Av P!j$CT`ꛂ.zh Tn-"ad$C@!sq݃E5`!MOTwð[65.{E dNn0Sm'K3F͎?: ;@=, sĈq?kC#> uă35(x~pil.ۅ .`V1 Ý%˯(X1{̔, !yi$3Lt> e0υ`!Eјt]R33[eУ,iC<[^RE7(!PR[zƋJ!SjZ4i;XϨ'ԴjV aO֌ulWf| KUd4vࢨs(Vc*Uf }+kn#Y2||>Zp+Q`1 M[|3q`85A8-B8-'5A̜.9![~IpjFrwBhɟC])is\_.Yln+8HO**SA ;[8ާgv{nw t;_ď]g2MH6!C$RM,@Q } 1k Iac%|wdwdx$C;Bm`|K:aa(I=u#gFD*cx_}'^pzր"@G!D/~f mSVWq`)1%8HҰH}N6g%{(wz8+~ tł%eia]U (b]]B-`X3 80*%tPX"T}B,YCD[m}f*G) e{]gPpp7"692(%\x79pl4>r8 ' x>r]AHOa& 4G%r Љ@B|(PxR(%xID OLz/80Jo(N\"O;GXBJH tVBvCG Apt[_xR3L㆙@m Ea)=X_-x8h^j3JXY]oAFW- `n ۦ~A0 31:Y%H( A < fJ6ҀO::Ȓ)A-Y2@pVM7/UQ"IEiMZ8[](nmuQ3:YL^9X"'`msQ1hdOB-ʅHG3 草(Y H2oJ9bC,Š%DyfCP3#-rBK91ĩdJW%%XRqyE#-A(U6Uy~WHi*WqI @ mI`ivRڙָ]p(*,ڢ.02z  K}:<ڣ>@`jegm8JLڤNڢ;a1)WVzXYJew]J6W#zBzh꣍ .bweqkʟGsJ  ^ j[קGڥJ ڦb wl ,r&Ȏ`kR(*6ZZ&@A:t&0'5k6aC*t-2/NԒ9d#4U*`C|BcKm9oy q(" T+}_-z4º , Z‚Q3oVFb*-xw-oQ4_d`_# kuVu0>_ZH%)E YE%hZ4*[,i [a S;2,,pUYsYy9QzȰV;µZNh ._ (VTF@#Ap(DC˙h(ԸU[z| d3工⵶Z;fS S_Za`' C/o@P0;+]M(x_g'{`Z0p@pjb9fb1א|PjPy۪ 
#M$P3MFCGAAJ`໧;8Qdjd6iD3JzZ2 榼jv#k6)ELjlhM" I ܫ *< L6*;KZr[V3oϊig"FS!o!EҪ < |H8BB20,HʬL,Eoƍ4dA303"NmvQc\7 Jh)@K-aJȩK qBh:f-- _±m4%@#4Ȓ7/8r\c%R Tk8ǡ,DK_+*q¸ Z˄ڸƌ2gx(s6J\ 7͜`lYeF<͈KWTҕ~_4F2M+ G‰]=]` | P;22o(rPЮȧ1( [i:1ɶ pa+[&?|m\RV[>*P-…m\>ZF\MMAB8m29!+tEk%U!d"'E!DR9c I'b@}9QBq7bCE\;)S7 Y)( אC %z7HT㇎`l܏.;iFB-qx[pꕃ~K+$p|.; P7nӵ\ޒL3\U1C04g  TB\F<˵N.zPeFd4Ř.7aጃLI(' Үml>l_tdf D(t.S9Zgv%N,+gjM-sZ֞2$sHg 1ņ2mauj4l+|_X|V =N 3,x ;E2M%mzoV|r3xpć? *  ՜J/0.._}f>]T*>װ?_$cNb-s #U! d.BSsRV2B粠aZ8J֐NQ}XNSvBBB B BB AB BA AA A†ԆB B  "̫+=^2($a!sb2ː6l3jVǏ AFH25`ڸy{QBt h-{UvLiVC,y֭\j*ׯ`ÊKٳ`Ul] d6]*Y;GeE)^%@ } I>èsnx 0MJyܩ.EuR*`32(@bDX 3T9J4ËתkCa+$Cw$Azr70%Vsأ)9 rNdWvU6ޅf!Ij(&ED%y7̇,(͉h8Z00,VmD)^/dd4;.)!ɣTؤeX`ei&QbMkhyY٩bמI@zcEk&(F"a 馜v駠*ꨤ閟&6RʥTij뭸뮼kE%ji,!5FL0*v 7T2Cm!"TPD TK k0[ 1?>A 9&w; A6Kԓ,qp)A!M䟿 ? ? | 1A_@\djfiL`0/R(aC)+lȪ,e"୆AX!FL2O'BZ!D,fHA 1Ely)3(dm$xpy׈(p^Y.$H!WGz p~T !@eVɃ1 9I $Jɰ+(@^%F\2`EЀWᗣĚ0;6 @{8iD4KV A8&AzhT"H`8Nq`UDeә8 &IOC.tP 3c dizb&8.*܌^ 7×U[yk~sT>@ Ld"Q%X@t!8P@QԠV5P0`4 P|C5 `Ѐd8pҬP' $BbBTMZC֣"ʁhuxl!VN~*`GK҂2$,V٭dXlgKrEks[;oK) %q-]sK] MuT tbMwXaT hpF\C!> !d:4׾  hHCb/ /B)hx!8I,(܋͸_c<ޱz8@ad7.F򑕜d&WW&H cfB0}+j40;m~sgҹsL QgmYvs]sN!M&yV-|fz< <oT;؏$d10O!Xkl!Oְ: W̧9w*F@Qɖe.F ޠ|A詙_#@.rUefpQzV/6S Î D֯TԌp$Lx dO@|xC$k'SO^$/җ,Oٶ|!Jru^ -XMo)k)rvuh)HMlSأ8>Om^\{M0`/5Ci 8KMq6G1?#ueHt5R]|$eib8GR}d3&@gTSSV#S&S-V TP<QeT|OtVLOEEd9cK)N#N?w^j{BL.G4GT4P#nBLT~S!QHQ[ haC(E&q\sS5^(%#6$.\$7KlF.LoE1#pKV `HH1x#I @" 71(#&ItsJ0+C4S)wAD(%PE8yE)RTiB8D"s%y2 Fh(5RT(fL(Bh'Wh'-)#"( •%CΩ  3 BJE i96YS␈$p!& $@TAAA*$A  D j!@u dة /EAR 9 ):JF >P>س>ӤP=ä3Kz=U#(Ü|WQ M { TSLYQt:3R$Q^(' ;J<:J "zBP0QQTV%/ #/eItl ` c0?>`0ԙ{ 0?e5YЩ3`0. 0i ;:C 1?I:ӮpM.`1K1 9F&{H"C"  y鱝)@<=G`;۪ PI|L'zƋ4@dƯἆC 54@&s!{YcJ(ZMzE*ņĀ%*g<8GR {Mg nƕtX8<.Zlȗ2VlQoiK\1?͈U4Ȍ0t +S=q ,8:b[4Z  LAqfD!]L#8ӯTðrTSZTMFq @ 4 0LDpAqpQk@m}3 ~ۖpУ0i!c)iɫ+sbL)"Bk6^ݬ?Q7BB0?5G6|3 "!`ۖW (;%]]67I`TfA坌tkg[i , h2< "XёKB‚`ixː Q@!$~<+"ȦdĐ ":s*=DD#>M`k k*웎 -}<|DD-Iށ1?0MK\qs~'zZ$+NWZ"Y1 8 { ϷL"w '.:!;ŅaB,2ꋝt^ȿ 9"Pև$,"&n_¸C 쇍#QTvb@ut PM&\ 8Va. ެ hЅ ޞ?S H+X_0>#Y / 1_\SY? 
J}vL<9F#oW%? J`]\,M!1ϔb*@B?D*_YHV[9Ek"=YVXI TtD :'[O,L /o4L6_g#u]߭pqojS8M6|7~~p׹LSSy}ѵzu6A_!0adb p P/(#(00Qm'|S,QPSï" `>PR Ȱo8RT>;%-g$2,n%ڏ#V9RF%%NB  BAB BA B AAA NjB ֱ̢ߎ ABb%JZh+ $ ȁ\U@ЀXA>Dpa"k(1dVʜ[!\db@ٓAɴ)/{t@ x D.E0,*ȿB*l)HZXi?6RH۳iUT40"#̩ǿĭ?+T RȠACWE@y" z˥MPB>rh` slǕ*%bK6lhس׻G9.@0Ar ]s?-?{a.=v&L^R/"RQZX %0ju0UDr^ص}W<##F ?` QP. $GQ7D0tYы|8@Ys]BݏlfA2%+:(1D֦ 㫊"hi$cH fc:f{ jk Rmp ZɺnKv*qCVK ʥ2C}Y3_6.,d{LΘPUK{ dz0ˆHƁSUBG-S:+ĽW>ck9k~=Ёkca ]*v@$'|bk*l|.@A֡ &4 Y;to>;м8/}_/#6j?ԅ'\iq:-Bv\nRE>|dO[KǏXHxGڌ}-g{R@BdN>Bk8VZ"`BLa*Ş!AD!<-m);XDqjHbt04a24{/&+H4I'T1qiT=F-"S($Dv<ǰcI6CBQcdJuci H= oFAQDve2BޙB -5dYabS¿3HL:vdS Ǹ,g>n(bPXtWvD$#Y=HzH,@! g #E鋜Ր((MJWҖԥ*Yê!X"yzETQ‚+EEa`E*I-h$G-\>LU$]9ֶpu97i>]2 ~1i(8Orv3,"VAR$AOLevdZ3n9PDZ1ϔjy ?aOURH ;t$ɣ^6cr%\ -IQ VhZe XŪV# Nͣ7:dI\kjA~J /&'fKm$J(Qn8TɅCΎ<$QIACLQ3pl ('A]pfg<)I`F( PWNV+EJЦ6 G}6 sH]ܘGib[g9w  ZV]IiknJ8 7HmDR D:T$,}T`q4Yi\2\ʇӚV=IC3@A! 42mxc%dZDZȋ&Oa cR,!4mШ#GlSLY`%Nöjl!lQF@C*Kܿ_{9ts[Wz?g |#Қ&ȷDjlº.Y E8װ13b 57#O9N%&—lр|#&H!A 0`AN 1 {IȕN XpcS{n^x[<([h77=7mܫppDb fW`׎ }GU0}c 8!)Do!;ЏO[ %JǢ%mCOv!(צnnh2 kfFQ\7A|hw1\!G Gq|jMZ X 81xB%td+t #!ǧy^}&:' Rt j  gM H0jlg4dOH3iNzqj pO h<5HnknWg6nAl {q'y A^1gn |of@pkmbrvjqw  b~a 'YҖjbÁY! $1ItgOEʠ-moovjy=ƈgz}( EuQ kq?Xy ypČSWM@^eIf! v*‚JYGpc`y AKd "QdL0Nthvc -$hfvwOF^4xP0[xGi!ȶl6vp{ch]Ȇyqp$@qA%]K'mY@)1З1Wo$YI |RKwu (uw֤ BmHDy&XOy  i|7-hB8P8C)ə0›ٔU|U2pqV%x z{gGZAY%l]' `n؇qd?Q307lڦB"r:t:Bq74zN6z<7]9Z$ ( ʔáybFZ%(!:Jph7EN9ozPٝeRDT00IAZ Hrj JHnt[(Ge_`9g@uP?)zot&cj6pdJ+DFكYUdO$ ΠOuH=HbXj6At*Hr**h8 h@lhbx(F8DP j1 ˱ K2!˲+Qlx +v:4*[?[& M,}R;T[V[GJ[rwJ&d[fRPH)pr;t[vK렪mkط~;[{)ʘ e;[{<%ˀ;[K rۺ w{Б=۪3{? ۼK+0c] ]E:ڻ۽;[:?Xqg jH{#;Hw12h˿\+h q|Hk;sw[~gZt \t䴲&"b AƒѶr:<>|*×2`_?LK' 5DlY0\˾A@.~ XL#J4YW-+>ǐeB ى|ҁZcL2xLrt@#I/Zk:2†Lʪ+$ژIQ4` K4( G qKU"9B/;ŝ g_o(>%M7AX a% !QZ:g= ,@{P(fC:TaX\A#`!AN";&jn;,B;;Bm-Θl <# )i&, a`7BQWUɼ|j ^jq| <)ZԾ Զ`ixc! w! HhY&  0xz=0שrM]#QFӯq7XRNaWB8v  <J%f i y;瘥[yL {; DV1uE00g+p 2Ц! 
2#˂&biT7t.1"4FD} 8 ~W%b{{ߥrߐxg4Bǣ-DA6)uVںr$I a <&&@Α&gIԩka*VYR uMj` Ӏy":>gIU1Q@ɘ*XrMg{|]ܰJ'}(Q(mN(Kq\](() (Bhmޅ]& UBe ~Z]]0Hfʤ( Y铌is*pa* 4 dL~*8آGL9SxKM3noО@`nFZ!~mS:Ӯ%qltm $R$ 6,G̻2&r|E"Wi.j(E#F_PeIk-nj„eW`7mL@Nh83Ꞓ*| NKw3ɩm: L#5L=2 IʍP G4Kr5e m ]̾@37s7Ҭ7yc  \H `ϊ3:X䣦Gm;ئAt;#.&+@= ]=?ֳ3m_޳V FI{ O?T=/cC=_7~:RqLBA]AAOLFr]5BBj% AABBABABAA A˗ גڗ 垡.IQ$" Gŋ.A8@b`z8(%c)bJObʜI͛8sɳϟ9ɁL"_*]:l%u2J*ʍ:$ډJKٳhӪ]˶l˅O7a xĪ%WN^1%d`È+^̸ǐcS * _|zdmS^ͺu2g+˕*Q3ΟszhMSF`FhwqsN?wS`bçkWu*ڙ7o'}4h`ڼ鼷\U$94L Uݭ]J'I1 Ƞl7(zT' Rv,H Dڸ@dXf@h! DЦ1h$fT͚E$AV՛_b *= У IL,`zN("ᲒhV^6j*uH+J\{!yjɜA;l" iF򬳑X:tm{jr6hI` 0+."pt㯇 PC#ͶȑUJuApWe™,H*ģ[ctIIs`yp0 X%`a.H u+& HpM@ XA2p 76Y`9* ؖ-M7!38]mriCzǎ'sЮ23 B [u]p\ } 4yc>n"ǗLඋ4l Kǔ#Ƴ]u[Ղ03o ASrp͈+oob09}s6znKS!T~a&ysWq$ثlb$^2@';^}oyn R&zm ^|j}YR`蓯1;ጀ|2zG}>5_Kq7MM3#0 ΈY1@s,Mxin]D7:ח%}Is$8 Pig'R{l|&54Gҧ$0 {"G8't&ɰ)R!(0  !$ ca #!@#p~S~8 ^8h0|$jfrjxa؆n~jEx38!s {x }-4@ x臑(ň,Y(ȉXx X(花+H؋8Xx ;PK*ijMMPK&AOEBPS/img/strms053.gif6>>UUUGGG|||:::ʻ!,}D+-"CC)&$(ޜ+*  #9K׀]a #zrZLz @5CO#93O& x3n8&ɕ038C `&c噻#{kH{=LX->(﬎x~NDX dJ:1Pfջe ez仞 . !|2`J>,B **8{S6L1os({ߢ'Meug.lX! G'}6< xocqů3~8jpE,,#BXDZWi\b"܏mA4#ߠgPM$ 8`#V e G@gBsYPNGhn$ᒇ*4 '>Zp'A9R|,>FeF \rhi#c7FzV19ԓ%XK\i,Uich4WF3a+(_^qC"!\bA<2دA 1J)yό9ujD8A`>Ǚɯ`R)щ K4$eXIqVT-5y[3rGKJЀlUKGJl(TЪ zmS~ E%X1KcQ{MeWg΃qp\&Y Dt2z- IQr.6#>%WI)Ҙ>ςvt,-(KJZƘ%1Ln3|(l`: wE.* v6?ej!薍1!PMfwR(+a%̋/?n.%e̦ @mt }! 
k]7-{7>x# n ^e7A.B, '&\W9ؾ0l6O7!"}cIFEJdUx .aaV0V{rmL1dϜaF`3|c&t^T*ЇVBdO{svE(ύ`fFdf FO 6ħ{hQDŽQX ZU ՍV:ҋ5RqYnm2` aWؐPglM)qƄ mILʶۦp'AxPK$(h6gCn5GH'hR0@%723JJƂxh1x: Y@Jh-I+.QW@!Xx؂vD1pA_H:e3:bu#D*H.8KW|f9-W2$!8Hk`CYp\CE(dͣVHJ8BLj;a%ۤcȇ^燈`$LCUј2u∌Е`bxPrARToBFimHU<~ d}C7'HU0`\3J/h5V)XJɒYy2+r6hщ*dX PHfA9:(3!]WE3T/FLP^| )癞gy4 x 6 `xXsX׉6iY)䙠Dg>C5G:34d"V\lcO(b ,ÏɢBࢃTPp`@B#-ڠ:מ@چ2z"DFძ@Ojsg≢FjTQӘsnY6Oeu | U 'z'zùxƆY2əCbV ꧋Ȧ}I[jR//JS~jgH;扬uTMꤖjam*oJPX*z\5F*z֚کʭӌ>0V4}@yi$sJu[uJ!uTHrpMG2L+[&"&s렋#'*L Tි.klb} i#3Eǝ_ZKǵA cM[a[| σiһ>m36L؃by >Kgz $5Lut$Xʴ>>\1Kv?هQ'IR0+v:F%q8+u5$A$Lف݃m=3Bj\[=d$0$mZIս6:R8,;CeB:o,%:\a؃mq*6⚼]44HDq\M`,~.0.-턹Jx$(V([Tp!8(!]|h]MChjl`Ml $f rʃPQ7~c$L 6)3q"/M?[}]‚\ ǜ9Lh`DV$7 nҤ 7 l E^61 5~^**Ҥ 4hP[!{Dҕ^&/,;N1I1_:pe4 Pę@k Q6PoQJ`#7BF]2OޔW=?Ap(`G} i3~ ɼ>< '@<>vdt8>+^;a)TKς`rS-> Yx&-y ')B/9+E@'9H«L#p-Lpဈ !P;8cJ b"\i 1MC > @$EER L<(h:xf]9h Lpe$ (# )'*q ~X2 ˢVclf \V0'ZJfɬW=+WB CDaɭ `;(" 9>ۉbӢ9,Bt8]E5k1CVj"ͫXV-ĩIg‹GE1`ǠhԎzF!5r 'iY<@-Qy+1wEr4_CTWmXg\wAFF?=J\Q/slp-hBvVfOZj1߀.x{{rwUyX8}/ΉTn) X"Cْ۷n騧ꬷ~z4z ?ɉʰ%Еk{'֪ E/WogG&>\p+bp닝i&~*UY_&@&T_wN[`[hYFD6Ab쁳34d wxe.]H@-'Տg`Dt$S $.In&်\'N CDJN,f'"TIJ#'H:rq1QBNP`zÐ@ cPf( EҟL+k$b]\!E -$@ܘi"tzӚTD O{zTԩEu3O`.RJ;BTFWyo%Q{^RD0PlӠ:-, kJ"c kJֲleMPyUQgn- ]T-VdyZZ5.h&SdlA$7X*und;+26nvKVW TC+EJ=RtL)܊Pa#KՉN]vLWB a =K *MrmJph/aZm8b k>q "Vq[x}/( S$CO G*x:(tFp>1T7<ؔD<)$rq΋_f3+MjmJu.S$n<mA;І=ћeg@5aa0idXa Wy-iMR?4 Mlag+۵kDW5M84h4;6y)]K{>u&*(k$bDHvZؖ0ǐ+)(" dB$IdfzgI(єP=2 16#nrB}U$O}~Pep"xEq#H }a02Fq1Gf~J)K܀s2)HriNQ6(%v'|q^A'pMWP(Qv# /} u7 @JVdTf$ eYE ӂ20/ nPX j'p/FvMäK>mD mp'.tbuRvD }ݷiayXaz$%  [^dq4bBy1v `S!  "0Dtfΰ-`)ݘx @'FviVC`EXa->X90JpqwHn,%93!?&gBq0rb]8~`2( u+lztrPp3%Q_Rr00+A9 gVa A GEDdz2 ڰH 1. 5RyfTEIQٖTev)Z&v{>XgZ<)&Zu3ٗ\ya~c.xHRhTl9Yr}䙰Y@O9fu0Vm; hI $#|9#01A`5 zb3#Xwy~768O=`2 M q(z=" *KHQ]F * m\'T~yky簑 6AMYA i A~0Ak^j:}RP h ]E J91~dz.1j7j~2QȣmIa Q?(=]}؝.P4$klS]8dܝ`DPU}wFC@KĐؗmJ 5~v ߇~7.Ce"ru н.02n݂׿ :<߆(mPXtHJcB>\lOD1\NW.%XT ljlnprF 6@|. 
pK6`Yl::Ѝnf![Tͮv?8icUnWiWyٺ1{<},cz"]b{`$8lJ7$@X _Xf(w.DwVϘ'4RC0E͘6fIۆcjDFR1b glufȖPBD8Vбy5yC}>G? >K>P_SxrQ"h!`Rj q\2+@?@ "wzAvF$ M H,Ua+f:cC̞/]P_F[|6'w)`tCԲ|XcB ;v;r8ɺs5`VWU>`@| `H9zlY 4m^ us 0_V ޱu{7%Ku@-33O!50Jp,5fشLhOl wM)N uS(/.,O  \Oݽs\%Q{'PO,'=rhɫUUƣyy R>敗gn(ó4_MSU,PH^{D>5] OM)b\Ϭ$o1P|}8:t,!E:οe0YJhE.dq9ezbgqv  DEcAUn`|\bvPd hwJvG{wO{}vwwT+J |xDI7{?Kh68 6cEy{'w'W#c? CwIpDC#='|8xF|b| <)=?M=Dp7]WtG71'u7qB+nCh#ojx IHf4_E7@sA[M>x 4tC?Fy ({TfW~uCdwsu׈ l nhknf pkj66 X`HdSv6Gk'WC:u5V3X7XJI 05u('6f ɸQj3r3TRK78WgԦTVMimѶKH+58 8 f ] X6Vhybg )a˔`r / ~e RbՀ 9ķ= +-Y F xEP 4QɀsT<x &\P^i`yJS!WyLoa'bhr`gQt4|K% d@ٖĘ@t xE d6gDPK~aF?`I7+6D6v%h씖813\Scfjq͵ 9b a=6Ä3\wiffmX:uU;%h hWm5D5 9I:D7oJbGʹ6 '2p^f Mj6'R~kĤN*K㜈DVHӝ[jPܑg~* b޳kW* G~m=`<JZztl_2aڏuE֠Jh%wȓ8@Zq k@hѣf1D::~z U`6ņ1:jx5A*^c9) Ze sC>kc0ڪ:U:d0JZ*:m+e;ɒIz8eC&6 tHyh:qJzT_9 >= P9GJm3[;Fï|97zY k9%6~SDĚ'D xD6ڜک Z viDD654zS*G!+φCPIp M8: vdtLm68e{۳?_Ո8yc}3H jIvV:gnX<wˎ捵JT>>ڔxpԀlK2+1Л +"; [˼_WEKذV!0ӠЂ侱\DaĖ%KZ{|:*V V-;k zk  l |KG E, %̣ee0V]4=F987r:JSg}G,N\3Q!D{+coXفt:Ѵ]}^`DlWjW~XVl"f?N=КЀ ]d# QH.=~L I56MI3=i-~L^KiGFyH}5VBd$=3iUBJ 8ݜJoyi"T"=G .!)QuTP u0%ЙXͪVծz` XscMZֶ)%Oxͫ^׾ͫ уM_Sh%d'KZ7h),㳠 hGK҆`iWִֺkgK ml=y6--jq7s;N3bW7s Mz؁{Kw}No~u-")NpxF"ꩫJ x ŠM:Levl 4"FSURH/5I{kcOԸǵa?bț KI 1dHD<\9@IJZtUyE( /Q9a4]8|eQB0N"QVfsk MFH \RbB bLII.eٽЧLm8r: CޥYsa~gwD,;n)o- \V=~Np WW%Kd0K+Xl'QMuS6;~!hL'D+Z(Vtְ;_OMEy#WQ _ I>jL{2| N87c{\׌/=܍.}b@pfYbf9HlLG]uWE+r );Y?ZXsf)wt~Ȁsl"sp!w(nb VQawtlϧƠ5tU'jfA$Mׁ'&c@vhga- &xPxHffgƂ7/cGtT6d6ŃY&d.HH(SÅ%ȁ`N$fcbosa8jX`Շ70v΂|hL1!(l؈؈b x؈s (Bm8Xx؊ ;i8XxXmΰEXU(nIjVxTNWpƑ:A}|7FoSO'HSfM 89uS4p(Kx PK'% ǴIA \@Kp"ebTd$u#)aô!)}hXef jR1H@Ԑ?4@!' @k$q!ptlG$!:;ApqaKd6PWGQn\j_w w0T9c<T?)yfrTLdA96i%'Ppk |#i|+R2wUBQ l҉Y jwA <8AbKA F0nwDF նi,%yVbBJp sDqlJ"2+' SpN3D r>|CJev4yNfŌm '-iw`y0М7@1QW !+,! 
։ᙖy K>0 bun9Bmv$@Q $93{YPZ7d'#40W~<}φMjJ S1P8:d9uB guwN(d9ArY(D!#n"ag`י G|A&p{)kRj*4 d s,lHIQ b=P'$@;"zR1%G 4sj PW%)& 9 0RpvQjN" 0#}7- 2}ෑ)7*!PpZr"/)=ʪreHOq?w>̷q&ӂ9(b9q$: c 1C+irDyr$O w w)m)B&ur0 r F*#:2(oX Аd%[!!*\#CIbP *xAAf@$e{A(ˀ飼Ջצh*EЎ4;&%UҔi vSL N!VoQe[ ];cj#ɑ粑ٌaF"QضBkLPei3cdey{w 8 *HI@"6ۉI $0$S&$9>D0DX9#r+NT3A'RIcjuq 8GC0lr!oar 0 I柇0v Y?ז.JLe&" b3;  $ڽB2~7%dDs=ptk)$+4 (҂~403;$GCc @HƝQam)B13bB ,t)4"Bò,ƣoòWÙ-eA9a8#$þ5Y1b#,V#Z53S!ܪ ,H@#,NC$j-$Nu!F[$^%*RE&404^6~8:<>@A- W*HJL4N.A=]V~Ā\寕;!Ud^E~ ffĭ3cp]Z q^V>=~>^~舞芾~>N> zNֽ>ߤ~a꨾ۡQ>~a븾BNp ^p얐ƎʾԮN n^پ~Nn~ P.Nn^^ oP /R>`RPBP%#"o",+O[G2-*_7OȎ(O"o@B[CEG/IKOMOoQSUWo Be gO(y8<3vOq_[:su>o~cO K_aOo/[@`K l"hK;PKcEjXWPK&AOEBPS/img/strms038.gif`5GIF87aX?**?***UU?UUU????**?*******?*******U*U?*U*U*U**?*****?*****?*****?***UU?UUUU*U*?U*U*U*UUUU?UUUUUUUU?UUUUU?UUUUU?UUUUU?UUU?**?***UU?UUU?????**?***UU?UUU?????**?***UU?UUU?ժժ?ժժժ???**?***UU?UUU?????***UUU,X H*\ȰÇ#JHŋ3jȱǏ CIɓ(S\ɲ˗0cʌ 0@O@˗/70_8`A&TaC!FT/_|$/_>~o`~I(QD%J(QD%J(q` _>~|(QD%J/_a?~'0_$J(QD%J(QD%J80A~O_|G0~7| 'QD%J(_˧| Ǐ|د?}?$J(QD%J(QD%J80|?~+O? 'P/,h „ 2l!Ĉ o` '0? 
|_>%J(QD%J(QD%̗/_>}'P>O_~$J(QD% O_~O` /_O߿|%J(QD%J(QD%J`ۗ߿|/߿| 'P_|%J(QD/߿|/|o|'0@}I(QD%J(QD%J(q`>?(>o@~/_>$XA .dC7__>#O~'>˗OD%J(QD%J(QD/@/?˷o | 7_>%J(QD˗o|˧OG_| /߾ <0… :|1ĉ+Z1ƍ;z2ȑ$K<2ʕ,[| 3̙4kڼ3Ν<{ 4(B~˗A~/_}B|˗ϟ@~*TP'0_?˗/>~ǯ_>/_| e/_>}˷o߾ ̗/_>}_|'TPB7/| ||*| _+O` ̗OPBnԷ_>?߿/_|'p "Lp!ÆB(} _>~/_|'0AM8qĉ'N8qĉ'N萟>O>}/_?O߿~M8qĉ'NO|/_|_>} 7_}'N8qĉ'N8qĉ'>ԗ/_|O`~/?#/ĉ'N8qā˗߿| +/߿| 䗏ĉ'N8qĉ'N8qĉϟ|/~_|/>$XA .dC%_߾|O?oĉ'N8qĉ'N8qĉ /_ /_}˷?}8qĉ'N0?~˗/O`|0@ H*\ȰÇ#JHŋ3jȱǏ CIɓ(S\ɲ˗0cʜ _?4iҤI&M4ijԗM4iOM4eӧ&M˷&M4˗&M2G&M˧&M4˗&M2G&M/?4iԗ/M4e˗&M/_~4iҔ/_>4iԗ/M4%GP_>~4iҌ/_>4iԗ/M4%Wp_}4i҄/_>4iԗ/M4%g_>}4i|/_>4iԗ/M4I>~ _|8`A&TaC!F8q|(RH"E)R4/_>)RH"E!/_?)RH"EP <0… :|1ĉG"E)R0?$/>$XA .dC%NL/_>$XA .dC%N/_>)RH"E'p O@ DPB >QDH"E)RHѠ|(RH"E)"O?)RH"EG"E)RHA}QH"E)RL/_~)RH"E˗"E)RH"EH"E)RP_>~)RH"ň(_>$XР@~ <0… :|1ĉG"E)RH|(RH"EX߾|G"E)RHA}QH"E)Rl/>)RHb|_>}/_>~ӗ|(RH"E)R4/_>)RH"E"E)R0_/_>~ԗ/E)RH"EG"E)RH|QH"E70| '0>WP_|)RH"E)ԗ/E)RHEE|8`A&TaC!F̗o`>O`>}g0_A}E1bĈ#F1@}E1bĈ#Fqa|#F1bĈO`>@} +/_#F1bĈ#F/_#F1bĈ#&/_#F1bĈ '0| /_>~ԗ/_Ĉ#F1bĈ# ԗ/_Ĉ#F1bĈ˗_Ĉ#F1bĈ˗/bĈ#F1bĈ˗/bĈ#F1bĈ1bĈ#F1bD <0… :|1ĉG"E)RH|(RH"E)2ԗ/E)RH"EG"E)RHq|QH"E)Rl/_>)RH"E ˗"E)RH~H"E)RP_|)RH"E)ԗ/E)RH"ńH"E)RQ_| H*\ȰÇ#JQ_|)RH"E8@}'p "Lp!ÆB(q~ H*\ȰÇ#JHŋ3j܈/_?$XA .dC%NXE5nG!/㗰_|D)RH"E)RH"E!|D)RH"E)RH"E&|D)RH"E)RH"E*ǯ?})RH"E)RH"E\~)RH"E)RH"EP)RH"E)RH"EP?)RH" ˗/?˷߿|1| 0_/_>2Ca>OH"E^@}'p "Lp!ÆB(qD}㗏|/_>EGaG_|˗"|=̷0E)RH"E"E)RH"E o_~/ܗO`>~O`>/_>~˷>o|/_~Ǐ`|˷O`|+/O`>~/_>~Gp_| /_/_~/@}|/_}˗?~_~/?$XA .dC˧OD%J(QD˷̗߿|_>O`>ۗϟ|/>_>_|߾}/_|ۗ/|/߾|/~O_>'0@}#/ _>~O`o|O?ׯ_>(QD%J?%J(QD%BO_|3/?~׏|/߿߿|/߿|O| 7_ /> ̗o|o`O_| 70_>}o|؏߾| o_O` '0|O`/|? 
4xaB 6tbD)VxcF70_>} _|Wp>}/| ̗O_|/_} 70|_>}70_> /|㷏_| 70_>}/_|o| ̷_> '0߿| _| 70߿|o>}qȑ#G9rȑ#LJ/~ ԗ_|W_|/|/>~O`| 70|0?}߿߾|/'P`|˗O`>O~ / _>//|// 70߿|o`|'p "Lp!ÆB(q"Ŋ/b̨q@~'|/@} 7P_}/_}˗?`>#/ ϟ|O@ _?'P`|˗O`>/_>~ /_}˷O@}˷O@ۧO`˷O| ܗ/> O_}/_>$XA .dC%NXE5ncE|>~Ǐ?~Ǐ˧ǏǏ?~Ǐ?~DŽ8`A&TaC!F8bE1fԸcGA9dI'QTeK/aƔ9fM7qԹgO?:hQG&UiSOF:jUWfպkW_;lYgѦUm[oƕ;n]wջo_4ӬY3|'0@}O@~/| Է//_>O߾|_|˧??}O@ DPB >QD-^ĘQF=~|o ?} '0}_>_/|o|/_?O`>>$H A $H A/_|O`@~ W0| /| '0|O`O`> A $H A | '0߿|O@? '0| '0| O?~_? 4xaB 6tbD)VxcF9va>'0?}O?_O`O`/ '0?}'0| A $H A $ȏ ˧O|˷/?ϟ|'0|/_~ '0|ӧ߿|O_|$H A $H A ҡ? $H A $H A9_| $H A $H A $H A $@ $@  <0… :|1ĉ+Z1ƍ;z2H9rȑ#G9rȑ#Gȑ#G9rȑ#G9rȁ9rȑ#G9rȑ#Gȑ#G9rȑ#G9rȁ9rȑ#G9rȑ#Gȑ#G9rȑ#G9rȁ9rȑ#G9rȑ#Gȑ#G9rȑ#G9rȁ9rȑ#G9rȑ#Gȑ#G9rȑ#G9rȁ9rȑ#G9rȑ#G? H*\ȰÇ#JHŋ3jȱǏ Cȑ#G9rȑ#G9rȁ9rȑ#G9rȑ#Gȑ#G9rȑ#G9rȁ9rȑ#G9rȑ#GH ' ˗/70_,_|U/_>~o`~9rȑ#GOb _>~|惘O`/+?O>~oȑ#G9rH ~/_>?~˧/_>#o`~O`>~O_|˗?}˧'p_>~&L0~_|O_>} ~ Ǐ| H*\ȰÇ#JHŋ3jF/|_}'0߿|߾ |_| '0|߾}ۈp>~O|>} w0߿|۸qƍ7nܸqƍ0߿~O ۗ_O}3؏|;_?} Է|O`>_6&O_~O` /_O?8`A&TaC!F8bE1fԸ `>ǯ>~(0| /o| O`>߾|'P_/߿'p o|맏| GПGp| <0… :|1ĉ+Z1ƍmdO|?}ׯ`>o`7_ O`>o` _>O`o~/߿|'0|/?~'0@mܸqƍ7nܸqƍmdo |O ?}3O`˗/_/_}˷o` /_~˗|/߿|/߾|˗/>˗O>~ ߿˗/߾˗o߿_>$XA .dC%NXE5noƍ3'pƍ7nܸqƍ7nܸqmܸqc~qƍ7nܸqƍ7nܸ?7nܸqƍ7nܸqƍ7nܸq#A~6nȏ?~/_}O|mܸ?~˗_|˗oƍ7nܸqƍ7nx'˗_>|/O_|۸"|˗/_}O`|}/7nܸqƍ6h62 ~8`A&TР|'0_~˷>`| 2dȐ@#/߿| W0|/,h „ 2l!Ĉ'Rh"ƌ7#G7_}_| ̗/_?q(Q߾//_>}|8rȑ#G9rȑ#E~8r|O|7_|/|OGo>}˗/߿~˧_>˷#G9rȑ#G9V#LJ˗|'0_O`q(Q_||O`>ȑ#G9rȑ#Gq"@߿}˗>_| <0… /}o_ϟ| <0… :|1ĉ+Z1ƍ qb} O}?/,h „ 2O |˗?'_|O@$XA .dC%NXE5n4G9rȑ#G9rȑ#G9rTG9rȑ#G9rȑ#G9rTG ӧ#GǑ#G9Ǐ#G9r䨑?9ԗ/G+˧#G9r4/>9rȑcF~8rXP_|9r/>9rѠ>}ȑ#G9^#GǑ#NJȑ#G /_~9rȑ#Gqȱ|8rX1_>}9rȑA}Ǐ#G9rX@ H*\p|6lذaÆ ˧aÆ 6lذaÆ o|6lذaÆ 6lذaÆ װaÆ .ԗ/_Æ 6lذ|5lذaÆ 6lذaÂO_Æ 6lذaÆ 6lذA~6lذaÅkذaÆ 6/ 6lذaÆ 6lXPװaÆ 6lذaÆ 6lX 6lp|6lذaÆ ˧aÆ 6lذaÆ Ϡ?}5lذaÆ 6lذaÆ aÆ 5B˗? 4xaB 6t/>:tСC:4B}9tСC:tСCСCԗ/C:t`|:tСC:tР>~ СC:tСC0 <0… :|1ĉ+Z1ƍ'p ~O@ DPB >QĄ׏?O@~(ǯ7p@ 8P ?~׏_П <0… :̗OC:tСC˗,h „ 2l!Ĉ'"O`?+诟?-oa~ _ԗ/E)R,/>)RH"E G"E)RH1>~#_~_? 
[د~篠|(RHb|QH"E)RT/>)RH"EA~Oa~ [ϟB)oa~G"E ˧"E)RHB~QH"E#O@~8`AWP~ G_?g~ `~  Aԗ/,h „ 2l0_>}:tСC:tPa|9tСC:tp`|_>}O~ [دB)oa~ ;/_>:t|9tСC:tСÅСC:tСC '0|7p_|'0_>[ϟB)oa~ S诟?!ԗ/C:t`|:tСC:taC}9tСC:t_| _>~ ܗϟ| ˷}`~ H_? `A ׯ_+8P_| H*\Ȱ|9tСC:tСCsСC:ta| '0|/_~Էoa? _-O~P_|:tСÃsСC:tСC˧ϡC:tСÆ '0| ̧/_|Ϡ>~ [دB)oa~ S诟sСC̗OC:tСC:t_|:tСC6̗o`>O>~/%ϟB'p ,_W~ ׯ`A}'p "Lp!ÆСC:tСC˧ϡC:tСC '0| /_>~'P#Hp`? A#Hp`~  A#HP|8`A&TaCsСC:tСC СC:tСC_? [د~Oa~ [/_>:t|9tСC:tСÅСC:tСCㇰ_?S诟?-oa? 󧰟|:tСCСC:tСC˗ϟC:tСC:\A뷰_?S~_?}9tСC˧ϡC:tСC&OCr!r!r!O$(_? ׏ $8_? A$8_?}'p "Lp!ÆСC:tСC˷ϡC:tСC:`~ [ϟB)oa~ S诟?}9tСC˧ϡC:tСCԗC:tСC:tO>ܷo7p@~ 8P}o۷o8p}˗ ? 4xaB 6t/_>:tСC:<? /?$XA .dC%ND> 4xaB 6tbD)VxcF  <0… :|1ĉ+Z1ƍ;z2$F}˷OH"E)RH"E)R$B}OH"E)RH"E)RdB} ˗H"E)RH"E)RHO?"E)RH"E)RH #/>"E)RH"E)RH /?"E)RH"E)RH/_~"E)RH"E)RH"/>"E)R}'߾||"滘oW1_>߿| HAw|O`>$XA .dC%N/_})RH"E).ԗ?~˧O|/_|a|_|("̇0| QH"E)R/?)RH"E7_} _|Gp_>/_?߾| /߾/_/| ܗ/?/>ӗ`|˗O>߾| /}'_|˗|ܗ/| /O @~_>~/_~o_O@ DPB >Qa~%J(QD%J|/_}˗`|G_| 70߾|/?'0_70߾ۗϟ|G_>~?} ˷߾|?~// _>/}ӗO`߿|㗏߿}_'0~ϟD%J(QD%J(QD%J׏|g0_+|/߿߿|/߿|O| 7_ /> ̗o|o`O_| 70_>}o|؏߾| o_O` '0|O`/|? 
4xaB 6tbD)VxcF70_>} _|Wp>}/| ̗O_|/_} 70|_>}70_> /|㷏_| 70_>}/_|o| ̷_> '0߿| _| 70߿|o>}qȑ#G9rȑ#LJ/~ ԗ_|W_|/|/>~O`| 70|0?}߿߾|/'P`|˗O`>O~ / _>//|// 70߿|o`|'p "Lp!ÆB(q"Ŋ/b̨q@~'|/@} 7P_}/_}˗?`>#/ ϟ|O@ _?'P`|˗O`>/_>~ /_}˷O@}˷O@ۧO`˷O| ܗ/> O_}/_>$XA .dC%NXE5ncE|>~Ǐ?~Ǐ˧ǏǏ?~Ǐ?~DŽ8`AD&TaC!F8bE1fԸcGA9dI'QTeK/aƔ9fM7qԹgO?);;PKve5`5PK&AOEBPS/img/strms418.gifGIF89a???@@@999ߟ///___ooorrr௯OOO000 ```pppPPP ԆJJJYYY;;;hhh+++lll,,,wwwUUU:::666ȢGGGyyyvvv***>>><<z)W^?&dVh\eX5d`Q~TiR c(1"͛s9̟>jУE췠P{F5ҪXjuc)ےȓd6 vl!p*庵ݚRRw/ݿwqʵ \~O*^̸ǐ#KL˘r|CMӨSJװcƜgͻ֋pMȥ /'_.νw)xAUoOϾ{˟O󳏯)Զ` VY*%& єB] @dt!Lpء HӁHӅfiȡ Ѕ$*"2\@ ">A#E#FM B$cSz╗ TdF!$# hih ^ER:)Vکicd!pe|v]0%%&2Ža!rHEvŗn* x^i"z#걬Jhڅ;"{c&vĂ`,`lzFө4 YfJ@!# ^ZJ蛩 " p0|I+P!zH%BP$cFK]NO|fc2 Y2/@`sH`W;Y5# L}@QF`sE1y&|.YGUfuӠh3Ebs}g<5&d;HG:|JW%`@,wT:CyrP`2%$AJ6miޢ 6p @3HlKO/jDfEæҀK| SI%խP_W4)K^` x@܀F6)MJcBtZֹ.uw(byK PPv{ , [:8tqHkZo5>j}hQ8doLŌSzX`5Elv@ث= bMkx;۽z &˓c.ȵG_Od /!-%eP+&biM!¸5L^ŪU a%NFkea*NJ%b'q* `@0v  0 Z0bW&&N&l8U["DXi# l*V#:%ҼڡqZ7wQʇ8'HV0]2>80~ ~ zA.(au|X>uT@Gox9cH!0ʘ W7%n[Lܽ | Ww1o~j6oe^gbz_|=9qwL>Ҙ4 8XX3#:DdrP X_K|nZnNU5wL_bXV(NDi ܋CcUJyYJ~:1ś64n;-y{0оU}rb.D(~Q8<8aӪ+6- 7T&7jf勲ʵʖo)RКʆ0UO_˹u68F8lFbE=CR8TXVxXZ\؅^󱄗0i_xhjljNJtXvxxz|؇~JR<s؈yH^!9[G!x`p*8Q7{ {"H(#XPhn+ha8ȡ(H!$(hH]`B-8!戎ҍXR2( 8^`" PߘbX菗@lcB )ny @ /Ꮯx^`" )>%) #9@@ɒ(^ @6' ^&ɒ T р!w!/I㸏!'YT W2,)2qIYɒ~`xY/!& ]PI)і9~9P`X"kYP闛iYY`] `NyI 韃!vɝY)w ɞ9) ɩ2p [!(. Iii"),) Y(ቕ`q0ARjX=ɏX)91GjQ&ɤr*4t 8jXXc:*JzE p aYyj]PIHzVqIuj} wzwXj:9zEZ ڨ0%) IIګ"Y*6ZX  ٬:@`ښQꭂ*|ɒw1x cy5Cwi `:^yh n:& б ҮGzҚ!ˤ|ɑ۱Qj+2i|9Qj^[ٵ_˛`k0({z_`70b@y+0^;!!;[giKf۸^89x+v x˷~ky˖0ZBrȵغzZk;K إہV~ zˎ;衔7eS [{R&+⋾;ˇߚp):<\| l˻*۫\7Q¾{*Lf. s0`g4̼6|;C,{1Ėuy³15T|VLX,?P\c y ׊PšKpLrLeXnO 'i$!+n (,<9, Оi\tvl൝I,0訖Yȗ {ZCIʻAȺ9Pv.˞0P{엯ii 01²q3l R <YRj\R<~+2\=L~ ,snֲ_\a,ɞLAqƷM>T[yƭTϽ;ړQ1~Vm?1"-ʄxȟL,y{Tɾ!Klr i˖n[D\8NX +?*? 
˚7ߵ;Oɯ\C _g)$ϒo?0l*/] Р` 0[ܵ왉X.y2i]X,x?k ЬY/ ,[1Of_d{E[LB.Y^ϕ_\kΦхnЎ9|Ѓ fҐ+2⼯<='WlwB pp)C^^%MVk ^^^]^]^ ]\!]]) +^ ^]^„ ^σ ©D Ȱ 8H9JмXV6bTa 2BG(c#<|LQ#/l2dT=n"EDт &\iABY B$ v`#%#O-qAx*JQ1YΌp<0 8$pwYӛ@aWS < U>p"Ye5|h(w&ii+).L8BaGxB9`1Ez9S'lT:vj ``"*P}*k%ڋ+Zp+ ;!KS>i("Ϟ%Lj tz 2wk%ND'% \FHsO Χ,p)4\3YC3%AkdKsCr (Xg\ (ؗ,7\tQ|*tl% Irt%{&^E]n_d3d65.QSr6'0; @]=J[Wy1ucݢh*꬘p_dP*WsQ)7/XofIcծ|Dӗ=oWчPkrb}Аm=>>EOTVoJ=HD(DPY, Bd^)Mo~ L NAh" Bw6lp.c *g!0@ LmmBm!sa!mLlCpD%Vl"AѤyDA,mCD1,QT2"T1{̐H鋧q!_ hg@L!. `&/'P&IBBh$=IE`aY@V$G$+]<1"9I䒙n4ge:&P +L&e[`-qmpQ%E 'DF{6vE*ՏDFY 25|(C J0pD!n<"L[Za0f)^ vlß)D@C9 BAOJ""RmC#鼇FCB~bg)|e0: Ci-y1|H2FDT2ًl04cP GjPu4`JQ:*ZaĎUj9붴g^J LeB:c) (BbBS2}`SCD$3'uj|}jɂVP Zn"1V*qb*\Bd@+K»Ȁmi3]^U @Y MI$5B8^=MR┄UgU0)/)*L8J0DͿc<I?8 ^`mOs #1RHjzoZOveU hUs16l w/F4Ы`cS`YX,KuQ9d/lluY`W>,hcvĿ g6d~dxO{*7Vi*86@Hf QίN>ը u)hJzBI/3H.i!2gQbldp]NS׫zTt_jy~5Cv` 9}VarIu͞="() {0q}7[x&ukm7}$JrK*oT RDϹMX}@[8csa:++@ Ct9y5fqG΂pTD5S=1|=uPڤ}O`%Fw.SĦwۻWU Kl%CF)9`zkSIܩ^Y›qpX}ťqgӽ>*2 aXCgKTN(7 ڇgV}Xeau~`m5 "u53pT w]'U |mU#L%:aF2f2łs^Z"?Ǖdr_#c5am1 < 'b)W ?qJxW@1OdZ[drmn}+R'|d)Ph @ma{<6pBvx`L€g5 (ZGM8qp*rXP&eVfh2x~f2 S}Spœ]Ubhh8>XnXZDYǦg2Fv G`^ojט ʶ7H/@6׉CXЏ X}nnc+mj؏Y1ymmS*ՐdhC|D':p̘Ssxd-Òm:VG\I^yRp)9ik>"@&8{y}9=V 8 #yYBtA yٚ _pP|ڧ~zo:mꥋc;ٙ^q!pZ #yqb'D !] :FK*Q9c]`j YڪzcꩯZ ʫ=" sɪɬq' !0J Y0j %j Ϊ< J`: # &0&&0#&# ! ^  $P  " 9P#+ ^ ̪.਎Y; ۰]^L >]!PP]FбW+:j7 @5#jp`#p^T+5&& @$p%PqqY"!>˗8ksUL[ k۶= ;u{{K~[ $]&5Y˷eK鹬&<"-$PkK %` [ +5AK˽k$ $J"Zѯ /; пA@+5۽L!;ʊ`ྡྷ K3K @^˭Lj; i򫬐pXsW &4ËȬ{  aFt{2+;=)ô5'&*K :|^č(b0H(Ą`A^Y]N$=5e >ڔ/$>%*"1ꑯ큝Y6k |<Ν=i ͻxB~l 4I2NPR>T^V0mVW^`^@ߴ`EjZGqU8@]@J'⃗Z>Eǃf> 0g,h畀璮r~{^ ~ ^ l >^=XƗހN 7)Fj! 
ꦮT0s~ӏ@1eʤ[t/эx21Ne>q 0dX@>p|ښ r to]@/a#"kPٍU(&HЏ46 /q0' P#Pw$Pc^*A buI /]~ \ %2'wh 5E \C4v;vCkm*5T0-y14C3E-ǰ-AqV4Z'^Gk$F1wi +A7F " .G =3_YmTMT (`p`?@# eA9NaP+P$:%`IUX_ H>7/~O3C`+Qi]02:q^]\^(']^^ ]^] ^^ ^ ^^]\]Ӵ ]^^“ pg,C7B\6ȀPXb8@K: CXL2Rv2-H*1 *GV<4@pg c]hgE`I02+-f #ӛ.#&RR6QY%ƑɊ!_HI'&rXuxY %:W/)Btd*PZH׺BFu p5mDF1boKϱ(% J!4RR[HqKAwS6z?c.[ W&-~E [hTpUya[Kv8 >qO?VaQ &dȜGz$FS0^>!:Yp3Uh G%< X,^}uWK >pgYψV‚VLSbE$d n?ZJ@ӡv=N##WH/\nvQ7.=Iڙ-V|!r;yFk~]^l7iY}]p;\|,'N=W s yMoE3NsN1fW0y̍O8Ϲws+70GHOҕnULbIԧN[XϺE fY[Nvky4MP-p^,w nxϻ'.,"OwYW@:[^ӹ {2/f'O=iPNgO7w޻D}OYD#!yML.ߗo_o_/rXcvG|ֵz~~V{[k_Oa{n\p$7}}~hn7r(ihZ5~B &gp >~f~ tV, 4<6tg%i'\@e (q¥ kO LXG~} _9H+jh,g9O;#օ1`w <fs7h{Y'e!024ys 4 k#bep`4pfs=vӃOF@f `RIkjos01 nvk{;YfPL!H?~h]8 H@ 0H 1@} B8 lӠo`pxkv*&-( Gf j)!؎_@8 10rx[%/ғ"c5 cI`[)r{`;uC` ڨyTjVd) 4^0x7QLV$<5<%5x0b1D$*PR0d`!8de_Ti`f"tTGNq}%x5(Ê6YPX$P282}@!a"JҗY' 2>LU5  pUsx4 lpC KKS$2 aX7gC=EG1IRrGՆb b0/a愆) b<)-:.*DEC$~ XQZA.4Q\TklT;Xl` 2+)0@x%8P)Vah/N9 W3Xa .j2I^?CT^N6rI Dwr6p! 3y%y3GY,鈷P'qA4DՎQ>YG@ː,ж3tj$7sFeBX10YzREa/bRhJJ9ŊǚRH&\{dv3Z-5:qBad'+/R asGu/,4'ZaS! ~>s/'uAr b!Dy?m>3@ Չ zC_! + jw BQ58 G5'ҩ[p4' WA MkP2~,@A=:.KɴG$T2jk}&$z;SE۪,ю{%S{›(&_ 4-[T) T [н*9r+0S-Q;yI:IFfAp48i 0V=_Jr C± GU]7FNp$8 ˱Ki@.ʛ AtI +q'W/D:/i!*gjK$,e1%jy0(0[ijRT,C%b3[ z2\`)h +k'>!H [5 "\$x+y8)u?s꒩27f񼍌 n hs6M3P#5_Fa! DEs͵(   \; ɰ5PY7x/п=cPcR{g}|}Fю"> @XVp#l \ځ0ӘξxGe'nVH@Bpɶ3p~qfzL &vHMid8iӡ6 ,Erkr`g Vk]2|&h͈xJL5l=;nljD %vA l(Z]t"(unH3XdirwAyʻ <1+*Gk2f…ؚhFM>eW),MچnA9&kWY3|ݛvTևNE{m]baX{|JH ת.x2>zWߺFf4<>(?08B>D^F~HJLFN8N>T^VNE|LaWNI@(a^ tHlnpr>t^v~o.Ux~Y[vא芾莎2>>^d阞难@>ꖮȩK@(}>>c9mn0ƾ8Ǿni&>(՞~ʆ.n^>ɞιL!>%Y,TWNW*t PpK 0$_&(Q@().) 0_680/:@BS~W" L?q,, ql]>6^.ZtVF$-pR/}F@~Ojl@~ Pm/}fokP_ok`o? } @$gwF POP_"  ^a/@ &""lp\/oO8?aojd~ߢ_`"`&//Z^^"] ^$$]"µ]ú%!^$]]!%$^#"$ ]ȕ!$"&^ħ=s^Щc'QcJ83NzI" $7€ yvu^i8|v"%t Fa%E%CKIDjLG}UؘL $ U/-aГMˑ ЈrDs; !r%F(\u/Fh! qs YԪj[(훐Lv ]9S pć=L8!S}'+`cI# Vـcq3NWxysXG]w)U0\Q) ~-2Ō|sNu * #Ho $ۀ nGJ4Qp 哖!ppbL 96]gْ&!ŸnۈpW!-mRKdapցbedy%9 &=Km:Weu̙`P1ljDb|L3P)ڌ310)}]We %*ꨤ^Ҋ2Rc[@d) O" 5! 
$ra!uBPHHX7 (]ફs1 +,i%o~KZږj" M 2BZ q\ICqB"2 g>aX@R  T(n}Bh)Kwpj HE $\ZՅR Sv{3Lj("M&pb3ΘLSW]CDf&(]ɌX-73 q 6Bm4~݌g5@SqͲkpJ!TBZyUv"gl'lݕ1Kv {\)M۾Ws 7*]gKxf3"/ Eꭿ )NNxߣJ{Ŷۼ(>a ~"~?TdSifv;@A{va`D~_w#ov^%.c\wzDPi*T!Hh)P@Wa6 DGCZ!(FT9BE.PP wDN-5DFQP^GE)"c(/ \ AhPJbC":ID$5DDiEv$;FCj&P3.#"86Np5G80ʯ_;DbIL$Sa\U6Wx ٥FvRCUJ9\l%ȥ.A8}'a٭4f^HK_\Ix\1G> H" ,)G J`BI!H^^Jn:Uz& %Ad 90^G ~Pam;D<(ނN$Ce]k(1sK!9 FdR c۸VzMk41fj̋#ƩSPYEmAP9Q=rL(B5wTǂmP &Y>K{d#_ `?; P ">t2_>{?  (tļ*U`i-kDwI6*&V>@bƶZˣަr2L\$Ns^(G9Ԁ-NklYJf2*ws0E˟ Խ^mA;rZͲ".SuŸQ x}؏F\j-sBrw a%:iéK{2"vN܄t;wS,Edhe?q#B68NsWw.y&^"6{iDF1=Jj QzR)@VfϽwW 3ݏ4q-7#{"{%OG~-ȧ PϿ`%' 8 ׀8tyЁ "8$X&x(8@Pybq]0. a#3X @=;( =:8Eh0aNJ28P0C9n@Gbx`0@^sp0ctc5: u(P08,G@xp`0UH5]v{ 8R3VXGHX6L8ȍ^HɏP 4xc1؉X ЏcQIcgЋف 0x!?(I@0 WI 0(yH鐠Hዋ:+^ N+9H(68cȓ@xk 0Е^ِ\ՐЌPIp0~F)^Y~g 0 io:p0IqWsI9W Ɂv8_ @~&^ ! .XH!hvIx9Ap)  % {Iwi"?@&i^9ћ\:w'9i@# @GJY I:H!)^I(^D ؖKn IzyvWž*F*`J9JЋYb9 7+V/Z^Y$y> Y^*uQQPSzbǏgfjkI!9\0x i?oh q)њ8 qeANٮjꪥ 7ٍ)8YUI1X~RKY?o;=:0pj0p 3(s/(io @ؿ)nɾ_ Ex~ieη1/JNs)ѳVޮϡ0.zH1_i?np^]^ ^]^^ ^]\  Ȟ՟ׄ \Ӵբ۪ƆՆ]0c@˯]&_<^n _%8HBj@r0݂ hWAL&8PvFɁ?u'Iq۠w"(@z)҃IPjI2dBOy%$`*U V肈 # +[]J^ޝgMs^խVmA*hxoTWy82Jfqq#l|`!y\xVkC6!/866݌6 mYw*Q,! %3 L L`Lq O T@'*^:@h 2dWDyapAxm9OhwuLA3w^,c^D~|l]u\e!J@̸zg R 0A:9Hj(LQ_f   *7~~Pr)$#e|2&ϛp vaɜvz&inl3b7s85@w A΃@4̈IϭҀ~[z` R0Aw4"tM6d' @bʺ T?a"ޓMci׵M8mѲ~0|sϠuS#L;9OoDӊ 3Њ-9ҊNq 9& e!׵d#Laϟ՛j͐=-v5rak,8=_/߾<T-"610"؃$-G|7J9\U>.B~ţB(8Bypv-w|'1)f2 Ap/D\` Ox @^HD#"$ A ;  G|dL%.Y`.tx̣>_@BL"F:b!JZ򒘄F$' (GIRL*1%DU򕰌,C97++d10.Ib:D f:Ќ4IjZĚnzftIH:1 )g<)z̧>)OmF@JЂ4BH pІ:@9z _ͨF7щbdը@Ғ:JWҖKgJӚ4$( > .(d4xiӢtb5RJ2PJygu$U\V:ҭb5)Xӱ=*| ԎRE*\jS3-IJU+3iW;װ*-+bҴ?PMFTUMlFR4ZwE XjWֺlgKV`mwq! 
&#@8ЍtKZnuq;h x\U`UE:L7o2`安0.r\1/F x&ܫ;X"ϧ$wIA}1 vp{2N02` 'y ?ۅZX*!q]5^naT #Bhl\p\(p'\dIr-wI+leYn!UC2{NDk'3*ߜ$ `NcBȧBiU&èV]йWv`I;咑\Uz5kUx![ ߸ͮ˦clBˢ\sc:պ޶-28)vJm[[f7G m}"آK}Cs"٪.)e28nm\;)'%/'2cэlw:5nQ⫾7qZu(=e5}sQ2&0ҟb@e^q|@ŭou22ȃ@=HCh*WS9`2ACRUxWH0P]VX(DžiH_؄a*}`hux&#ǁzh؁H8x2 h與Xw8tH^"8V;`2 X'cu8ȊAC؊8GHć;@\C㋝~gRX~H8|Шh8ը臓8(؎LjX8Xh&#"1 S 9 i萮xIɍȑxy$ 90%')91ٍ|W{|r}7;~-/P4DɄ?FHSKM>yGP9Njlp_نXc9aeY膆\ymbY |syg:x6y w}e׷}~9|8Iv5Ǔ%y3v}'}}WiĘU~ 'uy5y zn\zV7Iy9rvކ'㙁biIIfsIF |Y|B`c_eP;P_I"m]X Y1t `fa橙doDɗө}y;#@!0o`#r Z_.f 0^f|$![f!)E&Ejm JD I)w 9h%'%\hF$uEKFfj7 \z>zmnFCڗ):ÝRTx+Zg aq\hJʦ;Acq=ie} %)-jwN ꪑ :ZbCuڠٟB 0W_%xh*,&i?lzC y*G_#P" @j&Z^$ʯj<62TEo!f66a%!`0bf'kбVw'j&N^0^ "ۭ&@ ۩oKDN9дdְlDC[춴IbT[B][Gyi+V{g˜e۞FKj\R۷~;[{id_z׸;[{2iNx _˵w_;T{kam[sֺŤۻ{J;a;Bk[;ʛJΛ`+B[+KB_AKכ{};@Y^߻A;[Ky_`kA z+|{c{`@{`ۿm[g,`|@j & =7P;$\?³ ޙxe8;"J!bn6#PEA[@&j QA[ dW>k! uWk#I%_Kǃ!U fKXTsp&;0l:c`܉}Ŝઢ,&#p}̘*LoKY̟`c|Rǔl0j؇ʲ*7^μʞ;[BaƜ^xj k"<<  9~E̠̾<phW%vd xyЗ rˬMDi}֡&ڡd_ M d¤ӂСaezaGM,\-C4 dtj\]Sld=fݸhwjEl~-@p=V<\׈MD-uKEA[ؒ=cbBFٜٞ٠ں+؃lل -ڪڬڮڰKͦ؜:űۺۼ۾]L=Œ}v;Ś{ȝʽܚw=]}؝5Lq-`m]ށ'<އm`5C; Z}mME};MnA=` dT7JD.C^7t1᣺y ~-$(>|X 2KnH<^Ѣ#9NF<,!,P"fkZ> ]N8㢫d\A^LXvasNu~m`l~>nAhfnU!/p!~&~PN鑮%~^މ>꒮Ꝏ~N롞ꮾ굎~^JÎ Ď^p֍h>jb))nؐ޾^^~ھ.Nn&)23Ѯӎ^מ>?/40?@nN x114_x:o4`9>?@8ύBD?F_IMOQSUOWox!Ec3/@a hf/kspОmBZ+@-/o1FSv>gsf}UߒgOߌlMn_rH١?ߩk_<>z/y),$wfwO t7?;>{Ư? 
i-\y*JPUve=>v ]^^^^%]$]^]%$^$$]׈ۜ˕] ^"]"^ ..u>|@$_!R $ MBI(M&ڗMHo]K͛Ա3q?2@KzAO @ԨW.$5QOKx @,Iݻ!Ys/4> ަ]ON@& e@"'j+^%")5kFrc˞S/f}o7vL}"SD`'2 Ƒn5rW"Bxڹ#XxGY VWAA$8QWq (GNR H Xb)e%V9r[\`B?hblJC%ʧ"m5s4 sMlT"Ie$ZmO@؉B dksk5B`4G`b̌XL(Н֞"vHV&&bׯ*֮2:;9 ?)B \< @D+xw*Y v'&3*rcvqmAb1w=b,GTn@sc8($piW9ˊFIpXȴ63m1805 l_E]p Nx-QFpփ:wxC@(`Q5#' G!!%j, X@0y7H0d,e P.-tYs FTF048bA2oii\&s%X=&))FaE٦:M|<8'Y`SE\> X] )9Mَ+r(AɉmuK](5|ATX'E3J/1M}7r3p*y@+!0a ?p a!ks T׈ X7KMmZ/ݶ\:=Pnv/,`|Fz?\g^ĩ]KWx I7-O#4=Z~}$ ^FG = z2RlQI_b:}\hK|qNGb};ԙwlxҝ/^}yοWUh}WzDgSF6Gb* VxU &fhP E Pnz hwh76)q+X -Iww1325~7 9HI;R/wX jfC|NGX=h?2}gPkFd|PQTtq%Bn {A0aw$|Մİ}kp]mm` pf,gFdhtah jlPiBu&aڀ8$7 S&ecAs6e0{Cq8K%m@bbgo&>,7S*NKI(TH8FʷXNGchcPq7sudph^dG|(}H;T( 'wnXcihqxa`^p-/yØ`d|f2%"@7a#o pt(e;qq^^3p(,狯Xu4%fZ8ak`s6iO6Ga[?cX r/ `26fyD9!؇v%(PS| &8avk @_cSs,7_bLdy1cpGc6d0{2"86xAHiy-}I,AYxiXԙOQ}r,ƀ0y i*ў.2y z܀(E5)tP_Q+*`Y6pyypG]ph`v6vec bf^m6dhF?VyN #01a ՟^`kwPapdXa+m2dI`vmXp(mhgfK p~֠7evxyJ Hݶdi E:+)RtZ:*vqbxh_gsvkix^9YkպwzdڬA`QXX Ђ#($իj%)ɓpp``ȐzvA eJ`Xg&eƩ0l`8v l+yMh)SMTE&`rt~J4FeX-h^@WkcYE& Pix8jXyDdD~.[%@`꯵;k'k c}Z@^&khз0 JLK d)Xhk96Gƒpɲw⡶k;ɑBr\tjvk"& ʙ)MzMeTaJd w),mMv 9_VWc pØUe z+iJWŶ&$ᠳO햞p~ 1<1[a/1Uk2{-_%,  Ʌ9%a& :={.*6 s¹3l5*"D !Zį֢ Eŭ[Oo]MJAmkuMZ4cBlnLy"aalE(f^fjj Rܱ@ ,bL+F%vF_^]@_ȕFp7Ʈ*60hx\_¨kjʭUȕ @Đî'‰ ʵ6nD6r kՐWaڐ7wkзpZ 0vj2O ļ +lqƬ~&i{zV2&  m+z5 y<۫^p`c K&b([|K w-K`>ԬlWfezr@_e8^?d FaKc[ImiQ) + -VSXLO3mblsZ:m0=Ȋge}u(6vh_gh;5~Y[] ˢJ]|cnZ5`>7Idcd™;KFwegه_]\ ʼ*<+ L|"}/m~1=*] m Avvq=ڐѰohx\"}P] &,|I< Xo:p`20w*%wϜ&E^>̷Z .3% >h=kd=pWfzhXet|domu2h*kۈe̮zʶb\cF1P> ]Z 5 Jzh.U9^k=cόxiWns ߈LOل>;EI[cX$NLJ:ό'Ϟ~]  )€ś\[E+ ˕HFB][I;^̻ҽn Nn . n (Vc]˨׏VìX)1~jցmVfI)ϫL-++c n 2Rߠ~zfz. W ;_"ᾆ\#ONxo Αy~1 Î)B. 
p)^1Pp >?|A=X~w]2հ NKi$s/ܮ,U<Kp ai˧ m@@`ɀ}iŠ_n_p oa/ w?_r  ߳&Zþ؟ڿ\˿&.9\B?_^]]\¾ʒχԢ٠]ߙ]ޚ) VKU*\ȰÇ#JHb ocyA IO HP]q&Q65h$ϟNzJW@;j2dh'A>(K`/"fXNAz,!Z5׹x e蘪RH2` 3-΅{Unޖu1}w"Sd>^ q6nSuL9dK' @@zJD| !B 2(!(C:{\t^ObAK~eT=5^J B^  &A0 `# a 0f^(R^tP "L{ԠvV|G͇_<(<@H&$% QR 1~F&煑A1bBLX%$C1%( x*+WX<壐BXV6裐J!DH@?1:=ySYCh1RI& tq`fah'a (5AKY 'haZ3ց y!tЙh`0Bi+X(PLBAɖx@€}ࡁhIP &cgeeԅsիս?/,:O@3T+,5f!ڙEYj LT-zA!}k̫W̓9Rs#=٥\\Lw`!͍ge0vw,6UjĶܦHܔD7(u!υx*l 0#jIZPkOBHw[-hEMTZf9| T@|l" epl\!*b32TŬ;Їx7ڈ pSI֨Vj*xGSӹ[44MN"s:Ռ8=5wozxNp+u Z9ֱNXdXy}ra5M$Zml^mXsCX\V6]5MHU}uZ+%MbҔ[M:(i|\w%UHv\醁E5m@MDvUZ9n2h&}/~(diqWWŤ|gWjT RRu~cf?RQc}IdcKonsGpz[[WRR5z0JcrU.xzO.eUqK舫novh׆fVdTKhRh戛dlgw#Fe(L eL#Ʉ m e'w .H~6 l~w|7Ljh/HahvzHYO?ax ]?Y~isqkCm >/]xt)~DIju 4QJ0294Y6y7YfQ8@B97HJ :"73Fo<\ٕ^\~GfyhOFizjpɢtUI3kStWlKK9):<GY٘5iimQ9 LaȌc8F𡗒×痑2/Hv9~ Qi?WB8,)7N鐪 Ù MH?eI q]\T)}9B$cxxY .)/UVYsu9wlUv yE[HR剙If evy)6Ǘb2Raz.COWhfMGnp&ʢgagpr\VדGiav}fC&c4zETjI:b,z\V|,mpfFJ%OXP7vW*% lI5W(0w\) 9>Jcҷu\&(JTf^jQ1*M&fjE=VMaeR> dGd}D{j$ox~ʀٹ_6cJcJmyAP&}nJNDq&٪?NPF[jc6Sd&}mG뤨HZ?(}$Σie@%R&ev[ ywܷb%O0ǭb ($nTV/agc z'kF]z['wv%k5%ha9[cFb2N^NUvlN|[FH (t]ʛ02Qp铩v Z$of'We]+|7@+AlM>5VDw&5puWv*%H&w+DfʓG]D0NftXC($' SXj[ln[p;>r y]b兲F{y}0U`Keka(*(lEKTv`e[R˼vF *I[[KY@f JK$7ra ʈ سL-9'eMp"g%/7cswyȷ; h,˟OZCeG+|$ ֨;c/65Lfkj l_^tzgTL;SjH *eN,л-!@3RYpZDzvslzEkNMQ,\RǜazyJ%MSOD%gsE]Dv]{\{;xOeMTWA:Y7~ǚ JYrIDlgTpG̙ O*hj\l n =p^.G@  nO>.(P~~PP߰PrcL 2m,:4!Ϡm/סA @#@Gr%N2"*C"@!!%!aʀ33dU-ber#&14mT. `^)A=2l^Cp>xS?X9C$떠(Dܮ(>,9eb&8 ,>//O n /Ȑ'P'h2b*ub&PP%01qdΆɂQSC!a6^!rϑLbQzА J%-2p2!^ 6aR _e^"A#bO_1A26/)0} yM.A)% sb+-@+1 0,r F]. ^^%!]&"]!ǰĞʠȖԼ؍]ʯ]㛾&뉹%]"$61C}M#0ӵA-ċ̭: rGl` 0VjVd !MVog+(܅v," )1ut$ MtLb[FIe0Wm}j.'vrjJ(Qtu!FqXW{p"/#/l֖aĊ{p Y24l@3!>-bWC wS"_no)E!KĿ 2,6!e ź%wb8xLxdՖy{{Y=TMu z쩗R{$|VZfWp%Ü&Mr)1#& T#K>^pӜc@#O2#%6sb^U!.NcbX"t!8FRV9׆tH7fKN4֥b-6k5S=ioy͠IBIlBpeO\~B 6]xP1kiK&Bd,&"䪆ūd?^Ta^U]Z*M"f,ho=b7&  Bzno(,v0h 6:ꬔRymZhe[4쒁7P$8YB tU!w NB qUQV72 ',\$=UROdqqpLap5)61юz 3bCwrSڝڦͷԹ n'{I?ⷅuٰ GB`w砇| 騧K=>azMNy5g%./o'6]LWo<`3pMOzvMqgPCL7l6c}p_⇊.π_&AF3@@#p @b `0 F0! 
`E"aK4!"t kh n?.XRi!/zaO+r!/(N.] Rc\>xDY$Ts 5h@`4 E6򑑜.x,>p_#H/~ECd%1hF4:E0!ـW " }`ΐnr"aGvD x#1@xᛆYMvF7)(x^Jޓ'%gYNܦ!6- En<4Z2LG4sk`4%Rjњ!Oz. g*OΕK;:?`Xg(KBwԞmuC+RKN}ю6iJzU5*.ȸ^TkakEڂ+/Brk(OCwejATyz HbFZ;҆niKqZH p"U+0ׁbܻ>^[^t:PҐF@W.7jEzhz70 rXhk^z#\:^PsZOLT,@I1/cwz cȁ0cd S"GTvFc!X[uDd#@dFarkN|x[.1$n}J e<2f8/* ;%:mܞG@?Uf9%=Y4ˁ-ﵰtS,Z)Pl ;Ics* u:@!ERh::ݺ+֤WVIFp8gӮc+#^νP:9 T|}&>$k(. J ^"VjRE;&G I;K{M;ʭGc^]@;g sQp,sW[A) p!bQnՐⰵeʩ09t[Qa'7y¸ @:c Q1*|RVlۭTSk8[[1!P4F`ـ;B5[  4qVbKR k+j ȠpԴ1Fjm#[?{ Ἴ mv;0{ar{ a5ʖ@-տ'9`  ;;Zl$|8 #:!v{y 8#l)| c 3/)Õ7WÇs\A-J@ ").Ć4VR,cV T>+ l4 Ib\|,-omGq s,u59ΰ)^?G#~ -N9%Nݽn㻳@D2OZS}^:f",.s'^FH>J~-L^NO~8| *Q3p"  %0`:,21 %$'C0Mp!Ma>1 e.-gNitG買!#0Rr$0"#;&00*, 0"N]δQNP]+> .웧 3]0Q%1Z~&z `.PkXXZ`*&ap0 `Ξ&{T> *{Ne2U>! 6Lkj*,$_bf, R 2 4 68:o^}r"p5q!o2qr qg$Uw|[]\B h f z?Ŏy܂פּ6Q,u;fnw♯g> TȠH.Iҕo@%ͧXbfUCAP "ʐDD;E-Qt~[F=ĶX^ 3aq!yu4J(Qmt]\_F.:uт<Xh]$ӽL DY:&"DƂF_k-B%;u/#ZuW{`ҵEuxdwͥ xvMhDw f¡bQq0mFBmI{] 5`i!l&$t 2 ݣWB $`h(w"!(ȆA HhH'%iu8Í('Hmug[N73 $xh bJdn!9]TГ|ec]"Zf*vxJ֢n]yjg&zd^ehZzDzwj$]2UI˲>j!I[r4(zѦ +4 ҁ\ƀR-tÌ-nCZ!.&<;+ '[:)]q*6Ӈ1n7N#嫇wnӮ*<(](T|*MFB=i7m}_p} y}/=,S ?˒Ns,S9xLǫ΁`9 (I.{a z| >?(L W*V aH8̡w@ aȷkCL6%:PH*#$a0Eʰ] H2QX<""76pH󕯎x̣Si IBL ӨF{d,(J~W(N6 O8ɈF&Q*IH. 0$,2%F4 ,I-l H #i߷H񔦕: lnUҦ0*L0N$&c\(7މ\5[.wa1B@2җB?S}%|D\J1l))i `C ak @ :c S.:*_hpG G6mE1M(R jPlWN;=jjڻLu _d#Ju85ZӈTt}T!(Щxyز0`wqVPz)6Tiu;"KC *hI{P!v} 0cKTvD*ڢ &\QȽ-ryctKZͮv3zElKcMz8^.׼덯|K~L8o;PK2(PK&AOEBPS/img/strms013.gifmrGIF89a\\\)))ddd򐐐===UUU::: zzzyyyNNN쪪tttkkk---WWW666񺺺𤤤@@@&&&LLLե󜜜"""ggg///RRRbbb㬬ǯrrrZZZ```uuuQQQ~~~mmmIII־>>>ccc]]]}}},,,hhh^^^lllCCC***iiissswww888aaaXXXpppEEESSS222eeeHHHŋBBBoooFFFƍ___OOODDDnnn???fff{{{MMMȫJJJ[[[xxxTTTGGGjjjPPPAAAӳVVV|||vvvYYYqqqKKK999444;;;...555<<<333000777111''' +++###!!!%%%$$$((( !, H*\ȰÇ#JHŋ3jȱǏ CIɓ(S\ɲKNI((@P-O4 $3=n`L +@'L\ʴӧP (A`ClI/cC0AiOi@XD7B]ͺS @1 U4 )U0w;<7wr`EshC/[S ӫ/w. K/a(! 
ʔK *|aEgG4`DX/5I,0z$hK70?Q+ RA,!@d%"`g'4O,?2-X %\vi)Ax,d)l5 e@xf @9a& 3/$0󍗔VjT9Pې"n42PLO$A@̠ r.,^Т:ߘN?RB5Cs!ă:2^HԀ$FG8-/ǐa9D+ǵ *)P"\b\#GL&˸ `FЅt"G'9V@,?qPPG @$@?Y3hPֹ 3A 61l}`}tmx7h)qބn8TiBpsG.yIs':NO%t!2?!)C#Ԑ8 .h ӻNWo}!n H)!DK$ 4sV~P6 `B 3 CK P@'u`xq cVF'` r@xB7i% T 0~0$W"3#nBXHd 6S"*j0G1`Q8e ?PE1 ' CGE0BR@lP9@5(@``5&k p?pD  1K,HZD@ġ":D !B&14c@B5 a Z8FR!M<#A FlAEK[&5Lh`&( p >ABq/61/*h Є7P 0>$3F1`8ԁh$(L,j D(AnӝrR`R0(AI;BHz` XJֲvNJi p\J׺vP<І0t֐hg ᱐d'KZ^܃JhGKҚMjW{Zq b0" *0"(?v 5@Mr\dATiͮvz w ?Хحz ( d_Kͯ~L(s%"u[8ȃIH; ~!`|p@}A.oX* W01g|8.BfG@ꚸR(L" (@n,Q cK} B9Af`L2%VlbaiILVd> πMBЈNF;ѐ͈>L#UF)Zq f ΐԨN5}E à r@0c /@PbNf;ЎMj[N1], P`P\@ا%d%?d%xbENOcmo$;?*8,sGl\g9 j 0<%w@z0@x% Xo&3 72'\ >ְBc .@"V< BH!2@hOpNkQ2+ SH wnG>x=?4LC:A,iޔa)๾ݐ t j $p`QD`s8@WxZB~d@AO[Ͼ{ipOO =#?tpDh +T i` l 0 @ Pg(!z3'1Pz>Tp%$#a8 `S f` KP6 1\ hp 9g HJL؄NPR8TXVp>%ce@t`0v P 0f0 B0Ud0 HzwAk 0Z6 3` PAV0 WT qpII؊8Xx83 ^o^`(`&P`?B m W] SpP @0 Tp DV` A@ f*ps&py-jЋw_maPt0ud  z % d+0 Ep1ׁq Q@`p y0#4`  ~ ! Zw%([3P`{w 4P  P`v)$c89zz A m Ѐ d ,Y@0 1P=pSٚ9Y) } Dٛ9 }c @_p@E0 D0Q 2% 6 V8x9IvQ@0 @i%r O3 f110Zz1%a)p1`3` )n5.0U  j#* &f + Q0  M@ VzXZ\ڥ^ؐ$2 ? _%%%帓i$ y[(.g `k\Pzڨ`2e:\c]ҦA `  C \ɕl`%Bfbٜګ*A p w\2 [0|F:Zz*`pJ"j`&jgkZگ;qP`u` A |(g ^@/`;! y "  *<۳>yЮ:z(D NPR;T[V{S 0 ADpЬW 5 0 Up  ΐ0z V h h` 290DG+Z`۹; #b^P@P @&B  {q 0 ڀ;ڣ 0`Jp ۽۽`S*10Z$Jjje+uᩩp@ oPt{Ұ. @ , '  [ypp[Jpdc ɚ  XPiܐxp j  y j0 kFJ@ +'\+AHD@ P e@0%@ gpx@ * e` 0 ɚɚE +|RD^F~p%%_\s7} [a` f~fn865= |Z 0 ~C~[.~)@` PNRէ^gаX@ ΰP- t1va5" pn"% Ȟʾ̮}x}zTN"V.`m .@ \Ua74p4.n~ӍW0 м#}sйOp !_( M7 ~<ئQ@fu  s1]=6cmemwpp- 0Q]MO:]5p!ap`EpOOQgojr]' ֤]qЇ0V0ٮAnk>>qXgp&`Ӱ.0- cq-. 
% %0 o Ԍ9P z5@P B$LO/z1TX(@)+C L (@ .dC\( 5nQ.,,%$)U2 d +e&3u|⇿x% +E8QJhT?ɬD[a͛9ͮ]PqM4Sr:Z^[E4 .bpñx-aͼRxa:pU.Xa*2Tk0e:渢`QH/F΢ (@IzZL7n] HaoyUNOK=U֙JG<߸" G*af 4*b  XNʊ9&`DTRI 4ֺ;~ D` :{0 Q XAhld AAHH$L$lA2XA/ %'a&q':xF/ * BBlKXKvQe "Bey!h4M$Cڮװ -Xtn)Sh0b-~ jxCadb-AD0@uYpt1lxCT3+P1 e+yE 0(C>GoYR(˜B;DQbT\&l% ްQ]  b 5 :r13Jk&̀WEZEހtx"Wp6@h7H1BPa= J KY VErkhP2Y"N:!g$QC HJx`DPG2v0 ONHHYzQ?1ULbE8M(TxbAnHF`pиx-[&/ !  ePyXbG `+SP/<0U,XE H1 AK5"d uPZcԠ0 ALdp,`d"? C(."ANb6 >,#Z(46ӊ*1&E ckiDĝJEB .$dj%%B#S˩,ЀOo82 WB`4AU {&! @KP.R"G+ #VMq]@NQ X@T 138 HDCZb`X݀@T: \nB`ȁ=Ȯ1-X8 6y;oC8?S?`?ciB>H:go`[z8!1h7WX(8CQ 0C E2("'Pb@>XA0X PBQIx=ch!N8HbXV@<9C XȂ+p>N`@sD `)0:GPVQP*8H[+>)309V0*3DY@8Hx Gpg8V 7I=E$P0ggȄI^(E,hh[XX Z (9e(I@T08>p!D J(V`ShD><ͅ `OÃ<,؅ [YP4Ќ0NCNS<~BzЄP!h1OL0VYʠ AU8@ek!HP(BT3]eFW0O8tSӍ(3d8#KpPDW jRX1 * ]s)ȑ0!pl9+[kn#\ I 0 y@x>L4#ńJ(:5P.=VP2fMWVo@hH^VhfWPHkX ^(܁/%L K@Z2@6| (P`''a P>`E Hep |XE?9Ո2S,1Sh>@g8x:Z @t@/Z8B8EE؀Ba&؀`1؃.OHll>EyxZh*_%Dh`\aطKu":pPfH328x88\XRx > 5 N*} 0]EN@8Y0}*TI_5]5<:xP+pT׸ (@"h04/LNꀀ/>k@9Bc/3TXCK裇F3 )K5 6M82N83ts xae!8ȅh#ˊbhO^2xhrhr:dX؂NXo>`FȗH^ ViC1P$8rh1n<7> D$hhqm Om\@X0n"p0p΁؀ 8jGp?0^ lP4r)r*rh-r.@ m-^!wmn pOvH@p^rhr@(態ۦjj8aIDȼeȇXKAP&oʰN+Y$<87?J9o 9Eh#Z@&FXAw_, 'X  6΂k_o`a2{"F80]*PjPlmnόhHs+ޟH `Ivw |t(@۟! xx9tqʀ{H^5i7oyMsV[Kqߋ Z`QUq/:zf;=~`9axufoHXΤ6 {2{B{Mo?B+,hqzWu=o0{FaPC_e@؀r , ̢XI A  =er>*P:@|Є4Hpd^!X h %8bъ&tA.!p,~DIRTRq !pX9:d\<0C%Ɂ[},)!Lⱟ?L[($t7p'բSn%B t?0= !84 щEd 2 5zP/ˈ3%cm2z]$H5&SB&$G1x@$]ȅWoStYǒֵpI` %/J+|+W4MD؁[14々 2P̡(f=?P P$>  CQl cЂ ,ц8 T `ExA4$QgԹTLR L%p?Xd!B1Rް9ozz3C%y%9# IxC㲦0{9aoȒO˺/~ J`QMC)4q u!#0& 4'dЌŨ(@D QjN.RdR$9@VD/'1c'5$1 jD@P7XC0WAO둅療~D ²2v 4: %E2:8"I # AdGe(/h! 
@ 82qJA68lVqtQI!dpAnɘQ ])7U,d؁P@)E ~ #hvcK3t]2p`hF z?;sr?6$xDtC018\iL,@*A4wD@|L*$ EMX \7XpD5 !UցqWN&,"  D"I`l9raD ,"!@I1) 0Z Q#&4" ]̄>օ@6,b -(J2 bU!N\  pS "2R([)"m#[[?D.XB-0B-LĀ1@I"1>1QDD@*&X $X 4-t4.[5A5j]e#lbь$,?L K$Ws$L L3H [#S7D!b1KAP?@ؒX" 8[H,\*X8*0+\GZ!6Q+!\)ĥPKvcXȁ?l.e 800#XPEQf"Ll@+I%a-ŀ8 ?DA 4TCIPVdL`eD!(*!̬RqAD h\ ]H"Gb0 CdTg$4)JMFKF4T@@/DCB%@40~|6@*|?3H" |13d4@ 9D-TB\.h ?tK VLG)Pf< NH +:$_ p jRR0~Ji W"1 ~!:0:Xcڊ,J,Bk˜j-⚋L$ C(E:4(%B) (sd`2X76A]"( X`G ,&1%#Vk b賏fM@1TAD8Gl$ Emp` &jƝUPT`^Lhi`߅| gIJG>X 3"QB馻,묦-:c\gmAa"`D Ȱ*XdR PCbb!aڨUc͵N%H7F ICHZzj'. J 9܄Ez0`NEhnxr6"7CabTbZn"< "E * aie+@cx(8$$F*uX/H,h  HBB N(APg 0AO Ŗj4(B6YdH?V8o~{&0!0j?j 6Z"2@UAdT@2 Da$90T$ 8"TDvzp8x!mQ /\{O΅b0)̢p c4D?a (`PBKx" 0 ,aN/@ 9Ѕ>Tp:0D'a 1X@B 5 A(.Q@ ycZ%]!2 V&.C@|O 0^ &'}G$TЃg@ 4^3BR01:I PQE6!H1+D$vQ9@C#hAA 5c\NY!'y(ň @+@ 3F0 b0eD IlFA~ @51n|)jaB3PC HAY`8uMq/b@nC?脌 [](L0l2clA!C"J p`n53$0 d0B8`b m8. ш^@Pq%, 2 pO\)X5zna ΐEXЅ6XH1|C,"Vp)=te|X(EUG@"`1|u 4F6&=`ָD OR3>ъTXX9~l" b~5 Gp,2bT!6p&XA\]r8Kz0Lh% 9͘AqQ | ( [g 0FqCh,Du A.LD%vF b FP `*t1(#/@ sMqWHolY+LMAB*@" 9l 55B̌@2 @&2'SXxK0Q( !P?LP`>B h&ܬi*Zp;l:A6A>J{3)$;nP:5 L Fh!!EMR&@- 8L߄ 2u2!n8z P LD3C 7.7@]9$ɸ\dgC$Zp$l'À/b4UrRGJ>hbi tZ珋6'︫NoPebD\"0LLj:XJ'b,0frFG‚ cP*|@pPA`!p‚V |j iiXЂ\0@`hpLP @L & N`ɐ!H@ pԠ+ڀAAxd QKP| < KOSQWRZP!ր[QwOQ !V- ҏb _`q'`!᝴d1j!&A a ^@ב@4ar~`DAR )`(tb4" !)q"!|4! d1+?ۓASa;! <"8`CȒ4P"$6C;C?DCTDGC5n' @CITEWECtOk-8$!f;;y4!\i`@IIIJTJT>csJKTJ(j 4T 52`BZ\]UB\@N" 1^ H`/I W4YdYU'NvaVaaabw~+A`#c;c A @eWe[e_fcVfUV0Qu_TW6-p`hhiVi*a ȀikviAV< lmVmזmm!!rG5Kg A $^Aަ8 a.d  *&F0R BXZQL % bf(@hb)⍅0縎b5 .L2{Ϗ d BAĀx' Ƞ!>"@alF` !zAp@ah Dth@rAL9q0e,*"͆yдŒDΙՙuBU`l @xY"a,2 !>!!؁ڠB!yR RWե-czs+jz$ab )QA @\@P : q<Ёa6T<< Mw$a!R Efy|6"0-ywi l4a2 :᜗"-1Y@/:!@߀ a0"AB  #rp"|xhr L3B~ ,!@\q!i΁[Y;f S0 V`㻇% b @ O AC{st$,A6 }@^2~ >L#  \ L!hNk4ZA " DA"`*hn "hM[  RD&毳4@U+X<כ+ڢZaTN bw "  "jaMa tAݿuy\ U`aXu!8@t:Jm ĝD!` o8V "` 8D` jxR_~'ޘB 2>"&@A H"܁np0!Xɣ}u+i1}ܷ| <0a%PÀSNXC 1!  
.dC˧OD%6/?%JH_}(QD%J(`>%˗D%J(Q|I(QĆ'QD˷>%J(QD%? 4xa|'p "Lp!ÆB0_>}%J(|I(QbC/>'p "Lp!ÆB(!|,h „ 2l!Ĉ'Rh"ƌ7V@8?~ H*\ȰÇ#JHŋ3jȱǏ CR䧏|)RH"E)RH"E9>П>"E)RH"E)RH/_> )RH"E)RH"E>sOH"E)RH"E)R?~ȏH"E)RH"E)RH~I'RH"E)RH"E)߾)RHO}'0?Ag1_>)RH"ElH"ED/_O@/߿'p 4h`| H*\ȰÇ#JHŋ3j/_ | '0|O |̗O`>~/_>~o_|˗@~O`>~/>}6nܸqƍ7nܨ7nܸ|/_/~ '0|߿|70_>|/~?}ϟ|?~O_7nܸqƍm@8`A&TaC!Fx?}˗`|G0_?~/| ̗O||_| '0>o_>'0|70ĉ'N8qĉ'N8q"A&N8qĉ'o`|O/}o| 70_>}߾| ̗O`W0|˧_} /_} 70߿|M8qĉ'N8qĉ'oĉ'N8q"A~O_|/߿/߿˗߿_/>~o߿/_/?}_O_߾߿|/'p "Lp!ÆB(q"Ŋ/bT/cƌ3f_| /_}'P_ ԗo>˗o>ϟ|/|/_} /_>~ۧ?}/|/|3f̘1cƌ3f̨3f̘1cƌ3"'P_3f̘1cƌ3f\/cƌ3f̘1cƌ˧_ƌ3f̘1cƌ3./cƌ3f̘1cƌ3f̘1cƌ3f̈q_3f̘1cƌ3f̘1cƌ3f̘1#F}3fQFeQFeQFeQFeQFe@'p "Lp!ÆB(q"Ŋ/b̨q#ǎ? )#G9rȑ#G9rȑ郘|˗|7rȑ#G9rȑ#Gja>}A|棘O|#G9rȑ#G9rF}'0?~O`> o~>~ӗ߿| |70@}˗ϟ} /?O_|F9rȑ#G9r$H~O`>~_?70A/߾߿|/?~@'0}߾߾|ۗ߿}8`A&TaC!F8bE1fԸcG9_?~o`>o`O` W0_| /| @}?~Ǐ?~cF|o߿|O_O~__(? _`>ۗ?~O@ DPB >QD-^ĘQF=b׏a>O>ӧ`|?_/|`>O? o`?~?~cF} ԗϟ| ?}˗?~/|߾| ϟ|g0?ϟ|ӗ/_~/_|>~Ǐ?~Ǐ0Ǐ?~Ǐ?~nj>>Ǐ?~裏>裏>裌? ˗O| $`>70A  A/| H*\ȰÇ#JHŋ3jǏK߿~惘`>O`> 70_> A $H A | '0߾|˗ϟ| O_>~O`} /_/_>ܗ/'0>}/? $H A $H A>} _}'0|_>70?_}ܷ}O`>~O`>󗏟,h „ 2l!Ĉ'Rh"ƌ7r豢>O`> '0|`?~ '0@'0?#o`| ߿| /| '0Ǐ?~Ǐ?~Ѣ?}O`> '0|o? '0߿~/@ ߿|߿߿|/߿o'p "Lp!ÆB(q"Ŋ/b̨q#ǎ/W> '0}_|_ O`>_'p|O>~>}_>O`?~Ǐ?~#|˗ϟ|ӗ/?'0?}#/_||O?}#/_|˗ϟ|_| /|O`?~Ǐ?~#F~>~ϟ|?~Ǐ?~G}D <0… ˗C 2dȐ!C 2dȐ!C 7_>1dXp_|cH0C2\/_|&/_>~o`~-Ǐ!C 2dP?} H*\ȰÇ#JHŋ7P_|a̗Q`~/|/ O`|-̗1cƌ ̗/_~˗| eTȯ_ƌ3f̘1cF ԗO |?}ӷ/_?} '0@}ӗ_A}_~O |_'p_|o_>~ӗ_˧| Ǐ|د?}? ˗?}e̘1w0BG A $HP`>$XA .dC%NH1~ϟ>O`_>ϟ|+`~O _O}'0߿|}?} O`70|}bŊ+O?~/G0~/_߾| Է/_'P?o|ӗ_Ŋ+VXbŊO߿|/?O`>O |O|'0߿|oo`| /|'p /_'0|ӗ߿|_>'0?G0|߾۷}߾ _?~o?~O@ DPB >QDO߿|/|O`>O`>+>0@'P`>~/|'0| H _O`7_O~O`O`#`> /|O`>oO`>$XA .dC%NH1?~/ /| '0|O`> '>@~_}O`b~/߿|'0|/?~'0@Է|UXb|맏_|맏_| O`| ̗o`>맏_| o| WbŊ+VX"||Ǐ߿~O`>˧O| g|/߿'?߿|˷?}_,h ||/|˗/߾70@~ӗ/_?O|_>o}? 4xaB 6tbD)Vxc|3f̘1cƌ/_|˗o_>/߾|˗/_˗߿|/_> /_}7_>O`3f̘1cƌ3fL/cƌ3f̘Ѡ>3J/cƌ3f̘1cFo| o`O@ o'p "Lh0_*ܗ/? *TPB *L؏B *T_> H*\ȰÇ#JHŁ?~/_+ob70D~0ŋ/Zwŋ/^xŋ/7_ /?~ܗ/# }?}ӗO?~O|]?˗?}? 
Troubleshooting Rules and Rule-Based Transformations

34 Troubleshooting Rules and Rule-Based Transformations

When a capture process, synchronous capture, propagation, apply process, or messaging client is not behaving as expected, the problem might be that the rule sets, rules, or rule-based transformations for the Oracle Streams client are not configured properly.

The following topics describe identifying and resolving common problems with rules and rule-based transformations in an Oracle Streams environment:

Are Rules Configured Properly for the Oracle Streams Client?

If a capture process, synchronous capture, propagation, apply process, or messaging client is behaving in an unexpected way, then the problem might be that the rules in one or more of the rule sets for the Oracle Streams client are not configured properly. For example, if you expect a capture process to capture changes made to a particular table, but the capture process is not capturing these changes, then the cause might be that the rules in the rule sets used by the capture process do not instruct the capture process to capture changes to the table.

You can check the rules for a particular Oracle Streams client by querying the DBA_STREAMS_RULES data dictionary view. If you use both positive rule sets and negative rule sets in your Oracle Streams environment, then it is important to know whether a rule returned by this view is in the positive or negative rule set for a particular Oracle Streams client.

An Oracle Streams client performs an action, such as capture, propagation, apply, or dequeue, for messages that satisfy its rule sets. In general, a message satisfies the rule sets for an Oracle Streams client if no rules in the negative rule set evaluate to TRUE for the message, and at least one rule in the positive rule set evaluates to TRUE for the message.

"Rule Sets and Rule Evaluation of Messages" contains more detailed information about how a message satisfies the rule sets for an Oracle Streams client, including information about Oracle Streams client behavior when one or more rule sets are not specified.

This section includes the following subsections:

Checking Schema and Global Rules

Schema and global rules in the positive rule set for an Oracle Streams client instruct the Oracle Streams client to perform its task for all of the messages relating to a particular schema or database, respectively. Schema and global rules in the negative rule set for an Oracle Streams client instruct the Oracle Streams client to discard all of the messages relating to a particular schema or database, respectively. If an Oracle Streams client is not behaving as expected, then it might be because schema or global rules are not configured properly for the Oracle Streams client.

For example, suppose a database is running an apply process named strm01_apply, and you want this apply process to apply LCRs containing changes to the hr schema. If the apply process uses a negative rule set, then ensure that there are no schema rules that evaluate to TRUE for this schema in the negative rule set. Such rules cause the apply process to discard LCRs containing changes to the schema. "Displaying the Rules in the Negative Rule Set for an Oracle Streams Client" contains an example of a query that shows such rules.

If the query returns any such rules, then the rules returned might be causing the apply process to discard changes to the schema. If this query returns no rows, then ensure that there are schema rules in the positive rule set for the apply process that evaluate to TRUE for the schema. "Displaying the Rules in the Positive Rule Set for an Oracle Streams Client" contains an example of a query that shows such rules.
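For example, a query similar to the following sketch (using the strm01_apply apply process from the scenario above) shows any schema rules for the hr schema in either rule set for the apply process; the sections referenced above contain complete examples:

COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A30
COLUMN RULE_SET_TYPE HEADING 'Rule Set Type' FORMAT A15

SELECT RULE_NAME, RULE_SET_TYPE
  FROM DBA_STREAMS_RULES
  WHERE STREAMS_NAME = 'STRM01_APPLY' AND
        STREAMS_TYPE = 'APPLY' AND
        RULE_TYPE    = 'SCHEMA' AND
        SCHEMA_NAME  = 'HR';

Any returned rule whose rule set type is NEGATIVE is a candidate for causing the apply process to discard changes to the schema.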

Checking Table Rules

Table rules in the positive rule set for an Oracle Streams client instruct the Oracle Streams client to perform its task for the messages relating to one or more particular tables. Table rules in the negative rule set for an Oracle Streams client instruct the Oracle Streams client to discard the messages relating to one or more particular tables.

If an Oracle Streams client is not behaving as expected for a particular table, then it might be for one of the following reasons:

  • One or more global rules in the rule sets for the Oracle Streams client instruct the Oracle Streams client to behave in a particular way for messages relating to the table because the table is in a specific database. That is, a global rule in the negative rule set for the Oracle Streams client might instruct the Oracle Streams client to discard all messages from the source database that contains the table, or a global rule in the positive rule set for the Oracle Streams client might instruct the Oracle Streams client to perform its task for all messages from the source database that contains the table.

  • One or more schema rules in the rule sets for the Oracle Streams client instruct the Oracle Streams client to behave in a particular way for messages relating to the table because the table is in a specific schema. That is, a schema rule in the negative rule set for the Oracle Streams client might instruct the Oracle Streams client to discard all messages relating to database objects in the schema, or a schema rule in the positive rule set for the Oracle Streams client might instruct the Oracle Streams client to perform its task for all messages relating to database objects in the schema.

  • One or more table rules in the rule sets for the Oracle Streams client instruct the Oracle Streams client to behave in a particular way for messages relating to the table.

If you are sure that no global or schema rules are causing the unexpected behavior, then you can check for table rules in the rule sets for an Oracle Streams client. For example, if you expect a capture process to capture changes to a particular table, but the capture process is not capturing these changes, then the cause might be that the rules in the positive and negative rule sets for the capture process do not instruct it to capture changes to the table.

Suppose a database is running a capture process named strm01_capture, and you want this capture process to capture changes to the hr.departments table. If the capture process uses a negative rule set, then ensure that there are no table rules that evaluate to TRUE for this table in the negative rule set. Such rules cause the capture process to discard changes to the table. "Displaying the Rules in the Negative Rule Set for an Oracle Streams Client" contains an example of a query that shows rules in a negative rule set.

If that query returns any such rules, then the rules returned might be causing the capture process to discard changes to the table. If that query returns no rules, then ensure that there are one or more table rules in the positive rule set for the capture process that evaluate to TRUE for the table. "Displaying the Rules in the Positive Rule Set for an Oracle Streams Client" contains an example of a query that shows rules in a positive rule set.
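For example, the following sketch of a query (using the strm01_capture capture process from the scenario above) lists the table rules for the hr.departments table and the rule set type of each:

COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A30
COLUMN RULE_SET_TYPE HEADING 'Rule Set Type' FORMAT A15

SELECT RULE_NAME, RULE_SET_TYPE
  FROM DBA_STREAMS_RULES
  WHERE STREAMS_NAME = 'STRM01_CAPTURE' AND
        STREAMS_TYPE = 'CAPTURE' AND
        SCHEMA_NAME  = 'HR' AND
        OBJECT_NAME  = 'DEPARTMENTS';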

You can also determine which rules have a particular pattern in their rule condition, as described in "Listing Each Rule that Contains a Specified Pattern in Its Condition". For example, you can find all of the rules with the string "departments" in their rule condition, and you can ensure that these rules are in the correct rule sets.


See Also:

"Table Rules Example" for more information about specifying table rules

Checking Subset Rules

A subset rule can be in the rule set used by a capture process, synchronous capture, propagation, apply process, or messaging client. A subset rule evaluates to TRUE only if a DML operation contains a change to a particular subset of rows in the table. For example, to check for rules that evaluate to TRUE for an apply process named strm01_apply when there are changes to the hr.departments table, and to display any subset condition in these rules, run the following query:

COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A20
COLUMN RULE_TYPE HEADING 'Rule Type' FORMAT A20
COLUMN DML_CONDITION HEADING 'Subset Condition' FORMAT A30

SELECT RULE_NAME, RULE_TYPE, DML_CONDITION
  FROM DBA_STREAMS_RULES
  WHERE STREAMS_NAME   = 'STRM01_APPLY' AND 
        STREAMS_TYPE   = 'APPLY' AND
        SCHEMA_NAME    = 'HR' AND
        OBJECT_NAME    = 'DEPARTMENTS';
Rule Name            Rule Type            Subset Condition
-------------------- -------------------- ------------------------------
DEPARTMENTS5         DML                  location_id=1700
DEPARTMENTS6         DML                  location_id=1700
DEPARTMENTS7         DML                  location_id=1700

Notice that this query returns any subset condition for the table in the DML_CONDITION column, which is labeled "Subset Condition" in the output. In this example, subset rules are specified for the hr.departments table. These subset rules evaluate to TRUE only if an LCR contains a change that involves a row where the location_id is 1700. So, if you expected the apply process to apply all changes to the table, then these subset rules cause the apply process to discard changes that involve rows where the location_id is not 1700.
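Subset rules like the ones shown in this output are created with the ADD_SUBSET_RULES or ADD_SUBSET_PROPAGATION_RULES procedure. As a sketch only (the queue name strmadmin.streams_queue is a placeholder), rules similar to those above could have been created as follows; one rule is created for each of the INSERT, UPDATE, and DELETE operations, which matches the three rules in the output:

BEGIN
  DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
    table_name    => 'hr.departments',
    dml_condition => 'location_id=1700',
    streams_type  => 'apply',
    streams_name  => 'strm01_apply',
    queue_name    => 'strmadmin.streams_queue');
END;
/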


Note:

Subset rules must reside only in positive rule sets.


See Also:


Checking for Message Rules

A message rule can be in the rule set used by a propagation, apply process, or messaging client. Message rules pertain only to user messages of a specific message type, not to captured LCRs. A message rule evaluates to TRUE if a user message in a queue is of the type specified in the message rule and satisfies the rule condition of the message rule.

If you expect a propagation, apply process, or messaging client to perform its task for some user messages, but the Oracle Streams client is not performing its task for these messages, then the cause might be that the rules in the positive and negative rule sets for the Oracle Streams client do not instruct it to perform its task for these messages. Similarly, if you expect a propagation, apply process, or messaging client to discard some user messages, but the Oracle Streams client is not discarding these messages, then the cause might be that the rules in the positive and negative rule sets for the Oracle Streams client do not instruct it to discard these messages.

For example, suppose you want a messaging client named oe to dequeue messages of type oe.user_msg that satisfy the following condition:

:"VAR$_2".OBJECT_OWNER = 'OE' AND  :"VAR$_2".OBJECT_NAME = 'ORDERS'

If the messaging client uses a negative rule set, then ensure that there are no message rules that evaluate to TRUE for this message type in the negative rule set. Such rules cause the messaging client to discard these messages. For example, to determine whether there are any such rules in the negative rule set for the messaging client, run the following query:

COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A30
COLUMN RULE_CONDITION HEADING 'Rule Condition' FORMAT A30

SELECT RULE_NAME, RULE_CONDITION 
  FROM DBA_STREAMS_RULES
  WHERE STREAMS_NAME       = 'OE' AND
        MESSAGE_TYPE_OWNER = 'OE' AND
        MESSAGE_TYPE_NAME  = 'USER_MSG' AND
        RULE_SET_TYPE      = 'NEGATIVE';

If this query returns any rules, then the rules returned might be causing the messaging client to discard messages. Examine the rule condition of the returned rules to determine whether these rules are causing the messaging client to discard the messages that it should be dequeuing. If this query returns no rules, then ensure that there are message rules in the positive rule set for the messaging client that evaluate to TRUE for this message type and condition.

For example, to determine whether any message rules evaluate to TRUE for this message type in the positive rule set for the messaging client, run the following query:

COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A35
COLUMN RULE_CONDITION HEADING 'Rule Condition' FORMAT A35

SELECT RULE_NAME, RULE_CONDITION 
  FROM DBA_STREAMS_RULES 
  WHERE STREAMS_NAME       = 'OE' AND
        MESSAGE_TYPE_OWNER = 'OE' AND
        MESSAGE_TYPE_NAME  = 'USER_MSG' AND
        RULE_SET_TYPE      = 'POSITIVE';

If you have message rules that evaluate to TRUE for this message type in the positive rule set for the messaging client, then these rules are returned. In this case, your output looks similar to the following:

Rule Name                           Rule Condition
----------------------------------- -----------------------------------
RULE$_3                             :"VAR$_2".OBJECT_OWNER = 'OE' AND
                                    :"VAR$_2".OBJECT_NAME = 'ORDERS'

Examine the rule condition for the rules returned to determine whether they instruct the messaging client to dequeue the proper messages. Based on these results, the messaging client named oe should dequeue messages of oe.user_msg type that satisfy the condition shown in the output. In other words, no rule in the negative rule set for the messaging client discards these messages, and a rule exists in the positive rule set for the messaging client that evaluates to TRUE when the messaging client finds a message in its queue of the oe.user_msg type that satisfies the rule condition.
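If the positive rule set does not contain such a rule, then one way to add it is with the ADD_MESSAGE_RULE procedure. The following is a sketch only: the queue name oe.oe_queue is a placeholder, and the rule condition references the message through the :msg variable, which is mapped internally to a system-generated variable name such as :"VAR$_2" in the created rule:

BEGIN
  DBMS_STREAMS_ADM.ADD_MESSAGE_RULE(
    message_type   => 'oe.user_msg',
    rule_condition => ':msg.OBJECT_OWNER = ''OE'' AND :msg.OBJECT_NAME = ''ORDERS''',
    streams_type   => 'dequeue',
    streams_name   => 'oe',
    queue_name     => 'oe.oe_queue');
END;
/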


See Also:


Resolving Problems with Rules

If you determine that an Oracle Streams capture process, synchronous capture, propagation, apply process, or messaging client is not behaving as expected because one or more rules must be added to the rule set for the Oracle Streams client, then you can use one of the following procedures in the DBMS_STREAMS_ADM package to add appropriate rules:

  • ADD_GLOBAL_PROPAGATION_RULES

  • ADD_GLOBAL_RULES

  • ADD_SCHEMA_PROPAGATION_RULES

  • ADD_SCHEMA_RULES

  • ADD_SUBSET_PROPAGATION_RULES

  • ADD_SUBSET_RULES

  • ADD_TABLE_PROPAGATION_RULES

  • ADD_TABLE_RULES

  • ADD_MESSAGE_PROPAGATION_RULE

  • ADD_MESSAGE_RULE

You can use the DBMS_RULE_ADM package to add customized rules, if necessary.
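For example, if a capture process is missing a rule for the hr.departments table, then a call along the following lines adds DML capture rules for the table; the capture process name and queue name here are placeholders:

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.departments',
    streams_type => 'capture',
    streams_name => 'strm01_capture',
    queue_name   => 'strmadmin.streams_queue',
    include_dml  => TRUE,
    include_ddl  => FALSE);
END;
/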

It is also possible that the Oracle Streams capture process, synchronous capture, propagation, apply process, or messaging client is not behaving as expected because one or more rules should be altered or removed from a rule set.

If you have the correct rules, and the relevant messages are still filtered out by an Oracle Streams capture process, propagation, or apply process, then check your trace files and alert log for a warning about a missing "multi-version data dictionary", which is an Oracle Streams data dictionary. The following information might be included in such warning messages:

  • gdbnm: Global name of the source database of the missing object

  • scn: SCN for the transaction that has been missed

If you find such messages, and you are using custom capture process rules or reusing existing capture process rules for a new destination database, then ensure that you run the appropriate procedure to prepare for instantiation:

  • PREPARE_TABLE_INSTANTIATION

  • PREPARE_SCHEMA_INSTANTIATION

  • PREPARE_GLOBAL_INSTANTIATION

Also, ensure that propagation is working from the source database to the destination database. Oracle Streams data dictionary information is propagated to the destination database and loaded into the dictionary at the destination database.
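For example, if the missing object identified by the warning is the hr.departments table, then a call such as the following at the source database prepares the table for instantiation; this is a sketch, and the appropriate procedure and scope depend on the rules you are using:

BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name => 'hr.departments');
END;
/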


See Also:


Are Declarative Rule-Based Transformations Configured Properly?

A declarative rule-based transformation is a rule-based transformation that covers one of a common set of transformation scenarios for row LCRs. Declarative rule-based transformations are run internally without using PL/SQL. If an Oracle Streams capture process, synchronous capture, propagation, apply process, or messaging client is not behaving as expected, then check the declarative rule-based transformations specified for the rules used by the Oracle Streams client and correct any mistakes.

The most common problems with declarative rule-based transformations are:

  • The declarative rule-based transformation is specified for a table or involves columns in a table, but the schema either was not specified or was incorrectly specified when the transformation was created. If the schema is not correct in a declarative rule-based transformation, then the transformation will not be run on the appropriate LCRs. You should specify the owning schema for a table when you create a declarative rule-based transformation. If the schema is not specified when a declarative rule-based transformation is created, then the user who creates the transformation is specified for the schema by default.

    If the schema is not correct for a declarative rule-based transformation, then, to correct the problem, remove the transformation and re-create it, specifying the correct schema for each table.

  • If more than one declarative rule-based transformation is specified for a particular rule, then ensure that the ordering is correct for execution of these transformations. Incorrect ordering of declarative rule-based transformations can result in errors or inconsistent data.

    If the ordering is not correct for the declarative rule-based transformation specified on a single rule, then, to correct the problem, remove the transformations and re-create them with the correct ordering.
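Declarative rule-based transformations are removed and re-created with the same DBMS_STREAMS_ADM procedure that created them. As a sketch only (the rule name strmadmin.departments5 and the table names are placeholders), correcting a rename table transformation that was created with the wrong schema might look like this:

BEGIN
  -- Remove the transformation, specifying the values as originally
  -- (incorrectly) created
  DBMS_STREAMS_ADM.RENAME_TABLE(
    rule_name       => 'strmadmin.departments5',
    from_table_name => 'strmadmin.departments',
    to_table_name   => 'hr.depts',
    operation       => 'REMOVE');
END;
/

BEGIN
  -- Re-create the transformation with the correct owning schema
  DBMS_STREAMS_ADM.RENAME_TABLE(
    rule_name       => 'strmadmin.departments5',
    from_table_name => 'hr.departments',
    to_table_name   => 'hr.depts',
    operation       => 'ADD');
END;
/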

Are the Custom Rule-Based Transformations Configured Properly?

A custom rule-based transformation is any modification by a user-defined function to a message when a rule evaluates to TRUE. A custom rule-based transformation is specified in the action context of a rule, and these action contexts contain a name-value pair with STREAMS$_TRANSFORM_FUNCTION for the name and a user-created function name for the value. This user-created function performs the transformation. If the user-created function contains any flaws, then unexpected behavior can result.

If an Oracle Streams capture process, synchronous capture, propagation, apply process, or messaging client is not behaving as expected, then check the custom rule-based transformation functions specified for the rules used by the Oracle Streams client and correct any flaws. You can find the names of these functions by querying the DBA_STREAMS_TRANSFORM_FUNCTION data dictionary view. You might need to modify a transformation function or remove a custom rule-based transformation to correct the problem. Also, ensure that the name of the function is spelled correctly when you specify the transformation for a rule.
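For example, the following sketch of a query lists each rule that has a custom rule-based transformation and the user-created function that performs it:

COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A15
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A15
COLUMN TRANSFORM_FUNCTION_NAME HEADING 'Transformation Function' FORMAT A30

SELECT RULE_OWNER, RULE_NAME, TRANSFORM_FUNCTION_NAME
  FROM DBA_STREAMS_TRANSFORM_FUNCTION;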

An error caused by a custom rule-based transformation might cause a capture process, synchronous capture, propagation, apply process, or messaging client to abort. In this case, you might need to correct the transformation before the Oracle Streams client can be restarted or invoked.

Rule evaluation is done before a custom rule-based transformation. For example, if you have a transformation that changes the name of a table from emps to employees, then ensure that each rule using the transformation specifies the table name emps, rather than employees, in its rule condition.

Are Incorrectly Transformed LCRs in the Error Queue?

In some cases, incorrectly transformed LCRs might have been moved to the error queue by an apply process. When this occurs, you should examine the transaction in the error queue to analyze the feasibility of reexecuting the transaction successfully. If an abnormality is found in the transaction, then you might be able to configure a procedure DML handler to correct the problem. The DML handler will run when you reexecute the error transaction. When a DML handler is used to correct a problem in an error transaction, the apply process that uses the DML handler should be stopped to prevent the DML handler from acting on LCRs that are not involved with the error transaction. After successful reexecution, if the DML handler is no longer needed, then remove it. Also, correct the rule-based transformation to avoid future errors.
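As a sketch of this approach (the apply process name, table, and handler procedure here are placeholders, and the handler procedure itself must already exist), you might stop the apply process and set the procedure DML handler as follows:

BEGIN
  DBMS_APPLY_ADM.STOP_APPLY(apply_name => 'strm01_apply');
END;
/

BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'hr.departments',
    object_type    => 'TABLE',
    operation_name => 'UPDATE',
    error_handler  => FALSE,
    user_procedure => 'strmadmin.fix_departments',
    apply_name     => 'strm01_apply');
END;
/

After reexecuting the error transaction with DBMS_APPLY_ADM.EXECUTE_ERROR, remove the handler by calling SET_DML_HANDLER again with user_procedure set to NULL, and then restart the apply process.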

Managing Rules

18 Managing Rules

An Oracle Streams environment uses rules to control the behavior of Oracle Streams clients (capture processes, propagations, apply processes, and messaging clients). In addition, you can create custom applications that are clients of the rules engine. This chapter contains instructions for managing rule sets, rules, and privileges related to rules.

The following topics describe managing rules:

Each task described in this chapter should be completed by an Oracle Streams administrator who has been granted the appropriate privileges, unless specified otherwise.


Caution:

Modifying the rules and rule sets used by an Oracle Streams client changes the behavior of the Oracle Streams client.


Note:

This chapter does not contain examples for creating evaluation contexts, nor does it contain examples for evaluating events using the DBMS_RULE.EVALUATE procedure. See Oracle Streams Extended Examples for these examples.

Managing Rule Sets

You can modify a rule set without stopping Oracle Streams capture processes, propagations, and apply processes that use the rule set. Oracle Streams will detect the change immediately after it is committed. If you need precise control over which messages use the new version of a rule set, then complete the following steps:

  1. Stop the relevant capture processes, propagations, and apply processes.

  2. Modify the rule set.

  3. Restart the Oracle Streams clients you stopped in Step 1.
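For example, if the relevant Oracle Streams client is a capture process named strm01_capture (a placeholder name), then steps 1 and 3 might be performed as follows:

BEGIN
  DBMS_CAPTURE_ADM.STOP_CAPTURE(
    capture_name => 'strm01_capture');
END;
/

-- Step 2: modify the rule set, then restart the capture process:

BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'strm01_capture');
END;
/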

This section provides instructions for completing the following tasks:

Creating a Rule Set

The following example runs the CREATE_RULE_SET procedure in the DBMS_RULE_ADM package to create a rule set:

BEGIN
  DBMS_RULE_ADM.CREATE_RULE_SET(
    rule_set_name       => 'strmadmin.hr_capture_rules',
    evaluation_context  => 'SYS.STREAMS$_EVALUATION_CONTEXT');
END;
/

Running this procedure performs the following actions:

  • Creates a rule set named hr_capture_rules in the strmadmin schema. A rule set with the same name and owner must not exist.

  • Associates the rule set with the SYS.STREAMS$_EVALUATION_CONTEXT evaluation context, which is the Oracle-supplied evaluation context for Oracle Streams.

You can also use the following procedures in the DBMS_STREAMS_ADM package to create a rule set automatically, if one does not exist for an Oracle Streams capture process, propagation, apply process, or messaging client:

Except for ADD_SUBSET_PROPAGATION_RULES and ADD_SUBSET_RULES, these procedures can create either a positive rule set or a negative rule set for an Oracle Streams client. ADD_SUBSET_PROPAGATION_RULES and ADD_SUBSET_RULES can only create a positive rule set for an Oracle Streams client.


See Also:

Oracle Streams Replication Administrator's Guide for information about creating Streams clients

Adding a Rule to a Rule Set

When you add rules to a rule set, the behavior of the Oracle Streams clients that use the rule set changes. Ensure that you understand how adding rules to a rule set will affect Oracle Streams clients before proceeding.

The following example runs the ADD_RULE procedure in the DBMS_RULE_ADM package to add the hr_dml rule to the hr_capture_rules rule set:

BEGIN
  DBMS_RULE_ADM.ADD_RULE(
    rule_name          => 'strmadmin.hr_dml', 
    rule_set_name      => 'strmadmin.hr_capture_rules',
    evaluation_context => NULL);
END;
/

In this example, no evaluation context is specified when running the ADD_RULE procedure. Therefore, if the rule does not have its own evaluation context, it will inherit the evaluation context of the hr_capture_rules rule set. If you want a rule to use an evaluation context other than the one specified for the rule set, then you can set the evaluation_context parameter to this evaluation context when you run the ADD_RULE procedure.
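For instance, to make the rule use a different evaluation context, the ADD_RULE call might look like the following sketch, where strmadmin.hr_eval_context is a hypothetical evaluation context that already exists:

```sql
BEGIN
  DBMS_RULE_ADM.ADD_RULE(
    rule_name          => 'strmadmin.hr_dml',
    rule_set_name      => 'strmadmin.hr_capture_rules',
    evaluation_context => 'strmadmin.hr_eval_context'); -- hypothetical
END;
/
```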

Removing a Rule from a Rule Set

When you remove a rule from a rule set, the behavior of the Oracle Streams clients that use the rule set changes. Ensure that you understand how removing a rule from a rule set will affect Oracle Streams clients before proceeding.

The following example runs the REMOVE_RULE procedure in the DBMS_RULE_ADM package to remove the hr_dml rule from the hr_capture_rules rule set:

BEGIN
  DBMS_RULE_ADM.REMOVE_RULE(
    rule_name     => 'strmadmin.hr_dml', 
    rule_set_name => 'strmadmin.hr_capture_rules');
END;
/

After running the REMOVE_RULE procedure, the rule still exists in the database and, if it was in any other rule sets, it remains in those rule sets.


See Also:

"Dropping a Rule"

Dropping a Rule Set

The following example runs the DROP_RULE_SET procedure in the DBMS_RULE_ADM package to drop the hr_capture_rules rule set from the database:

BEGIN
  DBMS_RULE_ADM.DROP_RULE_SET(
    rule_set_name => 'strmadmin.hr_capture_rules', 
    delete_rules  => FALSE);
END;
/

In this example, the delete_rules parameter in the DROP_RULE_SET procedure is set to FALSE, which is the default setting. Therefore, if the rule set contains any rules, then these rules are not dropped. If the delete_rules parameter is set to TRUE, then any rules in the rule set that are not in another rule set are dropped from the database automatically. Rules in the rule set that are in one or more other rule sets are not dropped.

Managing Rules

You can modify a rule without stopping Oracle Streams capture processes, propagations, and apply processes that use the rule. Oracle Streams will detect the change immediately after it is committed. If you need precise control over which messages use the new version of a rule, then complete the following steps:

  1. Stop the relevant capture processes, propagations, and apply processes.

  2. Modify the rule.

  3. Restart the Oracle Streams clients you stopped in Step 1.

This section provides instructions for completing the following tasks:

Creating a Rule

The following examples use the CREATE_RULE procedure in the DBMS_RULE_ADM package to create a rule without an action context and a rule with an action context.

Creating a Rule without an Action Context

To create a rule without an action context, run the CREATE_RULE procedure and specify the rule name using the rule_name parameter and the rule condition using the condition parameter, as in the following example:

BEGIN  
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name  => 'strmadmin.hr_dml',
    condition  => ' :dml.get_object_owner() = ''HR'' ');
END;
/

Running this procedure performs the following actions:

  • Creates a rule named hr_dml in the strmadmin schema. A rule with the same name and owner must not exist.

  • Creates a condition that evaluates to TRUE for any DML change to a table in the hr schema.

In this example, no evaluation context is specified for the rule. Therefore, the rule will either inherit the evaluation context of any rule set to which it is added, or it will be assigned an evaluation context explicitly when the DBMS_RULE_ADM.ADD_RULE procedure is run to add it to a rule set. At this point, the rule cannot be evaluated because it is not part of any rule set.

You can also use the following procedures in the DBMS_STREAMS_ADM package to create rules and add them to a rule set automatically:

Except for ADD_SUBSET_PROPAGATION_RULES and ADD_SUBSET_RULES, these procedures can add rules to either the positive rule set or the negative rule set for an Oracle Streams client. ADD_SUBSET_PROPAGATION_RULES and ADD_SUBSET_RULES can add rules only to the positive rule set for an Oracle Streams client.


See Also:

Oracle Streams Replication Administrator's Guide for information about creating Streams clients

Creating a Rule with an Action Context

To create a rule with an action context, run the CREATE_RULE procedure and specify the rule name using the rule_name parameter, the rule condition using the condition parameter, and the rule action context using the action_context parameter. You add a name-value pair to an action context using the ADD_PAIR member procedure of the RE$NV_LIST type.

The following example creates a rule with a non-NULL action context:

DECLARE
  ac  SYS.RE$NV_LIST;
BEGIN
  ac := SYS.RE$NV_LIST(NULL);
  ac.ADD_PAIR('course_number', ANYDATA.CONVERTNUMBER(1057));
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name      => 'strmadmin.rule_dep_10',
    condition      => ' :dml.get_object_owner()=''HR'' AND ' || 
       ' :dml.get_object_name()=''EMPLOYEES'' AND ' || 
       ' (:dml.get_value(''NEW'', ''DEPARTMENT_ID'').AccessNumber()=10) AND ' || 
       ' :dml.get_command_type() = ''INSERT'' ',
    action_context => ac);
END;
/

Running this procedure performs the following actions:

  • Creates a rule named rule_dep_10 in the strmadmin schema. A rule with the same name and owner must not exist.

  • Creates a condition that evaluates to TRUE for any insert into the hr.employees table where the department_id is 10.

  • Creates an action context with one name-value pair that has course_number for the name and 1057 for the value.


See Also:

"Rule Action Context" for a scenario that uses such a name-value pair in an action context

Altering a Rule

You can use the ALTER_RULE procedure in the DBMS_RULE_ADM package to alter an existing rule. Specifically, you can use this procedure to do the following:

  • Change a rule condition

  • Change a rule evaluation context

  • Remove a rule evaluation context

  • Modify a name-value pair in a rule action context

  • Add a name-value pair to a rule action context

  • Remove a name-value pair from a rule action context

  • Change the comment for a rule

  • Remove the comment for a rule

The following sections contain examples of some of these alterations.

Changing a Rule Condition

You use the condition parameter in the ALTER_RULE procedure to change the condition of an existing rule. For example, suppose you want to change the condition of the rule created in "Creating a Rule". The condition in the existing hr_dml rule evaluates to TRUE for any DML change to any object in the hr schema. If you want to exclude changes to the employees table in this schema, then you can alter the rule so that it evaluates to FALSE for DML changes to the hr.employees table, but continues to evaluate to TRUE for DML changes to any other table in this schema. The following procedure alters the rule in this way:

BEGIN  
  DBMS_RULE_ADM.ALTER_RULE(
    rule_name          => 'strmadmin.hr_dml',
    condition          => ' :dml.get_object_owner() = ''HR'' AND NOT ' ||
                          ' :dml.get_object_name() = ''EMPLOYEES'' ',
    evaluation_context => NULL);
END;
/

Note:

  • Changing the condition of a rule affects all rule sets that contain the rule.

  • To alter a rule but retain the rule action context, specify NULL for the action_context parameter in the ALTER_RULE procedure. NULL is the default value for the action_context parameter.

  • When a rule is in the rule set for a synchronous capture, do not change the following rule conditions: :dml.get_object_name and :dml.get_object_owner. Changing these conditions can cause the synchronous capture not to capture changes to the database object. You can change other conditions in synchronous capture rules.


Modifying a Name-Value Pair in a Rule Action Context

To modify a name-value pair in a rule action context, you first remove the name-value pair from the rule action context and then add a different name-value pair to the rule action context.

This example modifies a name-value pair for the rule_dep_10 rule by first removing the name-value pair with the name course_number from the rule action context and then adding a name-value pair back to the rule action context with the same name (course_number) but a different value. The name-value pair being modified was added to the rule in the example in "Creating a Rule with an Action Context".

If an action context contains name-value pairs in addition to the name-value pair that you are modifying, then be cautious when you modify the action context so that you do not change or remove any of the other name-value pairs.

Complete the following steps to modify a name-value pair in an action context:

  1. View the name-value pairs in the action context of the rule by running the following query:

    COLUMN ACTION_CONTEXT_NAME HEADING 'Action Context Name' FORMAT A25
    COLUMN AC_VALUE_NUMBER HEADING 'Action Context Number Value' FORMAT 9999
    
    SELECT 
        AC.NVN_NAME ACTION_CONTEXT_NAME, 
        AC.NVN_VALUE.ACCESSNUMBER() AC_VALUE_NUMBER
      FROM DBA_RULES R, TABLE(R.RULE_ACTION_CONTEXT.ACTX_LIST) AC
      WHERE RULE_NAME = 'RULE_DEP_10';
    

    This query displays output similar to the following:

    Action Context Name       Action Context Number Value
    ------------------------- ---------------------------
    course_number                                    1057
    
  2. Modify the name-value pair. Ensure that no other users are modifying the action context at the same time. This step first removes the name-value pair with the name course_number from the action context for the rule_dep_10 rule using the REMOVE_PAIR member procedure of the RE$NV_LIST type. Next, it adds a name-value pair with the same name but a new value to the rule action context using the ADD_PAIR member procedure of this type. In this case, the name is course_number and the value is 1108.

    To preserve any existing name-value pairs in the rule action context, this example selects the rule action context into a variable before altering it:

    DECLARE
      action_ctx       SYS.RE$NV_LIST;
      ac_name          VARCHAR2(30) := 'course_number';
    BEGIN
      SELECT RULE_ACTION_CONTEXT
        INTO action_ctx
        FROM DBA_RULES R
        WHERE RULE_OWNER='STRMADMIN' AND RULE_NAME='RULE_DEP_10';
      action_ctx.REMOVE_PAIR(ac_name);
      action_ctx.ADD_PAIR(ac_name,
                     ANYDATA.CONVERTNUMBER(1108));
      DBMS_RULE_ADM.ALTER_RULE(
        rule_name       =>  'strmadmin.rule_dep_10',
        action_context  => action_ctx);
    END;
    /
    

    To ensure that the name-value pair was altered properly, you can rerun the query in Step 1. The query should display output similar to the following:

    Action Context Name       Action Context Number Value
    ------------------------- ---------------------------
    course_number                                    1108
    

Adding a Name-Value Pair to a Rule Action Context

You can preserve the existing name-value pairs in the action context by selecting the action context into a variable before adding a new pair using the ADD_PAIR member procedure of the RE$NV_LIST type. Ensure that no other users are modifying the action context at the same time. The following example preserves the existing name-value pairs in the action context of the rule_dep_10 rule and adds a new name-value pair with dist_list for the name and admin_list for the value:

DECLARE
  action_ctx       SYS.RE$NV_LIST;
  ac_name          VARCHAR2(30) := 'dist_list';
BEGIN
  action_ctx := SYS.RE$NV_LIST(SYS.RE$NV_ARRAY());
  SELECT RULE_ACTION_CONTEXT
    INTO action_ctx
    FROM DBA_RULES R
    WHERE RULE_OWNER='STRMADMIN' AND RULE_NAME='RULE_DEP_10';
  action_ctx.ADD_PAIR(ac_name,
                 ANYDATA.CONVERTVARCHAR2('admin_list'));
  DBMS_RULE_ADM.ALTER_RULE(
    rule_name       =>  'strmadmin.rule_dep_10',
    action_context  => action_ctx);
END;
/

To ensure that the name-value pair was added successfully, you can run the following query:

COLUMN ACTION_CONTEXT_NAME HEADING 'Action Context Name' FORMAT A25
COLUMN AC_VALUE_NUMBER HEADING 'Action Context|Number Value' FORMAT 9999
COLUMN AC_VALUE_VARCHAR2 HEADING 'Action Context|Text Value' FORMAT A25

SELECT 
    AC.NVN_NAME ACTION_CONTEXT_NAME, 
    AC.NVN_VALUE.ACCESSNUMBER() AC_VALUE_NUMBER,
    AC.NVN_VALUE.ACCESSVARCHAR2() AC_VALUE_VARCHAR2
  FROM DBA_RULES R, TABLE(R.RULE_ACTION_CONTEXT.ACTX_LIST) AC
  WHERE RULE_NAME = 'RULE_DEP_10';

This query should display output similar to the following:

                          Action Context Action Context
Action Context Name         Number Value Text Value
------------------------- -------------- -------------------------
course_number                       1108
dist_list                                admin_list

See Also:

"Rule Action Context" for a scenario that uses similar name-value pairs in an action context

Removing a Name-Value Pair from a Rule Action Context

You remove a name-value pair in the action context of a rule using the REMOVE_PAIR member procedure of the RE$NV_LIST type. Ensure that no other users are modifying the action context at the same time.

Removing a name-value pair means altering the action context of a rule. If an action context contains name-value pairs in addition to the name-value pair being removed, then be cautious when you modify the action context so that you do not change or remove any other name-value pairs.

This example assumes that the rule_dep_10 rule has the following name-value pairs:

Name            Value
course_number   1108
dist_list       admin_list


Note:

You added these name-value pairs to the rule_dep_10 rule if you completed the examples in the preceding sections.

This example preserves existing name-value pairs in the action context of the rule_dep_10 rule that should not be removed by selecting the existing action context into a variable and then removing the name-value pair with dist_list for the name.

DECLARE
  action_ctx       SYS.RE$NV_LIST;
  ac_name          VARCHAR2(30) := 'dist_list';
BEGIN
  SELECT RULE_ACTION_CONTEXT
    INTO action_ctx
    FROM DBA_RULES R
    WHERE RULE_OWNER='STRMADMIN' AND RULE_NAME='RULE_DEP_10';
  action_ctx.REMOVE_PAIR(ac_name);
  DBMS_RULE_ADM.ALTER_RULE(
    rule_name       =>  'strmadmin.rule_dep_10',
    action_context  =>  action_ctx);
END;
/

To ensure that the name-value pair was removed successfully without removing any other name-value pairs in the action context, you can run the following query:

COLUMN ACTION_CONTEXT_NAME HEADING 'Action Context Name' FORMAT A25
COLUMN AC_VALUE_NUMBER HEADING 'Action Context|Number Value' FORMAT 9999
COLUMN AC_VALUE_VARCHAR2 HEADING 'Action Context|Text Value' FORMAT A25

SELECT 
    AC.NVN_NAME ACTION_CONTEXT_NAME, 
    AC.NVN_VALUE.ACCESSNUMBER() AC_VALUE_NUMBER,
    AC.NVN_VALUE.ACCESSVARCHAR2() AC_VALUE_VARCHAR2
  FROM DBA_RULES R, TABLE(R.RULE_ACTION_CONTEXT.ACTX_LIST) AC
  WHERE RULE_NAME = 'RULE_DEP_10';

This query should display output similar to the following:

                          Action Context Action Context
Action Context Name         Number Value Text Value
------------------------- -------------- -------------------------
course_number                       1108

Modifying System-Created Rules

System-created rules are rules created by running a procedure in the DBMS_STREAMS_ADM package. If you cannot create a rule with the exact rule condition you need using the DBMS_STREAMS_ADM package, then you can create a rule with a condition based on a system-created rule by following these general steps:

  1. Copy the rule condition of the system-created rule. You can view the rule condition of a system-created rule by querying the DBA_STREAMS_RULES data dictionary view.

  2. Modify the condition.

  3. Create a rule with the modified condition.

  4. Add the new rule to a rule set for an Oracle Streams capture process, propagation, apply process, or messaging client.

  5. Remove the original rule if it is no longer needed using the REMOVE_RULE procedure in the DBMS_STREAMS_ADM package.
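The steps above might be sketched as follows. The system-created rule name employees5 and the capture process name strm01_capture are hypothetical, and the modified condition is only an illustration:

```sql
-- Step 1: view the condition of the system-created rule.
SELECT RULE_CONDITION
  FROM DBA_STREAMS_RULES
  WHERE RULE_NAME = 'EMPLOYEES5' AND RULE_OWNER = 'STRMADMIN';

-- Steps 2 and 3: create a rule with a modified version of that condition.
BEGIN
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name => 'strmadmin.employees_mod',
    condition => ' :dml.get_object_owner() = ''HR'' AND ' ||
                 ' :dml.get_object_name() = ''EMPLOYEES'' AND ' ||
                 ' :dml.get_command_type() = ''UPDATE'' ');
END;
/

-- Step 4: add the new rule to the client's positive rule set.
BEGIN
  DBMS_RULE_ADM.ADD_RULE(
    rule_name     => 'strmadmin.employees_mod',
    rule_set_name => 'strmadmin.hr_capture_rules');
END;
/

-- Step 5: remove the original rule if it is no longer needed.
BEGIN
  DBMS_STREAMS_ADM.REMOVE_RULE(
    rule_name    => 'employees5',
    streams_type => 'capture',
    streams_name => 'strm01_capture');
END;
/
```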


Dropping a Rule

The following example runs the DROP_RULE procedure in the DBMS_RULE_ADM package to drop the hr_dml rule from the database:

BEGIN
  DBMS_RULE_ADM.DROP_RULE(
    rule_name => 'strmadmin.hr_dml', 
    force     => FALSE);
END;
/

In this example, the force parameter in the DROP_RULE procedure is set to FALSE, which is the default setting. Therefore, the rule cannot be dropped if it is in one or more rule sets. If the force parameter is set to TRUE, then the rule is dropped from the database and automatically removed from any rule sets that contain it.

Managing Privileges on Evaluation Contexts, Rule Sets, and Rules

This section provides instructions for completing the following tasks:

Granting System Privileges on Evaluation Contexts, Rule Sets, and Rules

You can use the GRANT_SYSTEM_PRIVILEGE procedure in the DBMS_RULE_ADM package to grant system privileges on evaluation contexts, rule sets, and rules to users and roles. These privileges enable a user to create, alter, execute, or drop these objects in the user's own schema or, if the "ANY" version of the privilege is granted, in any schema.

For example, to grant the hr user the privilege to create an evaluation context in the user's own schema, enter the following while connected as a user who can grant privileges and alter users:

BEGIN 
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => SYS.DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ,
    grantee      => 'hr', 
    grant_option => FALSE);
END;
/

In this example, the grant_option parameter in the GRANT_SYSTEM_PRIVILEGE procedure is set to FALSE, which is the default setting. Therefore, the hr user cannot grant the CREATE_EVALUATION_CONTEXT_OBJ system privilege to other users or roles. If the grant_option parameter were set to TRUE, then the hr user could grant this system privilege to other users or roles.

Granting Object Privileges on an Evaluation Context, Rule Set, or Rule

You can use the GRANT_OBJECT_PRIVILEGE procedure in the DBMS_RULE_ADM package to grant object privileges on a specific evaluation context, rule set, or rule. These privileges enable a user to alter or execute the specified object.

For example, to grant the hr user the privilege to both alter and execute a rule set named hr_capture_rules in the strmadmin schema, enter the following:

BEGIN 
  DBMS_RULE_ADM.GRANT_OBJECT_PRIVILEGE(
    privilege    => SYS.DBMS_RULE_ADM.ALL_ON_RULE_SET,
    object_name  => 'strmadmin.hr_capture_rules',
    grantee      => 'hr', 
    grant_option => FALSE);
END;
/

In this example, the grant_option parameter in the GRANT_OBJECT_PRIVILEGE procedure is set to FALSE, which is the default setting. Therefore, the hr user cannot grant the ALL_ON_RULE_SET object privilege for the specified rule set to other users or roles. If the grant_option parameter were set to TRUE, then the hr user could grant this object privilege to other users or roles.

Revoking System Privileges on Evaluation Contexts, Rule Sets, and Rules

You can use the REVOKE_SYSTEM_PRIVILEGE procedure in the DBMS_RULE_ADM package to revoke system privileges on evaluation contexts, rule sets, and rules.

For example, to revoke from the hr user the privilege to create an evaluation context in the user's own schema, enter the following while connected as a user who can grant privileges and alter users:

BEGIN 
  DBMS_RULE_ADM.REVOKE_SYSTEM_PRIVILEGE(
    privilege    => SYS.DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ,
    revokee      => 'hr');
END;
/

Revoking Object Privileges on an Evaluation Context, Rule Set, or Rule

You can use the REVOKE_OBJECT_PRIVILEGE procedure in the DBMS_RULE_ADM package to revoke object privileges on a specific evaluation context, rule set, or rule.

For example, to revoke from the hr user the privilege to both alter and execute a rule set named hr_capture_rules in the strmadmin schema, enter the following:

BEGIN 
  DBMS_RULE_ADM.REVOKE_OBJECT_PRIVILEGE(
    privilege    => SYS.DBMS_RULE_ADM.ALL_ON_RULE_SET,
    object_name  => 'strmadmin.hr_capture_rules',
    revokee      => 'hr');
END;
/
Description of the illustration strms052.eps

This illustration shows the basic process for online database upgrade or maintenance with Oracle Streams. This illustration shows the following Oracle databases:

  • The source database contains the following:

    • The database objects in the database being upgraded or maintained.

    • The redo log recording changes to the database objects.

  • The capture database can be the source database, the destination database, or a third database. The capture database contains the following:

    • A capture process capturing changes from the redo log of the source database. If the capture database and the source database are the same, then a local capture process captures changes in the redo log at the source database. If the capture database is the destination database or a third database, then the source database redo log is shipped to the capture database, and a downstream capture process captures changes in this redo log.

    • The capture process converts the changes to logical change records (LCRs) and enqueues the LCRs.

    • If the capture database and the destination database are different databases, then a propagation propagates the LCRs to a queue at the destination database. If the capture database and destination database are the same, then the propagation is not needed.

  • The destination database contains the following:

    • The queue that contains LCRs that were captured by the capture process.

    • An apply process that applies the LCRs as changes to the database objects.

Description of the illustration strms505.eps

This illustration shows an Oracle Streams two-database replication environment that includes the following Oracle databases:

  • sync1.example.com

  • sync2.example.com

The sync1.example.com database has the following Oracle Streams components configured:

  • The following queues: capture_queue and apply_queue.

  • A synchronous capture named sync_capture that captures DML changes to the hr.employees and hr.departments tables. The synchronous capture enqueues these changes into the local capture_queue queue.

  • A propagation named send_emp_dep that sends changes from the local capture_queue queue to the apply_queue queue at sync2.example.com.

  • An apply process named apply_emp_dep that dequeues changes that originated at sync2.example.com from the apply_queue queue and applies them to the hr.employees and hr.departments tables.

The sync2.example.com database has the following Oracle Streams components configured:

  • The following queues: capture_queue and apply_queue.

  • A synchronous capture named sync_capture that captures DML changes to the hr.employees and hr.departments tables. The synchronous capture enqueues these changes into the local capture_queue queue.

  • A propagation named send_emp_dep that sends changes from the local capture_queue queue to the apply_queue queue at sync1.example.com.

  • An apply process named apply_emp_dep that dequeues changes that originated at sync1.example.com from the apply_queue queue and applies them to the hr.employees and hr.departments tables.

Description of the illustration strms061.eps

This illustration shows three Oracle databases with two stream paths flowing between them.

Both stream paths start with a capture process (component ID 1) at the first Oracle database that enqueues messages into a queue (component ID 2). From this queue, two propagation senders send the messages to the second and third Oracle databases.

The first stream path flows from a propagation sender (component ID 3) at the first Oracle database through components in the following way:

  • The propagation sender (component ID 3) sends messages from the queue (component ID 2) to a propagation receiver (component ID 5) at the second Oracle database.

  • The propagation receiver (component ID 5) enqueues the messages it receives into a queue (component ID 6) at the second Oracle database.

  • An apply process (component ID 7) dequeues messages from the queue (component ID 6).

The second stream path flows from a propagation sender (component ID 4) at the first Oracle database through components in the following way:

  • The propagation sender (component ID 4) sends messages from the queue (component ID 2) to a propagation receiver (component ID 8) at the third Oracle database.

  • The propagation receiver (component ID 8) enqueues the messages it receives into a queue (component ID 9) at the third Oracle database.

  • An apply process (component ID 10) dequeues messages from the queue (component ID 9).

Description of the illustration strms054.eps

This illustration shows that the inst1.example.com database and the inst2.example.com database share a computer file system. The inst1.example.com database contains two tablespaces: sales_tbs1 and sales_tbs2. The illustration shows a tablespace repository with two versions of this tablespace set: v_q1fy2005 and v_q2fy2005. Other versions can also be added to the tablespace repository. The illustration also shows the directory objects that store the data files, export dump file, and export log file in each version. The q1fy2005 directory object stores the files for version v_q1fy2005, and the q2fy2005 directory object stores the files for version v_q2fy2005.

When a version of the tablespace set is attached to the inst2.example.com database, the data files and export dump file in the version are copied to a different directory object. The illustration shows the data files and export dump file in the q1fy2005 directory object being copied to the q1fy2005_r directory object during attach. After the tablespace set is attached, the inst2.example.com database contains the sales_tbs1 and sales_tbs2 tablespaces. Also, an import log file generated during the attach is added to the q1fy2005_r directory object.

Description of the illustration strms507.eps

This illustration shows a messaging environment that includes the following Oracle databases:

  • ii1.example.com

  • ii2.example.com

The ii1.example.com database has the following components:

  • The oe.orders table.

  • A trigger named enqueue_orders that fires when a row is inserted into the oe.orders table. The trigger enqueues a message of order_id_date type based on the insert into the streams_queue queue.

  • A propagation named send_orders that sends messages from the local streams_queue queue to the streams_queue queue at ii2.example.com.

The ii2.example.com database has the following components:

  • The streams_queue queue.

  • A messaging client/subscriber named strmadmin. When the strmadmin messaging client/subscriber is invoked, it uses the dequeue_orders PL/SQL procedure to dequeue and process the messages in the streams_queue queue.

Description of the illustration strms012.eps

This illustration shows users making changes to database objects in an Oracle database. The changes are logged in the redo log. A capture process formats these changes into messages called LCRs and enqueues them. The queue can contain LCRs and user messages.

Description of the illustration strms041.eps

This illustration shows the following process for row migration during capture:

  1. A user makes the following update to a table:

    UPDATE hr.employees SET department_id=50 WHERE employee_id=167;
    
  2. The source database records the change in the redo log.

  3. A capture process captures the change and transforms the UPDATE into an INSERT because the change satisfies a subset rule.

  4. The capture process enqueues the transformed LCR at the source database.

  5. A propagation propagates the LCR to a queue at the destination database.

  6. An apply process dequeues the change and applies it as an INSERT into an hr.employees subset table. This subset table contains rows only for employees with a department_id equal to 50.

Description of the illustration strms043.eps

This illustration shows the following process for rule-based transformation during dequeue by a messaging client:

  1. A messaging client dequeues the messages and performs the transformation.

  2. A messaging client continues to dequeue the transformed messages.

Description of the illustration strms036.eps

This illustration shows the following Oracle Streams configuration at the cpap.example.com database:

  • One queue named streams_queue and owned by the user strmadmin.

  • One capture process named capture_emp captures DML changes to the hr.employees table and enqueues these changes into the streams_queue.

  • An apply process named apply_emp that sends row LCRs involving the hr.employees table to a procedure DML handler, which is the emp_dml_handler PL/SQL procedure. The apply process reenqueues all messages back into the streams_queue for processing by an application.

  • The emp_dml_handler converts DELETE operations on the hr.employees table into INSERT operations on the hr.emp_del table and inserts these changes into the table.

Description of the illustration strms051.eps

This illustration shows a time line moving forward with the following three archived redo log files on it:

  • An archived redo log file with sequence number 200 has a NEXT_TIME value of May 2, 11AM. A checkpoint at SCN 435250 corresponds to this log file.

  • An archived redo log file with sequence number 220 has a NEXT_TIME value of May 15, 11AM. A checkpoint at SCN 479315 corresponds to this log file.

  • An archived redo log file with sequence number 230 has a FIRST_TIME value of May 23, 11AM. A capture process required checkpoint SCN at SCN 494623 corresponds to this log file.

There are other archived redo log files in between the ones shown on the time line. At the required checkpoint SCN, the capture process computes the age of checkpoints, with the following results:

  • For the archived redo log file with sequence number 200:

    May 23 at 11AM - May 2 at 11AM = 21 days
    
  • For the archived redo log file with sequence number 220:

    May 23 at 11AM - May 15 at 11AM = 8 days
    

The checkpoint retention time is set to 20 days. Therefore, the checkpoint that corresponds with the archived redo log file with sequence number 200 is purged, but the checkpoint that corresponds with the archived redo log file with sequence number 220 is retained.

Description of the illustration strms058.eps

This illustration shows users making changes to database objects in an Oracle database. The changes are captured by a synchronous capture. A synchronous capture formats these changes into messages called LCRs and enqueues them. The queue can contain LCRs and user messages.

Description of the illustration strms060.eps

This illustration shows two Oracle databases with two stream paths flowing between them.

The first stream path flows through components in the following way:

  • A capture process (component ID 1) at the first Oracle database enqueues messages into a queue (component ID 2).

  • A propagation sender (component ID 3) sends messages from the queue (component ID 2) to a propagation receiver (component ID 4) at the second Oracle database.

  • The propagation receiver (component ID 4) enqueues the messages it receives into a queue (component ID 5) at the second Oracle database.

  • An apply process (component ID 6) dequeues messages from the queue (component ID 5).

The second stream path flows through components in the following way:

  • A capture process (component ID 7) at the second Oracle database enqueues messages (component ID 8).

  • A propagation sender (component ID 9) sends messages from the queue (component ID 8) to a propagation receiver (component ID 10) at the first Oracle database.

  • A propagation receiver (component ID 10) at the first Oracle database enqueues messages received from the propagation sender (component ID 9) (component ID 11).

  • An apply process (component ID 12) dequeues messages (component ID 11).

Description of the illustration strms042.eps

This illustration shows the following three redo logs in the form of arrows with the SCN value increasing. The redo logs have SCN values on them to illustrate the following examples:

  • Example 1: First SCN of existing capture process is 10000. Start SCN of new capture process is 70000. Maximum checkpoint SCN of existing capture process is 90000.

  • Example 2: Maximum checkpoint SCN of existing capture process is 70000. Start SCN of new capture process is 90000.

  • Example 3: Maximum checkpoint SCN of existing capture process is 10000. Start SCN of new capture process is 3000000.

For example 1 and example 2, the new capture process should share the LogMiner data dictionary of an existing capture process. For example 3, the new capture process should create a LogMiner data dictionary.

Description of the illustration strms057.eps

This illustration shows that the inst1.example.com database and the inst2.example.com database do not share a computer file system. The inst1.example.com database runs a report at set intervals that adds the report files to the sales_reports directory object. This directory object corresponds to a directory on the inst1.example.com database's computer file system. Each time a new report is run, the report files in this directory are copied to a different directory on the inst2.example.com database's computer file system.

The illustration shows a file group repository in the inst2.example.com database with two versions of this tablespace set: sales_reports_v1 and sales_reports_v2. Other versions can also be added to the file group repository. The illustration also shows the directory objects on the inst2.example.com database's computer file system that store the files in these file groups. The sales_reports1 directory object stores the files for version sales_reports_v1, and the sales_reports2 directory object stores the files for version sales_reports_v2.

Description of the illustration strms066.eps

This illustration is described in the surrounding text.

Description of the illustration strms046.eps

This illustration shows the following process for row migration during dequeue by a messaging client:

  1. A user or application enqueues a row LCR that updates the hr.employees table at an Oracle database. In the LCR, the old value for the department_id column is 50. The new value for this column is 90.

  2. A messaging client dequeues the LCR and transforms the UPDATE into a DELETE because the change satisfies a subset rule.

Description of the illustration strms_streams_main1.gif

This screenshot shows the Overview subpage of the Manage Replication page. The top of the page has the following subpage links: Overview and Streams. Below the subpage links is a Refresh button, the last refresh date and time, and a View Data list. Below the list are the General, Component Summary, Path Summary, and Performance sections:

  • The General section includes:

    • "Streams Pool Size(MB)" text and a "100" link

    • "Streams Pool Size Used(%)" text and "19" text

  • The Component Summary section includes:

    • "Capture" text and a "1" link

    • "Propagation" text and a "1" link

    • "Apply" text and a "1" link

  • The Path Summary section includes:

    • "Streams Paths" text and a "2" link

    • "Streams Paths with problems" text and a "0" link

    • "Streams Paths with bottleneck components" text and a "0" link

  • The Performance section includes:

    • A View list with Component Level selected

    • A Statistics list with Latency selected

    • A Show Data for list with Last 1 Hour selected

    • A Capture Latency graph

    • A Propagation Latency graph

    • An Apply Latency graph

Description of the illustration strms019.eps

This illustration shows the following process for rule-based transformation during propagation:

  1. A propagation dequeues messages from the source queue and performs the transformation.

  2. The propagation propagates the transformed messages to the destination queue.

Description of the illustration strms008.eps

This illustration shows a source queue and a destination queue. A propagation is propagating messages from the source queue to the destination queue.

Description of the illustration strms047.eps

This illustration shows an example real-time downstream capture configuration that involves the following steps:

  1. Users make changes to database objects at a source database.

  2. The log writer process (LGWR) at the source database sends redo data to the downstream database while, at the same time, LGWR logs these changes in the online redo log at the source database.

  3. Remote file server (RFS) at the downstream database receives the redo data over the network from the LGWR at the source database.

  4. RFS logs the redo data in the standby redo log at the downstream database.

  5. The archiver writes the redo data in the standby redo log to archived redo log files at the downstream database.

  6. At the downstream database, a capture process captures changes in the standby redo log whenever possible and in the archived redo log files whenever necessary, and enqueues these changes as LCRs at the downstream database.

Description of the illustration strms013.eps

This illustration shows an apply process dequeuing LCRs and messages from the queue and either applying LCRs directly or passing messages to an apply handler. The apply handlers include a message handler procedure to process user messages, a procedure DML handler to process row LCRs, a DDL handler procedure to process DDL LCRs, a precommit handler to record commit information for LCRs or user messages, and a statement DML handler to process row LCRs.

Description of the illustration strms007.eps

This illustration shows four databases in a directed networks configuration. The first database is a source database in Hong Kong which propagates messages to a queue in an intermediate database in Chicago.

The intermediate database in Chicago propagates messages that originated at the source database in Hong Kong to a queue in a database in New York and to a queue in a database in Miami.

The intermediate database in Chicago is:

  • Destination queue for the source queue in Hong Kong

  • Source queue for the destination queues in New York and Miami

Description of the illustration strms067.eps

This illustration is described in the surrounding text.

Description of the illustration strms059.eps

This illustration shows the following process for rule-based transformation during capture by a synchronous capture:

  1. A user makes changes to database objects.

  2. A synchronous capture captures the change and performs the transformation.

  3. The synchronous capture enqueues the transformed LCR.

Description of the illustration strms037.eps

This illustration shows an example archived-log downstream capture configuration that involves the following steps:

  1. Users make changes to database objects at a source database.

  2. The log writer process (LGWR) at the source database logs these changes in the online redo log.

  3. The archiver writes the redo data to archived redo log files.

  4. Redo log files are copied to the downstream database.

  5. A capture process captures changes in the archived redo log files at the downstream database and enqueues these changes as LCRs at the downstream database.

Description of the illustration strms022.eps

This illustration shows three arrows pointing in the same direction. The first arrow is labeled "Capture", the second arrow is labeled "Staging", and the third arrow is labeled "Consumption".

Description of the illustration strms039.eps

This illustration shows a LogMiner data dictionary at two points in time (Time 3 and Time 4). At Time 3, the first SCN is set to 423667 and the start SCN is set to 479502. At Time 4, the first SCN has been reset to 502631 and the LogMiner data dictionary information before this SCN has been purged. In addition, start SCN is set to 502631 automatically.

Description of the illustration strms040.eps

This illustration shows the following process for row migration during propagation:

  1. A user makes the following update to a table:

    UPDATE hr.employees SET department_id=80 WHERE employee_id=190;
    
  2. Before the update, the department_id was 50 for the employee with employee_id equal to 190.

  3. The source database records the change in the redo log.

  4. A capture process captures the change and enqueues it as an LCR at the source database.

  5. A propagation dequeues the LCR and transforms the UPDATE into a DELETE because the change satisfies a subset rule.

  6. The propagation propagates the transformed LCR to a queue at the destination database.

  7. An apply process dequeues the change and applies it as a DELETE from an hr.employees subset table. This subset table contains rows only for employees with a department_id equal to 50.

Description of the illustration strms055.eps

This illustration shows that the inst1.example.com database and the inst2.example.com database share a computer file system. The inst1.example.com database contains two tablespaces: sales_tbs1 and sales_tbs2. The illustration shows a tablespace repository with three versions of this tablespace set: v_q1fy2005, v_q2fy2005, and v_q1fy2005_r. Other versions can also be added to the tablespace repository. The illustration also shows the directory objects that store the data files, export dump file, and export log file in each version:

  • The q1fy2005 directory object stores the files for version v_q1fy2005.

  • The q2fy2005 directory object stores the files for version v_q2fy2005.

  • The q1fy2005_r directory object stores the files for version v_q1fy2005_r.

When a version of the tablespace set is detached from the inst2.example.com database, the new version v_q1fy2005_r is added to the tablespace repository in inst1.example.com. Data files and the export dump file in the detached version remain in the q1fy2005_r directory object.

Description of the illustration strms048.eps

This illustration shows messages with data dependencies being shared between a source and destination database. At the source database, two sessions are enqueuing messages in the following sequence:

  1. Session 1 enqueues message e1 as part of transaction T1. Message e1 contains an insert of a row into the hr.departments table.

  2. Session 2 enqueues message e2 as part of transaction T2. Message e2 contains an insert of a row into the hr.employees table for the employee with an employee_id of 207.

  3. Session 1 enqueues message e3 as part of transaction T1. Message e3 contains an update to a row in the hr.employees table for the employee with an employee_id of 207.

Session 3 dequeues messages from the source database to the destination database. The messages are dequeued in the following order:

  1. Message e1 is dequeued, and the change is applied successfully.

  2. Message e3 is dequeued, and an error results because no data is found for an employee with an employee_id of 207.

  3. Message e2 is dequeued, and the change is applied. The result is that incorrect information is in the hr.employees table for the employee with an employee_id of 207.

Description of the illustration strms026.eps

This illustration is a flowchart that illustrates capture process rule evaluation. The flowchart shows the following process:

  1. Start.

  2. Find change in redo log.

    Question: Could the change pass the capture process rule sets during prefiltering?

    If yes, then continue with the next step.

    If no, then ignore the change and end the process.

  3. Convert change into LCR.

    Question: Does the LCR pass the capture process rule sets?

    If yes, then continue with the next step.

    If no, then discard the LCR and end the process.

  4. Enqueue LCR.

  5. End.

Description of the illustration strms034.eps

This illustration shows the following process for row migration during apply:

  1. A user makes the following update to a table:

    UPDATE hr.employees SET department_id=50 WHERE employee_id=145;
    
  2. The source database records the change in the redo log.

  3. A capture process captures the change and enqueues it as an LCR at the source database.

  4. The propagation propagates the LCR to a queue at the destination database.

  5. An apply process dequeues the LCR at the destination database and transforms the UPDATE into an INSERT because the change satisfies a subset rule.

  6. The apply process applies the change as an INSERT into an hr.employees subset table. This subset table contains rows only for employees with a department_id equal to 50.

Description of the illustration strms056.eps

This illustration shows that the inst1.example.com database and the inst3.example.com database do not share a computer file system. The inst1.example.com database contains two tablespaces: sales_tbs1 and sales_tbs2. The illustration shows a tablespace repository with two versions of this tablespace set: v_q1fy2005 and v_q2fy2005. Other versions can also be added to the tablespace repository. The illustration also shows the directory objects that store the data files, export dump file, and export log file in each version on the computer file system for the inst1.example.com database. The q1fy2005 directory object stores the files for version v_q1fy2005, and the q2fy2005 directory object stores the files for version v_q2fy2005.

Before a version of the tablespace set can be attached to the inst3.example.com database, the data files and export dump file in the version are copied to a directory on the inst3.example.com database's computer file system using the DBMS_FILE_TRANSFER package. The illustration shows the data files and export dump file in the q1fy2005 directory object on inst1.example.com being copied to the q1fy2005 directory object on inst3.example.com. After the tablespace set is attached, the inst3.example.com database contains the sales_tbs1 and sales_tbs2 tablespaces. Also, an import log file generated during the attach is added to the q1fy2005 directory object on inst3.example.com.

Description of the illustration strms016.eps

This illustration shows a single rule set being used by a capture process, a propagation, an apply process, a messaging client, and a synchronous capture.

Description of the illustration strms038.eps

This illustration shows a LogMiner data dictionary at two points in time (Time 1 and Time 2). At Time 1, the first SCN is set to 407835 and the start SCN is set to 479502. At Time 2, the first SCN has been reset to 423667 and the LogMiner data dictionary information before this SCN has been purged. The start SCN remains unchanged.

Description of the illustration strms053.eps

This illustration shows the inst1.example.com database with two tablespaces: sales_tbs1 and sales_tbs2. This tablespace set is cloned at different times. Each time the tablespace set is cloned, a new version of the tablespace set is added to the tablespace repository. The illustration shows two versions in the tablespace repository: v_q1fy2005 and v_q2fy2005. Other versions can also be added to the tablespace repository. The illustration also shows the directory objects that store the data files, export dump file, and export log file in each version. The q1fy2005 directory object stores the files for version v_q1fy2005, and the q2fy2005 directory object stores the files for version v_q2fy2005.

Description of the illustration strms020.eps

This illustration shows the following process for rule-based transformation during capture by a capture process:

  1. A user makes changes to database objects.

  2. The source database records the change in the redo log.

  3. A capture process captures the change and performs the transformation.

  4. The capture process enqueues the transformed LCR.

Description of the illustration strms506.eps

This illustration shows an Oracle Streams two-database replication environment that includes the following Oracle databases:

  • src.example.com

  • dest.example.com

The src.example.com database contains the redo log for the local database. This redo log records changes to the hr schema. This redo log is sent to the dest.example.com database by Redo Transport Services.

The dest.example.com database has the following Oracle Streams components configured:

  • The streams_queue queue.

  • A capture process named capture_hns that captures DML changes to the tables in the hr schema from the src.example.com redo log. The capture process enqueues these changes into the local streams_queue queue.

  • An apply process named apply that dequeues changes that originated at src.example.com from the streams_queue queue and applies them to the tables in the hr schema.

Description of the illustration strms068.eps

This illustration is described in the surrounding text.

Description of the illustration strms065.eps

This illustration is described in the surrounding text.

Description of the illustration strms045.eps

This illustration shows a messaging client being invoked by a user or application. When the messaging client is invoked, it explicitly dequeues persistent LCRs or persistent user messages.

Description of the illustration strms418.eps

This illustration shows an Oracle Streams replication environment that involves the following Oracle databases:

  • mult1.example.com

  • mult2.example.com

  • mult3.example.com

The mult1.example.com Oracle database has the following configuration:

  • The following three ANYDATA queues owned by the strmadmin user: captured_mult1, from_mult2, and from_mult3.

  • A capture process named capture_hrmult that captures DML and DDL changes to the tables in the hrmult schema: countries, departments, employees, job_history, jobs, locations, and regions. The capture process enqueues these changes into the captured_mult1 queue.

  • A propagation named mult1_to_mult2 that propagates changes from the local captured_mult1 queue to the from_mult1 queue at mult2.example.com.

  • A propagation named mult1_to_mult3 that propagates changes from the local captured_mult1 queue to the from_mult1 queue at mult3.example.com.

  • An apply process named apply_from_mult2 that dequeues changes that originated at mult2.example.com from the from_mult2 queue and applies them to the tables in the hrmult schema.

  • An apply process named apply_from_mult3 that dequeues changes that originated at mult3.example.com from the from_mult3 queue and applies them to the tables in the hrmult schema.

The mult2.example.com Oracle database has the following configuration:

  • The following three ANYDATA queues owned by the strmadmin user: captured_mult2, from_mult1, and from_mult3.

  • A capture process named capture_hrmult that captures DML and DDL changes to the tables in the hrmult schema: countries, departments, employees, job_history, jobs, locations, and regions. The capture process enqueues these changes into the captured_mult2 queue.

  • A propagation named mult2_to_mult1 that propagates changes from the local captured_mult2 queue to the from_mult2 queue at mult1.example.com.

  • A propagation named mult2_to_mult3 that propagates changes from the local captured_mult2 queue to the from_mult2 queue at mult3.example.com.

  • An apply process named apply_from_mult1 that dequeues changes that originated at mult1.example.com from the from_mult1 queue and applies them to the tables in the hrmult schema.

  • An apply process named apply_from_mult3 that dequeues changes that originated at mult3.example.com from the from_mult3 queue and applies them to the tables in the hrmult schema.

The mult3.example.com Oracle database has the following configuration:

  • The following three ANYDATA queues owned by the strmadmin user: captured_mult3, from_mult1, and from_mult2.

  • A capture process named capture_hrmult that captures DML and DDL changes to the tables in the hrmult schema: countries, departments, employees, job_history, jobs, locations, and regions. The capture process enqueues these changes into the captured_mult3 queue.

  • A propagation named mult3_to_mult1 that propagates changes from the local captured_mult3 queue to the from_mult3 queue at mult1.example.com.

  • A propagation named mult3_to_mult2 that propagates changes from the local captured_mult3 queue to the from_mult3 queue at mult2.example.com.

  • An apply process named apply_from_mult1 that dequeues changes that originated at mult1.example.com from the from_mult1 queue and applies them to the tables in the hrmult schema.

  • An apply process named apply_from_mult2 that dequeues changes that originated at mult2.example.com from the from_mult2 queue and applies them to the tables in the hrmult schema.

Description of the illustration strms017.eps

This illustration is described in the step-by-step rule evaluation process that precedes the illustration.

Description of the illustration strms049.eps

This illustration shows messages being enqueued and browsed within a database. Two sessions are enqueuing messages in the following sequence:

  1. Session 1 enqueues message e1 as part of transaction T1.

  2. Session 2 enqueues message e2 as part of transaction T2.

  3. Session 1 enqueues message e3 as part of transaction T1.

  4. Session 2 commits transaction T2.

  5. Session 1 commits transaction T1.

Session 3 browses messages in the queue at two different times. The first time session 3 browses messages, session 2 has committed, but session 1 has not yet committed. For this browse, the browse set shows messages in the following order:

  1. e2

  2. e1

  3. e3

The second time session 3 browses messages, both session 1 and session 2 have committed. For this browse, the browse set shows messages in the following order:

  1. e1

  2. e3

  3. e2

Description of the illustration strms044.eps

This illustration shows the following process for rule-based transformation during apply:

  1. An apply process dequeues messages and performs the transformation.

  2. The apply process either sends the transformed messages to apply handlers or applies the transformed messages directly to database objects.

Description of the illustration strms504.eps

This illustration shows an Oracle Streams hub-and-spoke replication environment that includes the following Oracle databases:

  • hub.example.com

  • spoke1.example.com

  • spoke2.example.com

At each database, the local redo log records changes to the hns schema.

The hub.example.com database has the following Oracle Streams components configured:

  • The following queues: source_hns, destination_spoke1, and destination_spoke2.

  • A capture process named capture_hns that captures DML changes to the tables in the hns schema from the local redo log. The capture process enqueues these changes into the local source_hns queue.

  • A propagation named propagation_spoke1 that sends changes from the local source_hns queue to the destination_spoke1 queue at spoke1.example.com.

  • A propagation named propagation_spoke2 that sends changes from the local source_hns queue to the destination_spoke2 queue at spoke2.example.com.

  • An apply process named apply_spoke1 that dequeues changes that originated at spoke1.example.com from the destination_spoke1 queue and applies them to the tables in the hns schema.

  • An apply process named apply_spoke2 that dequeues changes that originated at spoke2.example.com from the destination_spoke2 queue and applies them to the tables in the hns schema.

The spoke1.example.com database has the following Oracle Streams components configured:

  • The following queues: source_hns and destination_spoke1.

  • A capture process named capture_hns that captures DML changes to the tables in the hns schema from the local redo log. The capture process enqueues these changes into the local source_hns queue.

  • A propagation named propagation_spoke1 that sends changes from the local source_hns queue to the destination_spoke1 queue at hub.example.com.

  • An apply process named apply_spoke1 that dequeues changes that originated at hub.example.com and spoke2.example.com from the destination_spoke1 queue and applies them to the tables in the hns schema.

The spoke2.example.com database has the following Oracle Streams components configured:

  • The following queues: source_hns and destination_spoke2.

  • A capture process named capture_hns that captures DML changes to the tables in the hns schema from the local redo log. The capture process enqueues these changes into the local source_hns queue.

  • A propagation named propagation_spoke2 that sends changes from the local source_hns queue to the destination_spoke2 queue at hub.example.com.

  • An apply process named apply_spoke2 that dequeues changes that originated at hub.example.com and spoke1.example.com from the destination_spoke2 queue and applies them to the tables in the hns schema.

Managing Rule-Based Transformations

19 Managing Rule-Based Transformations

In Oracle Streams, a rule-based transformation is any modification to a message that results when a rule in a positive rule set evaluates to TRUE. There are two types of rule-based transformations: declarative and custom.

The following sections describe managing rule-based transformations:


Note:

A transformation specified for a rule is performed only if the rule is in a positive rule set. If the rule is in the negative rule set for a capture process, propagation, apply process, or messaging client, then these Oracle Streams clients ignore the rule-based transformation.

Managing Declarative Rule-Based Transformations

You can use the following procedures in the DBMS_STREAMS_ADM package to manage declarative rule-based transformations: ADD_COLUMN, DELETE_COLUMN, KEEP_COLUMNS, RENAME_COLUMN, RENAME_SCHEMA, and RENAME_TABLE.

This section provides instructions for completing the following tasks:

Adding Declarative Rule-Based Transformations

The following sections contain examples that add declarative rule-based transformations to DML rules.


Note:

Declarative rule-based transformations can be specified for DML rules only. They cannot be specified for DDL rules.

Adding a Declarative Rule-Based Transformation that Renames a Table

Use the RENAME_TABLE procedure in the DBMS_STREAMS_ADM package to add a declarative rule-based transformation that renames a table in a row LCR. For example, the following procedure adds a declarative rule-based transformation to the jobs12 rule in the strmadmin schema:

BEGIN 
  DBMS_STREAMS_ADM.RENAME_TABLE(
    rule_name       => 'strmadmin.jobs12',
    from_table_name => 'hr.jobs',
    to_table_name   => 'hr.assignments', 
    step_number     => 0,
    operation       => 'ADD');
END;
/

The declarative rule-based transformation added by this procedure renames the table hr.jobs to hr.assignments in a row LCR when the rule jobs12 evaluates to TRUE for the row LCR. If more than one declarative rule-based transformation is specified for the jobs12 rule, then this transformation follows default transformation ordering because the step_number parameter is set to 0 (zero). In addition, the operation parameter is set to ADD to indicate that the transformation is being added to the rule, not removed from it.

The RENAME_TABLE procedure can also add a transformation that renames the schema in addition to the table. For example, in the previous example, to specify that the schema should be renamed to oe, specify oe.assignments for the to_table_name parameter.
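Continuing the example, the following sketch shows such a call; it reuses the rule name and step number from the previous example and differs only in the to_table_name value:

```
BEGIN 
  -- Rename both the schema (hr to oe) and the table (jobs to assignments)
  DBMS_STREAMS_ADM.RENAME_TABLE(
    rule_name       => 'strmadmin.jobs12',
    from_table_name => 'hr.jobs',
    to_table_name   => 'oe.assignments', 
    step_number     => 0,
    operation       => 'ADD');
END;
/
```

When the jobs12 rule evaluates to TRUE for a row LCR, this transformation changes the row LCR so that it references oe.assignments instead of hr.jobs.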

Adding a Declarative Rule-Based Transformation that Adds a Column

Use the ADD_COLUMN procedure in the DBMS_STREAMS_ADM package to add a declarative rule-based transformation that adds a column to a row in a row LCR. For example, the following procedure adds a declarative rule-based transformation to the employees35 rule in the strmadmin schema:

BEGIN 
  DBMS_STREAMS_ADM.ADD_COLUMN(
    rule_name    => 'employees35',
    table_name   => 'hr.employees',
    column_name  => 'birth_date', 
    column_value => ANYDATA.ConvertDate(NULL),
    value_type   => 'NEW',
    step_number  => 0,
    operation    => 'ADD');
END;
/

The declarative rule-based transformation added by this procedure adds a birth_date column of data type DATE to an hr.employees table row in a row LCR when the rule employees35 evaluates to TRUE for the row LCR.

Notice that the ANYDATA.ConvertDate function specifies the column type and the column value. In this example, the added column value is NULL, but a valid date can also be specified. Use the appropriate ANYDATA function for the column being added. For example, if the data type of the column being added is NUMBER, then use the ANYDATA.ConvertNumber function.

The value_type parameter is set to NEW to indicate that the column is added to the new values in a row LCR. You can also specify OLD to add the column to the old values.

If more than one declarative rule-based transformation is specified for the employees35 rule, then the transformation follows default transformation ordering because the step_number parameter is set to 0 (zero). In addition, the operation parameter is set to ADD to indicate that the transformation is being added, not removed.


Note:

The ADD_COLUMN procedure is overloaded. A column_function parameter can specify that the current system date or time stamp is the value for the added column. The column_value and column_function parameters are mutually exclusive.
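For example, the following sketch uses the overloaded form of the procedure to populate the added column with the current system time stamp. The column name change_time is illustrative; note that column_function is specified instead of column_value, because the two parameters are mutually exclusive:

```
BEGIN 
  -- Add a column whose value is the system time stamp when the
  -- transformation runs, instead of a fixed column_value
  DBMS_STREAMS_ADM.ADD_COLUMN(
    rule_name       => 'employees35',
    table_name      => 'hr.employees',
    column_name     => 'change_time', 
    column_function => 'SYSTIMESTAMP',
    value_type      => 'NEW',
    step_number     => 0,
    operation       => 'ADD');
END;
/
```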


See Also:

Oracle Database PL/SQL Packages and Types Reference for more information about AnyData type functions

Overwriting an Existing Declarative Rule-Based Transformation

When the operation parameter is set to ADD in a procedure that adds a declarative rule-based transformation, an existing declarative rule-based transformation is overwritten if the parameters in the following list match the existing transformation parameters:

  • ADD_COLUMN procedure: rule_name, table_name, column_name, and step_number parameters

  • DELETE_COLUMN procedure: rule_name, table_name, column_name, and step_number parameters

  • KEEP_COLUMNS procedure: rule_name, table_name, column_list, and step_number parameters, or rule_name, table_name, column_table, and step_number parameters (The column_list and column_table parameters are mutually exclusive.)

  • RENAME_COLUMN procedure: rule_name, table_name, from_column_name, and step_number parameters

  • RENAME_SCHEMA procedure: rule_name, from_schema_name, and step_number parameters

  • RENAME_TABLE procedure: rule_name, from_table_name, and step_number parameters

For example, suppose an existing declarative rule-based transformation was created by running the following procedure:

BEGIN 
  DBMS_STREAMS_ADM.RENAME_COLUMN(
    rule_name         => 'departments33',
    table_name        => 'hr.departments',
    from_column_name  => 'manager_id', 
    to_column_name    => 'lead_id',
    value_type        => 'NEW',
    step_number       => 0,
    operation         => 'ADD');
END;
/

Running the following procedure overwrites this existing declarative rule-based transformation:

BEGIN 
  DBMS_STREAMS_ADM.RENAME_COLUMN(
    rule_name         => 'departments33',
    table_name        => 'hr.departments',
    from_column_name  => 'manager_id', 
    to_column_name    => 'lead_id',
    value_type        => '*',
    step_number       => 0,
    operation         => 'ADD');
END;
/

In this case, the value_type parameter in the declarative rule-based transformation was changed from NEW to *. That is, in the original transformation, only new values were renamed in row LCRs, but, in the new transformation, both old and new values are renamed in row LCRs.

Removing Declarative Rule-Based Transformations

To remove a declarative rule-based transformation from a rule, use the same procedure used to add the transformation, but specify REMOVE for the operation parameter. For example, to remove the transformation added in "Adding a Declarative Rule-Based Transformation that Renames a Table", run the following procedure:

BEGIN 
  DBMS_STREAMS_ADM.RENAME_TABLE(
    rule_name       => 'strmadmin.jobs12',
    from_table_name => 'hr.jobs',
    to_table_name   => 'hr.assignments', 
    step_number     => 0,
    operation       => 'REMOVE');
END;
/

When the operation parameter is set to REMOVE in any of the declarative transformation procedures listed in "Managing Declarative Rule-Based Transformations", the other parameters in the procedure are optional, except for the rule_name parameter. If these optional parameters are set to NULL, then they become wildcards.

The RENAME_TABLE procedure in the previous example behaves in the following way when one or more of the optional parameters are set to NULL:

  • from_table_name NULL, to_table_name NULL, step_number NULL: Remove all rename table transformations for the specified rule

  • from_table_name non-NULL, to_table_name NULL, step_number NULL: Remove all rename table transformations with the specified from_table_name for the specified rule

  • from_table_name NULL, to_table_name non-NULL, step_number NULL: Remove all rename table transformations with the specified to_table_name for the specified rule

  • from_table_name NULL, to_table_name NULL, step_number non-NULL: Remove all rename table transformations with the specified step_number for the specified rule

  • from_table_name non-NULL, to_table_name non-NULL, step_number NULL: Remove all rename table transformations with the specified from_table_name and to_table_name for the specified rule

  • from_table_name NULL, to_table_name non-NULL, step_number non-NULL: Remove all rename table transformations with the specified to_table_name and step_number for the specified rule

  • from_table_name non-NULL, to_table_name NULL, step_number non-NULL: Remove all rename table transformations with the specified from_table_name and step_number for the specified rule

The other declarative transformation procedures work in a similar way when optional parameters are set to NULL and the operation parameter is set to REMOVE.
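For example, the following call (a sketch based on the strmadmin.jobs12 rule used earlier in this section) sets all of the optional parameters to NULL and therefore removes every rename table transformation for the rule:

BEGIN 
  DBMS_STREAMS_ADM.RENAME_TABLE(
    rule_name       => 'strmadmin.jobs12',
    from_table_name => NULL,
    to_table_name   => NULL,
    step_number     => NULL,
    operation       => 'REMOVE');
END;
/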

Managing Custom Rule-Based Transformations

Use the SET_RULE_TRANSFORM_FUNCTION procedure in the DBMS_STREAMS_ADM package to set or unset a custom rule-based transformation for a rule. This procedure modifies the rule action context to specify the custom rule-based transformation.

This section provides instructions for completing the following tasks:


Caution:

Do not modify LONG, LONG RAW, LOB, or XMLType column data in an LCR with a custom rule-based transformation.


Note:

  • There is no automatic locking mechanism for a rule action context. Therefore, ensure that an action context is not updated by two or more sessions at the same time.

  • When you perform custom rule-based transformations on DDL LCRs, you probably need to modify the DDL text in the DDL LCR to match any other modification. For example, if the transformation changes the name of a table in the DDL LCR, then the transformation should change the table name in the DDL text in the same way.


Creating a Custom Rule-Based Transformation

A custom rule-based transformation function always operates on one message, but it can return one message or many messages. A custom rule-based transformation function that returns one message is a one-to-one transformation function. A one-to-one transformation function must have the following signature:

FUNCTION user_function (
   parameter_name   IN  ANYDATA)
RETURN ANYDATA;

Here, user_function stands for the name of the function and parameter_name stands for the name of the parameter passed to the function. The parameter passed to the function is an ANYDATA encapsulation of a message, and the function must return an ANYDATA encapsulation of a message.

A custom rule-based transformation function that can return more than one message is a one-to-many transformation function. A one-to-many transformation function must have the following signature:

FUNCTION user_function (
   parameter_name   IN  ANYDATA)
RETURN STREAMS$_ANYDATA_ARRAY;

Here, user_function stands for the name of the function and parameter_name stands for the name of the parameter passed to the function. The parameter passed to the function is an ANYDATA encapsulation of a message, and the function must return an array that contains zero or more ANYDATA encapsulations of a message. If the array contains zero ANYDATA encapsulations of a message, then the original message is discarded. One-to-many transformation functions are supported only for Oracle Streams capture processes and synchronous captures.

The STREAMS$_ANYDATA_ARRAY type is an Oracle-supplied type that has the following definition:

CREATE OR REPLACE TYPE SYS.STREAMS$_ANYDATA_ARRAY
   AS VARRAY(2147483647) of SYS.ANYDATA
/

The following steps outline the general procedure for creating a custom rule-based transformation that uses a one-to-one function:

  1. In SQL*Plus, connect to the database as an administrative user or as the user who will own the PL/SQL function. For this example, connect as the hr user.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  2. Create a PL/SQL function that performs the transformation.


    Caution:

    Ensure that the transformation function is deterministic. A deterministic function always returns the same value for any given set of input argument values, now and in the future. Also, ensure that the transformation function does not raise any exceptions. Exceptions can cause a capture process, propagation, or apply process to become disabled, and you must correct the transformation function before the capture process, propagation, or apply process can proceed. Exceptions raised by a custom rule-based transformation for a synchronous capture abort the DML statement that caused the exception. Exceptions raised by a custom rule-based transformation for a messaging client can prevent the messaging client from dequeuing messages.

    The following example creates a function called executive_to_management in the hr schema that changes the value in the department_name column of the departments table from Executive to Management. Such a transformation might be necessary if one branch in a company uses a different name for this department.

    CREATE OR REPLACE FUNCTION hr.executive_to_management(in_any IN ANYDATA) 
    RETURN ANYDATA
    IS
      lcr SYS.LCR$_ROW_RECORD;
      rc  NUMBER;
      ob_owner VARCHAR2(30);
      ob_name VARCHAR2(30);
      dep_value_anydata ANYDATA;
      dep_value_varchar2 VARCHAR2(30);
    BEGIN
      -- Get the type of object
      -- Check if the object type is SYS.LCR$_ROW_RECORD
      IF in_any.GETTYPENAME='SYS.LCR$_ROW_RECORD' THEN
        -- Put the row LCR into lcr
        rc := in_any.GETOBJECT(lcr);
        -- Get the object owner and name
        ob_owner := lcr.GET_OBJECT_OWNER();
        ob_name := lcr.GET_OBJECT_NAME();
        -- Check for the hr.departments table
        IF ob_owner = 'HR' AND ob_name = 'DEPARTMENTS' THEN
          -- Get the old value of the department_name column in the LCR
          dep_value_anydata := lcr.GET_VALUE('old','DEPARTMENT_NAME');
          IF dep_value_anydata IS NOT NULL THEN
            -- Put the column value into dep_value_varchar2
            rc := dep_value_anydata.GETVARCHAR2(dep_value_varchar2);
            -- Change a value of Executive in the column to Management
            IF (dep_value_varchar2 = 'Executive') THEN
              lcr.SET_VALUE('OLD','DEPARTMENT_NAME',
                ANYDATA.CONVERTVARCHAR2('Management'));
            END IF;
          END IF;
          -- Get the new value of the department_name column in the LCR
          dep_value_anydata := lcr.GET_VALUE('new', 'DEPARTMENT_NAME', 'n');
          IF dep_value_anydata IS NOT NULL THEN
            -- Put the column value into dep_value_varchar2
            rc := dep_value_anydata.GETVARCHAR2(dep_value_varchar2);
            -- Change a value of Executive in the column to Management
            IF (dep_value_varchar2 = 'Executive') THEN
              lcr.SET_VALUE('new','DEPARTMENT_NAME',
                ANYDATA.CONVERTVARCHAR2('Management'));
            END IF;
          END IF;
        END IF;
        RETURN ANYDATA.CONVERTOBJECT(lcr);
      END IF;
    RETURN in_any;
    END;
    /
    
  3. Grant the Oracle Streams administrator EXECUTE privilege on the hr.executive_to_management function.

    GRANT EXECUTE ON hr.executive_to_management TO strmadmin;
    
  4. Connect to the database as the Oracle Streams administrator.

  5. Create subset rules for DML operations on the hr.departments table. The subset rules will use the transformation created in Step 2.

    Subset rules are not required to use custom rule-based transformations. This example uses subset rules to illustrate an action context with more than one name-value pair. This example creates subset rules for an apply process on a database named dbs1.example.com. These rules evaluate to TRUE when an LCR contains a DML change to a row with a location_id of 1700 in the hr.departments table. This example assumes that an ANYDATA queue named streams_queue already exists in the database.

    To create these rules, run the following ADD_SUBSET_RULES procedure:

    BEGIN 
      DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
        table_name               =>  'hr.departments',
        dml_condition            =>  'location_id=1700',
        streams_type             =>  'apply',
        streams_name             =>  'strm01_apply',
        queue_name               =>  'streams_queue',
        include_tagged_lcr       =>  FALSE,
        source_database          =>  'dbs1.example.com');
    END;
    /
    

    Note:

    • To create the rule and the rule set, the Oracle Streams administrator must have CREATE_RULE_SET_OBJ (or CREATE_ANY_RULE_SET_OBJ) and CREATE_RULE_OBJ (or CREATE_ANY_RULE_OBJ) system privileges. You grant these privileges using the GRANT_SYSTEM_PRIVILEGE procedure in the DBMS_RULE_ADM package.

    • This example creates the rule using the DBMS_STREAMS_ADM package. Alternatively, you can create a rule, add it to a rule set, and specify a custom rule-based transformation using the DBMS_RULE_ADM package. Oracle Streams Extended Examples contains an example of this procedure.

    • The ADD_SUBSET_RULES procedure adds the subset rules to the positive rule set for the apply process.


  6. Determine the names of the system-created rules by running the following query:

    SELECT RULE_NAME, SUBSETTING_OPERATION FROM DBA_STREAMS_RULES 
      WHERE OBJECT_NAME='DEPARTMENTS' AND DML_CONDITION='location_id=1700';
    

    This query displays output similar to the following:

    RULE_NAME                      SUBSET
    ------------------------------ ------
    DEPARTMENTS5                   INSERT
    DEPARTMENTS6                   UPDATE
    DEPARTMENTS7                   DELETE
    

    Note:

    You can also obtain this information using the OUT parameters when you run ADD_SUBSET_RULES.

    Because these are subset rules, two of them contain a non-NULL action context that performs an internal transformation:

    • The rule with a subsetting condition of INSERT contains an internal transformation that converts updates into inserts if the update changes the value of the location_id column to 1700 from some other value. The internal transformation does not affect inserts.

    • The rule with a subsetting condition of DELETE contains an internal transformation that converts updates into deletes if the update changes the value of the location_id column from 1700 to a different value. The internal transformation does not affect deletes.

    In this example, you can confirm that the rules DEPARTMENTS5 and DEPARTMENTS7 have a non-NULL action context, and that the rule DEPARTMENTS6 has a NULL action context, by running the following query:

    COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A13
    COLUMN ACTION_CONTEXT_NAME HEADING 'Action Context Name' FORMAT A27
    COLUMN ACTION_CONTEXT_VALUE HEADING 'Action Context Value' FORMAT A30
    
    SELECT 
        RULE_NAME,
        AC.NVN_NAME ACTION_CONTEXT_NAME, 
        AC.NVN_VALUE.ACCESSVARCHAR2() ACTION_CONTEXT_VALUE
      FROM DBA_RULES R, TABLE(R.RULE_ACTION_CONTEXT.ACTX_LIST) AC
      WHERE RULE_NAME IN ('DEPARTMENTS5','DEPARTMENTS6','DEPARTMENTS7');
    

    This query displays output similar to the following:

    Rule Name     Action Context Name         Action Context Value
    ------------- --------------------------- ------------------------------
    DEPARTMENTS5  STREAMS$_ROW_SUBSET         INSERT
    DEPARTMENTS7  STREAMS$_ROW_SUBSET         DELETE
    

    The DEPARTMENTS6 rule does not appear in the output because its action context is NULL.

  7. Set the custom rule-based transformation for each subset rule by running the SET_RULE_TRANSFORM_FUNCTION procedure. This step runs this procedure for each rule and specifies hr.executive_to_management as the transformation function. Ensure that no other users are modifying the action context at the same time.

    BEGIN
      DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION(
        rule_name           => 'departments5',
        transform_function  => 'hr.executive_to_management');
      DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION(
        rule_name           => 'departments6',
        transform_function  => 'hr.executive_to_management');
      DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION(
        rule_name           => 'departments7',
        transform_function  => 'hr.executive_to_management');    
    END;
    /
    

    Specifically, this procedure adds a name-value pair to each rule action context that specifies the name STREAMS$_TRANSFORM_FUNCTION and a value that is an ANYDATA instance containing the name of the PL/SQL function that performs the transformation. In this case, the transformation function is hr.executive_to_management.


    Note:

    The SET_RULE_TRANSFORM_FUNCTION procedure does not verify that the specified transformation function exists. If the function does not exist, then an error is raised when an Oracle Streams process or job tries to invoke the transformation function.

Now, if you run the query that displays the name-value pairs in the action context for these rules, each rule, including the DEPARTMENTS6 rule, shows the name-value pair for the custom rule-based transformation:

SELECT 
    RULE_NAME,
    AC.NVN_NAME ACTION_CONTEXT_NAME, 
    AC.NVN_VALUE.ACCESSVARCHAR2() ACTION_CONTEXT_VALUE
  FROM DBA_RULES R, TABLE(R.RULE_ACTION_CONTEXT.ACTX_LIST) AC
  WHERE RULE_NAME IN ('DEPARTMENTS5','DEPARTMENTS6','DEPARTMENTS7');

This query displays output similar to the following:

Rule Name     Action Context Name         Action Context Value
------------- --------------------------- ------------------------------
DEPARTMENTS5  STREAMS$_ROW_SUBSET         INSERT
DEPARTMENTS5  STREAMS$_TRANSFORM_FUNCTION "HR"."EXECUTIVE_TO_MANAGEMENT"
DEPARTMENTS6  STREAMS$_TRANSFORM_FUNCTION "HR"."EXECUTIVE_TO_MANAGEMENT"
DEPARTMENTS7  STREAMS$_ROW_SUBSET         DELETE
DEPARTMENTS7  STREAMS$_TRANSFORM_FUNCTION "HR"."EXECUTIVE_TO_MANAGEMENT"

You can also view transformation functions using the DBA_STREAMS_TRANSFORM_FUNCTION data dictionary view.
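For example, a query along the following lines (a sketch; it assumes the view exposes the rule name and transformation function name as columns) lists the custom rule-based transformations in the database:

SELECT RULE_OWNER, RULE_NAME, TRANSFORM_FUNCTION_NAME
  FROM DBA_STREAMS_TRANSFORM_FUNCTION;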


See Also:

Oracle Database PL/SQL Packages and Types Reference for more information about the SET_RULE_TRANSFORM_FUNCTION and the rule types used in this example

Altering a Custom Rule-Based Transformation

To alter a custom rule-based transformation, you can either edit the transformation function or run the SET_RULE_TRANSFORM_FUNCTION procedure to specify a different transformation function. This example runs the SET_RULE_TRANSFORM_FUNCTION procedure to specify a different transformation function. The SET_RULE_TRANSFORM_FUNCTION procedure modifies the action context of a specified rule to run a different transformation function. If you edit the transformation function itself, then you do not need to run this procedure.

This example alters a custom rule-based transformation for rule DEPARTMENTS5 by changing the transformation function from hr.executive_to_management to hr.executive_to_lead. The hr.executive_to_management rule-based transformation was added to the DEPARTMENTS5 rule in the example in "Creating a Custom Rule-Based Transformation".

In Oracle Streams, subset rules use name-value pairs in an action context to perform internal transformations that convert UPDATE operations into INSERT and DELETE operations in some situations. Such a conversion is called a row migration. The SET_RULE_TRANSFORM_FUNCTION procedure preserves the name-value pairs that perform row migrations.


See Also:

"Row Migration and Subset Rules" for more information about row migration

Complete the following steps to alter a custom rule-based transformation:

  1. You can view all of the name-value pairs in the action context of a rule by performing the following query:

    COLUMN ACTION_CONTEXT_NAME HEADING 'Action Context Name' FORMAT A30
    COLUMN ACTION_CONTEXT_VALUE HEADING 'Action Context Value' FORMAT A30
    
    SELECT 
        AC.NVN_NAME ACTION_CONTEXT_NAME, 
        AC.NVN_VALUE.ACCESSVARCHAR2() ACTION_CONTEXT_VALUE
      FROM DBA_RULES R, TABLE(R.RULE_ACTION_CONTEXT.ACTX_LIST) AC
      WHERE RULE_NAME = 'DEPARTMENTS5';
    

    This query displays output similar to the following:

    Action Context Name            Action Context Value
    ------------------------------ ------------------------------
    STREAMS$_ROW_SUBSET            INSERT
    STREAMS$_TRANSFORM_FUNCTION    "HR"."EXECUTIVE_TO_MANAGEMENT"
    
  2. Run the SET_RULE_TRANSFORM_FUNCTION procedure to set the transformation function to executive_to_lead for the DEPARTMENTS5 rule. In this example, it is assumed that the new transformation function is hr.executive_to_lead and that the strmadmin user has EXECUTE privilege on it.

    BEGIN
      DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION(
        rule_name           => 'departments5',
        transform_function  => 'hr.executive_to_lead');
    END;
    /  
    

    To ensure that the transformation function was altered properly, you can rerun the query in Step 1. You should alter the action context for the DEPARTMENTS6 and DEPARTMENTS7 rules in a similar way to keep the three subset rules consistent.


Note:

  • The SET_RULE_TRANSFORM_FUNCTION procedure does not verify that the specified transformation function exists. If the function does not exist, then an error is raised when an Oracle Streams process or job tries to invoke the transformation function.

  • If a custom rule-based transformation function is modified at the same time that an Oracle Streams client tries to access it, then an error might be raised.


Unsetting a Custom Rule-Based Transformation

To unset a custom rule-based transformation from a rule, run the SET_RULE_TRANSFORM_FUNCTION procedure and specify NULL for the transformation function. Specifying NULL unsets the name-value pair that specifies the custom rule-based transformation in the rule action context. This example unsets a custom rule-based transformation for rule DEPARTMENTS5. This transformation was added to the DEPARTMENTS5 rule in the example in "Creating a Custom Rule-Based Transformation".

In Oracle Streams, subset rules use name-value pairs in an action context to perform internal transformations that convert UPDATE operations into INSERT and DELETE operations in some situations. Such a conversion is called a row migration. The SET_RULE_TRANSFORM_FUNCTION procedure preserves the name-value pairs that perform row migrations.


See Also:

"Row Migration and Subset Rules" for more information about row migration

Run the following procedure to unset the custom rule-based transformation for rule DEPARTMENTS5:

BEGIN
  DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION(
    rule_name           => 'departments5',
    transform_function  => NULL);
END;
/

To ensure that the transformation function was unset, you can rerun the query of the rule's action context shown in "Altering a Custom Rule-Based Transformation". You should unset the custom rule-based transformation for the DEPARTMENTS6 and DEPARTMENTS7 rules in a similar way to keep the three subset rules consistent.




10 Advanced Apply Process Concepts

The following topics contain information about consuming information with Oracle Streams.

Apply Process Creation

You can create an apply process using the DBMS_STREAMS_ADM package or the DBMS_APPLY_ADM package. Using the DBMS_STREAMS_ADM package to create an apply process is simpler because defaults are used automatically for some configuration options. Alternatively, using the DBMS_APPLY_ADM package to create an apply process is more flexible.

When you create an apply process by running the CREATE_APPLY procedure in the DBMS_APPLY_ADM package, you can specify nondefault values for the apply_captured, apply_database_link, and apply_tag parameters. You can use the procedures in the DBMS_STREAMS_ADM package or the DBMS_RULE_ADM package to add rules to a rule set for the apply process.
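For example, a call along the following lines (a sketch only; the apply process name, queue name, and tag value are hypothetical) creates an apply process with the CREATE_APPLY procedure, setting nondefault values so that the process applies captured LCRs and generates a nondefault tag in the redo log:

BEGIN
  DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name     => 'strmadmin.streams_queue',
    apply_name     => 'strm02_apply',
    apply_captured => TRUE,
    apply_tag      => HEXTORAW('17'));
END;
/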

If you create more than one apply process in a database, then the apply processes are completely independent of each other. These apply processes do not synchronize with each other, even if they apply LCRs from the same source database.

Table 10-1 describes the differences between using the DBMS_STREAMS_ADM package and the DBMS_APPLY_ADM package for apply process creation.

Table 10-1 DBMS_STREAMS_ADM and DBMS_APPLY_ADM Apply Process Creation

DBMS_STREAMS_ADM Package: A rule set is created automatically for the apply process, and rules can be added to the rule set automatically. The rule set is a positive rule set if the inclusion_rule parameter is set to TRUE (the default). It is a negative rule set if the inclusion_rule parameter is set to FALSE. You can use the procedures in the DBMS_STREAMS_ADM and DBMS_RULE_ADM packages to manage rule sets and rules for the apply process after the apply process is created.

DBMS_APPLY_ADM Package: You create one or more rule sets and rules for the apply process either before or after it is created. You can use the procedures in the DBMS_RULE_ADM package to create rule sets and add rules to rule sets either before or after the apply process is created. You can use the procedures in the DBMS_STREAMS_ADM package to create rule sets and add rules to rule sets for the apply process after the apply process is created.

DBMS_STREAMS_ADM Package: The apply process can apply messages only at the local database.

DBMS_APPLY_ADM Package: You specify whether the apply process applies messages at the local database or at a remote database during apply process creation.

DBMS_STREAMS_ADM Package: Changes applied by the apply process generate tags in the redo log at the destination database with a value of 00 (double zero).

DBMS_APPLY_ADM Package: You specify the tag value for changes applied by the apply process during apply process creation. The default value for the tag is 00 (double zero).



See Also:


Apply Processes and Dependencies

The following sections describe how apply processes handle dependencies:

How Dependent Transactions Are Applied

The parallelism apply process parameter controls the parallelism of an apply process. When apply process parallelism is set to 1, a single apply server applies transactions in the same order as the order in which they were committed on the source database. In this case, dependencies are not an issue. For example, if transaction A is committed before transaction B on the source database, then, on the destination database, all of the LCRs in transaction A are applied before any LCRs in transaction B.

However, when apply process parallelism is set to a value greater than 1, multiple apply servers apply transactions simultaneously. When an apply process is applying transactions in parallel, it applies the row LCRs in these transactions until it detects a row LCR that depends on a row LCR in another transaction. When a dependent row LCR is detected, an apply process finishes applying the LCRs in the transaction with the lower commit system change number (CSCN) and commits this transaction before it finishes applying the remaining row LCRs in the transaction with the higher CSCN.

For example, consider two transactions: transaction A and transaction B. The transactions are dependent transactions, and each transaction contains 100 row LCRs. Transaction A committed on the source database before transaction B. Therefore, transaction A has the lower CSCN of the two transactions. An apply process can apply these transactions in parallel in the following way:

  1. The apply process begins to apply row LCRs from both transactions in parallel.

  2. Using a constraint in the destination database's data dictionary or a virtual dependency definition at the destination database, the apply process detects a dependency between a row LCR in transaction A and a row LCR in transaction B.

  3. Because transaction B has the higher CSCN of the two transactions, the apply process waits to apply transaction B and does not apply the dependent row LCR in transaction B. The row LCRs before the dependent row LCR in transaction B have been applied. For example, if the dependent row LCR in transaction B is the 81st row LCR, then the apply process could have applied 80 of the 100 row LCRs in transaction B.

  4. Because transaction A has the lower CSCN of the two transactions, the apply process applies all the row LCRs in transaction A and commits.

  5. The apply process applies the dependent row LCR in transaction B and the remaining row LCRs in transaction B. When all of the row LCRs in transaction B are applied, the apply process commits transaction B.


Note:

You can set the parallelism apply process parameter using the SET_PARAMETER procedure in the DBMS_APPLY_ADM package.
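For example, the following call (using the strm01_apply apply process name from earlier examples) sets parallelism to 4; note that apply process parameter values are passed as strings:

BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'strm01_apply',
    parameter  => 'parallelism',
    value      => '4');
END;
/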

Row LCR Ordering During Apply

An apply process orders and applies row LCRs in the following way:

  • Row LCRs within a single transaction are always applied in the same order as the corresponding changes on the source database.

  • Row LCRs that depend on each other in different transactions are always applied in the same order as the corresponding changes on the source database. When apply process parallelism is greater than 1, and the apply process detects a dependency between row LCRs in different transactions, the apply process always executes the transaction with the lower CSCN before executing the dependent row LCR. This behavior is described in more detail in "How Dependent Transactions Are Applied".

  • If the commit_serialization apply process parameter is set to FULL, then the apply process commits all transactions, regardless of whether they contain dependent row LCRs, in the same order as the corresponding transactions on the source database.

  • If the commit_serialization apply process parameter is set to DEPENDENT_TRANSACTIONS, then the apply process might apply transactions that do not depend on each other in a different order than the commit order of the corresponding transactions on the source database.


Note:

You can set the commit_serialization apply process parameter using the SET_PARAMETER procedure in the DBMS_APPLY_ADM package.

Dependencies and Constraints

If the names of shared database objects are the same at the source and destination databases, and if the objects are in the same schemas at these databases, then an apply process automatically detects dependencies between row LCRs, assuming constraints are defined for the database objects at the destination database. Information about these constraints is stored in the data dictionary at the destination database.

Regardless of the setting for the commit_serialization parameter and apply process parallelism, an apply process always respects dependencies between transactions that are enforced by database constraints. When an apply process is applying a transaction that contains row LCRs that depend on row LCRs in another transaction, the apply process ensures that the row LCRs are applied in the correct order and that the transactions are committed in the correct order to maintain the dependencies. Apply processes detect dependencies for captured row LCRs and persistent row LCRs.

However, some environments have dependencies that are not enforced by database constraints, such as environments that enforce dependencies using applications. If your environment has dependencies for shared database objects that are not enforced by database constraints, then set the commit_serialization parameter to FULL for apply processes that apply changes to these database objects.

Dependency Detection, Rule-Based Transformations, and Apply Handlers

When rule-based transformations are specified for rules used by an apply process, and apply handlers are configured for the apply process, LCRs are processed in the following order:

  1. The apply process dequeues LCRs from its queue.

  2. The apply process runs rule-based transformations on LCRs, when appropriate.

  3. The apply process detects dependencies between LCRs.

  4. The apply process passes LCRs to apply handlers, when appropriate.

Virtual Dependency Definitions

In some cases, an apply process requires additional information to detect dependencies in row LCRs that are being applied in parallel. The following are examples of cases in which an apply process requires additional information to detect dependencies:

  • The data dictionary at the destination database does not contain the required information. The following are examples of this case:

    • The apply process cannot find information about a database object in the data dictionary of the destination database. This can happen when there are data dictionary differences for shared database objects between the source and destination databases. For example, a shared database object can have a different name or can be in a different schema at the source database and destination database.

    • A relationship exists between two or more tables, and the relationship is not recorded in the data dictionary of the destination database. This can happen when database constraints are intentionally not defined, for example to improve performance, or when an application enforces dependencies during database operations instead of relying on database constraints.

  • Data is denormalized by an apply handler after dependency computation. For example, the information in a single row LCR can be used to create multiple row LCRs that are applied to multiple tables.

Apply errors or incorrect processing can result when an apply process cannot determine dependencies properly. In some of the cases described in the previous list, you can use rule-based transformations to avoid apply problems. For example, if a shared database object is in different schemas at the source and destination databases, then a rule-based transformation can change the schema in the appropriate LCRs. However, the disadvantage with using rule-based transformations is that they cannot be executed in parallel.

A virtual dependency definition is a description of a dependency that is used by an apply process to detect dependencies between transactions at a destination database. A virtual dependency definition is not described as a constraint in the data dictionary of the destination database. Instead, it is specified using procedures in the DBMS_APPLY_ADM package. Virtual dependency definitions enable an apply process to detect dependencies that it would not be able to detect by using only the constraint information in the data dictionary. After dependencies are detected, an apply process schedules LCRs and transactions in the correct order for apply.

Virtual dependency definitions provide required information so that apply processes can detect dependencies correctly before applying LCRs directly or passing LCRs to apply handlers. Virtual dependency definitions enable apply handlers to process these LCRs correctly, and the apply handlers can process them in parallel to improve performance.

A virtual dependency definition can define one of the following types of dependencies:


Note:

A destination database must be running Oracle Database 10g Release 2 or later to specify virtual dependency definitions.

Value Dependency

A value dependency defines a table constraint, such as a unique key, or a relationship between the columns of two or more tables. A value dependency is set for one or more columns, and an apply process uses a value dependency to detect dependencies between row LCRs that contain values for these columns. Value dependencies can define virtual foreign key relationships between tables, but, unlike foreign key relationships, value dependencies can involve more than two tables.

Value dependencies are useful when relationships between columns in tables are not described by constraints in the data dictionary of the destination database. Value dependencies describe these relationships, and an apply process uses the value dependencies to determine when two or more row LCRs in different transactions involve the same row in a table at the destination database. For transactions that are being applied in parallel, when two or more row LCRs involve the same row, the transactions that include these row LCRs are dependent transactions.

Use the SET_VALUE_DEPENDENCY procedure in the DBMS_APPLY_ADM package to define or remove a value dependency at a destination database. In this procedure, table columns are specified as attributes.
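For example, the following sketch defines a value dependency on a single column; the dependency name and table are hypothetical:

```sql
BEGIN
  DBMS_APPLY_ADM.SET_VALUE_DEPENDENCY(
    dependency_name => 'order_id_dep',   -- hypothetical dependency name
    object_name     => 'oe.orders',      -- hypothetical table
    attribute_list  => 'order_id');      -- column specified as an attribute
END;
/
```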

The following restrictions pertain to value dependencies:

  • The row LCRs that involve the database objects specified in a value dependency must originate from a single source database.

  • Each value dependency must contain only one set of attributes for a particular database object.

Also, any columns specified in a value dependency at a destination database must be supplementally logged at the source database. These columns must be unconditionally logged.

Object Dependency

An object dependency defines a parent-child relationship between two objects at a destination database. An apply process schedules execution of transactions that involve the child object after all transactions with lower commit system change number (CSCN) values that involve the parent object have been committed. An apply process uses the object identifier in each row LCR to detect dependencies. The apply process does not use column values in the row LCRs to detect object dependencies.

Object dependencies are useful when relationships between tables are not described by constraints in the data dictionary of the destination database. Object dependencies describe these relationships, and an apply process uses the object dependencies to determine when two or more row LCRs in different transactions involve these tables. For transactions that are being applied in parallel, when a row LCR in one transaction involves the child table, and a row LCR in a different transaction involves the parent table, the transactions that include these row LCRs are dependent transactions.

Use the CREATE_OBJECT_DEPENDENCY procedure to create an object dependency at a destination database. Use the DROP_OBJECT_DEPENDENCY procedure to drop an object dependency at a destination database. Both of these procedures are in the DBMS_APPLY_ADM package.
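For example, the following sketch makes one hypothetical table the parent of another, so that transactions involving the child are applied only after earlier transactions involving the parent have committed:

```sql
BEGIN
  DBMS_APPLY_ADM.CREATE_OBJECT_DEPENDENCY(
    object_name        => 'ord.order_items',       -- child object (hypothetical)
    parent_object_name => 'ord.customer_orders');  -- parent object (hypothetical)
END;
/
```

The dependency can later be removed by passing the same arguments to DROP_OBJECT_DEPENDENCY.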


Note:

Tables with circular dependencies can result in apply process deadlocks when apply process parallelism is greater than 1. The following is an example of a circular dependency: Table A has a foreign key constraint on table B, and table B has a foreign key constraint on table A. Apply process deadlocks are possible when two or more transactions that involve the tables with circular dependencies commit at the same SCN.

Barrier Transactions

When an apply process cannot identify the table row or the database object specified in a row LCR by using the destination database's data dictionary and virtual dependency definitions, the transaction that contains the row LCR is applied after all of the other transactions with lower CSCN values. Such a transaction is called a barrier transaction. Transactions with higher CSCN values than the barrier transaction are not applied until after the barrier transaction has committed. In addition, all DDL transactions are barrier transactions.

Considerations for Applying DML Changes to Tables

The following sections discuss considerations for applying DML changes to tables:

Constraints and Applying DML Changes to Tables

You must ensure that the primary key columns at the destination database are logged in the redo log at the source database for every update. A unique key or foreign key constraint at a destination database that contains data from more than one column at the source database requires additional logging at the source database.

There are various ways to ensure that a column is logged at the source database. For example, whenever the value of a column is updated, the column is logged. Also, Oracle has a feature called supplemental logging that automates the logging of specified columns.

For a unique key and foreign key constraint at a destination database that contains data from only one column at a source database, no supplemental logging is required. However, for a constraint that contains data from multiple columns at the source database, you must create a conditional supplemental log group containing all the columns at the source database that are used by the constraint at the destination database.
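For example, assuming a hypothetical multi-column foreign key at the destination that is built from the department_id and manager_id columns of hr.employees, a conditional supplemental log group could be created at the source database as follows:

```sql
ALTER TABLE hr.employees
  ADD SUPPLEMENTAL LOG GROUP emp_fk_cols (department_id, manager_id);
```

Without the ALWAYS keyword, the log group is conditional: the columns are logged only when at least one column in the group is changed.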

Typically, unique key and foreign key constraints include the same columns at the source database and destination database. However, in some cases, an apply handler or custom rule-based transformation can combine a multi-column constraint from the source database into a single key column at the destination database. Also, an apply handler or custom rule-based transformation can separate a single key column from the source database into a multi-column constraint at the destination database. In such cases, the number of columns in the constraint at the source database determines whether a conditional supplemental log group is required. If there is more than one column in the constraint at the source database, then a conditional supplemental log group containing all the constraint columns is required at the source database. If there is only one column in the constraint at the source database, then no supplemental logging is required for the key column.


See Also:

Oracle Streams Replication Administrator's Guide for more information about supplemental logging

Substitute Key Columns

If possible, each table for which changes are applied by an apply process should have a primary key. When a primary key is not possible, Oracle recommends that each table have a set of columns that can be used as a unique identifier for each row of the table. If the tables that you plan to use in your Oracle Streams environment do not have a primary key or a set of unique columns, then consider altering these tables accordingly.

To detect conflicts and handle errors accurately, Oracle must be able to identify uniquely and match corresponding rows at different databases. By default, Oracle Streams uses the primary key of a table to identify rows in the table, and if a primary key does not exist, Oracle Streams uses the smallest unique key that has at least one NOT NULL column to identify rows in the table. When a table at a destination database does not have a primary key or a unique key with at least one NOT NULL column, or when you want to use columns other than the primary key or unique key for the key, you can designate a substitute key at the destination database. A substitute key is a column or set of columns that Oracle can use to identify rows in the table during apply.

You can specify the substitute primary key for a table using the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package. Unlike true primary keys, the substitute key columns can contain nulls. Also, the substitute key columns take precedence over any existing primary key or unique keys for the specified table for all apply processes at the destination database.
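For example, the following sketch designates two hypothetical columns as the substitute key for a table at the destination database:

```sql
BEGIN
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name => 'hr.employees',
    column_list => 'employee_id,email');  -- hypothetical substitute key columns
END;
/
```

Running the procedure again with a NULL column_list removes the substitute key setting for the table.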

If you specify a substitute key for a table in a destination database, and these columns are not a primary key for the same table at the source database, then you must create an unconditional supplemental log group containing the substitute key columns at the source database.
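For example, if the substitute key at the destination consists of the employee_id and email columns of hr.employees (hypothetical names), the corresponding unconditional supplemental log group at the source database uses the ALWAYS keyword:

```sql
ALTER TABLE hr.employees
  ADD SUPPLEMENTAL LOG GROUP emp_sub_key (employee_id, email) ALWAYS;
```

The ALWAYS keyword makes the group unconditional: the columns are logged for every update, whether or not their values change.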

In the absence of substitute key columns, primary key constraints, and unique key constraints, an apply process uses all of the columns in the table as the key columns, excluding columns of the following data types: LOB, LONG, LONG RAW, user-defined types (including object types, REFs, varrays, nested tables), and Oracle-supplied types (including Any types, XML types, spatial types, and media types). In this case, you must create an unconditional supplemental log group containing these columns at the source database. Using substitute key columns is preferable when there is no primary key constraint for a table because fewer columns are needed in the row LCR.


Note:

  • Oracle recommends that each column you specify as a substitute key column be a NOT NULL column. You should also create a single index that includes all of the columns in a substitute key. Following these guidelines improves performance for changes because the database can locate the relevant row more efficiently.

  • LOB, LONG, LONG RAW, user-defined type, and Oracle-supplied type columns cannot be specified as substitute key columns.



Apply Process Behavior for Column Discrepancies

A column discrepancy is any difference in the columns in a table at a source database and the columns in the same table at a destination database. If there are column discrepancies in your Oracle Streams environment, then use rule-based transformations, statement DML handlers, or procedure DML handlers to make the columns in row LCRs being applied by an apply process match the columns in the relevant tables at a destination database.

The following sections describe apply process behavior for common column discrepancies.

Missing Columns at the Destination Database

If the table at the destination database is missing one or more columns that are in the table at the source database, then an apply process raises an error and moves the transaction that caused the error into the error queue. You can avoid such an error by creating a rule-based transformation or procedure DML handler that deletes the missing columns from the LCRs before they are applied. Specifically, the transformation or handler can remove the extra columns using the DELETE_COLUMN member procedure on the row LCR. You can also create a statement DML handler with a SQL statement that excludes the missing columns.
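A procedure DML handler that removes such a column can be sketched as follows; the schema, procedure, and column names are hypothetical, and error handling is omitted:

```sql
-- Sketch: a procedure DML handler that deletes a column (JOB_ID here,
-- hypothetically missing at the destination) before executing the row LCR.
CREATE OR REPLACE PROCEDURE strmadmin.drop_missing_col(in_any IN ANYDATA) IS
  lcr SYS.LCR$_ROW_RECORD;
  rc  PLS_INTEGER;
BEGIN
  rc := in_any.GETOBJECT(lcr);       -- extract the row LCR from the ANYDATA
  lcr.DELETE_COLUMN('JOB_ID', '*');  -- drop both old and new values for the column
  lcr.EXECUTE(TRUE);                 -- apply the modified row LCR
END;
/
```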

Extra Columns at the Destination Database

If the table at the destination database has more columns than the table at the source database, then apply process behavior depends on whether the extra columns are required for dependency computations. If the extra columns are not used for dependency computations, then an apply process applies changes to the destination table. In this case, if column defaults exist for the extra columns at the destination database, then these defaults are used for these columns for all inserts. Otherwise, these inserted columns are NULL.

If, however, the extra columns are used for dependency computations, then an apply process places the transactions that include these changes in the error queue. The following types of columns are required for dependency computations:

  • For all changes, all key columns

  • For INSERT and DELETE statements, all columns involved with constraints

  • For UPDATE statements, if a constraint column is changed, such as a unique key constraint column or a foreign key constraint column, then all columns involved in the constraint

When the extra columns are used for dependency computations, one way to avoid apply errors is to use statement DML handlers to add the extra columns.
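As a sketch (all object and handler names are hypothetical), a statement DML handler that supplies a value for an extra region column on INSERT might be configured with the DBMS_STREAMS_HANDLER_ADM and DBMS_APPLY_ADM packages:

```sql
BEGIN
  -- Create an empty statement DML handler.
  DBMS_STREAMS_HANDLER_ADM.CREATE_STMT_HANDLER(
    handler_name => 'orders_ins_handler');

  -- Add a statement that inserts the row and supplies the extra column.
  DBMS_STREAMS_HANDLER_ADM.ADD_STMT_TO_HANDLER(
    handler_name     => 'orders_ins_handler',
    statement        => 'INSERT INTO oe.orders (order_id, order_date, region)
                         VALUES (:new.order_id, :new.order_date, ''EAST'')',
    statement_number => 1);

  -- Associate the handler with INSERT operations on the table.
  DBMS_APPLY_ADM.ADD_STMT_HANDLER(
    object_name    => 'oe.orders',
    operation_name => 'INSERT',
    handler_name   => 'orders_ins_handler');
END;
/
```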

Column Data Type Mismatch

A column data type mismatch results when the data type for a column in a table at the destination database does not match the data type for the same column at the source database. An apply process can automatically convert certain data types when it encounters a column data type mismatch. If an apply process cannot automatically convert the data type, then the apply process places transactions containing the changes to the mismatched column into the error queue. To avoid such an error, you can create a custom rule-based transformation or DML handler that converts the data type.

Conflict Resolution and an Apply Process

Conflicts are possible in an Oracle Streams configuration where data is shared between multiple databases. A conflict is a mismatch between the old values in an LCR and the expected data in a table. A conflict can occur if DML changes are allowed to a table for which changes are captured and to a table where these changes are applied.

For example, a transaction at the source database can update a row at nearly the same time as a different transaction that updates the same row at a destination database. In this case, if data consistency between the two databases is important, then when the change is propagated to the destination database, an apply process must be instructed either to keep the change at the destination database or replace it with the change from the source database. When data conflicts occur, you need a mechanism to ensure that the conflict is resolved in accordance with your business rules.

Oracle Streams automatically detects conflicts and, for update conflicts, tries to use an update conflict handler to resolve them if one is configured. Oracle Streams offers a variety of prebuilt handlers that enable you to define a conflict resolution system for your database that resolves conflicts in accordance with your business rules. If you have a unique situation that a prebuilt conflict resolution handler cannot resolve, then you can build and use your own custom conflict resolution handlers in an error handler or procedure DML handler. Conflict detection can be disabled for nonkey columns.
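For example, a prebuilt MAXIMUM handler that resolves update conflicts on hypothetical salary-related columns, keeping the change with the greater value in the resolution column, might be configured like this:

```sql
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'salary';           -- hypothetical columns covered by the handler
  cols(2) := 'commission_pct';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.employees',
    method_name       => 'MAXIMUM',  -- keep the change with the greater value
    resolution_column => 'salary',   -- column compared to resolve the conflict
    column_list       => cols);
END;
/
```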

Handlers and Row LCR Processing

Any of the following handlers can process a row LCR:

  • DML handler (either statement DML handler or procedure DML handler)

  • Error handler

  • Update conflict handler

The following sections describe the possible scenarios involving these handlers:

No Relevant Handlers

If there are no relevant handlers for a row LCR, then an apply process tries to apply the change specified in the row LCR directly. If the apply process can apply the row LCR, then the change is made to the row in the table. If there is a conflict or an error during apply, then the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.

Relevant Update Conflict Handler

Consider a case where there is a relevant update conflict handler configured, but no other relevant handlers are configured. An apply process tries to apply the change specified in a row LCR directly. If the apply process can apply the row LCR, then the change is made to the row in the table.

If there is an error during apply that is caused by a condition other than an update conflict, including a uniqueness conflict or a delete conflict, then the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.

If there is an update conflict during apply, then the relevant update conflict handler is invoked. If the update conflict handler resolves the conflict successfully, then the apply process either applies the LCR or discards the LCR, depending on the resolution of the update conflict, and the apply process continues applying the other LCRs in the transaction that should be applied according to the apply process rule sets. If the update conflict handler cannot resolve the conflict, then the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.

DML Handler But No Relevant Update Conflict Handler

Consider a case where an apply process passes a row LCR to a DML handler, and there is no relevant update conflict handler configured. The DML handler can be a statement DML handler or a procedure DML handler.

The DML handler processes the row LCR. The designer of the DML handler has complete control over this processing. Some DML handlers can perform SQL operations or run the EXECUTE member procedure of the row LCR. If the DML handler runs the EXECUTE member procedure of the row LCR, then the apply process tries to apply the row LCR. This row LCR might have been modified by the DML handler.

Statement DML Handler Failure

An apply process can have multiple statement DML handlers for the same operation on the same table. These statement DML handlers can run in any order, and each statement DML handler receives the original row LCR. If any SQL operation performed by any statement DML handler fails, or if an attempt to run the EXECUTE member procedure fails, then the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.

Procedure DML Handler Failure

If any SQL operation performed by a procedure DML handler fails, or if an attempt to run the EXECUTE member procedure fails, then the procedure DML handler can try to handle the exception. If the procedure DML handler does not raise an exception, then the apply process assumes the procedure DML handler has performed the appropriate action with the row LCR, and the apply process continues applying the other LCRs in the transaction that should be applied according to the apply process rule sets.

If the procedure DML handler cannot handle the exception, then the procedure DML handler should raise an exception. In this case, the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.

DML Handler And a Relevant Update Conflict Handler

Consider a case where an apply process passes a row LCR to a DML handler and there is a relevant update conflict handler configured. The DML handler can be a statement DML handler or a procedure DML handler. An apply process can have multiple statement DML handlers for the same operation on the same table. These statement DML handlers can run in any order, and each statement DML handler receives the original row LCR.

The DML handler processes the row LCR. The designer of the DML handler has complete control over this processing. Some DML handlers might perform SQL operations or run the EXECUTE member procedure of the row LCR. If the DML handler runs the EXECUTE member procedure of the row LCR, then the apply process tries to apply the row LCR. If the DML handler is a procedure DML handler, then this row LCR could have been modified by the procedure DML handler.

If any SQL operation performed by a DML handler fails, or if an attempt to run the EXECUTE member procedure fails for any reason other than an update conflict, then the behavior is the same as that described in "DML Handler But No Relevant Update Conflict Handler". Note that uniqueness conflicts and delete conflicts are not update conflicts.

If an attempt to run the EXECUTE member procedure fails because of an update conflict, then the behavior depends on the setting of the conflict_resolution parameter in the EXECUTE member procedure:

The conflict_resolution Parameter Is Set to TRUE

If the conflict_resolution parameter is set to TRUE, then the relevant update conflict handler is invoked. If the update conflict handler resolves the conflict successfully, and all other operations performed by the DML handler succeed, then the DML handler finishes without raising an exception, and the apply process continues applying the other LCRs in the transaction that should be applied according to the apply process rule sets.

If the update conflict handler cannot resolve the conflict, and the DML handler is a statement DML handler, then the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.

If the update conflict handler cannot resolve the conflict, and the DML handler is a procedure DML handler, then a procedure DML handler can try to handle the exception. If the procedure DML handler does not raise an exception, then the apply process assumes the procedure DML handler has performed the appropriate action with the row LCR, and the apply process continues applying the other LCRs in the transaction that should be applied according to the apply process rule sets. If the procedure DML handler cannot handle the exception, then the procedure DML handler should raise an exception. In this case, the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.

The conflict_resolution Parameter Is Set to FALSE

If the conflict_resolution parameter is set to FALSE, then the relevant update conflict handler is not invoked. In this case, the behavior is the same as that described in "DML Handler But No Relevant Update Conflict Handler".
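In a procedure DML handler, the parameter is supplied directly in the call to the EXECUTE member procedure of the row LCR, for example (sketch; assumes the row LCR has been extracted into a variable named lcr):

```sql
-- Inside a procedure DML handler body:
lcr.EXECUTE(TRUE);   -- invoke any relevant update conflict handler on conflict
-- or:
lcr.EXECUTE(FALSE);  -- bypass update conflict handlers; conflicts raise errors
```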

Statement DML Handler and Procedure DML Handler

Consider a case where an apply process passes a row LCR to both a statement DML handler and a procedure DML handler for the same operation on the same table. In this case, the DML handlers can be run in any order, and each DML handler receives the original row LCR. Also, an apply process can have multiple statement DML handlers for the same operation on the same table. These statement DML handlers can run in any order, and each statement DML handler receives the original row LCR. Each DML handler processes the row LCR independently, and the behavior is the same as any other scenario that involves a DML handler.

If any statement DML handler or procedure DML handler fails, then the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.

Error Handler But No Relevant Update Conflict Handler

Consider a case where an apply process encounters an error when it tries to apply a row LCR. This error can be caused by a conflict or by some other condition. There is an error handler for the table operation but no relevant update conflict handler configured.

The row LCR is passed to the error handler. The error handler processes the row LCR. The designer of the error handler has complete control over this processing. Some error handlers might perform SQL operations or run the EXECUTE member procedure of the row LCR. If the error handler runs the EXECUTE member procedure of the row LCR, then the apply process tries to apply the row LCR. This row LCR could have been modified by the error handler.

If any SQL operation performed by the error handler fails, or if an attempt to run the EXECUTE member procedure fails, then the error handler can try to handle the exception. If the error handler does not raise an exception, then the apply process assumes the error handler has performed the appropriate action with the row LCR, and the apply process continues applying the other LCRs in the transaction that should be applied according to the apply process rule sets.

If the error handler cannot handle the exception, then the error handler should raise an exception. In this case, the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.

Error Handler And a Relevant Update Conflict Handler

Consider a case where an apply process encounters an error when it tries to apply a row LCR. There is an error handler for the table operation, and there is a relevant update conflict handler configured.

The handler that is invoked to handle the error depends on the type of error:

  • If the error is caused by a condition other than an update conflict, including a uniqueness conflict or a delete conflict, then the error handler is invoked, and the behavior is the same as that described in "Error Handler But No Relevant Update Conflict Handler".

  • If the error is caused by an update conflict, then the update conflict handler is invoked. If the update conflict handler resolves the conflict successfully, then the apply process continues applying the other LCRs in the transaction that should be applied according to the apply process rule sets. In this case, the error handler is not invoked.

    If the update conflict handler cannot resolve the conflict, then the error handler is invoked. If the error handler does not raise an exception, then the apply process assumes the error handler has performed the appropriate action with the row LCR, and the apply process continues applying the other LCRs in the transaction that should be applied according to the apply process rule sets. If the error handler cannot process the LCR, then the error handler should raise an exception. In this case, the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.

Statement DML Handler and Relevant Error Handler

Consider a case where an apply process passes a row LCR to a statement DML handler and there is a relevant error handler configured.

The statement DML handler processes the row LCR. The designer of the statement DML handler has complete control over this processing. Some statement DML handlers might perform SQL operations or run the EXECUTE member procedure of the row LCR. If the statement DML handler runs the EXECUTE member procedure of the row LCR, then the apply process tries to apply the row LCR.

Also, an apply process can have multiple statement DML handlers for the same operation on the same table. These statement DML handlers can run in any order, and each statement DML handler receives the original row LCR.

If any SQL operation performed by any statement DML handler fails, or if an attempt to run the EXECUTE member procedure fails for any reason, then the behavior is the same as that described in "Error Handler But No Relevant Update Conflict Handler". The error handler gets the original row LCR, not the row LCR processed by the statement DML handler.


Note:

You cannot have a procedure DML handler and an error handler simultaneously for the same operation on the same table. Therefore, there is no scenario in which they could both be invoked.

Statement DML Handler, Error Handler, and Relevant Update Conflict Handler

Consider a case where an apply process passes a row LCR to a statement DML handler and there is a relevant error handler and a relevant update conflict handler configured.

The statement DML handler processes the row LCR. The designer of the statement DML handler has complete control over this processing. Some statement DML handlers might perform SQL operations or run the EXECUTE member procedure of the row LCR. If the statement DML handler runs the EXECUTE member procedure of the row LCR, then the apply process tries to apply the row LCR.

Also, an apply process can have multiple statement DML handlers for the same operation on the same table. These statement DML handlers can run in any order, and each statement DML handler receives the original row LCR.

If any SQL operation performed by any statement DML handler fails, or if an attempt to run the EXECUTE member procedure fails for any reason, then the behavior is the same as that described in "Error Handler And a Relevant Update Conflict Handler".


Note:

You cannot have a procedure DML handler and an error handler simultaneously for the same operation on the same table. Therefore, there is no scenario in which they could both be invoked.

Considerations for Applying DDL Changes

The following sections discuss considerations for applying DDL changes to tables:

System-Generated Names

If you plan to capture DDL changes at a source database and apply these DDL changes at a destination database, then avoid using system-generated names. If a DDL statement results in a system-generated name for an object, then the name of the object typically will be different at the source database and each destination database applying the DDL change from this source database. Different names for objects can result in apply errors for future DDL changes.

For example, suppose the following DDL statement is run at a source database:

CREATE TABLE sys_gen_name (n1 NUMBER NOT NULL);

This statement results in a NOT NULL constraint with a system-generated name. For example, the NOT NULL constraint might be named sys_001500. When this change is applied at a destination database, the system-generated name for this constraint might be sys_c1000.

Suppose the following DDL statement is run at the source database:

ALTER TABLE sys_gen_name DROP CONSTRAINT sys_001500;

This DDL statement succeeds at the source database, but it fails at the destination database and results in an apply error.

To avoid such an error, explicitly name all objects resulting from DDL statements. For example, to name a NOT NULL constraint explicitly, run the following DDL statement:

CREATE TABLE sys_gen_name (n1 NUMBER CONSTRAINT sys_gen_name_nn NOT NULL);

CREATE TABLE AS SELECT Statements

When applying a change resulting from a CREATE TABLE AS SELECT statement, an apply process performs two steps:

  1. The CREATE TABLE AS SELECT statement is executed at the destination database, but it creates only the structure of the table. It does not insert any rows into the table. If the CREATE TABLE AS SELECT statement fails, then an apply process error results. Otherwise, the statement automatically commits, and the apply process performs Step 2.

  2. The apply process inserts the rows that were inserted at the source database because of the CREATE TABLE AS SELECT statement into the corresponding table at the destination database. It is possible that a capture process, a propagation, or an apply process will discard all of the row LCRs with these inserts based on their rule sets. In this case, the table remains empty at the destination database.

DML Statements within DDL Statements

When an apply process applies a data definition language (DDL) change, Oracle Streams ensures that the data manipulation language (DML) changes on the DDL target within the same transaction are not replicated at the destination database. Therefore, the source database and destination database can diverge in some cases. Divergence can result in apply process errors when the old values in row logical change records (LCRs) do not match the current values in a destination table.

The following cases cause the source database and destination database to diverge:

The DDL Statement Contains Derived Values

When a DDL statement contains a non-literal value that is derived, the value that is derived might not match at the source database and destination database. For example, the following DDL statement adds a column to the hr.employees table and inserts a date value derived from the computer system running the source database:

ALTER TABLE hr.employees ADD(start_date DATE DEFAULT SYSDATE);

Assume that a replication environment maintains DML and DDL changes made to the hr.employees table between a source database and a destination database. In this case, the SYSDATE function is executed independently at the source database and at the destination database. Therefore, the DATE value inserted at the source database will not match the DATE value inserted at the destination database.

The DDL Statement Fires DML Triggers

When a DDL statement fires a DML trigger defined on the destination table, the DML changes made by the trigger are not replicated at the destination database, because those changes occur in the same transaction as the DDL statement and operate on the table that is the target of the DDL statement.

For example, assume you create the following table:

CREATE TABLE hr.temp_employees(
   emp_id       NUMBER  PRIMARY KEY,
   first_name   VARCHAR2(64),
   last_name    VARCHAR2(64),
   modify_date  TIMESTAMP);

Assume you create a trigger on the table so that whenever the table is updated the modify_date column is updated to reflect the time of change:

CREATE OR REPLACE TRIGGER hr.trg_mod_dt BEFORE UPDATE ON hr.temp_employees
   REFERENCING
   NEW AS NEW_ROW FOR EACH ROW
BEGIN
   :NEW_ROW.modify_date := SYSTIMESTAMP;
END;
/

Assume that a replication environment maintains DML and DDL changes made to the hr.temp_employees table between a source database and a destination database. In this case, the hr.temp_employees table is maintained correctly at the destination database for direct DML changes made to this table at the source database. However, if an ADD COLUMN statement at the source database adds a column to this table, then the hr.trg_mod_dt update trigger changes the modify_date column of all of the rows in the table to a new timestamp. These changes to the modify_date column are not replicated at the destination database.

Instantiation SCN and Ignore SCN for an Apply Process

In an Oracle Streams environment that shares information within a single database or between multiple databases, a source database is the database where changes are generated in the redo log. Suppose an environment has the following characteristics:

  • A capture process or a synchronous capture captures changes to tables at the source database and stages the changes as LCRs in a queue.

  • An apply process applies these LCRs, either at the same database or at a destination database to which the LCRs have been propagated.

In such an environment, for each table, only changes that committed after a specific system change number (SCN) at the source database are applied. An instantiation SCN specifies this value for each table.

An instantiation SCN can be set during instantiation, or an instantiation SCN can be set using a procedure in the DBMS_APPLY_ADM package. If the tables do not exist at the destination database before the Oracle Streams replication environment is configured, then these tables are physically created (instantiated) using copies from the source database, and the instantiation SCN is set for each table during instantiation. If the tables already exist at the destination database before the Oracle Streams replication environment is configured, then these tables are not instantiated using copies from the source database. Instead, the instantiation SCN must be set manually for each table using one of the following procedures in the DBMS_APPLY_ADM package: SET_TABLE_INSTANTIATION_SCN, SET_SCHEMA_INSTANTIATION_SCN, or SET_GLOBAL_INSTANTIATION_SCN.
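For example, the instantiation SCN for an existing table can be set manually with SET_TABLE_INSTANTIATION_SCN. The following sketch assumes a hypothetical source database dbs1.example.com that is reachable from the destination database over a database link of the same name; the SCN is obtained at the source database so that it reflects the point from which changes should be applied:

```sql
-- Hypothetical sketch: set the instantiation SCN for hr.employees at the
-- destination database, using the current SCN obtained at the source
-- database over a database link named dbs1.example.com.
DECLARE
  iscn NUMBER;  -- instantiation SCN obtained at the source database
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@dbs1.example.com;
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'hr.employees',
    source_database_name => 'dbs1.example.com',
    instantiation_scn    => iscn);
END;
/
```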

The instantiation SCN for a database object controls which LCRs that contain changes to the database object are ignored by an apply process and which LCRs are applied by an apply process. If the commit SCN of an LCR for a database object from a source database is less than or equal to the instantiation SCN for that database object at a destination database, then the apply process at the destination database discards the LCR. Otherwise, the apply process applies the LCR.

Also, if there are multiple source databases for a shared database object at a destination database, then an instantiation SCN must be set for each source database, and the instantiation SCN can be different for each source database. You can set instantiation SCNs by using export/import or transportable tablespaces. You can also set an instantiation SCN by using a procedure in the DBMS_APPLY_ADM package.

Oracle Streams also records the ignore SCN for each database object. The ignore SCN is the SCN below which changes to the database object cannot be applied. The instantiation SCN for an object cannot be set lower than the ignore SCN for the object. This value corresponds to the SCN value at the source database at the time when the object was prepared for instantiation. An ignore SCN is set for a database object only when the database object is instantiated using Oracle Data Pump.

You can view the instantiation SCN and ignore SCN for database objects by querying the DBA_APPLY_INSTANTIATED_OBJECTS data dictionary view.
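A query such as the following, run as an administrative user, shows both values for each instantiated object:

```sql
-- List the instantiation SCN and ignore SCN recorded for each object
SELECT SOURCE_DATABASE,
       SOURCE_OBJECT_OWNER,
       SOURCE_OBJECT_NAME,
       INSTANTIATION_SCN,
       IGNORE_SCN
  FROM DBA_APPLY_INSTANTIATED_OBJECTS;
```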

The Oldest SCN for an Apply Process

If an apply process is running, then the oldest SCN is the earliest SCN of the transactions currently being dequeued and applied. For a stopped apply process, the oldest SCN is the earliest SCN of the transactions that were being applied when the apply process was stopped.

The following are two common scenarios in which the oldest SCN is important:

  • You must recover the database in which the apply process is running to a certain point in time.

  • You stop using an existing capture process that captures changes for the apply process and use a different capture process to capture changes for the apply process.

In both cases, you should determine the oldest SCN for the apply process by querying the DBA_APPLY_PROGRESS data dictionary view. The OLDEST_MESSAGE_NUMBER column in this view contains the oldest SCN. Next, set the start SCN for the capture process that is capturing changes for the apply process to the same value as the oldest SCN value. If the capture process is capturing changes for other apply processes, then these other apply processes might receive duplicate LCRs when you reset the start SCN for the capture process. In this case, the other apply processes automatically discard the duplicate LCRs.
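The two steps above can be sketched as follows. The apply process name, capture process name, and SCN value (apply_dbs2, capture_dbs1, and 750338) are hypothetical:

```sql
-- Step 1: determine the oldest SCN for the apply process
SELECT OLDEST_MESSAGE_NUMBER
  FROM DBA_APPLY_PROGRESS
 WHERE APPLY_NAME = 'APPLY_DBS2';

-- Step 2: at the capture database, set the capture process start SCN
-- to the value returned by the query above
BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name => 'capture_dbs1',
    start_scn    => 750338);
END;
/
```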


Note:

The oldest SCN is only valid for apply processes that apply LCRs that were captured by a capture process. The oldest SCN does not pertain to apply processes that apply LCRs captured by synchronous capture or LCRs enqueued explicitly.

Low-Watermark and High-Watermark for an Apply Process

The low-watermark for an apply process is the system change number (SCN) up to which all LCRs have been applied. That is, LCRs that were committed at an SCN less than or equal to the low-watermark number have definitely been applied, but some LCRs that were committed with a higher SCN also might have been applied. The low-watermark SCN for an apply process is equivalent to the applied SCN for a capture process.

The high-watermark for an apply process is the SCN beyond which no LCRs have been applied. That is, no LCRs that were committed with an SCN greater than the high-watermark have been applied.

You can view the low-watermark and high-watermark for one or more apply processes by querying the V$STREAMS_APPLY_COORDINATOR and ALL_APPLY_PROGRESS data dictionary views.
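For a running apply process, a query like the following shows both watermarks from the dynamic performance view:

```sql
-- Low-watermark and high-watermark for each running apply process
SELECT APPLY_NAME,
       LWM_MESSAGE_NUMBER AS low_watermark,
       HWM_MESSAGE_NUMBER AS high_watermark
  FROM V$STREAMS_APPLY_COORDINATOR;
```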

Apply Processes and Triggers

This section describes how Oracle Streams apply processes interact with triggers.

This section contains these topics:

Trigger Firing Property

You can control a DML or DDL trigger's firing property using the SET_TRIGGER_FIRING_PROPERTY procedure in the DBMS_DDL package. This procedure lets you specify whether a trigger always fires, fires once, or fires for apply process changes only.

The SET_TRIGGER_FIRING_PROPERTY procedure is overloaded. Set a trigger's firing property in one of the following ways:

  • To specify that a trigger always fires, set the fire_once procedure parameter to FALSE.

  • To specify that a trigger fires once, set the fire_once parameter to TRUE.

  • To specify that a trigger fires for apply process changes only, set the property parameter to DBMS_DDL.APPLY_SERVER_ONLY.

If the property parameter is set to DBMS_DDL.APPLY_SERVER_ONLY for a trigger, then the trigger fires only for apply process changes, regardless of the setting of the fire_once parameter. That is, setting DBMS_DDL.APPLY_SERVER_ONLY for the property parameter overrides the fire_once parameter setting.
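The first and third options above can be sketched as follows, using the hr.update_job_history trigger discussed later in this section. Parameter names follow the documented overloads of the procedure; verify them against the DBMS_DDL reference for your release:

```sql
-- Make the trigger always fire, including for apply process changes
BEGIN
  DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY(
    trig_owner => 'hr',
    trig_name  => 'update_job_history',
    fire_once  => FALSE);
END;
/

-- Make the trigger fire for apply process changes only
BEGIN
  DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY(
    trig_owner => 'hr',
    trig_name  => 'update_job_history',
    property   => DBMS_DDL.APPLY_SERVER_ONLY,
    setting    => TRUE);
END;
/
```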

A trigger's firing property determines whether the trigger fires in each of the following cases:

  • When a triggering event is executed by a user process

  • When a triggering event is executed by an apply process

  • When a triggering event results from the execution of one or more apply errors using the EXECUTE_ERROR or EXECUTE_ALL_ERRORS procedure in the DBMS_APPLY_ADM package

Table 10-2 shows when a trigger fires based on its trigger firing property.

Table 10-2 Trigger Firing Property

Trigger Firing PropertyUser Process Causes Triggering EventApply Process Causes Triggering EventApply Error Execution Causes Triggering Event

Always fire

Trigger Fires

Trigger Fires

Trigger Fires

Fire once

Trigger Fires

Trigger Does Not Fire

Trigger Does Not Fire

Fire for apply process changes only

Trigger Does Not Fire

Trigger Fires

Trigger Fires


For example, in the hr schema, the update_job_history trigger adds a row to the job_history table when data is updated in the job_id or department_id column in the employees table. Suppose, in an Oracle Streams environment, the following configuration exists:

  • A capture process or synchronous capture captures changes to both of these tables at the dbs1.example.com database.

  • A propagation propagates these changes to the dbs2.example.com database.

  • An apply process applies these changes at the dbs2.example.com database.

  • The update_job_history trigger exists in the hr schema in both databases.

If the update_job_history trigger is set to always fire at dbs2.example.com in this scenario, then these actions result:

  1. The job_id column is updated for an employee in the employees table at dbs1.example.com.

  2. The update_job_history trigger fires at dbs1.example.com and adds a row to the job_history table that records the change.

  3. The capture process or synchronous capture at dbs1.example.com captures the changes to both the employees table and the job_history table.

  4. A propagation propagates these changes to the dbs2.example.com database.

  5. An apply process at the dbs2.example.com database applies both changes.

  6. The update_job_history trigger fires at dbs2.example.com when the apply process updates the employees table.

In this case, the change to the employees table is recorded twice at the dbs2.example.com database: when the apply process applies the change to the job_history table and when the update_job_history trigger fires to record the change made to the employees table by the apply process.

A database administrator might not want the update_job_history trigger to fire at the dbs2.example.com database when a change is made by the apply process. Similarly, a database administrator might not want a trigger to fire because of the execution of an apply error transaction. If the update_job_history trigger's firing property is set to fire once, then it does not fire at dbs2.example.com when the apply process applies a change to the employees table, and it does not fire when an executed error transaction updates the employees table.


Note:

Only DML and DDL triggers can be set to fire once. All other types of triggers always fire.


See Also:

Oracle Database PL/SQL Packages and Types Reference for more information about setting a trigger's firing property with the SET_TRIGGER_FIRING_PROPERTY procedure

Apply Processes and Triggers Created with the ON SCHEMA Clause

If you use the ON SCHEMA clause to create a schema trigger, then the schema trigger fires only if the schema performs a relevant change. Therefore, when an apply process is applying changes, a schema trigger that is set to fire always fires only if the apply user is the same as the schema specified in the schema trigger. If the schema trigger is set to fire once, then it never fires when an apply process applies changes, regardless of whether the apply user is the same as the schema specified in the schema trigger.

For example, if you specify a schema trigger that always fires on the hr schema at a source database and destination database, but the apply user at a destination database is strmadmin, then the trigger fires when the hr user performs a relevant change on the source database, but the trigger does not fire when this change is applied at the destination database. However, if you specify a schema trigger that always fires on the strmadmin schema at the destination database, then this trigger fires whenever a relevant change is made by the apply process, regardless of any trigger specifications at the source database.

Oracle Streams Data Dictionary for an Apply Process

When a database object is prepared for instantiation at a source database, an Oracle Streams data dictionary is populated automatically at the database where changes to the object are captured by a capture process. The Oracle Streams data dictionary is a multiversioned copy of some of the information in the primary data dictionary at a source database. The Oracle Streams data dictionary maps object numbers, object version information, and internal column numbers from the source database into table names, column names, and column data types. This mapping keeps each captured LCR as small as possible because a captured LCR can often use numbers rather than names internally.

Unless a captured LCR is passed as a parameter to a custom rule-based transformation during capture or propagation, the mapping information in the Oracle Streams data dictionary at the source database is needed to interpret the contents of the LCR at any database that applies the captured LCR. To make this mapping information available to an apply process, Oracle automatically populates a multiversioned Oracle Streams data dictionary at each destination database that has an Oracle Streams apply process. Oracle automatically propagates relevant information from the Oracle Streams data dictionary at the source database to all other databases that apply captured LCRs from the source database.

Multiple Apply Processes in a Single Database

If you run multiple apply processes in a single database, consider increasing the size of the System Global Area (SGA). Use the SGA_MAX_SIZE initialization parameter to increase the SGA size. Also, if the size of the Oracle Streams pool is not managed automatically in the database, then increase the size of the Oracle Streams pool by 1 MB for each parallelism of each apply process. For example, if two apply processes run in a database, with the parallelism parameter set to 4 for one and 1 for the other, then increase the Oracle Streams pool by 5 MB (4 + 1).
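When the Streams pool is sized manually, the adjustment above can be sketched as follows. The 261M value is hypothetical and assumes a current pool size of 256 MB plus the 5 MB computed in the example:

```sql
-- Check the current Streams pool size (in MB)
SELECT CURRENT_SIZE/1024/1024 AS streams_pool_mb
  FROM V$SGA_DYNAMIC_COMPONENTS
 WHERE COMPONENT = 'streams pool';

-- Hypothetical: grow the pool to accommodate the two apply processes
ALTER SYSTEM SET STREAMS_POOL_SIZE = 261M;
```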


Note:

The size of the Oracle Streams pool is managed automatically if the MEMORY_TARGET, MEMORY_MAX_TARGET, or SGA_TARGET initialization parameter is set to a nonzero value.


See Also:


Oracle Streams Restrictions

B Oracle Streams Restrictions

This appendix describes Oracle Streams restrictions.

This appendix includes these topics:

Capture Process Restrictions

This section describes restrictions for capture processes.

This section contains these topics:

Unsupported Data Types for Capture Processes

A capture process does not capture the results of DML changes to columns of the following data types:

  • BFILE

  • ROWID

  • User-defined types (including object types, REFs, varrays, and nested tables)

  • XMLType stored object relationally or as binary XML

  • The following Oracle-supplied types: Any types, URI types, spatial types, and media types

These data type restrictions pertain to both ordinary (heap-organized) tables and index-organized tables.

Capture processes can capture changes to SecureFile LOB columns only if the source database compatibility level is set to 11.2.0.0 or higher. Also, capture processes do not support capturing changes to SecureFile LOB columns stored using deduplication, capturing changes resulting from fragment-based operations on SecureFile LOB columns, and capturing changes resulting from SecureFile archive manager operations.

A capture process raises an error if it tries to create a row LCR for a DML change to a column of an unsupported data type. When a capture process raises an error, it writes the LCR that caused the error into its trace file, raises an ORA-26744 error, and becomes disabled. In this case, modify the rules used by the capture process to avoid the error, and restart the capture process.

It is possible to configure Oracle Streams for extended data type support. For instructions, go to the My Oracle Support (formerly OracleMetaLink) Web site using a Web browser:

http://support.oracle.com/

Database bulletin 556742.1 describes extended data type support for Oracle Streams.


Note:

You can add rules to a negative rule set for a capture process that instruct the capture process to discard changes to tables with columns of unsupported data types. However, if these rules are not simple rules, then a capture process might create a row LCR for the change and continue to process it. In this case, a change that includes an unsupported data type can cause the capture process to raise an error, even if the change does not satisfy the rule sets used by the capture process. The DBMS_STREAMS_ADM package creates only simple rules.


See Also:


Unsupported Changes for Capture Processes

This section describes changes that are not supported by capture processes.

This section contains these topics:

Unsupported Schemas for Capture Processes

A capture process never captures changes made to the following schemas:

  • CTXSYS

  • DBSNMP

  • DMSYS

  • DVSYS

  • EXFSYS

  • LBACSYS

  • MDDATA

  • MDSYS

  • OLAPSYS

  • ORDDATA

  • ORDPLUGINS

  • ORDSYS

  • OUTLN

  • SI_INFORMTN_SCHEMA

  • SYS

  • SYSMAN

  • SYSTEM

  • WMSYS

  • XDB

Unsupported Table Types for Capture Processes

A capture process cannot capture DML changes made to temporary tables or object tables.


Note:

  • A capture process can capture changes to tables compressed with basic table compression and OLTP table compression only if the compatibility level at both the source database and the capture database is set to 11.2.0.0.0 or higher.

  • Starting with Oracle Database 11g Release 2 (11.2.0.2), a capture process can capture changes to tables compressed with hybrid columnar compression if both the source database and the capture database are running Oracle Database 11g Release 2 (11.2.0.2) and the compatibility level at both databases is set to 11.2.0.0.0 or higher.


Unsupported DDL Changes for Capture Processes

A capture process captures the DDL changes that satisfy its rule sets, except for the following types of DDL changes:

  • ALTER DATABASE

  • CREATE CONTROLFILE

  • CREATE DATABASE

  • CREATE PFILE

  • CREATE SPFILE

A capture process can capture DDL statements, but not the results of DDL statements, unless the DDL statement is a CREATE TABLE AS SELECT statement. For example, when a capture process captures an ANALYZE statement, it does not capture the statistics generated by the ANALYZE statement. However, when a capture process captures a CREATE TABLE AS SELECT statement, it captures the statement itself and all of the rows selected (as INSERT row LCRs).

Some types of DDL changes that are captured by a capture process cannot be applied by an apply process. If an apply process receives a DDL LCR that specifies an operation that cannot be applied, then the apply process ignores the DDL LCR and records information about it in the trace file for the apply process.

When a capture process captures a DDL change that specifies time stamps or system change number (SCN) values in its syntax, configure a DDL handler for any apply processes that will dequeue the change. The DDL handler must process time stamp or SCN values properly. For example, a capture process captures FLASHBACK TABLE statements when its rule sets instruct it to capture DDL changes to the specified table. FLASHBACK TABLE statements include time stamps or SCN values in their syntax.
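A DDL handler is a PL/SQL procedure that receives each DDL LCR and decides how to process it. The following heavily simplified sketch skips FLASHBACK TABLE statements and executes all other DDL unchanged; the procedure name, apply process name, and the 'FLASHBACK TABLE' command-type string are assumptions to verify against the LCR$_DDL_RECORD and DBMS_APPLY_ADM references for your release:

```sql
-- Hypothetical DDL handler: skip FLASHBACK TABLE, execute everything else
CREATE OR REPLACE PROCEDURE strmadmin.ddl_handler(in_any IN ANYDATA) IS
  lcr SYS.LCR$_DDL_RECORD;
  rc  PLS_INTEGER;
BEGIN
  rc := in_any.GETOBJECT(lcr);           -- extract the DDL LCR
  IF lcr.GET_COMMAND_TYPE() <> 'FLASHBACK TABLE' THEN
    lcr.EXECUTE();                       -- apply all other DDL as-is
  END IF;
END;
/

-- Associate the handler with a hypothetical apply process
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name  => 'apply_dbs2',
    ddl_handler => 'strmadmin.ddl_handler');
END;
/
```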


See Also:


Changes Ignored by a Capture Process

A capture process ignores the following types of changes:

  • The session control statements ALTER SESSION and SET ROLE.

  • The system control statement ALTER SYSTEM.

  • CALL, EXPLAIN PLAN, and LOCK TABLE statements.

  • GRANT statements on views.

  • Changes made to a table or schema by online redefinition using the DBMS_REDEFINITION package. Online table redefinition is supported on a table for which a capture process captures changes, but the logical structure of the table before online redefinition must be the same as the logical structure after online redefinition.

  • Changes to sequence values. For example, if a user references a NEXTVAL or sets the sequence, then a capture process does not capture changes resulting from these operations. Also, if you share a sequence at multiple databases, then sequence values used for individual rows at these databases might vary.

  • Invocations of PL/SQL procedures, which means that a call to a PL/SQL procedure is not captured. However, if a call to a PL/SQL procedure causes changes to database objects, then these changes can be captured by a capture process if the changes satisfy the capture process rule sets.


Note:

  • If an Oracle-supplied package related to XML makes changes to database objects, then these changes are not captured by capture processes. See Oracle Database PL/SQL Packages and Types Reference for information about packages related to XML.

  • If an Oracle-supplied package related to Oracle Text makes changes to database objects, then these changes are not captured by capture processes. See Oracle Text Reference for information about packages related to Oracle Text.



See Also:

Oracle Streams Replication Administrator's Guide for information about strategies to avoid having the same sequence-generated value for two different rows at different databases

NOLOGGING and UNRECOVERABLE Keywords for SQL Operations

If you use the NOLOGGING or UNRECOVERABLE keyword for a SQL operation, then the changes resulting from the SQL operation cannot be captured by a capture process. Therefore, if the changes resulting from a SQL operation should be captured by a capture process, then do not use these keywords.

If the object for which you are specifying the logging attributes resides in a database or tablespace in FORCE LOGGING mode, then Oracle Database ignores any NOLOGGING or UNRECOVERABLE setting until the database or tablespace is taken out of FORCE LOGGING mode. You can determine the current logging mode for a database by querying the FORCE_LOGGING column in the V$DATABASE dynamic performance view. You can determine the current logging mode for a tablespace by querying the FORCE_LOGGING column in the DBA_TABLESPACES static data dictionary view.
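The two logging-mode queries described above are:

```sql
-- Database-level logging mode
SELECT FORCE_LOGGING FROM V$DATABASE;

-- Tablespace-level logging mode
SELECT TABLESPACE_NAME, FORCE_LOGGING FROM DBA_TABLESPACES;
```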


Note:

The UNRECOVERABLE keyword is deprecated and has been replaced with the NOLOGGING keyword in the logging_clause. Although UNRECOVERABLE is supported for backward compatibility, Oracle strongly recommends that you use the NOLOGGING keyword, when appropriate.


See Also:

Oracle Database SQL Language Reference for more information about the NOLOGGING and UNRECOVERABLE keywords, FORCE LOGGING mode, and the logging_clause

UNRECOVERABLE Clause for Direct Path Loads

If you use the UNRECOVERABLE clause in the SQL*Loader control file for a direct path load, then a capture process cannot capture the changes resulting from the direct path load. Therefore, if the changes resulting from a direct path load should be captured by a capture process, then do not use the UNRECOVERABLE clause.

If you perform a direct path load without logging changes at a source database, but you do not perform a similar direct path load at the destination databases of the source database, then apply errors can result at these destination databases when changes are made to the loaded objects at the source database. In this case, a capture process at the source database can capture changes to these objects, and one or more propagations can propagate the changes to the destination databases. When an apply process tries to apply these changes, errors result unless both the changed object and the changed rows in the object exist on the destination database.

Therefore, if you use the UNRECOVERABLE clause for a direct path load and a capture process is configured to capture changes to the loaded objects, then ensure that any destination databases contain the loaded objects and the loaded data to avoid apply errors. One way to ensure that these objects exist at the destination databases is to perform a direct path load at each of these destination databases that is similar to the direct path load performed at the source database.

If you load objects into a database or tablespace that is in FORCE LOGGING mode, then Oracle Database ignores any UNRECOVERABLE clause during a direct path load, and the loaded changes are logged. You can determine the current logging mode for a database by querying the FORCE_LOGGING column in the V$DATABASE dynamic performance view. You can determine the current logging mode for a tablespace by querying the FORCE_LOGGING column in the DBA_TABLESPACES static data dictionary view.


See Also:

Oracle Database Utilities for information about direct path loads and SQL*Loader

Supplemental Logging Data Type Restrictions

Columns of the following data types cannot be part of a supplemental log group: LOB, LONG, LONG RAW, user-defined types (including object types, REFs, varrays, nested tables), and Oracle-supplied types (including Any types, XML types, spatial types, and media types).

Operational Requirements for Downstream Capture

The following are operational requirements for using downstream capture:

  • The source database must be running at least Oracle Database 10g and the downstream capture database must be running the same release of Oracle Database as the source database or later.

  • The downstream database must be running Oracle Database 10g Release 2 or later to configure real-time downstream capture. In this case, the source database must be running Oracle Database 10g Release 1 or later.

  • The operating system on the source and downstream capture sites must be the same, but the operating system release does not need to be the same. In addition, the downstream sites can use a different directory structure than the source site.

  • The hardware architecture on the source and downstream capture sites must be the same. For example, a downstream capture configuration with a source database on a 32-bit Sun system must have a downstream database that is configured on a 32-bit Sun system. Other hardware elements, such as the number of CPUs, memory size, and storage configuration, can be different between the source and downstream sites.

Capture Processes Do Not Support Oracle Label Security

Capture processes do not support database objects that use Oracle Label Security (OLS).

Capture Process Interoperability with Oracle Streams Apply Processes

A capture process must be Oracle9i Database release 9.2.0.6 or later for the changes it captures to be processed by an Oracle Database 11g Release 2 (11.2) apply process. The data type restrictions for the release of the capture process are enforced at the source database for the capture process.


See Also:

The Oracle Streams documentation for an earlier Oracle Database release for information about capture process data type restrictions and apply process data type restrictions for that release.

Synchronous Capture Restrictions

This section describes restrictions for synchronous captures.

This section contains these topics:

Synchronous Captures Only Use Table Rules

Synchronous captures only use table rules that were created by a procedure in the DBMS_STREAMS_ADM package. Synchronous captures ignore schema rules, global rules, and rules created by a procedure in the DBMS_RULE_ADM package.

Unsupported Data Types for Synchronous Captures

Synchronous capture does not capture the results of DML changes to columns of the following data types:

  • LONG

  • LONG RAW

  • CLOB

  • NCLOB

  • BLOB

  • BFILE

  • ROWID

  • User-defined types (including object types, REFs, varrays, and nested tables)

  • Oracle-supplied types (including Any types, XML types, spatial types, and media types)

These data type restrictions pertain to both ordinary (heap-organized) tables and index-organized tables.

Synchronous capture raises an error if it tries to create a row LCR for a DML change to a table containing a column of an unsupported data type. Synchronous capture returns an ORA-25341 error to the user, and the DML change is not made. In this case, modify the rules used by synchronous capture to avoid the error.


Note:

  • The rules in the positive rule set determine the types of changes captured by synchronous capture. To avoid errors, ensure that these rules do not instruct synchronous capture to capture changes to tables with unsupported data types.

  • It might be possible to configure a synchronous capture to capture changes to tables with unsupported columns. To do so, specify DELETE_COLUMN declarative rule-based transformations on the relevant synchronous capture rules to remove the unsupported columns.
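As a sketch of the second point, the following hypothetical call removes a BLOB column named photo from row LCRs produced by a synchronous capture rule named employees_rule, so that the remaining columns of hr.employees can still be captured. The rule, table, and column names are assumptions:

```sql
-- Hypothetical: strip an unsupported BLOB column from row LCRs
-- captured by the rule strmadmin.employees_rule
BEGIN
  DBMS_STREAMS_ADM.DELETE_COLUMN(
    rule_name   => 'strmadmin.employees_rule',
    table_name  => 'hr.employees',
    column_name => 'photo');
END;
/
```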



See Also:


Unsupported Changes for Synchronous Captures

This section describes changes that are not supported by synchronous captures.

This section contains these topics:

Unsupported Schemas for Synchronous Captures

A synchronous capture never captures changes made to the following schemas:

  • CTXSYS

  • DBSNMP

  • DMSYS

  • DVSYS

  • EXFSYS

  • LBACSYS

  • MDDATA

  • MDSYS

  • OLAPSYS

  • ORDDATA

  • ORDPLUGINS

  • ORDSYS

  • OUTLN

  • SI_INFORMTN_SCHEMA

  • SYS

  • SYSMAN

  • SYSTEM

  • WMSYS

  • XDB

Unsupported Table Types for Synchronous Captures

A synchronous capture cannot capture DML changes made to temporary tables, object tables, or tables compressed with hybrid columnar compression.


Note:

A synchronous capture can capture changes to tables compressed with basic table compression or OLTP table compression if the compatibility level of the database is set to 11.2.0.0.0 or higher.

Changes Ignored by Synchronous Capture

The following types of changes are ignored by synchronous capture:

  • DDL changes.

  • The session control statements ALTER SESSION and SET ROLE.

  • The system control statement ALTER SYSTEM.

  • CALL, EXPLAIN PLAN, and LOCK TABLE statements.

  • Changes made by direct path loads.

  • Changes made to a table or schema by online redefinition using the DBMS_REDEFINITION package. Online table redefinition is supported on a table for which synchronous capture captures changes, but the logical structure of the table before online redefinition must be the same as the logical structure after online redefinition.

  • Changes to actual sequence values. For example, if a user references NEXTVAL or sets the sequence value, then synchronous capture does not capture changes resulting from these operations. Also, if you share a sequence at multiple databases, then sequence values used for individual rows at these databases might vary.

  • Invocations of PL/SQL procedures, which means that a call to a PL/SQL procedure is not captured. However, if a call to a PL/SQL procedure causes changes to database objects, then these changes can be captured by synchronous capture if the changes satisfy the synchronous capture rule set.


Note:

  • If an Oracle-supplied package related to XML makes changes to database objects, then these changes are not captured by synchronous captures. See Oracle Database PL/SQL Packages and Types Reference for information about packages related to XML.

  • If an Oracle-supplied package related to Oracle Text makes changes to database objects, then these changes are not captured by synchronous captures. See Oracle Text Reference for information about packages related to Oracle Text.



See Also:

Oracle Streams Replication Administrator's Guide for information about strategies to avoid having the same sequence-generated value for two different rows at different databases

Synchronous Capture Rules and the DBMS_STREAMS_ADM Package

Although you can create a rule set for a synchronous capture using the DBMS_RULE_ADM package, only rules created using the DBMS_STREAMS_ADM package determine the behavior of a synchronous capture. A synchronous capture ignores rules created by the DBMS_RULE_ADM package.
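For example, a table rule for a synchronous capture can be created with the ADD_TABLE_RULES procedure in the DBMS_STREAMS_ADM package by specifying 'sync_capture' for the streams_type parameter. The capture name and queue name below are hypothetical:

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.employees',
    streams_type => 'sync_capture',              -- creates a synchronous capture rule
    streams_name => 'sync01_capture',            -- hypothetical synchronous capture name
    queue_name   => 'strmadmin.streams_queue',   -- hypothetical ANYDATA queue
    include_dml  => TRUE);
END;
/

Because synchronous captures capture only DML changes, the include_ddl parameter is left at its default of FALSE.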

Synchronous Captures Do Not Support Oracle Label Security

Synchronous captures do not support database objects that use Oracle Label Security (OLS).

Queue Restrictions

This section describes restrictions for queues.

This section contains these topics:


See Also:

"Queues"

Explicit Enqueue Restrictions for ANYDATA Queues

You cannot explicitly enqueue ANYDATA payloads that contain payloads of the following types into an ANYDATA queue:

  • CLOB

  • NCLOB

  • BLOB

  • Object types with LOB attributes

  • Object types that use type evolution or type inheritance


Note:

Payloads of ROWID data type cannot be wrapped in an ANYDATA wrapper. This restriction does not apply to payloads of UROWID data type.


See Also:


Restrictions for Buffered Messaging

To use buffered messaging, the compatibility level of the Oracle database must be 10.2.0 or higher.
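You can verify the compatibility level of a database with a query such as the following:

SELECT value
  FROM v$parameter
 WHERE name = 'compatible';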

The DBMS_STREAMS_MESSAGING package cannot be used to enqueue messages into or dequeue messages from a buffered queue. However, the DBMS_AQ package supports enqueue and dequeue of buffered messages.
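As a sketch, a buffered message can be enqueued with the DBMS_AQ package by setting the delivery mode to BUFFERED; buffered messages also require IMMEDIATE visibility. The queue name below is hypothetical:

DECLARE
  enqueue_options    DBMS_AQ.ENQUEUE_OPTIONS_T;
  message_properties DBMS_AQ.MESSAGE_PROPERTIES_T;
  msgid              RAW(16);
BEGIN
  enqueue_options.visibility    := DBMS_AQ.IMMEDIATE;  -- required for buffered messages
  enqueue_options.delivery_mode := DBMS_AQ.BUFFERED;
  DBMS_AQ.ENQUEUE(
    queue_name         => 'strmadmin.streams_queue',   -- hypothetical ANYDATA queue
    enqueue_options    => enqueue_options,
    message_properties => message_properties,
    payload            => ANYDATA.ConvertVarchar2('buffered message'),
    msgid              => msgid);
END;
/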

Triggers and Queue Tables

Using triggers on queue tables is not recommended because it can have a negative impact on performance. Also, triggers are not supported on index-organized queue tables.

Propagation Restrictions

This section describes restrictions for propagations.

This section contains these topics:

Connection Qualifiers and Propagations

Connection qualifiers cannot be specified in the database links that are used by Oracle Streams propagations.

Character Set Restrictions for Propagations

Propagations can propagate ANYDATA messages that encapsulate payloads of object types, varrays, or nested tables between databases only if the databases use the same character set.

Propagations can propagate logical change records (LCRs) between databases of the same character set or different character sets.

Compatibility Requirements for Queue-To-Queue Propagations

To use queue-to-queue propagation, the compatibility level must be 10.2.0 or higher for each database that contains a queue involved in the propagation.

Apply Process Restrictions

This section describes restrictions for apply processes.

This section contains these topics:

Unsupported Data Types for Apply Processes

An apply process does not apply row LCRs containing the results of DML changes in columns of the following data types:

  • BFILE

  • ROWID

  • User-defined types (including object types, REFs, varrays, and nested tables)

  • The following Oracle-supplied types: Any types, URI types, spatial types, and media types

An apply process raises an error if it attempts to apply a row LCR that contains information about a column of an unsupported data type. In addition, an apply process cannot apply DML changes to temporary tables or object tables. An apply process raises an error if it attempts to apply such changes. When an apply process raises an error for an LCR, it moves the transaction that includes the LCR into the error queue.
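Transactions moved to the error queue can be identified by querying the DBA_APPLY_ERROR view, for example:

SELECT apply_name,
       local_transaction_id,
       error_message
  FROM dba_apply_error;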

These data type restrictions pertain to both ordinary (heap-organized) tables and index-organized tables.

It is possible to configure Oracle Streams for extended data type support. For instructions, go to the My Oracle Support (formerly OracleMetaLink) Web site using a Web browser:

http://support.oracle.com/

Database bulletin 556742.1 describes extended data type support for Oracle Streams.

Unsupported Data Types for Apply Handlers

Statement DML handlers cannot process LONG, LONG RAW, or nonassembled LOB column data in row LCRs. However, statement DML handlers can process LOB column data in row LCRs that have been constructed by LOB assembly. LOB assembly is enabled by default for statement DML handlers.

Procedure DML handlers and error handlers cannot process LONG or LONG RAW column data in row LCRs. However, procedure DML handlers and error handlers can process both nonassembled and assembled LOB column data in row LCRs, but these handlers cannot modify nonassembled LOB column data.

Types of DDL Changes Ignored by an Apply Process

The following types of DDL changes are not supported by an apply process and are not applied:

  • ALTER MATERIALIZED VIEW

  • ALTER MATERIALIZED VIEW LOG

  • CREATE DATABASE LINK

  • CREATE SCHEMA AUTHORIZATION

  • CREATE MATERIALIZED VIEW

  • CREATE MATERIALIZED VIEW LOG

  • DROP DATABASE LINK

  • DROP MATERIALIZED VIEW

  • DROP MATERIALIZED VIEW LOG

  • FLASHBACK DATABASE

  • RENAME

If an apply process receives a DDL LCR that specifies an operation that cannot be applied, then the apply process ignores the DDL LCR and records the following message in the apply process trace file, followed by the DDL text that was ignored:

Apply process ignored the following DDL:

An apply process applies all other types of DDL changes if the DDL LCRs containing the changes should be applied according to the apply process rule sets.


Note:

  • An apply process applies ALTER object_type object_name RENAME changes, such as ALTER TABLE jobs RENAME. Therefore, if you want DDL changes that rename objects to be applied, then use ALTER object_type object_name RENAME statements instead of RENAME statements. After changing the name of a database object, new rules that specify the new database object name might be needed to replicate changes to the database object.

  • The name "materialized view" is synonymous with the name "snapshot". Snapshot equivalents of the statements on materialized views are ignored by an apply process.
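For example, to rename the hr.jobs table in a way that an apply process can replicate, issue a statement of the following form (the new table name is hypothetical):

ALTER TABLE hr.jobs RENAME TO job_roles;

rather than the equivalent RENAME statement, which an apply process ignores:

RENAME jobs TO job_roles;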


Database Structures in an Oracle Streams Environment

For captured DDL changes to be applied properly at a destination database, either the destination database must have the same database structures as the source database, or the nonidentical database structural information must not be specified in the DDL statement. Database structures include data files, tablespaces, rollback segments, and other physical and logical structures that support database objects.

For example, for captured DDL changes to tables to be applied properly at a destination database, the following conditions must be met:

  • The same storage parameters must be specified in the CREATE TABLE statement at the source database and destination database.

  • If a DDL statement refers to specific tablespaces or rollback segments, then the tablespaces or rollback segments must have the same names and compatible specifications at the source database and destination database.

    However, if the tablespaces and rollback segments are not specified in the DDL statement, then the default tablespaces and rollback segments are used. In this case, the tablespaces and rollback segments can differ at the source database and destination database.

  • The same partitioning specifications must be used at the source database and destination database.

Current Schema User Must Exist at Destination Database

For a DDL LCR to be applied at a destination database successfully, the user specified as the current_schema in the DDL LCR must exist at the destination database. The current schema is the schema that is used if no schema is specified for an object in the DDL text.


See Also:


Apply Processes Do Not Support Oracle Label Security

Apply processes do not support database objects that use Oracle Label Security (OLS).

Apply Process Interoperability with Oracle Streams Capture Components

An apply process must be Oracle9i Database release 9.2.0.6 or later to process changes captured by an Oracle Database 11g Release 2 (11.2) capture process. The data type restrictions for the release of the apply process are enforced at the apply process database.

An apply process must be Oracle Database 11g Release 1 (11.1) or later to process changes captured by an Oracle Database 11g Release 2 (11.2) synchronous capture. The data type restrictions for the release of the apply process are enforced at the apply process database.


See Also:

The Oracle Streams documentation for an earlier Oracle Database release for information about apply process data type restrictions for that release.

Messaging Client Restrictions

This section describes restrictions for messaging clients.

This section contains these topics:

Messaging Clients and Buffered Messages

Messaging clients cannot dequeue buffered messages. However, the DBMS_AQ package supports enqueue and dequeue of buffered messages.


See Also:

Oracle Streams Advanced Queuing User's Guide for information about the DBMS_AQ package

Rule Restrictions

This section describes restrictions for rules.

This section contains these topics:

Restrictions for Subset Rules

The following restrictions apply to subset rules:

  • A table with the table name referenced in the subset rule must exist in the same database as the subset rule, and this table must be in the schema referenced in the subset rule.

  • If the subset rule is in the positive rule set for a capture process or a synchronous capture, then the table must contain the columns specified in the subset condition, and the data type of each of these columns must match the data type of the corresponding column at the source database.

  • If the subset rule is in the positive rule set for a propagation or apply process, then the table must contain the columns specified in the subset condition, and the data type of each column must match the data type of the corresponding column in row LCRs that evaluate to TRUE for the subset rule.

  • Creating subset rules for tables that have one or more columns of the following data types is not supported: LOB, LONG, LONG RAW, user-defined types (including object types, REFs, varrays, nested tables), and Oracle-supplied types (including Any types, XML types, spatial types, and media types).
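For reference, a subset rule on a supported table might be created with the ADD_SUBSET_RULES procedure in the DBMS_STREAMS_ADM package. The apply process name, queue name, and subset condition below are hypothetical:

BEGIN
  DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
    table_name    => 'hr.employees',
    dml_condition => 'department_id = 50',        -- hypothetical subset condition
    streams_type  => 'apply',
    streams_name  => 'strm01_apply',              -- hypothetical apply process
    queue_name    => 'strmadmin.streams_queue');  -- hypothetical ANYDATA queue
END;
/

The columns referenced in dml_condition must satisfy the restrictions in the preceding list.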


See Also:


Restrictions for Action Contexts

An action context cannot contain information of the following data types:

  • CLOB

  • NCLOB

  • BLOB

  • LONG

  • LONG RAW

In addition, an action context cannot contain object types with attributes of these data types, or object types that use type evolution or type inheritance.

Rule-Based Transformation Restrictions

This section describes restrictions for rule-based transformations.

This section contains these topics:

Unsupported Data Types for Declarative Rule-Based Transformations

Except for add column transformations, declarative rule-based transformations that operate on columns support the same data types that are supported by Oracle Streams capture processes.

Add column transformations cannot add columns of the following data types: BLOB, CLOB, NCLOB, BFILE, LONG, LONG RAW, ROWID, user-defined types (including object types, REFs, varrays, nested tables), and Oracle-supplied types (including Any types, XML types, spatial types, and media types).
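For example, an add column transformation that adds a column of a supported data type (DATE) can be created with the DBMS_STREAMS_ADM.ADD_COLUMN procedure. The rule name and column name below are hypothetical:

BEGIN
  DBMS_STREAMS_ADM.ADD_COLUMN(
    rule_name    => 'strmadmin.employees12',         -- hypothetical rule
    table_name   => 'hr.employees',
    column_name  => 'edit_date',                     -- hypothetical new column
    column_value => ANYDATA.ConvertDate(SYSDATE),
    value_type   => 'NEW',                           -- add to the new values in the row LCR
    step_number  => 0,
    operation    => 'ADD');
END;
/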

Unsupported Data Types for Custom Rule-Based Transformations

Do not modify LONG, LONG RAW, nonassembled LOB column data, or XMLType data in a custom rule-based transformation function.

Character Set Restrictions for Oracle Streams Replication

In an Oracle Streams replication configuration, the character set of a destination database must be compatible with, or a superset of, the character set of its source database. Also, character repertoires of data contents must be supported by both the source and destination database character sets to guarantee data integrity.


See Also:


Advanced Oracle Streams Concepts

Managing Staging and Propagation

16 Managing Staging and Propagation

The following topics describe managing ANYDATA queues and propagations:

Each task described in this chapter should be completed by an Oracle Streams administrator who has been granted the appropriate privileges, unless specified otherwise.

Managing Queues

An ANYDATA queue stages messages whose payloads are of ANYDATA type. Therefore, an ANYDATA queue can stage a message with a payload of nearly any type, if the payload is wrapped in an ANYDATA wrapper. Each Oracle Streams capture process, apply process, and messaging client is associated with one ANYDATA queue, and each Oracle Streams propagation is associated with one ANYDATA source queue and one ANYDATA destination queue.

This section contains instructions for completing the following tasks related to queues:

Enabling a User to Perform Operations on a Secure Queue

For a user to perform queue operations, such as enqueue and dequeue, on a secure queue, the user must be configured as a secure queue user of the queue. If you use the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package to create the secure queue, then the queue owner and the user specified by the queue_user parameter are configured as secure users of the queue automatically. If you want to enable other users to perform operations on the queue, then you can configure these users in one of the following ways:

  • Run SET_UP_QUEUE and specify a queue_user. Queue creation is skipped if the queue already exists, but a new queue user is configured if one is specified.

  • Associate the user with an Oracle Streams Advanced Queuing (AQ) agent manually.

The following example illustrates associating a user with an Oracle Streams AQ agent manually. Suppose you want to enable the oe user to perform queue operations on a queue named streams_queue. The following steps configure the oe user as a secure queue user of streams_queue:

  1. In SQL*Plus, connect as an administrative user who can create Oracle Streams AQ agents and alter users.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  2. Create an agent:

    EXEC DBMS_AQADM.CREATE_AQ_AGENT(agent_name => 'streams_queue_agent');
    
  3. If the user must be able to dequeue messages from the queue, then make the agent a subscriber of the secure queue:

    DECLARE
      subscriber SYS.AQ$_AGENT;
    BEGIN
      subscriber :=  SYS.AQ$_AGENT('streams_queue_agent', NULL, NULL);  
      DBMS_AQADM.ADD_SUBSCRIBER(
        queue_name          =>  'strmadmin.streams_queue',
        subscriber          =>  subscriber,
        rule                =>  NULL,
        transformation      =>  NULL);
    END;
    /
    
  4. Associate the user with the agent:

    BEGIN
      DBMS_AQADM.ENABLE_DB_ACCESS(
        agent_name  => 'streams_queue_agent',
        db_username => 'oe');
    END;
    /
    
  5. Grant the user EXECUTE privilege on the DBMS_STREAMS_MESSAGING package or the DBMS_AQ package, if the user is not already granted these privileges:

    GRANT EXECUTE ON DBMS_STREAMS_MESSAGING TO oe;
    
    GRANT EXECUTE ON DBMS_AQ TO oe;
    

When these steps are complete, the oe user is a secure user of the streams_queue queue and can perform operations on the queue. You still must grant the user specific privileges to perform queue operations, such as enqueue and dequeue privileges.
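For example, enqueue and dequeue privileges on the queue can be granted with the GRANT_QUEUE_PRIVILEGE procedure in the DBMS_AQADM package:

BEGIN
  DBMS_AQADM.GRANT_QUEUE_PRIVILEGE(
    privilege    => 'ALL',                        -- grants both ENQUEUE and DEQUEUE
    queue_name   => 'strmadmin.streams_queue',
    grantee      => 'oe',
    grant_option => FALSE);
END;
/

This mirrors the REVOKE_QUEUE_PRIVILEGE procedure used later in this chapter to disable a secure queue user.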


See Also:


Disabling a User from Performing Operations on a Secure Queue

You might want to disable a user from performing queue operations on a secure queue for the following reasons:

  • You dropped a capture process or a synchronous capture, but you did not drop the queue that was used by the capture process or synchronous capture, and you do not want the user who was the capture user to be able to perform operations on the remaining secure queue.

  • You dropped an apply process, but you did not drop the queue that was used by the apply process, and you do not want the user who was the apply user to be able to perform operations on the remaining secure queue.

  • You used the ALTER_APPLY procedure in the DBMS_APPLY_ADM package to change the apply_user for an apply process, and you do not want the old apply_user to be able to perform operations on the apply process's queue.

  • You enabled a user to perform operations on a secure queue by completing the steps described in Enabling a User to Perform Operations on a Secure Queue, but you no longer want this user to be able to perform operations on the secure queue.

To disable a secure queue user, you can revoke ENQUEUE and DEQUEUE privilege on the queue from the user, or you can run the DISABLE_DB_ACCESS procedure in the DBMS_AQADM package. For example, suppose you want to disable the oe user from performing queue operations on a queue named streams_queue.


Caution:

If an Oracle Streams AQ agent is used for multiple secure queues, then running DISABLE_DB_ACCESS for the agent prevents the user associated with the agent from performing operations on all of these queues.

  1. Run the following procedure to disable the oe user from performing queue operations on the secure queue streams_queue:

    BEGIN
      DBMS_AQADM.DISABLE_DB_ACCESS(
        agent_name  => 'streams_queue_agent',
        db_username => 'oe');
    END;
    /
    
  2. If the agent is no longer needed, you can drop the agent:

    BEGIN
      DBMS_AQADM.DROP_AQ_AGENT(
        agent_name  => 'streams_queue_agent');
    END;
    /
    
  3. Revoke privileges on the queue from the user, if the user no longer needs these privileges.

    BEGIN
      DBMS_AQADM.REVOKE_QUEUE_PRIVILEGE (
       privilege   => 'ALL',
       queue_name  => 'strmadmin.streams_queue',
       grantee     => 'oe');
    END;
    /
    

See Also:


Removing a Queue

You use the REMOVE_QUEUE procedure in the DBMS_STREAMS_ADM package to remove an existing ANYDATA queue. When you run the REMOVE_QUEUE procedure, it waits until any existing messages in the queue are consumed. Next, it stops the queue, which means that no further enqueues into the queue or dequeues from the queue are allowed. After the queue is stopped, the procedure drops the queue.

You can also drop the queue table for the queue if it is empty and is not used by another queue. To do so, specify TRUE, the default, for the drop_unused_queue_table parameter.

In addition, you can drop any Oracle Streams clients that use the queue by setting the cascade parameter to TRUE. By default, the cascade parameter is set to FALSE.

For example, to remove an ANYDATA queue named streams_queue in the strmadmin schema and drop its empty queue table, run the following procedure:

BEGIN
  DBMS_STREAMS_ADM.REMOVE_QUEUE(
    queue_name              => 'strmadmin.streams_queue',
    cascade                 => FALSE,
    drop_unused_queue_table => TRUE);
END;
/

In this case, because the cascade parameter is set to FALSE, this procedure drops the streams_queue only if no Oracle Streams clients use the queue. If the cascade parameter is set to FALSE and any Oracle Streams client uses the queue, then an error is raised.

Managing Oracle Streams Propagations and Propagation Jobs

A propagation propagates messages from an Oracle Streams source queue to an Oracle Streams destination queue. This section provides instructions for completing the following tasks:

In addition, you can use the features of Oracle Streams Advanced Queuing (AQ) to manage Oracle Streams propagations.


See Also:


Starting a Propagation

You run the START_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package to start an existing propagation. For example, the following procedure starts a propagation named strm01_propagation:

BEGIN
  DBMS_PROPAGATION_ADM.START_PROPAGATION(
    propagation_name => 'strm01_propagation');
END;
/

Stopping a Propagation

You run the STOP_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package to stop an existing propagation. For example, the following procedure stops a propagation named strm01_propagation:

BEGIN
  DBMS_PROPAGATION_ADM.STOP_PROPAGATION(
    propagation_name => 'strm01_propagation',
    force            => FALSE);
END;
/

To clear the statistics for the propagation when it is stopped, set the force parameter to TRUE. If there is a problem with a propagation, then stopping the propagation with the force parameter set to TRUE and restarting the propagation might correct the problem. If the force parameter is set to FALSE, then the statistics for the propagation are not cleared.

Altering the Schedule of a Propagation Job

To alter the schedule of an existing propagation job, use the ALTER_PROPAGATION_SCHEDULE procedure in the DBMS_AQADM package. The following sections contain examples that alter the schedule of a propagation job for a queue-to-queue propagation and for a queue-to-dblink propagation. These examples set the propagation job to propagate messages every 15 minutes (900 seconds), with each propagation lasting 300 seconds, and a 25-second wait before new messages in a completely propagated queue are propagated.

This section contains these topics:


See Also:


Altering the Schedule of a Propagation Job for a Queue-to-Queue Propagation

To alter the schedule of a propagation job for a queue-to-queue propagation that propagates messages from the strmadmin.strm_a_queue source queue to the strmadmin.strm_b_queue destination queue using the dbs2.example.com database link, run the following procedure:

BEGIN
  DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE(
   queue_name        => 'strmadmin.strm_a_queue',
   destination       => 'dbs2.example.com',
   duration          => 300,
   next_time         => 'SYSDATE + 900/86400',
   latency           => 25,
   destination_queue => 'strmadmin.strm_b_queue'); 
END;
/

Because each queue-to-queue propagation has its own propagation job, this procedure alters only the schedule of the propagation that propagates messages between the two queues specified. The destination_queue parameter must specify the name of the destination queue to alter the propagation schedule of a queue-to-queue propagation.

Altering the Schedule of a Propagation Job for a Queue-to-Dblink Propagation

To alter the schedule of a propagation job for a queue-to-dblink propagation that propagates messages from the strmadmin.streams_queue source queue using the dbs3.example.com database link, run the following procedure:

BEGIN
  DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE(
   queue_name  => 'strmadmin.streams_queue',
   destination => 'dbs3.example.com',
   duration    => 300,
   next_time   => 'SYSDATE + 900/86400',
   latency     => 25); 
END;
/

Because the propagation is a queue-to-dblink propagation, the destination_queue parameter is not specified. Completing this task affects all queue-to-dblink propagations that propagate messages from the source queue to all destination queues that use the dbs3.example.com database link.

Specifying the Rule Set for a Propagation

You can specify one positive rule set and one negative rule set for a propagation. The propagation propagates a message if it evaluates to TRUE for at least one rule in the positive rule set, and it discards a message if the message evaluates to TRUE for at least one rule in the negative rule set. The negative rule set is evaluated before the positive rule set.

This section contains these topics:

Specifying a Positive Rule Set for a Propagation

You specify an existing rule set as the positive rule set for an existing propagation using the rule_set_name parameter in the ALTER_PROPAGATION procedure. This procedure is in the DBMS_PROPAGATION_ADM package.

For example, the following procedure sets the positive rule set for a propagation named strm01_propagation to strm02_rule_set.

BEGIN
  DBMS_PROPAGATION_ADM.ALTER_PROPAGATION(
    propagation_name  => 'strm01_propagation',
    rule_set_name     => 'strmadmin.strm02_rule_set');
END;
/

Specifying a Negative Rule Set for a Propagation

You specify an existing rule set as the negative rule set for an existing propagation using the negative_rule_set_name parameter in the ALTER_PROPAGATION procedure. This procedure is in the DBMS_PROPAGATION_ADM package.

For example, the following procedure sets the negative rule set for a propagation named strm01_propagation to strm03_rule_set.

BEGIN
  DBMS_PROPAGATION_ADM.ALTER_PROPAGATION(
    propagation_name        => 'strm01_propagation',
    negative_rule_set_name  => 'strmadmin.strm03_rule_set');
END;
/

Adding Rules to the Rule Set for a Propagation

To add rules to the rule set of a propagation, you can run one of the following procedures:

Except for the ADD_SUBSET_PROPAGATION_RULES procedure, these procedures can add rules to either the positive rule set or the negative rule set for a propagation. The ADD_SUBSET_PROPAGATION_RULES procedure can add rules only to the positive rule set for a propagation.

This section contains these topics:

Adding Rules to the Positive Rule Set for a Propagation

The following example runs the ADD_TABLE_PROPAGATION_RULES procedure in the DBMS_STREAMS_ADM package to add rules to the positive rule set of an existing propagation named strm01_propagation:

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name              => 'hr.locations',
    streams_name            => 'strm01_propagation',
    source_queue_name       => 'strmadmin.strm_a_queue',
    destination_queue_name  => 'strmadmin.strm_b_queue@dbs2.example.com',
    include_dml             => TRUE,
    include_ddl             => TRUE,
    source_database         => 'dbs1.example.com',
    inclusion_rule          => TRUE);
END;
/

Running this procedure performs the following actions:

  • Creates two rules. One rule evaluates to TRUE for row LCRs that contain the results of DML changes to the hr.locations table. The other rule evaluates to TRUE for DDL LCRs that contain DDL changes to the hr.locations table. The rule names are system generated.

  • Specifies that both rules evaluate to TRUE only for LCRs whose changes originated at the dbs1.example.com source database.

  • Adds the two rules to the positive rule set associated with the propagation because the inclusion_rule parameter is set to TRUE.

Adding Rules to the Negative Rule Set for a Propagation

The following example runs the ADD_TABLE_PROPAGATION_RULES procedure in the DBMS_STREAMS_ADM package to add rules to the negative rule set of an existing propagation named strm01_propagation:

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name              => 'hr.departments',
    streams_name            => 'strm01_propagation',
    source_queue_name       => 'strmadmin.strm_a_queue',
    destination_queue_name  => 'strmadmin.strm_b_queue@dbs2.example.com',
    include_dml             => TRUE,
    include_ddl             => TRUE,
    source_database         => 'dbs1.example.com',
    inclusion_rule          => FALSE);
END;
/

Running this procedure performs the following actions:

  • Creates two rules. One rule evaluates to TRUE for row LCRs that contain the results of DML changes to the hr.departments table, and the other rule evaluates to TRUE for DDL LCRs that contain DDL changes to the hr.departments table. The rule names are system generated.

  • Specifies that both rules evaluate to TRUE only for LCRs whose changes originated at the dbs1.example.com source database.

  • Adds the two rules to the negative rule set associated with the propagation because the inclusion_rule parameter is set to FALSE.

Removing a Rule from the Rule Set for a Propagation

You remove a rule from the rule set for an existing propagation by running the REMOVE_RULE procedure in the DBMS_STREAMS_ADM package. For example, the following procedure removes a rule named departments3 from the positive rule set of a propagation named strm01_propagation.

BEGIN
  DBMS_STREAMS_ADM.REMOVE_RULE(
    rule_name        => 'departments3',
    streams_type     => 'propagation',
    streams_name     => 'strm01_propagation',
    drop_unused_rule => TRUE,
    inclusion_rule   => TRUE);
END;
/

In this example, the drop_unused_rule parameter in the REMOVE_RULE procedure is set to TRUE, which is the default setting. Therefore, if the rule being removed is not in any other rule set, then it will be dropped from the database. If the drop_unused_rule parameter is set to FALSE, then the rule is removed from the rule set, but it is not dropped from the database even if it is not in any other rule set.

If the inclusion_rule parameter is set to FALSE, then the REMOVE_RULE procedure removes the rule from the negative rule set for the propagation, not the positive rule set.

To remove all of the rules in the rule set for the propagation, specify NULL for the rule_name parameter when you run the REMOVE_RULE procedure.

Removing a Rule Set for a Propagation

You remove a rule set from a propagation using the ALTER_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package. This procedure can remove the positive rule set, negative rule set, or both. Specify TRUE for the remove_rule_set parameter to remove the positive rule set for the propagation. Specify TRUE for the remove_negative_rule_set parameter to remove the negative rule set for the propagation.

For example, the following procedure removes both the positive and the negative rule set from a propagation named strm01_propagation.

BEGIN
  DBMS_PROPAGATION_ADM.ALTER_PROPAGATION(
    propagation_name         => 'strm01_propagation',
    remove_rule_set          => TRUE,
    remove_negative_rule_set => TRUE);
END;
/

Note:

If a propagation does not have a positive or negative rule set, then the propagation propagates all messages in the source queue to the destination queue.

Dropping a Propagation

You run the DROP_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package to drop an existing propagation. For example, the following procedure drops a propagation named strm01_propagation:

BEGIN
  DBMS_PROPAGATION_ADM.DROP_PROPAGATION(
    propagation_name      => 'strm01_propagation',
    drop_unused_rule_sets => TRUE);
END;
/

Because the drop_unused_rule_sets parameter is set to TRUE, this procedure also drops any rule sets used by the propagation strm01_propagation, unless a rule set is used by another Oracle Streams client. If the drop_unused_rule_sets parameter is set to TRUE, then both the positive rule set and negative rule set for the propagation might be dropped. If this procedure drops a rule set, then it also drops any rules in the rule set that are not in another rule set.


Note:

When you drop a propagation, the propagation job used by the propagation is dropped automatically if no other propagations are using the propagation job.


20 Using Oracle Streams to Record Table Changes

This chapter describes using Oracle Streams to record data manipulation language (DML) changes made to tables.

This chapter contains these topics:

About Using Oracle Streams to Record Changes to Tables

Oracle Streams can record information about the changes made to database tables, including information about inserts, updates, and deletes. The table for which changes are recorded is called the source table, and the information about the recorded changes is stored in another table called the change table. Also, the database that contains the source table is called the source database, while the database that contains the change table is called the destination database. The destination database can be the same database as the source database, or it can be a different database.

The recorded information describes the data that was changed in each row because of a DML operation, and metadata about each change. Typically, data warehouse environments record information about table changes, but other types of environments might track table changes as well.

To record table changes in a change table, an Oracle Streams apply process uses a change handler. A change handler is a special type of statement DML handler that tracks table changes and is created by either the DBMS_STREAMS_ADM.MAINTAIN_CHANGE_TABLE procedure or the DBMS_APPLY_ADM.SET_CHANGE_HANDLER procedure. This chapter describes using these procedures to create and manage change handlers. Information about change handlers is stored in the ALL_APPLY_CHANGE_HANDLERS and DBA_APPLY_CHANGE_HANDLERS views.


Note:

It is possible to create a statement DML handler that tracks table changes without using the change handler procedures. Such statement DML handlers are not technically considered change handlers, and information about them is not stored in the ALL_APPLY_CHANGE_HANDLERS and DBA_APPLY_CHANGE_HANDLERS views.
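For example, assuming the DBA_APPLY_CHANGE_HANDLERS view includes handler_name, apply_name, and operation_name columns (an assumption; check Oracle Database Reference for the exact view definition), a query such as the following lists the configured change handlers:

SELECT handler_name, apply_name, operation_name
  FROM dba_apply_change_handlers;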

Preparing for an Oracle Streams Environment That Records Table Changes

The MAINTAIN_CHANGE_TABLE procedure in the DBMS_STREAMS_ADM package can configure an Oracle Streams environment that records changes to a source table. This procedure configures all of the required Oracle Streams components. This procedure also enables you to identify the metadata to record for each change. For example, you can choose to record the username of the user who made the change and the time when the change was made, as well as many other types of metadata.

Before you use the MAINTAIN_CHANGE_TABLE procedure to configure an Oracle Streams environment that records the changes to a table, you have decisions to make and prerequisites to complete.

The following sections describe the decisions and prerequisites for the MAINTAIN_CHANGE_TABLE procedure:

Decisions to Make Before Running the MAINTAIN_CHANGE_TABLE Procedure

The following sections describe the decisions to make before running the MAINTAIN_CHANGE_TABLE procedure:

Decide Which Type of Environment to Configure

An Oracle Streams environment that records table changes has the following components:

  1. A capture process captures information about changes to the source table from the redo log. The capture process encapsulates the information for each row change in a row logical change record (row LCR). The database where the changes originated is called the source database. The database that contains the capture process is called the capture database.

  2. If the source table and change table are on different databases, then a propagation sends the captured row LCRs to the database that contains the change table. The propagation is not needed if the source table and change table are in the same database.

  3. An apply process records the information in the change table. The apply process uses statement DML handlers to insert the information in the row LCRs into the change table.

You can configure these components in the following ways:

  • Local capture and apply on one database: The source table, capture process, apply process, and change table are all in the same database. This option is the easiest to configure and maintain because all of the components are contained in one database.

  • Local capture and remote apply: The source table and capture process are in one database, and the apply process and change table are in another database. A propagation sends row LCRs from the source database to the destination database. This option is best when you want easy configuration and maintenance and when the source table and change table must reside in different databases.

  • Downstream capture and local apply: The source table is in one database, and the capture process, apply process, and change table are in another database. This option is best when you want to optimize the performance of the database with the source table and want to offload change capture to another database. With this option, most of the components run on the database with the change table.

  • Downstream capture and remote apply: The source table is in one database, the apply process and change table are in another database, and the capture process is in a third database. This option is best when you want to optimize the performance of both the database with the source table and the database with the change table. With this option, the capture process runs on a third database, and a propagation sends row LCRs from the capture database to the destination database.

The capture database is always the database on which the MAINTAIN_CHANGE_TABLE procedure is run. Table 20-1 describes where to run the procedure to configure each type of environment.

Table 20-1 Configuration Options for MAINTAIN_CHANGE_TABLE

Type of Environment / Where to Run MAINTAIN_CHANGE_TABLE

Local capture and apply on one database

On the source database that contains the source table

Local capture and remote apply

On the source database that contains the source table

Downstream capture and local apply

On the destination database that does not contain the source table but will contain the change table

Downstream capture and remote apply

On a third database that does not contain the source table and will not contain the change table


Additional requirements must be met to configure downstream capture. See "Operational Requirements for Downstream Capture" for information.

If you decide to configure a downstream capture process, then you must decide which type of downstream capture process you want to configure. The following types are available:

  • A real-time downstream capture process configuration means that redo transport services use the log writer process (LGWR) at the source database to send redo data to the downstream database, and a remote file server process (RFS) at the downstream database receives the redo data over the network and stores the redo data in the standby redo log.

  • An archived-log downstream capture process configuration means that archived redo log files from the source database are copied to the downstream database, and the capture process captures changes in these archived redo log files. These log files can be transferred automatically using redo transport services, or they can be transferred manually using a method such as FTP.

The advantage of real-time downstream capture over archived-log downstream capture is that real-time downstream capture reduces the amount of time required to capture changes made at the source database. The time is reduced because the real-time downstream capture process does not need to wait for the redo log file to be archived before it can capture changes from it. You can configure more than one real-time downstream capture process that captures changes from the same source database, but you cannot configure real-time downstream capture for multiple source databases at one downstream database.

The advantage of archived-log downstream capture over real-time downstream capture is that archived-log downstream capture allows downstream capture processes for multiple source databases at a downstream database. You can copy redo log files from multiple source databases to a single downstream database and configure multiple archived-log downstream capture processes to capture changes in these redo log files.

Decide Which Columns to Track

The column_type_list parameter in the MAINTAIN_CHANGE_TABLE procedure enables you to specify which columns to track in the change table. The Oracle Streams environment records changes for the listed columns only. To track all of the columns in the table, list all of the columns in this parameter. To track a subset of columns, list the columns to track. In the column_type_list parameter, you can specify the data type of the column and any valid column properties, such as inline constraint specifications.

You might choose to omit columns from the list for various reasons. For example, some columns might contain sensitive information, such as salary data, that you do not want to populate in the change table. Or, the table might contain hundreds of columns, and you might be interested in tracking only a small number of them.

Decide Which Metadata to Record

The extra_column_list parameter in the MAINTAIN_CHANGE_TABLE procedure enables you to specify which metadata to record in the change table. The following types of metadata can be listed in this parameter:

  • value_type

  • source_database_name

  • command_type

  • object_owner

  • object_name

  • tag

  • transaction_id

  • scn

  • commit_scn

  • commit_time

  • position

  • compatible

  • instance_number

  • message_number

  • row_text

  • row_id

  • serial#

  • session#

  • source_time

  • thread#

  • tx_name

  • username

In the change table, a dollar sign ($) is appended to the column name for each metadata attribute. For example, the metadata for the command_type attribute is stored in the command_type$ column in the change table.

All of these metadata attributes, except for value_type and message_number, are row LCR attributes that can be stored in row LCRs.

The value_type$ column in the change table contains either OLD or NEW, depending on whether the column value is the original column value or the new column value, respectively.

The message_number$ column in the change table contains the identification number of each row LCR within a transaction. The message number increases incrementally for each row LCR within a transaction and shows the order of the row LCRs within a transaction.


Note:

LCR position is commonly used in XStream configurations.

Decide Which Values to Track for Update Operations

The capture_values parameter in the MAINTAIN_CHANGE_TABLE procedure enables you to specify the values to record in the change table for update operations on the source table. When an update operation is performed on a row, the old value for each column is the value before the update operation and the new value is the value after the update operation. You can choose to record old values, new values, or both old and new values.

Decide Whether to Configure a KEEP_COLUMNS Transformation

The keep_change_columns_only parameter in the MAINTAIN_CHANGE_TABLE procedure enables you to specify whether to configure a KEEP_COLUMNS declarative rule-based transformation. The KEEP_COLUMNS declarative rule-based transformation keeps the list of columns specified in the column_type_list parameter in a row LCR. The transformation removes columns that are not in the list from the row LCR.

For example, suppose a table has ten columns, but only three of these columns need to be tracked in a change table. In this case, it is usually more efficient to configure one KEEP_COLUMNS declarative rule-based transformation that keeps the three columns that must be tracked than to configure seven DELETE_COLUMN declarative rule-based transformations that remove the seven columns that should not be tracked.

The keep_change_columns_only parameter is relevant only if you specify a subset of the table columns in the column_type_list parameter. In this case, you might choose to configure the transformation to reduce the amount of information sent over the network or to eliminate sensitive information from row LCRs.

Set the keep_change_columns_only parameter to FALSE when information about columns that are not included in the column_type_list parameter is needed at the destination database. For example, if the execute_lcr parameter is set to TRUE and the configuration will replicate all of the columns in a source table, but the column_type_list parameter includes a subset of these columns, then set the keep_change_columns_only parameter to FALSE.

Decide Whether to Specify CREATE TABLE Options for the Change Table

The options_string parameter in the MAINTAIN_CHANGE_TABLE procedure enables you to append a string of options to the CREATE TABLE statement that creates the change table. The string is appended to the generated CREATE TABLE statement after the closing parenthesis that defines the columns of the table. The string must be syntactically correct. For example, you can specify a TABLESPACE clause to store the table in a specific tablespace. You can also partition the change table. The advantage of partitioning a change table is that you can truncate a partition using the TRUNCATE PARTITION clause of an ALTER TABLE statement instead of deleting rows with a DELETE statement.
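For example, the following call is a minimal sketch of using the options_string parameter; the table names match the local capture example later in this chapter, but the streams_tbs tablespace is a hypothetical name:

BEGIN
  DBMS_STREAMS_ADM.MAINTAIN_CHANGE_TABLE(
    change_table_name => 'hr.jobs_change_table',
    source_table_name => 'hr.jobs',
    column_type_list  => 'job_id VARCHAR2(10)',
    options_string    => 'TABLESPACE streams_tbs'); -- appended to the generated CREATE TABLE
END;
/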


See Also:

Oracle Database SQL Language Reference for information about CREATE TABLE options

Decide Whether to Perform the Configuration Actions Directly or with a Script

The MAINTAIN_CHANGE_TABLE procedure can configure the Oracle Streams environment directly, or it can generate a script that configures the environment. Using the procedure to configure directly is simpler than running a script, and the environment is configured immediately. However, you might choose to generate a script for the following reasons:

  • You want to review the actions performed by the procedure before configuring the environment.

  • You want to modify the script to customize the configuration.

For example, you might want an apply process to use apply handlers for customized processing of the changes before applying these changes. In this case, you can use the procedure to generate a script and modify the script to add the apply handlers.

The perform_actions parameter controls whether the procedure configures the environment directly:

  • To configure the environment directly when you run the MAINTAIN_CHANGE_TABLE procedure, set the perform_actions parameter to TRUE. The default value for this parameter is TRUE.

  • To generate a configuration script when you run the MAINTAIN_CHANGE_TABLE procedure, set the perform_actions parameter to FALSE, and use the script_name and script_directory_object parameters to specify the name and location of the configuration script.
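For example, the following call is a sketch of generating a configuration script instead of configuring the environment directly; the script file name is hypothetical, and db_files_directory is a directory object such as the one created in "Configure the Required Directory Object If You Are Using a Script":

BEGIN
  DBMS_STREAMS_ADM.MAINTAIN_CHANGE_TABLE(
    change_table_name       => 'hr.jobs_change_table',
    source_table_name       => 'hr.jobs',
    column_type_list        => 'job_id VARCHAR2(10)',
    perform_actions         => FALSE,                        -- generate a script only
    script_name             => 'configure_change_table.sql',
    script_directory_object => 'db_files_directory');
END;
/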

Decide Whether to Replicate the Source Table

In addition to a change table, some environments require that the source table be replicated at the destination database. In this case, the source table is on a different database than the change table, and an additional replica of the source table is in the same database as the change table.

For example, consider an Oracle Streams environment that records the changes made to the hr.employees table. Assume that the change table is named hr.emp_change_table and that the source table and the change table are on different databases. In this case, the following tables are involved in an Oracle Streams environment that records changes to the hr.employees table.

  • hr.employees table in database 1

  • hr.emp_change_table in database 2

The apply process at the destination database has a separate change handler that records changes for each type of operation (insert, update, and delete).

If the Oracle Streams environment also replicates the hr.employees table at database 2, then the following tables are involved:

  • hr.employees table in database 1

  • hr.employees table (replica) in database 2

  • hr.emp_change_table in database 2

In an environment that replicates the table in addition to recording its changes, an additional change handler is added to the apply process at the destination database for each type of operation (insert, update, and delete). These change handlers execute the row LCRs to apply their changes to the replicated table.

The execute_lcr parameter controls whether the procedure configures replication of the source table:

  • To configure an Oracle Streams environment that replicates the source table, set the execute_lcr parameter to TRUE.

  • To configure an Oracle Streams environment that does not replicate the source table, set the execute_lcr parameter to FALSE. The default value for this parameter is FALSE.


Note:

When the keep_change_columns_only parameter is set to TRUE and the column_type_list parameter includes a subset of the columns in the source table, the execute_lcr parameter must be set to FALSE. Apply errors will result if the row LCRs do not contain the column values required to replicate changes.

Prerequisites for the MAINTAIN_CHANGE_TABLE Procedure

The DBMS_STREAMS_ADM package includes procedures that configure replication environments, such as MAINTAIN_GLOBAL, MAINTAIN_SCHEMAS, and MAINTAIN_TABLES. Using the MAINTAIN_CHANGE_TABLE procedure is similar to using these other procedures, and many of the prerequisites are the same.

The following sections describe the prerequisites to complete before running the MAINTAIN_CHANGE_TABLE procedure:

Many of these prerequisites are described in detail in Oracle Streams Replication Administrator's Guide.

Configure an Oracle Streams Administrator on All Databases

Each database in the environment must have an Oracle Streams administrator to configure and manage the Oracle Streams components. See Oracle Streams Replication Administrator's Guide for instructions.

Configure Network Connectivity and Database Links

Depending on the type of Oracle Streams environment you plan to configure, network connectivity and one or more database links might be required. If the environment will include more than one database, then network connectivity between the databases in the environment is required.

The following database links are required for each type of Oracle Streams environment:

  • Local capture and apply on one database: No database links are required.

  • Local capture and remote apply: A database link from the source database to the destination database is required.

  • Downstream capture and local apply: The following database links are required:

    • A database link from the source database to the destination database

    • A database link from the destination database to the source database

  • Downstream capture and remote apply: The following database links are required:

    • A database link from the source database to the destination database

    • A database link from the source database to the capture database

    • A database link from the capture database to the source database

    • A database link from the capture database to the destination database

See Oracle Streams Replication Administrator's Guide for instructions.

Ensure That the Source Database Is in ARCHIVELOG Mode

The source database that contains the source table must be in ARCHIVELOG mode because an Oracle Streams capture process scans the redo log to capture changes. If you plan to configure a downstream capture process, then the capture database also must be in ARCHIVELOG mode. See Oracle Database Administrator's Guide for instructions.
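You can verify the log mode of a database with the following query; the LOG_MODE column must show ARCHIVELOG:

SELECT LOG_MODE FROM V$DATABASE;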

Set Initialization Parameters That Are Relevant to Oracle Streams

Some initialization parameters are important for the configuration, operation, reliability, and performance of an Oracle Streams environment. Set these parameters appropriately for your Oracle Streams environment. See Oracle Streams Replication Administrator's Guide for instructions.

Configure the Oracle Streams Pool

The Oracle Streams pool is a portion of memory in the System Global Area (SGA) that is used by Oracle Streams. Configure your database memory so that there is enough space available in the Oracle Streams pool. See Oracle Streams Replication Administrator's Guide for instructions.
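For example, on a database that does not manage the Streams pool automatically through shared memory management, you might set the STREAMS_POOL_SIZE initialization parameter directly. The 200M value here is only an illustration, not a sizing recommendation:

ALTER SYSTEM SET STREAMS_POOL_SIZE = 200M;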

Configure Log File Transfer to a Downstream Capture Database

If you decided to use a local capture process at the source database, then log file transfer is not required. However, if you decided to use downstream capture that uses redo transport services to transfer archived redo log files to the downstream database automatically, then configure log file transfer from the source database to the capture database before configuring the Oracle Streams environment. See Oracle Streams Replication Administrator's Guide for instructions.

Configure Standby Redo Logs for Real-Time Downstream Capture

If you decided to use a real-time downstream capture process, then you must configure standby redo logs at the capture database. See Oracle Streams Replication Administrator's Guide for instructions.

Configure the Required Directory Object If You Are Using a Script

If you decided to generate a script with the MAINTAIN_CHANGE_TABLE procedure and configure the Oracle Streams environment with the script, then create the directory object that will store the script in the capture database. The capture database is the database on which you will run the procedure. This directory object is not required if you are not generating a script.

A directory object is similar to an alias for a directory on a file system. Each directory object must be created using the SQL statement CREATE DIRECTORY, and the user who invokes the MAINTAIN_CHANGE_TABLE procedure must have READ and WRITE privilege on the directory object.

For example, the following statement creates a directory object named db_files_directory that corresponds to the /usr/db_files directory:

CREATE DIRECTORY db_files_directory AS '/usr/db_files';

The user who creates the directory object automatically has READ and WRITE privilege on the directory object. When you are configuring an Oracle Streams replication environment, typically the Oracle Streams administrator creates the directory object.
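If a different user will run the MAINTAIN_CHANGE_TABLE procedure, grant the privileges on the directory object explicitly. In the following statement, strmadmin is a hypothetical Oracle Streams administrator:

GRANT READ, WRITE ON DIRECTORY db_files_directory TO strmadmin;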

Instantiate the Source Table at the Destination Database

If you decided to replicate the source table, then instantiate the source table at the destination database. Instantiation is not required if you decided not to replicate the source table.

If instantiation is required because you decided to replicate the source table, then complete the following steps before running the MAINTAIN_CHANGE_TABLE procedure:

  1. Prepare the source table for instantiation.

  2. Ensure that the source table and the replica table are consistent.

  3. Set the instantiation SCN for the replica table at the destination database.
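The steps above can be sketched as follows, using the database and table names from the replication example later in this chapter. This is an outline only; ensuring that the source and replica tables are consistent (step 2) is typically done with a utility such as Data Pump and is not shown:

-- Step 1: At the source database, prepare the table for instantiation
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name => 'hr.departments');
END;
/

-- Step 3: At the destination database, set the instantiation SCN,
-- obtaining the current SCN from the source over a database link
DECLARE
  iscn NUMBER;
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@ct1.example.com;
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'hr.departments',
    source_database_name => 'ct1.example.com',
    instantiation_scn    => iscn);
END;
/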

Configuring an Oracle Streams Environment That Records Table Changes

This section uses examples to illustrate how to configure an Oracle Streams environment that records table changes. Specifically, this section illustrates the four types of Oracle Streams environments that record table changes.

This section includes the following examples:

Recording Table Changes Using Local Capture and Apply on One Database

This example illustrates how to record the changes to a table using local capture and apply on one database. Specifically, this example records the changes made to the hr.jobs table.

The following table lists the decisions that were made about the Oracle Streams environment configured in this example.

Decision / Assumption for This Example
Decide Which Type of Environment to Configure
This example configures local capture and apply on one database.
Decide Which Columns to Track
This example tracks all of the columns in the hr.jobs table.
Decide Which Metadata to Record
This example records the command_type, value_type (OLD or NEW), and commit_scn metadata.
Decide Which Values to Track for Update Operations
This example tracks both the old and new column values when an update is performed on the source table.
Decide Whether to Configure a KEEP_COLUMNS Transformation
This example does not configure a KEEP_COLUMNS declarative rule-based transformation.
Decide Whether to Specify CREATE TABLE Options for the Change Table
This example does not specify any CREATE TABLE options. The change table is created with the default CREATE TABLE options.
Decide Whether to Perform the Configuration Actions Directly or with a Script
This example performs the configuration actions directly. It does not use a script.
Decide Whether to Replicate the Source Table
This example does not replicate the source table.

Figure 20-1 provides an overview of the Oracle Streams environment created in this example.

Figure 20-1 Recording Changes Using Local Capture and Apply on One Database


Complete the following steps to configure an Oracle Streams environment that records the changes to a table using local capture and apply on one database:

  1. Complete the required prerequisites before running the MAINTAIN_CHANGE_TABLE procedure. See "Prerequisites for the MAINTAIN_CHANGE_TABLE Procedure" for instructions.

    For this configuration, the following tasks must be completed:

  2. Connect to the database as the Oracle Streams administrator.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  3. Run the MAINTAIN_CHANGE_TABLE procedure:

    BEGIN
      DBMS_STREAMS_ADM.MAINTAIN_CHANGE_TABLE(
        change_table_name       => 'hr.jobs_change_table',
        source_table_name       => 'hr.jobs',
        column_type_list        => 'job_id VARCHAR2(10),
                                    job_title VARCHAR2(35),
                                    min_salary NUMBER(6),
                                    max_salary NUMBER(6)',
        extra_column_list        => 'command_type,value_type,commit_scn',
        capture_values           => '*',
        keep_change_columns_only => FALSE);
    END;
    /
    

    This procedure uses the default value for each parameter that is not specified. The keep_change_columns_only parameter is set to FALSE because all of the columns in the hr.jobs table are listed in the column_type_list parameter.

    When this procedure completes, the Oracle Streams environment is configured.

    If this procedure encounters an error and stops, then see Oracle Streams Replication Administrator's Guide for information about either recovering from the error or rolling back the configuration operation by using the DBMS_STREAMS_ADM.RECOVER_OPERATION procedure.

The resulting Oracle Streams environment has the following characteristics:

  • An unconditional supplemental log group includes all of the columns in the hr.jobs table.

  • The database has an hr.jobs_change_table. This change table has the following definition:

     Name                                      Null?    Type
     ----------------------------------------- -------- ---------------------------
     COMMAND_TYPE$                                      VARCHAR2(10)
     VALUE_TYPE$                                        VARCHAR2(3)
     COMMIT_SCN$                                        NUMBER
     JOB_ID                                             VARCHAR2(10)
     JOB_TITLE                                          VARCHAR2(35)
     MIN_SALARY                                         NUMBER(6)
     MAX_SALARY                                         NUMBER(6)
    
  • The database has a queue with a system-generated name. This queue is used by the capture process and apply process.

  • A capture process with a system-generated name captures data manipulation language (DML) changes made to the hr.jobs table.

  • An apply process with a system-generated name dequeues the captured row LCRs. The apply process uses change handlers with system-generated names to process the row LCRs for inserts, updates, and deletes on the hr.jobs table. The change handlers use the information in the row LCRs to populate the hr.jobs_change_table.


See Also:

"Monitoring a Change Table" for an example that makes changes to the hr.jobs table and then queries the hr.jobs_change_table to verify change tracking

Recording Table Changes Using Local Capture and Remote Apply with Replication

This example illustrates how to record the changes to a table using local capture and remote apply. In addition to recording table changes, the Oracle Streams environment configured by this example also replicates the changes made to the table.

Specifically, this example records the changes made to a subset of columns in the hr.departments table. This example also replicates data manipulation language (DML) changes made to all of the columns in the hr.departments table. The Oracle Streams environment configured in this example captures the changes on the source database ct1.example.com and sends the changes to the destination database ct2.example.com. An apply process on ct2.example.com records the changes in a change table and applies the changes to the replica hr.departments table.

The following table lists the decisions that were made about the Oracle Streams environment configured in this example.

Decision / Assumption for This Example
Decide Which Type of Environment to Configure
This example configures local capture and remote apply using two databases: the source database is ct1.example.com and the destination database is ct2.example.com. The capture process will be a local capture process on ct1.example.com.
Decide Which Columns to Track
This example tracks the department_id and manager_id columns in the hr.departments table.
Decide Which Metadata to Record
This example records the command_type and value_type (OLD or NEW) metadata. This metadata is recorded by default when the extra_column_list parameter is not specified in MAINTAIN_CHANGE_TABLE.
Decide Which Values to Track for Update Operations
This example tracks both the old and new column values when an update is performed on the source table.
Decide Whether to Configure a KEEP_COLUMNS Transformation
This example does not configure a KEEP_COLUMNS declarative rule-based transformation because all of the table columns are replicated.
Decide Whether to Specify CREATE TABLE Options for the Change Table
This example does not specify any CREATE TABLE options. The change table is created with the default CREATE TABLE options.
Decide Whether to Perform the Configuration Actions Directly or with a Script
This example performs the configuration actions directly. It does not use a script.
Decide Whether to Replicate the Source Table
This example replicates the source table at the destination database. Therefore, the hr.departments table exists on both the source database and the destination database, and the MAINTAIN_CHANGE_TABLE procedure configures a one-way replication environment for this table from the source database to the destination database.

Figure 20-2 provides an overview of the Oracle Streams environment created in this example.

Figure 20-2 Recording Changes Using Local Capture and Remote Apply with Replication


Complete the following steps to configure an Oracle Streams environment that records and replicates the changes to a table using local capture and remote apply:

  1. Complete the required prerequisites before running the MAINTAIN_CHANGE_TABLE procedure. See "Prerequisites for the MAINTAIN_CHANGE_TABLE Procedure" for instructions.

    For this configuration, complete the tasks listed in that section before proceeding.

  2. Connect to the source database ct1.example.com as the Oracle Streams administrator.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  3. Run the MAINTAIN_CHANGE_TABLE procedure:

    BEGIN
      DBMS_STREAMS_ADM.MAINTAIN_CHANGE_TABLE(
        change_table_name        => 'hr.dep_change_table',
        source_table_name        => 'hr.departments',
        column_type_list         => 'department_id NUMBER(4),
                                     manager_id NUMBER(6)',
        capture_values           => '*',
        source_database          => 'ct1.example.com',
        destination_database     => 'ct2.example.com',
        keep_change_columns_only => FALSE,
        execute_lcr              => TRUE);
    END;
    /
    

    This procedure uses the default value for each parameter that is not specified. The keep_change_columns_only parameter is set to FALSE because the execute_lcr parameter is set to TRUE. The row logical change records (LCRs) must contain information about changes to all of the columns in the table because all of the columns are replicated at the destination database. When the execute_lcr parameter is set to TRUE, the keep_change_columns_only parameter can be set to TRUE only if the column_type_list parameter includes all of the columns that are replicated, which is not the case in this example.

    When this procedure completes, the Oracle Streams environment is configured.

    If this procedure encounters an error and stops, then see Oracle Streams Replication Administrator's Guide for information about either recovering from the error or rolling back the configuration operation by using the DBMS_STREAMS_ADM.RECOVER_OPERATION procedure.

The resulting Oracle Streams environment has the following characteristics:

  • An unconditional supplemental log group includes the columns in the hr.departments table for which changes are recorded at the source database ct1.example.com. These columns are the ones specified in the column_type_list parameter of the MAINTAIN_CHANGE_TABLE procedure.

  • The destination database ct2.example.com has an hr.dep_change_table. This change table has the following definition:

     Name                                      Null?    Type
     ----------------------------------------- -------- ---------------------------
     COMMAND_TYPE$                                      VARCHAR2(10)
     VALUE_TYPE$                                        VARCHAR2(3)
     DEPARTMENT_ID                                      NUMBER(4)
     MANAGER_ID                                         NUMBER(6)
    
  • The source database ct1.example.com has a queue with a system-generated name. This queue is used by the capture process.

  • The destination database ct2.example.com has a queue with a system-generated name. This queue is used by the apply process.

  • The source database ct1.example.com has a local capture process with a system-generated name that captures data manipulation language (DML) changes made to the hr.departments table.

  • The destination database ct2.example.com has an apply process with a system-generated name. The apply process uses change handlers with system-generated names to process the captured row LCRs for inserts, updates, and deletes on the hr.departments table. The change handlers use the information in the row LCRs to populate the hr.dep_change_table.

    The apply process also includes change handlers with system-generated names to execute row LCRs for each type of operation (insert, update, and delete). The row LCRs are executed so that the changes made to the source table are applied to the replica hr.departments table at the destination database.

  • A propagation running on the ct1.example.com database with a system-generated name sends the captured changes from the ct1.example.com database to the ct2.example.com database.
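
Because the Oracle Streams components in this configuration have system-generated names, you can identify them by querying the data dictionary. The following queries are a sketch: run the first at ct1.example.com, the second at ct1.example.com, and the third at ct2.example.com.

     SELECT CAPTURE_NAME, QUEUE_NAME, STATUS FROM DBA_CAPTURE;

     SELECT PROPAGATION_NAME, DESTINATION_DBLINK FROM DBA_PROPAGATION;

     SELECT APPLY_NAME, STATUS FROM DBA_APPLY;

You can also query the change table itself at ct2.example.com after a change is made to the source table. Because the capture_values parameter was set to '*', an update appears as two rows: one with a VALUE_TYPE$ of OLD and one with a VALUE_TYPE$ of NEW.

     SELECT COMMAND_TYPE$, VALUE_TYPE$, DEPARTMENT_ID, MANAGER_ID
       FROM hr.dep_change_table;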

Recording Table Changes Using Downstream Capture and Local Apply

This example illustrates how to record the changes to a table using downstream capture and local apply. Specifically, this example records the changes made to the hr.locations table using a source database and a destination database. The destination database is also the capture database for the downstream capture process.

The following table lists the decisions that were made about the Oracle Streams environment configured in this example.

Decision / Assumption for This Example
Decide Which Type of Environment to Configure
This example configures downstream capture and local apply using the source database ct1.example.com and the destination database ct2.example.com. The capture process will be a real-time downstream capture process running on ct2.example.com.
Decide Which Columns to Track
This example tracks all of the columns in the hr.locations table.
Decide Which Metadata to Record
This example records the following metadata: command_type, value_type (OLD or NEW), object_owner, object_name, and username.
Decide Which Values to Track for Update Operations
This example tracks both the old and new column values when an update is performed on the source table.
Decide Whether to Configure a KEEP_COLUMNS Transformation
This example does not configure a KEEP_COLUMNS declarative rule-based transformation.
Decide Whether to Specify CREATE TABLE Options for the Change Table
This example does not specify any CREATE TABLE options. The change table is created with the default CREATE TABLE options.
Decide Whether to Perform the Configuration Actions Directly or with a Script
This example performs the configuration actions directly. It does not use a script.
Decide Whether to Replicate the Source Table
This example does not replicate the source table.

Figure 20-3 provides an overview of the Oracle Streams environment created in this example.

Figure 20-3 Recording Changes Using Downstream Capture and Local Apply


Complete the following steps to configure an Oracle Streams environment that records the changes to a table using downstream capture and local apply:

  1. Complete the required prerequisites before running the MAINTAIN_CHANGE_TABLE procedure. See "Prerequisites for the MAINTAIN_CHANGE_TABLE Procedure" for instructions.

    For this configuration, the following tasks must be completed:

    • Configure an Oracle Streams administrator on both databases. See "Configure an Oracle Streams Administrator on All Databases".

    • Configure network connectivity and database links:

      • Configure network connectivity between the source database ct1.example.com and the destination database ct2.example.com.

      • Because downstream capture will be configured at the destination database, create a database link from the source database ct1.example.com to the destination database ct2.example.com. The database link is used to send redo log data from ct1.example.com to ct2.example.com.

      • Because downstream capture will be configured at the destination database, create a database link from the destination database ct2.example.com to the source database ct1.example.com. The database link is used to complete management tasks related to downstream capture on the source database.

      See Oracle Streams Replication Administrator's Guide for instructions.

    • Ensure that the source database and the destination database are in ARCHIVELOG mode. In this example, the source database is ct1.example.com and the destination database is ct2.example.com. See "Ensure That the Source Database Is in ARCHIVELOG Mode".

    • Ensure that the initialization parameters are set properly at both databases. See "Set Initialization Parameters That Are Relevant to Oracle Streams".

    • Configure the Oracle Streams pool properly at both databases. See "Configure the Oracle Streams Pool".

    • Because a destination database will be the capture database for changes made to the source database, configure log file copying from the source database ct1.example.com to the capture database ct2.example.com. See "Configure Log File Transfer to a Downstream Capture Database".

    • Because this example configures a real-time downstream capture process, add and configure standby redo logs at the capture database ct2.example.com. See "Configure Standby Redo Logs for Real-Time Downstream Capture".

  2. Connect to the destination database ct2.example.com as the Oracle Streams administrator.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  3. Run the MAINTAIN_CHANGE_TABLE procedure:

    BEGIN
      DBMS_STREAMS_ADM.MAINTAIN_CHANGE_TABLE(
        change_table_name        => 'hr.loc_change_table',
        source_table_name        => 'hr.locations',
        column_type_list         => 'location_id NUMBER(4),
                                     street_address VARCHAR2(40),
                                     postal_code VARCHAR2(12),
                                     city VARCHAR2(30),
                                     state_province VARCHAR2(25),
                                     country_id CHAR(2)',
        extra_column_list        => 'command_type,value_type,object_owner,
                                     object_name,username',
        capture_values           => '*',
        source_database          => 'ct1.example.com',
        destination_database     => 'ct2.example.com',
        keep_change_columns_only => FALSE);
    END;
    /
    

    This procedure uses the default value for each parameter that is not specified. The keep_change_columns_only parameter is set to FALSE because all of the columns in the hr.locations table are listed in the column_type_list parameter.

    When this procedure completes, the Oracle Streams environment is configured.

    If this procedure encounters an error and stops, then see Oracle Streams Replication Administrator's Guide for information about either recovering from the error or rolling back the configuration operation by using the DBMS_STREAMS_ADM.RECOVER_OPERATION procedure.

  4. Set the downstream_real_time_mine capture process parameter to Y.

    1. Query the CAPTURE_NAME column in the DBA_CAPTURE view to determine the name of the capture process.

    2. Run the SET_PARAMETER procedure in the DBMS_CAPTURE_ADM package to set the downstream_real_time_mine capture process parameter to Y.

      For example, if the capture process name is cap$chg5, then run the following procedure:

      BEGIN
        DBMS_CAPTURE_ADM.SET_PARAMETER(
          capture_name => 'cap$chg5',
          parameter    => 'downstream_real_time_mine',
          value        => 'Y');
      END;
      /
      
  5. Connect to the source database ct1.example.com as an administrative user with the necessary privileges to switch log files.

  6. Archive the current log file at the source database:

    ALTER SYSTEM ARCHIVE LOG CURRENT;
    

    Archiving the current log file at the source database starts real-time mining of the source database redo log.

The resulting Oracle Streams environment has the following characteristics:

  • An unconditional supplemental log group at the source database ct1.example.com includes all of the columns in the hr.locations table.

  • Because username is specified in the extra_column_list parameter, the source database is configured to place additional information about the username of the user who makes a change in the redo log. The capture process captures this information, and it is recorded in the change table. The other values specified in the extra_column_list parameter (command_type, value_type, object_owner, and object_name) are always tracked in the redo log. Therefore, no additional configuration is necessary to capture this information.

  • The destination database ct2.example.com has an hr.loc_change_table. This change table has the following definition:

     Name                                      Null?    Type
     ----------------------------------------- -------- ---------------------------
     COMMAND_TYPE$                                      VARCHAR2(10)
     VALUE_TYPE$                                        VARCHAR2(3)
     OBJECT_OWNER$                                      VARCHAR2(30)
     OBJECT_NAME$                                       VARCHAR2(30)
     USERNAME$                                          VARCHAR2(30)
     LOCATION_ID                                        NUMBER(4)
     STREET_ADDRESS                                     VARCHAR2(40)
     POSTAL_CODE                                        VARCHAR2(12)
     CITY                                               VARCHAR2(30)
     STATE_PROVINCE                                     VARCHAR2(25)
     COUNTRY_ID                                         CHAR(2)
    
  • The destination database ct2.example.com has a queue with a system-generated name. This queue is used by the downstream capture process and the apply process.

  • The destination database ct2.example.com has a real-time downstream capture process with a system-generated name that captures data manipulation language (DML) changes made to the hr.locations table.

  • The destination database ct2.example.com has an apply process with a system-generated name. The apply process uses change handlers with system-generated names to process the captured row LCRs for inserts, updates, and deletes on the hr.locations table. The change handlers use the information in the row LCRs to populate the hr.loc_change_table.
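
You can confirm that the capture process at ct2.example.com is a real-time downstream capture process for the ct1.example.com source database by querying the DBA_CAPTURE and DBA_CAPTURE_PARAMETERS views. The following queries are a sketch:

     SELECT CAPTURE_NAME, CAPTURE_TYPE, SOURCE_DATABASE, STATUS
       FROM DBA_CAPTURE;

     SELECT CAPTURE_NAME, PARAMETER, VALUE
       FROM DBA_CAPTURE_PARAMETERS
       WHERE PARAMETER = 'DOWNSTREAM_REAL_TIME_MINE';

For this configuration, the CAPTURE_TYPE should be DOWNSTREAM, the SOURCE_DATABASE should identify ct1.example.com, and the downstream_real_time_mine parameter value should be Y.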

Recording Table Changes Using Downstream Capture and Remote Apply

This example illustrates how to record the changes to a table using downstream capture and remote apply. Specifically, this example records the changes made to the hr.employees table using three databases: the source database, the destination database, and the capture database.

The following table lists the decisions that were made about the Oracle Streams environment configured in this example.

Decision / Assumption for This Example
Decide Which Type of Environment to Configure
This example configures downstream capture and remote apply using three databases: the source database is ct1.example.com, the destination database is ct2.example.com, and the capture database is ct3.example.com. The capture process will be a real-time downstream capture process.
Decide Which Columns to Track
This example tracks the columns in the hr.employees table, except for the salary and commission_pct columns.
Decide Which Metadata to Record
This example records the following metadata: command_type, value_type (OLD or NEW), object_owner, object_name, and username.
Decide Which Values to Track for Update Operations
This example tracks both the old and new column values when an update is performed on the source table.
Decide Whether to Configure a KEEP_COLUMNS Transformation
This example configures a KEEP_COLUMNS declarative rule-based transformation so that row LCRs do not contain salary and commission percentage information for employees.
Decide Whether to Specify CREATE TABLE Options for the Change Table
This example specifies a STORAGE clause in the CREATE TABLE options.
Decide Whether to Perform the Configuration Actions Directly or with a Script
This example performs the configuration actions directly. It does not use a script.
Decide Whether to Replicate the Source Table
This example does not replicate the source table.

Figure 20-4 provides an overview of the Oracle Streams environment created in this example.

Figure 20-4 Recording Changes Using Downstream Capture and Remote Apply


Complete the following steps to configure an Oracle Streams environment that records the changes to a table using downstream capture and remote apply:

  1. Complete the required prerequisites before running the MAINTAIN_CHANGE_TABLE procedure. See "Prerequisites for the MAINTAIN_CHANGE_TABLE Procedure" for instructions.

    For this configuration, the following tasks must be completed:

    • Configure an Oracle Streams administrator on all three databases. See "Configure an Oracle Streams Administrator on All Databases".

    • Configure network connectivity and database links:

      • Configure network connectivity between the source database ct1.example.com and the destination database ct2.example.com.

      • Configure network connectivity between the source database ct1.example.com and the third database ct3.example.com.

      • Configure network connectivity between the destination database ct2.example.com and the third database ct3.example.com.

      • Create a database link from the source database ct1.example.com to the destination database ct2.example.com.

      • Because downstream capture will be configured at the third database, create a database link from the source database ct1.example.com to the third database ct3.example.com.

      • Because downstream capture will be configured at the third database, create a database link from the third database ct3.example.com to the source database ct1.example.com.

      • Because downstream capture will be configured at the third database, create a database link from the third database ct3.example.com to the destination database ct2.example.com.

      See Oracle Streams Replication Administrator's Guide for instructions.

    • Ensure that the source database and the capture database are in ARCHIVELOG mode. In this example, the source database is ct1.example.com and the capture database is ct3.example.com. See "Ensure That the Source Database Is in ARCHIVELOG Mode".

    • Ensure that the initialization parameters are set properly at all databases. See "Set Initialization Parameters That Are Relevant to Oracle Streams".

    • Configure the Oracle Streams pool properly at all databases. See "Configure the Oracle Streams Pool".

    • Because a third database (ct3.example.com) will be the capture database for changes made to the source database, configure log file copying from the source database ct1.example.com to the capture database ct3.example.com. See "Configure Log File Transfer to a Downstream Capture Database".

    • Because this example configures a real-time downstream capture process, add and configure standby redo logs at the capture database ct3.example.com. See "Configure Standby Redo Logs for Real-Time Downstream Capture".

  2. Connect to the capture database ct3.example.com as the Oracle Streams administrator.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  3. Run the MAINTAIN_CHANGE_TABLE procedure:

    BEGIN
      DBMS_STREAMS_ADM.MAINTAIN_CHANGE_TABLE(
        change_table_name        => 'hr.emp_change_table',
        source_table_name        => 'hr.employees',
        column_type_list         => 'employee_id NUMBER(6),
                                     first_name VARCHAR2(20),
                                     last_name VARCHAR2(25),
                                     email VARCHAR2(25),
                                     phone_number VARCHAR2(20),
                                     hire_date DATE,
                                     job_id VARCHAR2(10),
                                     manager_id NUMBER(6),
                                     department_id NUMBER(4)',
        capture_values           => '*',
        options_string           => 'STORAGE (INITIAL     6144
                                              NEXT        6144
                                              MINEXTENTS     1
                                              MAXEXTENTS     5)',
        source_database          => 'ct1.example.com',
        destination_database     => 'ct2.example.com',
        keep_change_columns_only => TRUE);
    END;
    /
    

    This procedure uses the default value for each parameter that is not specified. The options_string parameter specifies a STORAGE clause for the change table. The keep_change_columns_only parameter is set to TRUE to create a KEEP_COLUMNS declarative rule-based transformation that excludes the salary and commission_pct columns from captured row logical change records (LCRs). The salary and commission_pct columns are excluded because they are not in the column_type_list parameter.

    When this procedure completes, the Oracle Streams environment is configured.

    If this procedure encounters an error and stops, then see Oracle Streams Replication Administrator's Guide for information about either recovering from the error or rolling back the configuration operation by using the DBMS_STREAMS_ADM.RECOVER_OPERATION procedure.

  4. Set the downstream_real_time_mine capture process parameter to Y.

    1. Query the CAPTURE_NAME column in the DBA_CAPTURE view to determine the name of the capture process.

    2. Run the SET_PARAMETER procedure in the DBMS_CAPTURE_ADM package to set the downstream_real_time_mine capture process parameter to Y.

      For example, if the capture process name is cap$chg5, then run the following procedure:

      BEGIN
        DBMS_CAPTURE_ADM.SET_PARAMETER(
          capture_name => 'cap$chg5',
          parameter    => 'downstream_real_time_mine',
          value        => 'Y');
      END;
      /
      
  5. Connect to the source database ct1.example.com as an administrative user with the necessary privileges to switch log files.

  6. Archive the current log file at the source database:

    ALTER SYSTEM ARCHIVE LOG CURRENT;
    

    Archiving the current log file at the source database starts real-time mining of the source database redo log.

The resulting Oracle Streams environment has the following characteristics:

  • An unconditional supplemental log group includes the columns in the hr.employees table for which changes are recorded at the source database ct1.example.com. These columns are the ones specified in the column_type_list parameter of the MAINTAIN_CHANGE_TABLE procedure.

  • The destination database ct2.example.com has an hr.emp_change_table. This change table has the following definition:

     Name                                      Null?    Type
     ----------------------------------------- -------- ---------------------------
     COMMAND_TYPE$                                      VARCHAR2(10)
     VALUE_TYPE$                                        VARCHAR2(3)
     EMPLOYEE_ID                               NOT NULL NUMBER(6)
     FIRST_NAME                                         VARCHAR2(20)
     LAST_NAME                                 NOT NULL VARCHAR2(25)
     EMAIL                                     NOT NULL VARCHAR2(25)
     PHONE_NUMBER                                       VARCHAR2(20)
     HIRE_DATE                                 NOT NULL DATE
     JOB_ID                                    NOT NULL VARCHAR2(10)
     MANAGER_ID                                         NUMBER(6)
     DEPARTMENT_ID                                      NUMBER(4)
    
  • The capture database ct3.example.com has a queue with a system-generated name. This queue is used by the downstream capture process.

  • The destination database ct2.example.com has a queue with a system-generated name. This queue is used by the apply process.

  • The capture database ct3.example.com has a real-time downstream capture process with a system-generated name that captures data manipulation language (DML) changes made to the hr.employees table.

  • The capture database ct3.example.com has a KEEP_COLUMNS declarative rule-based transformation that keeps all of the columns in the row LCRs for the hr.employees table, except for the salary and commission_pct columns.

  • A propagation running on the ct3.example.com database with a system-generated name sends the captured changes from the ct3.example.com database to the ct2.example.com database.

  • The destination database ct2.example.com has an apply process with a system-generated name. The apply process uses change handlers with system-generated names to process the captured row LCRs for inserts, updates, and deletes on the hr.employees table. The change handlers use the information in the row LCRs to populate the hr.emp_change_table.
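
You can verify the declarative rule-based transformation at the capture database ct3.example.com by querying the DBA_STREAMS_TRANSFORMATIONS view. The following query is a sketch; the DECLARATIVE_TYPE column identifies declarative transformations, such as keep columns transformations:

     SELECT RULE_OWNER, RULE_NAME, DECLARATIVE_TYPE
       FROM DBA_STREAMS_TRANSFORMATIONS;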

Managing an Oracle Streams Environment That Records Table Changes

This section describes setting and unsetting change handlers.


Unsetting and Setting a Change Handler

The SET_CHANGE_HANDLER procedure in the DBMS_APPLY_ADM package can unset and set a change handler for a specified operation on a specified table for a single apply process. This procedure assumes that the Oracle Streams components are configured to capture changes to the specified table and send the changes to the specified apply process.

For the example in this section, assume that you want to unset the change handler for update operations that was created in "Recording Table Changes Using Local Capture and Remote Apply with Replication". Next, you want to reset this change handler.

Complete the following steps to unset and then reset a change handler:

  1. Connect to the database that contains the apply process as the Oracle Streams administrator.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  2. Identify the change handler to modify.

    This example unsets the change handler for UPDATE operations on the hr.departments table. Assume that these changes are applied by the app$chg38 apply process. Run the following query to determine the owner of the change table, the name of the change table, the capture values tracked in the change table, and the name of the change handler:

    COLUMN CHANGE_TABLE_OWNER HEADING 'Change Table Owner' FORMAT A20
    COLUMN CHANGE_TABLE_NAME HEADING 'Change Table Name' FORMAT A20
    COLUMN CAPTURE_VALUES HEADING 'Capture|Values' FORMAT A7
    COLUMN HANDLER_NAME HEADING 'Change Handler Name' FORMAT A25
     
    SELECT CHANGE_TABLE_OWNER, 
           CHANGE_TABLE_NAME, 
           CAPTURE_VALUES, 
           HANDLER_NAME 
      FROM DBA_APPLY_CHANGE_HANDLERS
      WHERE SOURCE_TABLE_OWNER = 'HR' AND
            SOURCE_TABLE_NAME  = 'DEPARTMENTS' AND
            APPLY_NAME         = 'APP$CHG38' AND
            OPERATION_NAME     = 'UPDATE';
    

    Your output looks similar to the following:

                                              Capture
    Change Table Owner   Change Table Name    Values  Change Handler Name
    -------------------- -------------------- ------- -------------------------
    HR                   DEP_CHANGE_TABLE     *       HR_DEPARTMENTS_CHG$10
    

    Make a note of the values returned by this query, and use these values in the subsequent steps in this example.

  3. Unset the change handler.

    To unset a change handler, specify NULL in the change_handler_name parameter in the SET_CHANGE_HANDLER procedure, and specify the change table owner, change table name, capture values, operation, source table, and apply process using the other procedure parameters. For example:

    BEGIN
      DBMS_APPLY_ADM.SET_CHANGE_HANDLER(
        change_table_name    =>  'hr.dep_change_table',
        source_table_name    =>  'hr.departments',
        capture_values       =>  '*',
        apply_name           =>  'app$chg38',
        operation_name       =>  'UPDATE',
        change_handler_name  =>  NULL);
    END;
    /
    

    When this change handler is unset, it no longer records update changes.

  4. Set the change handler.

    To set the change handler, specify the change handler in the change_handler_name parameter in the SET_CHANGE_HANDLER procedure, and specify the change table owner, change table name, capture values, operation, source table, and apply process using the other procedure parameters. For example:

    BEGIN
      DBMS_APPLY_ADM.SET_CHANGE_HANDLER(
        change_table_name    =>  'hr.dep_change_table',
        source_table_name    =>  'hr.departments',
        capture_values       =>  '*',
        apply_name           =>  'app$chg38',
        operation_name       =>  'UPDATE',
        change_handler_name  =>  'hr_departments_chg$10');
    END;
    /
    

    When this change handler is reset, it records update changes.
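
    To confirm that the change handler is set again, you can rerun the query from Step 2. For example:

    SELECT HANDLER_NAME 
      FROM DBA_APPLY_CHANGE_HANDLERS
      WHERE SOURCE_TABLE_OWNER = 'HR' AND
            SOURCE_TABLE_NAME  = 'DEPARTMENTS' AND
            APPLY_NAME         = 'APP$CHG38' AND
            OPERATION_NAME     = 'UPDATE';

    The query should return the HR_DEPARTMENTS_CHG$10 change handler.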

Recording Changes to a Table Using Existing Oracle Streams Components

You can configure existing Oracle Streams components to record changes to a table. These existing components include capture processes, propagations, and apply processes. To use existing components, specify the component names when you run the MAINTAIN_CHANGE_TABLE procedure in the DBMS_STREAMS_ADM package.

The example in this section builds on the Oracle Streams environment created in "Recording Table Changes Using Local Capture and Apply on One Database". That example configured an Oracle Streams environment that records changes to the hr.jobs table. The example in this section configures the existing capture process and apply process to record changes to the hr.employees table as well.

The following table lists the decisions that were made about the changes that will be recorded for the hr.employees table.

Decision / Assumption for This Example
Decide Which Type of Environment to Configure
This example uses existing Oracle Streams components that perform local capture and apply on one database.
Decide Which Columns to Track
This example tracks all of the columns in the hr.employees table.
Decide Which Metadata to Record
This example records the command_type, value_type (OLD or NEW), and commit_scn metadata.
Decide Which Values to Track for Update Operations
This example tracks both the old and new column values when an update is performed on the source table.
Decide Whether to Configure a KEEP_COLUMNS Transformation
This example does not configure a KEEP_COLUMNS declarative rule-based transformation.
Decide Whether to Specify CREATE TABLE Options for the Change Table
This example does not specify any CREATE TABLE options. The change table is created with the default CREATE TABLE options.
Decide Whether to Perform the Configuration Actions Directly or with a Script
This example performs the configuration actions directly. It does not use a script.
Decide Whether to Replicate the Source Table
This example does not replicate the source table.

Complete the following steps to record changes to a table using existing Oracle Streams components:

  1. Ensure that the required prerequisites are met before running the MAINTAIN_CHANGE_TABLE procedure. See "Prerequisites for the MAINTAIN_CHANGE_TABLE Procedure" for instructions.

    In this example, the prerequisites should already be met because an existing Oracle Streams environment is recording changes to the hr.jobs table.

  2. Determine the names of the existing Oracle Streams components.

    In SQL*Plus, connect to the database that contains a component and query the appropriate data dictionary view:

    • Query the CAPTURE_NAME column in the DBA_CAPTURE view to determine the names of the capture processes in a database.

    • Query the PROPAGATION_NAME column in the DBA_PROPAGATION view to determine the names of the propagations in a database.

    • Query the APPLY_NAME column in the DBA_APPLY view to determine the names of the apply processes in a database.

    This example records changes using a capture process and apply process in a single database. Therefore, it does not use a propagation.

    Assume that the name of the capture process is cap$chg3 and that the name of the apply process is app$chg4.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  3. Connect to the database that contains the existing capture process as the Oracle Streams administrator.

  4. Run the MAINTAIN_CHANGE_TABLE procedure:

    BEGIN
      DBMS_STREAMS_ADM.MAINTAIN_CHANGE_TABLE(
        change_table_name       => 'hr.employees_change_table',
        source_table_name       => 'hr.employees',
        column_type_list        => 'employee_id VARCHAR2(6),
                                    first_name VARCHAR2(20),
                                    last_name VARCHAR2(25),
                                    email VARCHAR2(25),
                                    phone_number VARCHAR2(20),
                                    hire_date DATE,
                                    job_id VARCHAR2(10),
                                    salary NUMBER(8,2),
                                    commission_pct NUMBER(2,2),
                                    manager_id NUMBER(6),
                                    department_id NUMBER(4)',
        extra_column_list        => 'command_type,value_type,commit_scn',
        capture_values           => '*',
        capture_name             => 'cap$chg3',
        apply_name               => 'app$chg4',
        keep_change_columns_only => FALSE);
    END;
    /
    

    This procedure uses the default value for each parameter that is not specified. The keep_change_columns_only parameter is set to FALSE because all of the columns in the hr.employees table are listed in the column_type_list parameter.

    When this procedure completes, the Oracle Streams environment is configured.

    If this procedure encounters an error and stops, then see Oracle Streams Replication Administrator's Guide for information about either recovering from the error or rolling back the configuration operation by using the DBMS_STREAMS_ADM.RECOVER_OPERATION procedure.

The resulting Oracle Streams environment has the following characteristics:

  • The characteristics previously described in "Recording Table Changes Using Local Capture and Apply on One Database".

  • An unconditional supplemental log group includes all of the columns in the hr.employees table.

  • The database has an hr.employees_change_table. This change table has the following definition:

     Name                                      Null?    Type
     ----------------------------------------- -------- ---------------------------
     COMMAND_TYPE$                                      VARCHAR2(10)
     VALUE_TYPE$                                        VARCHAR2(3)
     COMMIT_SCN$                                        NUMBER
     EMPLOYEE_ID                                        VARCHAR2(6)
     FIRST_NAME                                         VARCHAR2(20)
     LAST_NAME                                          VARCHAR2(25)
     EMAIL                                              VARCHAR2(25)
     PHONE_NUMBER                                       VARCHAR2(20)
     HIRE_DATE                                          DATE
     JOB_ID                                             VARCHAR2(10)
     SALARY                                             NUMBER(8,2)
     COMMISSION_PCT                                     NUMBER(2,2)
     MANAGER_ID                                         NUMBER(6)
     DEPARTMENT_ID                                      NUMBER(4)
    
  • The capture process cap$chg3 captures data manipulation language (DML) changes made to the hr.employees table.

  • An apply process app$chg4 uses change handlers with system-generated names to process the captured row LCRs for inserts, updates, and deletes on the hr.employees table. The change handlers use the information in the row LCRs to populate the hr.employees_change_table.

Maintaining Change Tables

Change tables can grow large over time. You can query one or more change tables to obtain a transactionally consistent set of change data. When the change data is no longer needed, you can remove it from the change tables. To perform these operations, configure the change table to track commit SCN metadata by including commit_scn in the extra_column_list parameter when you run the MAINTAIN_CHANGE_TABLE procedure. You can use the commit SCN to obtain consistent data and to specify which data to remove when it is no longer needed.

The example in this section maintains the change tables created in the preceding sections. It queries the change tables to obtain a transactionally consistent set of change data and then removes the change data that has been viewed.

Complete the following steps to maintain change tables:

  1. Determine the current low-watermark of the apply process that applies changes to the change table. Changes that were committed at a system change number (SCN) less than or equal to the low-watermark have definitely been applied.

    For example, if the name of the apply process is app$chg4, then run the following query to determine its low-watermark:

    SELECT APPLIED_MESSAGE_NUMBER 
      FROM DBA_APPLY_PROGRESS 
      WHERE APPLY_NAME='APP$CHG4';
    

    Make a note of the returned low-watermark SCN. For this example, assume that the low-watermark SCN is 663090.

  2. Query the change tables for changes that are less than or equal to the low-watermark returned in Step 1.

    For example, run the following query on the hr.jobs_change_table:

    SELECT * FROM hr.jobs_change_table WHERE commit_scn$ <= 663090;
    

    For example, run the following query on the hr.employees_change_table:

    SELECT * FROM hr.employees_change_table WHERE commit_scn$ <= 663090;
    

    These queries specify the low-watermark SCN returned in Step 1. The changes returned are transactionally consistent up to the specified SCN.

  3. When the changes viewed in Step 2 are no longer needed, run the following statements to remove the changes:

    DELETE FROM hr.jobs_change_table WHERE commit_scn$ <= 663090;
    
    DELETE FROM hr.employees_change_table WHERE commit_scn$ <= 663090;
    
    COMMIT;
    

    These queries specify the same low-watermark SCN returned in Step 1 and used in the queries in Step 2.

There are other ways to maintain change tables. For example, you can query them using a range of changes between two SCN values. You can also create a view to show a consistent set of data in two or more change tables.
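For example, the following sketch (assuming the change tables and low-watermark SCN from the preceding steps, plus a hypothetical upper-bound SCN of 663200) shows a range query between two SCN values and a view that presents a consistent set of change data from two change tables:

```sql
-- Changes committed between two SCN values (both boundary values are hypothetical)
SELECT * FROM hr.jobs_change_table
  WHERE commit_scn$ > 663090 AND commit_scn$ <= 663200;

-- A view presenting a consistent set of change data from both change tables,
-- limited to changes at or below the apply process low-watermark
CREATE VIEW hr.consistent_changes AS
  SELECT 'JOBS' AS source_table, command_type$, value_type$, commit_scn$
    FROM hr.jobs_change_table
    WHERE commit_scn$ <= 663090
  UNION ALL
  SELECT 'EMPLOYEES', command_type$, value_type$, commit_scn$
    FROM hr.employees_change_table
    WHERE commit_scn$ <= 663090;
```

The view name hr.consistent_changes is a placeholder; choose a name and column list appropriate for your environment.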

Managing the Oracle Streams Environment

After the MAINTAIN_CHANGE_TABLE procedure has configured the Oracle Streams environment, you can manage the Oracle Streams environment by referring to the sections in the following table.

Monitoring an Oracle Streams Environment That Records Table Changes

This section describes monitoring the Oracle Streams components in a configuration that tracks table changes.

This section contains these topics:

Monitoring a Change Table

You can monitor a change table using SELECT statements in the same way that you monitor other database tables. The columns in the change table depend on the column_type_list parameter in the MAINTAIN_CHANGE_TABLE procedure. The change table can include a tracking column for each column in the source table, or it can include a subset of the columns in the source table. In addition, the change table can include several additional columns that contain metadata about each change.

For example, the Oracle Streams environment configured in "Recording Table Changes Using Local Capture and Apply on One Database" records changes to the hr.jobs table. Each column in the hr.jobs table is tracked in the change table hr.jobs_change_table, and the default metadata columns (command_type$, value_type$, and commit_scn$) are included in the change table.

To monitor this sample change table, complete the following steps:

  1. Connect to the database as the hr user.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  2. Make changes to the source table so that the change table is populated:

    INSERT INTO hr.jobs VALUES('BN_CNTR','Bean Counter',6000,8000);
    COMMIT;
    
    UPDATE hr.jobs SET min_salary=7000 WHERE job_id='BN_CNTR';
    COMMIT;
    
    DELETE FROM hr.jobs WHERE job_id='BN_CNTR';
    COMMIT;
    
  3. Query the change table:

    COLUMN COMMAND_TYPE$ HEADING 'Command Type' FORMAT A12
    COLUMN VALUE_TYPE$ HEADING 'Value|Type' FORMAT A5
    COLUMN COMMIT_SCN$ HEADING 'Commit SCN' FORMAT 9999999
    COLUMN JOB_ID HEADING 'Job ID' FORMAT A10
    COLUMN JOB_TITLE HEADING 'Job Title' FORMAT A12
    COLUMN MIN_SALARY HEADING 'Minimum|Salary' FORMAT 9999999
    COLUMN MAX_SALARY HEADING 'Maximum|Salary' FORMAT 9999999
    
    SELECT * FROM hr.jobs_change_table;
    

    Your output looks similar to the following:

                 Value                                     Minimum  Maximum
    Command Type Type  Commit SCN Job ID     Job Title      Salary   Salary
    ------------ ----- ---------- ---------- ------------ -------- --------
    INSERT       NEW       663075 BN_CNTR    Bean Counter     6000     8000
    UPDATE       OLD       663082 BN_CNTR    Bean Counter     6000     8000
    UPDATE       NEW       663082 BN_CNTR    Bean Counter     7000     8000
    DELETE       OLD       663090 BN_CNTR    Bean Counter     7000     8000
    

    This output shows the changes made in Step 2.

Monitoring Change Handlers

This section describes monitoring change handlers.

This section contains these topics:

Displaying General Information About Change Handlers

You can query the DBA_APPLY_CHANGE_HANDLERS view to display the following information about each change handler in a database:

  • The name of the change handler

  • The captured values tracked by the change handler for update operations, either NEW for new column values, OLD for old column values, or * for both new and old column values

  • The name of the apply process that uses the change handler

  • The operation for which the change handler is invoked, either INSERT, UPDATE, or DELETE

Run the following query to display this information:

COLUMN HANDLER_NAME HEADING 'Change Handler Name' FORMAT A30
COLUMN CAPTURE_VALUES HEADING 'Capture|Values' FORMAT A7
COLUMN APPLY_NAME HEADING 'Apply|Process' FORMAT A10
COLUMN OPERATION_NAME HEADING 'Operation' FORMAT A10
 
SELECT HANDLER_NAME, 
       CAPTURE_VALUES,
       APPLY_NAME,
       OPERATION_NAME
  FROM DBA_APPLY_CHANGE_HANDLERS
  ORDER BY HANDLER_NAME;

Your output looks similar to the following:

                               Capture Apply 
Change Handler Name            Values  Process    Operation
------------------------------ ------- ---------- ----------
HR_DEPARTMENTS_CHG$40                  APP$CHG38  INSERT
HR_DEPARTMENTS_CHG$41                  APP$CHG38  DELETE
HR_DEPARTMENTS_CHG$42          *       APP$CHG38  UPDATE
HR_JOBS_CHG$80                         APP$CHG79  INSERT
HR_JOBS_CHG$81                         APP$CHG79  DELETE
HR_JOBS_CHG$82                 *       APP$CHG79  UPDATE

Notice that the "Capture Values" column is NULL for INSERT and DELETE operations. The DBA_APPLY_CHANGE_HANDLERS view displays capture values only for change handlers that track UPDATE operations. Only new column values are possible for inserts, and only old column values are possible for deletes.

Displaying the Change Table and Source Table for Change Handlers

You can query the DBA_APPLY_CHANGE_HANDLERS view to display the following information about each change handler in a database:

  • The name of the change handler

  • The owner of the change table that tracks changes to the source table

  • The name of the change table that tracks changes to the source table

  • The owner of the source table

  • The name of the source table

Run the following query to display this information:

COLUMN HANDLER_NAME HEADING 'Change Handler Name' FORMAT A25
COLUMN CHANGE_TABLE_OWNER HEADING 'Change|Table|Owner' FORMAT A8
COLUMN CHANGE_TABLE_NAME HEADING 'Change|Table|Name' FORMAT A17
COLUMN SOURCE_TABLE_OWNER HEADING 'Source|Table|Owner' FORMAT A8
COLUMN SOURCE_TABLE_NAME HEADING 'Source|Table|Name' FORMAT A17
 
SELECT HANDLER_NAME,
       CHANGE_TABLE_OWNER, 
       CHANGE_TABLE_NAME, 
       SOURCE_TABLE_OWNER, 
       SOURCE_TABLE_NAME 
  FROM DBA_APPLY_CHANGE_HANDLERS
  ORDER BY HANDLER_NAME;

Your output looks similar to the following:

                          Change   Change            Source   Source
                          Table    Table             Table    Table
Change Handler Name       Owner    Name              Owner    Name
------------------------- -------- ----------------- -------- -----------------
HR_DEPARTMENTS_CHG$40     HR       DEP_CHANGE_TABLE  HR       DEPARTMENTS
HR_DEPARTMENTS_CHG$41     HR       DEP_CHANGE_TABLE  HR       DEPARTMENTS
HR_DEPARTMENTS_CHG$42     HR       DEP_CHANGE_TABLE  HR       DEPARTMENTS
HR_JOBS_CHG$80            HR       JOBS_CHANGE_TABLE HR       JOBS
HR_JOBS_CHG$81            HR       JOBS_CHANGE_TABLE HR       JOBS
HR_JOBS_CHG$82            HR       JOBS_CHANGE_TABLE HR       JOBS

Monitoring the Oracle Streams Environment

After the MAINTAIN_CHANGE_TABLE procedure has configured the Oracle Streams environment, you can monitor the Oracle Streams environment by referring to the sections in the following table.

Monitoring Rule-Based Transformations

28 Monitoring Rule-Based Transformations

A rule-based transformation is any modification to a message that results when a rule in a positive rule set evaluates to TRUE. This chapter provides sample queries that you can use to monitor rule-based transformations.

The following topics describe monitoring rule-based transformations:


Note:

The Oracle Streams tool in Oracle Enterprise Manager is also an excellent way to monitor an Oracle Streams environment. See the online Help for the Oracle Streams tool for more information.


See Also:


Displaying Information About All Rule-Based Transformations

The query in this section displays the following information about each rule-based transformation in a database:

  • The owner of the rule for which a rule-based transformation is specified

  • The name of the rule for which a rule-based transformation is specified

  • The type of rule-based transformation, which is one of the following: declarative transformation, subset rule transformation, or custom transformation

Run the following query to display this information for the rule-based transformations in a database:

COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A20
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A20
COLUMN TRANSFORM_TYPE HEADING 'Transformation Type' FORMAT A30

SELECT RULE_OWNER, 
       RULE_NAME, 
       TRANSFORM_TYPE
  FROM DBA_STREAMS_TRANSFORMATIONS;

Your output looks similar to the following:

Rule Owner           Rule Name            Transformation Type
-------------------- -------------------- ------------------------------
STRMADMIN            EMPLOYEES23          DECLARATIVE TRANSFORMATION
STRMADMIN            JOBS26               DECLARATIVE TRANSFORMATION
STRMADMIN            DEPARTMENTS33        SUBSET RULE
STRMADMIN            DEPARTMENTS32        SUBSET RULE
STRMADMIN            DEPARTMENTS34        SUBSET RULE
STRMADMIN            DEPARTMENTS32        CUSTOM TRANSFORMATION
STRMADMIN            DEPARTMENTS33        CUSTOM TRANSFORMATION
STRMADMIN            DEPARTMENTS34        CUSTOM TRANSFORMATION

Displaying Declarative Rule-Based Transformations

A declarative rule-based transformation is a rule-based transformation that covers one of a common set of transformation scenarios for row LCRs. Declarative rule-based transformations are run internally without using PL/SQL.

The query in this section displays the following information about each declarative rule-based transformation in a database:

  • The owner of the rule for which a declarative rule-based transformation is specified.

  • The name of the rule for which a declarative rule-based transformation is specified.

  • The type of declarative rule-based transformation specified. The following types are possible: ADD COLUMN, DELETE COLUMN, KEEP COLUMNS, RENAME COLUMN, RENAME SCHEMA, and RENAME TABLE.

  • The precedence of the declarative rule-based transformation. The precedence is the execution order of a transformation in relation to other transformations with the same step number specified for the same rule. For transformations with the same step number, the transformation with the lowest precedence is executed first.

  • The step number of the declarative rule-based transformation. If more than one declarative rule-based transformation is specified for the same rule, then the transformation with the lowest step number is executed first. You can specify the step number for a declarative rule-based transformation when you create the transformation.
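The step number is specified when the transformation is created. For example, the following sketch (using the rule and column names that appear in the sample output later in this section, with a NULL column value) specifies the step_number parameter when creating an ADD COLUMN transformation with the DBMS_STREAMS_ADM.ADD_COLUMN procedure:

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_COLUMN(
    rule_name    => 'strmadmin.employees23',
    table_name   => 'hr.employees',
    column_name  => 'birth_date',
    column_value => ANYDATA.ConvertDate(CAST(NULL AS DATE)),
    value_type   => 'NEW',
    step_number  => 0,  -- execution order relative to other transformations on this rule
    operation    => 'ADD');
END;
/
```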

Run the following query to display this information for the declarative rule-based transformations in a database:

COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A15
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A15
COLUMN DECLARATIVE_TYPE HEADING 'Declarative|Type' FORMAT A15
COLUMN PRECEDENCE HEADING 'Precedence' FORMAT 99999
COLUMN STEP_NUMBER HEADING 'Step Number' FORMAT 99999

SELECT RULE_OWNER, 
       RULE_NAME, 
       DECLARATIVE_TYPE,
       PRECEDENCE,
       STEP_NUMBER
  FROM DBA_STREAMS_TRANSFORMATIONS
  WHERE TRANSFORM_TYPE = 'DECLARATIVE TRANSFORMATION';

Your output looks similar to the following:

                                Declarative
Rule Owner      Rule Name       Type            Precedence Step Number
--------------- --------------- --------------- ---------- -----------
STRMADMIN       JOBS26          RENAME TABLE             4           0
STRMADMIN       EMPLOYEES23     ADD COLUMN               3           0

Based on this output, the ADD COLUMN transformation executes before the RENAME TABLE transformation because the step number is the same (zero) for both transformations and the ADD COLUMN transformation has the lower precedence.

When you determine which types of declarative rule-based transformations are in a database, you can display more detailed information about each transformation. The following data dictionary views contain detailed information about the various types of declarative rule-based transformations:

  • The DBA_STREAMS_ADD_COLUMN view contains information about ADD COLUMN declarative transformations.

  • The DBA_STREAMS_DELETE_COLUMN view contains information about DELETE COLUMN declarative transformations.

  • The DBA_STREAMS_KEEP_COLUMNS view contains information about KEEP COLUMNS declarative transformations.

  • The DBA_STREAMS_RENAME_COLUMN view contains information about RENAME COLUMN declarative transformations.

  • The DBA_STREAMS_RENAME_SCHEMA view contains information about RENAME SCHEMA declarative transformations.

  • The DBA_STREAMS_RENAME_TABLE view contains information about RENAME TABLE declarative transformations.

For example, the previous query listed an ADD COLUMN transformation and a RENAME TABLE transformation. The following sections contain queries that display detailed information about these transformations:


Note:

Precedence and step number pertain only to declarative rule-based transformations. They do not pertain to subset rule transformations or custom rule-based transformations.

Displaying Information About ADD COLUMN Transformations

The following query displays detailed information about the ADD COLUMN declarative rule-based transformations in a database:

COLUMN RULE_OWNER HEADING 'Rule|Owner' FORMAT A9
COLUMN RULE_NAME HEADING 'Rule|Name' FORMAT A12
COLUMN SCHEMA_NAME HEADING 'Schema|Name' FORMAT A6
COLUMN TABLE_NAME HEADING 'Table|Name' FORMAT A9
COLUMN COLUMN_NAME HEADING 'Column|Name' FORMAT A10
COLUMN COLUMN_TYPE HEADING 'Column|Type' FORMAT A8

SELECT RULE_OWNER, 
       RULE_NAME, 
       SCHEMA_NAME,
       TABLE_NAME,
       COLUMN_NAME,
       ANYDATA.AccessDate(COLUMN_VALUE) "Value",
       COLUMN_TYPE
  FROM DBA_STREAMS_ADD_COLUMN;

Your output looks similar to the following:

Rule      Rule         Schema Table     Column                          Column
Owner     Name         Name   Name      Name       Value                Type
--------- ------------ ------ --------- ---------- -------------------- --------
STRMADMIN EMPLOYEES23  HR     EMPLOYEES BIRTH_DATE                      SYS.DATE

This output shows the following information about the ADD COLUMN declarative rule-based transformation:

  • It is specified on the employees23 rule in the strmadmin schema.

  • It adds a column to row LCRs that involve the employees table in the hr schema.

  • The column name of the added column is birth_date.

  • The value of the added column is NULL. Notice that the COLUMN_VALUE column in the DBA_STREAMS_ADD_COLUMN view is type ANYDATA. In this example, because the column type is DATE, the ANYDATA.AccessDate member function is used to display the value. Use the appropriate member function to display values of other types.

  • The type of the added column is DATE.
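Member functions other than AccessDate display values of other types. For example, the following sketch (a hypothetical variation of the previous query) uses the AccessVarchar2 and AccessNumber member functions to display VARCHAR2 and NUMBER column values:

```sql
-- Each Access* member function returns NULL when the stored value
-- is of a different type
SELECT RULE_NAME,
       COLUMN_NAME,
       ANYDATA.AccessVarchar2(COLUMN_VALUE) "VARCHAR2 Value",
       ANYDATA.AccessNumber(COLUMN_VALUE) "NUMBER Value",
       COLUMN_TYPE
  FROM DBA_STREAMS_ADD_COLUMN;
```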

Displaying Information About RENAME TABLE Transformations

The following query displays detailed information about the RENAME TABLE declarative rule-based transformations in a database:

COLUMN RULE_OWNER HEADING 'Rule|Owner' FORMAT A10
COLUMN RULE_NAME HEADING 'Rule|Name' FORMAT A10
COLUMN FROM_SCHEMA_NAME HEADING 'From|Schema|Name' FORMAT A10
COLUMN TO_SCHEMA_NAME HEADING 'To|Schema|Name' FORMAT A10
COLUMN FROM_TABLE_NAME HEADING 'From|Table|Name' FORMAT A15
COLUMN TO_TABLE_NAME HEADING 'To|Table|Name' FORMAT A15

SELECT RULE_OWNER, 
       RULE_NAME, 
       FROM_SCHEMA_NAME,
       TO_SCHEMA_NAME,
       FROM_TABLE_NAME,
       TO_TABLE_NAME
  FROM DBA_STREAMS_RENAME_TABLE;

Your output looks similar to the following:

                      From       To         From            To
Rule       Rule       Schema     Schema     Table           Table
Owner      Name       Name       Name       Name            Name
---------- ---------- ---------- ---------- --------------- ---------------
STRMADMIN  JOBS26     HR         HR         JOBS            ASSIGNMENTS

This output shows the following information about the RENAME TABLE declarative rule-based transformation:

  • It is specified on the jobs26 rule in the strmadmin schema.

  • It renames the hr.jobs table in row LCRs to the hr.assignments table.

Displaying Custom Rule-Based Transformations

A custom rule-based transformation is a rule-based transformation that requires a user-defined PL/SQL function. The query in this section displays the following information about each custom rule-based transformation specified in a database:

  • The owner of the rule on which the custom rule-based transformation is set

  • The name of the rule on which the custom rule-based transformation is set

  • The owner and name of the transformation function

  • Whether the custom rule-based transformation is one-to-one or one-to-many

Run the following query to display this information:

COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A20
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A15
COLUMN TRANSFORM_FUNCTION_NAME HEADING 'Transformation Function' FORMAT A30
COLUMN CUSTOM_TYPE HEADING 'Type' FORMAT A11
 
SELECT RULE_OWNER, RULE_NAME, TRANSFORM_FUNCTION_NAME, CUSTOM_TYPE
  FROM DBA_STREAMS_TRANSFORM_FUNCTION;

Your output looks similar to the following:

Rule Owner           Rule Name       Transformation Function        Type
-------------------- --------------- ------------------------------ -----------
STRMADMIN            DEPARTMENTS31   "HR"."EXECUTIVE_TO_MANAGEMENT" ONE TO ONE
STRMADMIN            DEPARTMENTS32   "HR"."EXECUTIVE_TO_MANAGEMENT" ONE TO ONE
STRMADMIN            DEPARTMENTS33   "HR"."EXECUTIVE_TO_MANAGEMENT" ONE TO ONE

Note:

The transformation function name must be of type VARCHAR2. If it is not, then the value of TRANSFORM_FUNCTION_NAME is NULL. The VALUE_TYPE column in the DBA_STREAMS_TRANSFORM_FUNCTION view displays the type of the transform function name.

Other Oracle Streams Management Tasks

21 Other Oracle Streams Management Tasks

This chapter provides instructions for performing full database export/import in an Oracle Streams environment. This chapter also provides instructions for removing an Oracle Streams configuration.

The following topics describe Oracle Streams management tasks:

Each task described in this chapter should be completed by an Oracle Streams administrator who has been granted the appropriate privileges, unless specified otherwise.


See Also:

Oracle Streams Replication Administrator's Guide for information about creating an Oracle Streams administrator

Performing Full Database Export/Import in an Oracle Streams Environment

This section describes how to perform a full database export/import on a database that is running one or more Oracle Streams capture processes, propagations, or apply processes. These instructions pertain to a full database export/import where the import database and export database are running on different computers, and the import database replaces the export database. The global name of the import database and the global name of the export database must match. These instructions assume that both databases already exist.


Note:

If you want to add a database to an existing Oracle Streams environment, then do not use the instructions in this section. Instead, see Oracle Streams Replication Administrator's Guide.


See Also:


Complete the following steps to perform a full database export/import on a database that is using Oracle Streams:

  1. If the export database contains any destination queues for propagations from other databases, then stop each propagation that propagates messages to the export database. You can stop a propagation using the STOP_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package.
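The following sketch stops a hypothetical propagation named prop1; run it for each propagation that propagates messages to the export database:

```sql
BEGIN
  DBMS_PROPAGATION_ADM.STOP_PROPAGATION(
    propagation_name => 'prop1');
END;
/
```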

  2. Make the necessary changes to your network configuration so that the database links used by the propagation jobs you disabled in Step 1 point to the computer running the import database.

    To complete this step, you might need to re-create the database links used by these propagation jobs or modify your Oracle networking files at the databases that contain the source queues.

  3. Notify all users to stop making data manipulation language (DML) and data definition language (DDL) changes to the export database, and wait until these changes have stopped.

  4. Make a note of the current export database system change number (SCN). You can determine the current SCN using the GET_SYSTEM_CHANGE_NUMBER function in the DBMS_FLASHBACK package. For example:

    SET SERVEROUTPUT ON SIZE 1000000
    DECLARE
      current_scn NUMBER;
    BEGIN
      current_scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER;
      DBMS_OUTPUT.PUT_LINE('Current SCN: ' || current_scn);
    END;
    /
    

    In this example, assume that the current SCN returned is 7000000.

    After completing this step, do not stop any capture process running on the export database. Step 7d instructs you to use the V$STREAMS_CAPTURE dynamic performance view to ensure that no DML or DDL changes were made to the database after Step 3. The information about a capture process in this view is reset if the capture process is stopped and restarted.

    For the check in Step 7d to be valid, this information should not be reset for any capture process. To prevent a capture process from stopping automatically, you might need to set the message_limit and time_limit capture process parameters to INFINITE if these parameters are set to another value for any capture process.
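You can set these capture process parameters with the SET_PARAMETER procedure in the DBMS_CAPTURE_ADM package. The following sketch assumes a capture process named capture:

```sql
BEGIN
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'capture',
    parameter    => 'message_limit',
    value        => 'INFINITE');
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'capture',
    parameter    => 'time_limit',
    value        => 'INFINITE');
END;
/
```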

  5. If any downstream capture processes are capturing changes that originated at the export database, then ensure that the log file containing the SCN determined in Step 4 has been transferred to the downstream database and added to the capture process session. See "Displaying the Registered Redo Log Files for Each Capture Process" for queries that can determine this information.

  6. If the export database is not running any apply processes, and is not propagating messages, then start the full database export now. Ensure that the FULL export parameter is set to y so that the required Oracle Streams metadata is exported.

    If the export database is running one or more apply processes or is propagating messages, then do not start the export and proceed to the next step.

  7. If the export database is the source database for changes captured by any capture processes, then complete the following steps for each capture process:

    1. Wait until the capture process has scanned past the redo record that corresponds to the SCN determined in Step 4. You can view the SCN of the redo record last scanned by a capture process by querying the CAPTURE_MESSAGE_NUMBER column in the V$STREAMS_CAPTURE dynamic performance view. Ensure that the value of CAPTURE_MESSAGE_NUMBER is greater than or equal to the SCN determined in Step 4 before you continue.
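For example, assuming the capture process is named capture and the SCN determined in Step 4 is 7000000, a query such as the following can be used for this check:

```sql
SELECT CAPTURE_NAME, CAPTURE_MESSAGE_NUMBER
  FROM V$STREAMS_CAPTURE
  WHERE CAPTURE_NAME = 'CAPTURE';
```

Repeat the query until the value of CAPTURE_MESSAGE_NUMBER is greater than or equal to 7000000.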

    2. In SQL*Plus, connect to the database as the Oracle Streams administrator.

      See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

    3. Monitor the Oracle Streams environment until the apply process at the destination database has applied all of the changes from the capture database. For example, if the name of the capture process is capture, the name of the apply process is apply, the global name of the destination database is dest.example.com, and the SCN value returned in Step 4 is 7000000, then run the following query at the capture database:

      SELECT cap.ENQUEUE_MESSAGE_NUMBER
        FROM V$STREAMS_CAPTURE cap
        WHERE cap.CAPTURE_NAME = 'CAPTURE' AND
              cap.ENQUEUE_MESSAGE_NUMBER IN (
                SELECT DEQUEUED_MESSAGE_NUMBER
                FROM V$STREAMS_APPLY_READER@dest.example.com reader,
                     V$STREAMS_APPLY_COORDINATOR@dest.example.com coord
                WHERE reader.APPLY_NAME = 'APPLY' AND
                  reader.DEQUEUED_MESSAGE_NUMBER = reader.OLDEST_SCN_NUM AND
                  coord.APPLY_NAME = 'APPLY' AND
                  coord.LWM_MESSAGE_NUMBER = coord.HWM_MESSAGE_NUMBER AND
                  coord.APPLY# = reader.APPLY#) AND
                cap.CAPTURE_MESSAGE_NUMBER >= 7000000;
      

      When this query returns a row, all of the changes from the capture database have been applied at the destination database, and you can move on to the next step.

      If this query returns no results for an inordinately long time, then ensure that the Oracle Streams clients in the environment are enabled by querying the STATUS column in the DBA_CAPTURE view at the source database and the DBA_APPLY view at the destination database. You can check the status of the propagation by running the query in "Displaying Information About the Schedules for Propagation Jobs".
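For example, the following queries, run at the source database and the destination database respectively, display the status of the capture processes and apply processes:

```sql
-- At the source database
SELECT CAPTURE_NAME, STATUS FROM DBA_CAPTURE;

-- At the destination database
SELECT APPLY_NAME, STATUS FROM DBA_APPLY;
```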

      If an Oracle Streams client is disabled, then try restarting it. If an Oracle Streams client will not restart, then troubleshoot the environment using the information in Chapter 30, "Identifying Problems in an Oracle Streams Environment".

      The query in this step assumes that a database link accessible to the Oracle Streams administrator exists between the capture database and the destination database. If such a database link does not exist, then you can perform two separate queries at the capture database and destination database.

    4. Verify that the enqueue message number of each capture process is less than or equal to the SCN determined in Step 4. You can view the enqueue message number for each capture process by querying the ENQUEUE_MESSAGE_NUMBER column in the V$STREAMS_CAPTURE dynamic performance view.

      If the enqueue message number of each capture process is less than or equal to the SCN determined in Step 4, then proceed to Step 9.

      However, if the enqueue message number of any capture process is higher than the SCN determined in Step 4, then one or more DML or DDL changes were made after the SCN determined in Step 4, and these changes were captured and enqueued by a capture process. In this case, perform all of the steps in this section again, starting with Step 1.
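      For example, the following query shows the enqueue message number for each capture process at the capture database:

```sql
-- The enqueue message number is the SCN of the most recently enqueued message.
SELECT CAPTURE_NAME, ENQUEUE_MESSAGE_NUMBER
  FROM V$STREAMS_CAPTURE;
```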


    Note:

    For this verification to be valid, each capture process must have been running uninterrupted since Step 4.

  8. If any downstream capture processes captured changes that originated at the export database, then drop these downstream capture processes. You will re-create them in Step 14a.

  9. If the export database has any propagations that are propagating messages, then stop these propagations using the STOP_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package.
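    For example, the following sketch stops one propagation; the propagation name strm_propagation is hypothetical:

```sql
BEGIN
  DBMS_PROPAGATION_ADM.STOP_PROPAGATION(
    propagation_name => 'strm_propagation');
END;
/
```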

  10. If the export database is running one or more apply processes, or is propagating messages, then start the full database export now. Ensure that the FULL export parameter is set to y so that the required Oracle Streams metadata is exported. If you already started the export in Step 6, then proceed to Step 11.
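    For example, a minimal Data Pump export sketch; the directory object and dump file name are hypothetical:

```shell
expdp strmadmin FULL=y DIRECTORY=dpump_dir DUMPFILE=expfull.dmp
```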

  11. When the export is complete, transfer the export dump file to the computer running the import database.

  12. Perform the full database import. Ensure that the STREAMS_CONFIGURATION and FULL import parameters are both set to y so that the required Oracle Streams metadata is imported. The default setting is y for the STREAMS_CONFIGURATION import parameter. Also, ensure that no DML or DDL changes are made to the import database during the import.
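    For example, a minimal Data Pump import sketch with both parameters set explicitly; the directory object and dump file name are hypothetical:

```shell
impdp strmadmin FULL=y STREAMS_CONFIGURATION=y DIRECTORY=dpump_dir DUMPFILE=expfull.dmp
```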

  13. If any downstream capture processes are capturing changes that originated at the database, then make the necessary changes so that log files are transferred from the import database to the downstream database. See Oracle Streams Replication Administrator's Guide for instructions.

  14. Re-create downstream capture processes:

    1. Re-create any downstream capture processes that you dropped in Step 8, if necessary. These dropped downstream capture processes were capturing changes that originated at the export database. Configure the re-created downstream capture processes to capture changes that originate at the import database.

    2. Re-create in the import database any downstream capture processes that were running in the export database, if necessary. If the export database had any downstream capture processes, then those downstream capture processes were not exported.


    See Also:

    Oracle Streams Replication Administrator's Guide for information about configuring a capture process

  15. If any local or downstream capture processes will capture changes that originate at the database, then, at the import database, prepare the database objects whose changes will be captured for instantiation. See Oracle Streams Replication Administrator's Guide for information about preparing database objects for instantiation.

  16. Let users access the import database, and shut down the export database.

  17. Enable any propagation jobs you disabled in Steps 1 and 9.

  18. If you reset the value of a message_limit or time_limit capture process parameter in Step 4, then, at the import database, reset these parameters to their original settings.

Removing an Oracle Streams Configuration

You run the REMOVE_STREAMS_CONFIGURATION procedure in the DBMS_STREAMS_ADM package to remove an Oracle Streams configuration at the local database.


Caution:

Running this procedure is dangerous. You should run this procedure only if you are sure you want to remove the entire Oracle Streams configuration at a database.

To remove the Oracle Streams configuration at the local database, run the following procedure while connected to the database as the Oracle Streams administrator:

EXEC DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();

After running this procedure, drop the Oracle Streams administrator at the database, if possible.


See Also:

Oracle Database PL/SQL Packages and Types Reference for detailed information about the actions performed by the REMOVE_STREAMS_CONFIGURATION procedure


C XML Schema for LCRs

The XML schema described in this appendix defines the format of a logical change record (LCR). The Oracle XML DB must be installed to use the XML schema for LCRs.

The namespace for this schema is the following:

http://xmlns.oracle.com/streams/schemas/lcr 

The schema is the following:

http://xmlns.oracle.com/streams/schemas/lcr/streamslcr.xsd

Definition of the XML Schema for LCRs

The following is the XML schema definition for LCRs:

'<schema xmlns="http://www.w3.org/2001/XMLSchema" 
        targetNamespace="http://xmlns.oracle.com/streams/schemas/lcr" 
        xmlns:lcr="http://xmlns.oracle.com/streams/schemas/lcr"
        xmlns:xdb="http://xmlns.oracle.com/xdb"
          version="1.0"
        elementFormDefault="qualified">
 
  <simpleType name = "short_name">
    <restriction base = "string">
      <maxLength value="30"/>
    </restriction>
  </simpleType>
 
  <simpleType name = "long_name">
    <restriction base = "string">
      <maxLength value="4000"/>
    </restriction>
  </simpleType>
 
  <simpleType name = "db_name">
    <restriction base = "string">
      <maxLength value="128"/>
    </restriction>
  </simpleType>
 
  <!-- Default session parameter is used if format is not specified -->
  <complexType name="datetime_format">
    <sequence>
      <element name = "value" type = "string" nillable="true"/>
      <element name = "format" type = "string" minOccurs="0" nillable="true"/>
    </sequence>
  </complexType>
 
  <complexType name="anydata">
    <choice>
      <element name="varchar2" type = "string" xdb:SQLType="CLOB" 
                                                        nillable="true"/>
 
      <!-- Represent char as varchar2. xdb:CHAR blank pads upto 2000 bytes! -->
      <element name="char" type = "string" xdb:SQLType="CLOB"
                                                        nillable="true"/>
      <element name="nchar" type = "string" xdb:SQLType="NCLOB"
                                                        nillable="true"/>
 
      <element name="nvarchar2" type = "string" xdb:SQLType="NCLOB"
                                                        nillable="true"/>
      <element name="number" type = "double" xdb:SQLType="NUMBER"
                                                        nillable="true"/>
      <element name="raw" type = "hexBinary" xdb:SQLType="BLOB" 
                                                        nillable="true"/>
      <element name="date" type = "lcr:datetime_format"/>
      <element name="timestamp" type = "lcr:datetime_format"/>
      <element name="timestamp_tz" type = "lcr:datetime_format"/>
      <element name="timestamp_ltz" type = "lcr:datetime_format"/>
 
      <!-- Interval YM should be as per format allowed by SQL -->
      <element name="interval_ym" type = "string" nillable="true"/>
 
      <!-- Interval DS should be as per format allowed by SQL -->
      <element name="interval_ds" type = "string" nillable="true"/>
 
      <element name="urowid" type = "string" xdb:SQLType="VARCHAR2"
                                                        nillable="true"/>
    </choice>
  </complexType>
 
  <complexType name="column_value">
    <sequence>
      <element name = "column_name" type = "lcr:long_name" nillable="false"/>
      <element name = "data" type = "lcr:anydata" nillable="false"/>
      <element name = "lob_information" type = "string" minOccurs="0"
                                                           nillable="true"/>
      <element name = "lob_offset" type = "nonNegativeInteger" minOccurs="0"
                                                           nillable="true"/>
      <element name = "lob_operation_size" type = "nonNegativeInteger" 
                                             minOccurs="0" nillable="true"/>
      <element name = "long_information" type = "string" minOccurs="0"
                                                           nillable="true"/>
    </sequence>
  </complexType>
 
  <complexType name="extra_attribute">
    <sequence>
      <element name = "attribute_name" type = "lcr:short_name"/>
      <element name = "attribute_value" type = "lcr:anydata"/>
    </sequence>
  </complexType>
 
  <element name = "ROW_LCR" xdb:defaultTable="">
    <complexType>
      <sequence>
        <element name = "source_database_name" type = "lcr:db_name" 
                                                            nillable="false"/>
        <element name = "command_type" type = "string" nillable="false"/>
        <element name = "object_owner" type = "lcr:short_name" 
                                                            nillable="false"/>
        <element name = "object_name" type = "lcr:short_name"
                                                            nillable="false"/>
        <element name = "tag" type = "hexBinary" xdb:SQLType="RAW" 
                                               minOccurs="0" nillable="true"/>
        <element name = "transaction_id" type = "string" minOccurs="0" 
                                                             nillable="true"/>
        <element name = "scn" type = "double" xdb:SQLType="NUMBER" 
                                               minOccurs="0" nillable="true"/>
        <element name = "old_values" minOccurs = "0">
          <complexType>
            <sequence>
              <element name = "old_value" type="lcr:column_value" 
                                                    maxOccurs = "unbounded"/>
            </sequence>
          </complexType>
        </element>
        <element name = "new_values" minOccurs = "0">
          <complexType>
            <sequence>
              <element name = "new_value" type="lcr:column_value" 
                                                    maxOccurs = "unbounded"/>
            </sequence>
          </complexType>
        </element>
        <element name = "extra_attribute_values" minOccurs = "0">
          <complexType>
            <sequence>
              <element name = "extra_attribute_value"
                       type="lcr:extra_attribute"
                       maxOccurs = "unbounded"/>
            </sequence>
          </complexType>
        </element>
      </sequence>
    </complexType>
  </element>
 
  <element name = "DDL_LCR" xdb:defaultTable="">
    <complexType>
      <sequence>
        <element name = "source_database_name" type = "lcr:db_name" 
                                                        nillable="false"/>
        <element name = "command_type" type = "string" nillable="false"/>
        <element name = "current_schema" type = "lcr:short_name"
                                                        nillable="false"/>
        <element name = "ddl_text" type = "string" xdb:SQLType="CLOB"
                                                        nillable="false"/>
        <element name = "object_type" type = "string"
                                        minOccurs = "0" nillable="true"/>
        <element name = "object_owner" type = "lcr:short_name"
                                        minOccurs = "0" nillable="true"/>
        <element name = "object_name" type = "lcr:short_name"
                                        minOccurs = "0" nillable="true"/>
        <element name = "logon_user" type = "lcr:short_name"
                                        minOccurs = "0" nillable="true"/>
        <element name = "base_table_owner" type = "lcr:short_name"
                                        minOccurs = "0" nillable="true"/>
        <element name = "base_table_name" type = "lcr:short_name"
                                        minOccurs = "0" nillable="true"/>
        <element name = "tag" type = "hexBinary" xdb:SQLType="RAW"
                                        minOccurs = "0" nillable="true"/>
        <element name = "transaction_id" type = "string"
                                        minOccurs = "0" nillable="true"/>
        <element name = "scn" type = "double" xdb:SQLType="NUMBER"
                                        minOccurs = "0" nillable="true"/>
        <element name = "extra_attribute_values" minOccurs = "0">
          <complexType>
            <sequence>
              <element name = "extra_attribute_value"
                       type="lcr:extra_attribute"
                       maxOccurs = "unbounded"/>
            </sequence>
          </complexType>
        </element>
      </sequence>
    </complexType>
  </element>
</schema>';

2 Oracle Streams Information Capture

Capturing information with Oracle Streams means creating a message that contains the information and enqueuing the message into a queue. The captured information can describe a database change, or it can be any other type of information.

The following topics contain conceptual information about capturing information with Oracle Streams:

Ways to Capture Information with Oracle Streams

There are two ways to capture information with Oracle Streams: implicit capture and explicit capture.

Implicit Capture

With implicit capture, data definition language (DDL) and data manipulation language (DML) changes are captured automatically either by a capture process or by synchronous capture. A specific type of message called a logical change record (LCR) describes these database changes. Both a capture process and synchronous capture can filter database changes with user-defined rules. Therefore, only changes to specified objects are captured.

The following topics describe capture processes and synchronous captures:

Capture Processes

A capture process retrieves change data from the redo log, either by mining the online redo log or, if necessary, by mining archived log files. After retrieving the data, the capture process formats it into an LCR and enqueues it for further processing.

A capture process enqueues information about database changes in the form of messages containing LCRs. A message containing an LCR that was originally captured and enqueued by a capture process is called a captured LCR. A capture process always enqueues messages into a buffered queue. A buffered queue is the portion of a queue that uses the Oracle Streams pool to store messages in memory and a queue table to store messages that have spilled from memory.

A capture process is useful in the following situations:

  • When you want to capture changes to a relatively large number of tables

  • When you want to capture changes to schemas or to an entire database

  • When you want to capture DDL changes

  • When you want to capture changes at a database other than the source database using downstream capture
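In these situations, a capture process is typically configured with rules that scope what it captures. The following sketch configures a capture process that captures DML and DDL changes to the hr schema; the capture process name and queue name are hypothetical, and the procedure creates the capture process if it does not already exist:

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name  => 'hr',
    streams_type => 'capture',
    streams_name => 'strm01_capture',
    queue_name   => 'strmadmin.streams_queue',
    include_dml  => TRUE,
    include_ddl  => TRUE);
END;
/
```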

Synchronous Captures

Synchronous capture uses an internal mechanism to capture DML changes immediately after they happen. Synchronous capture enqueues information about DML changes in the form of messages containing row LCRs, and it always enqueues these messages into a persistent queue. A persistent queue is the portion of a queue that stores messages only on hard disk in a queue table, not in memory. The messages captured by a synchronous capture are persistent LCRs.

Synchronous capture is useful in the following situations:

  • For the best performance, when you want to capture DML changes to a relatively small number of tables

  • When you want to capture DML changes to a table immediately after these changes are made
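The following sketch creates a synchronous capture that captures DML changes to a single table; the synchronous capture name and queue name are hypothetical. Note that synchronous capture cannot capture DDL changes:

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.employees',
    streams_type => 'sync_capture',
    streams_name => 'sync_capture',
    queue_name   => 'strmadmin.streams_queue');
END;
/
```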

Explicit Capture

With explicit capture, applications generate messages and enqueue them. These messages can be formatted as LCRs, or they can be formatted into different types of messages for consumption by other applications. Messages can also be enqueued explicitly by an apply process or by an apply handler for an apply process.

Explicit capture is useful in the following situations:

  • When applications generate messages that must be processed by other applications.

  • When you have a heterogeneous replication environment in which an apply process in an Oracle database applies changes that originated at a non-Oracle database. In this case, an application captures LCRs based on the changes at the non-Oracle database, and these LCRs are processed by an apply process at an Oracle database.
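As a sketch of explicit capture, an application can wrap a payload in an ANYDATA instance and enqueue it; the queue name is hypothetical, and the sketch assumes the DBMS_STREAMS_MESSAGING.ENQUEUE procedure with queue_name and payload parameters:

```sql
BEGIN
  -- Wrap an arbitrary user message in an ANYDATA instance and enqueue it.
  DBMS_STREAMS_MESSAGING.ENQUEUE(
    queue_name => 'strmadmin.streams_queue',
    payload    => ANYDATA.ConvertVarchar2('order 1001 shipped'));
  COMMIT;
END;
/
```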




Types of Information Captured with Oracle Streams

The following types of information can be captured with Oracle Streams:

Logical Change Records (LCRs)

An LCR is a message with a specific format that describes a database change. There are two types of LCRs: row LCRs and DDL LCRs. A capture process, a synchronous capture, or an application can capture LCRs.

You can capture the following types of LCRs with Oracle Streams:

  • A captured LCR is an LCR that is captured implicitly by a capture process and enqueued into the buffered queue portion of an ANYDATA queue.

  • A persistent LCR is an LCR that is enqueued into the persistent queue portion of an ANYDATA queue. A persistent LCR can be enqueued in one of the following ways:

    • Captured implicitly by a synchronous capture and enqueued

    • Constructed explicitly by an application and enqueued

    • Dequeued by an apply process and enqueued by the same apply process using the SET_ENQUEUE_DESTINATION procedure in the DBMS_APPLY_ADM package

    The only difference between the persistent LCRs captured in these three ways is that persistent LCRs captured by a synchronous capture have more attributes than those constructed by an application or enqueued by an apply process.

  • A buffered LCR is an LCR that is constructed explicitly by an application and enqueued into the buffered queue portion of an ANYDATA queue.
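The SET_ENQUEUE_DESTINATION procedure mentioned above can be sketched as follows; the rule name and destination queue name are hypothetical:

```sql
BEGIN
  -- Messages that satisfy the named rule are re-enqueued by the
  -- apply process into the specified destination queue.
  DBMS_APPLY_ADM.SET_ENQUEUE_DESTINATION(
    rule_name              => 'strmadmin.employees_rule',
    destination_queue_name => 'strmadmin.streams_queue');
END;
/
```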

The following sections contain information about LCRs:




Row LCRs

A row LCR describes a change to the data in a single row or a change to a single LOB column, LONG column, LONG RAW column, or XMLType stored as CLOB column in a row. The change results from a data manipulation language (DML) statement or a piecewise operation. For example, a single DML statement can insert or merge multiple rows into a table, can update multiple rows in a table, or can delete multiple rows from a table. Applications can also construct LCRs that are enqueued for further processing.

A single DML statement can produce multiple row LCRs. That is, a capture process creates a row LCR for each row that is changed by the DML statement. In addition, an update to a LOB, LONG, LONG RAW, or XMLType stored as CLOB column in a single row can result in more than one row LCR.
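For example, assuming the standard hr sample schema, a single statement that updates every row in one department generates one row LCR per updated row:

```sql
-- If three rows in department 10 match, this single statement
-- results in three row LCRs, one for each changed row.
UPDATE hr.employees SET salary = salary * 1.1 WHERE department_id = 10;
COMMIT;
```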

Each row LCR is encapsulated in an object of LCR$_ROW_RECORD type. Table 2-1 describes the attributes that are present in each row LCR.

Table 2-1 Attributes Present in All Row LCRs

Attribute | Description

source_database_name

The name of the source database where the row change occurred.

command_type

The type of DML statement that produced the change: INSERT, UPDATE, DELETE, LOB ERASE, LOB WRITE, or LOB TRIM.

object_owner

The schema name that contains the table with the changed row.

object_name

The name of the table that contains the changed row.

tag

A raw tag that you can use to track the LCR.

transaction_id

The identifier of the transaction in which the DML statement was run.

scn

The system change number (SCN) at the time when the change was made.

old_values

The old column values related to the change. These are the column values for the row before the DML change. If the type of the DML statement is UPDATE or DELETE, then these old values include some or all of the columns in the changed row before the DML statement. If the type of the DML statement is INSERT, then there are no old values. For UPDATE and DELETE statements, row LCRs created by a capture process can include some or all of the old column values in the row, but row LCRs created by a synchronous capture always contain all of the old column values in the row.

new_values

The new column values related to the change. These are the column values for the row after the DML change. If the type of the DML statement is UPDATE or INSERT, then these new values include some or all of the columns in the changed row after the DML statement. If the type of the DML statement is DELETE, then there are no new values. For UPDATE and INSERT statements, row LCRs created by a capture process can include some or all of the new column values in the row, but row LCRs created by a synchronous capture always contain all of the new column values in the row.

position

A unique identifier of RAW data type for each LCR. The position is strictly increasing within a transaction and across transactions.

LCR position is commonly used in XStream configurations.

See Oracle Database XStream Guide.


Row LCRs that were captured by a capture process or a synchronous capture contain additional attributes. Table 2-2 describes these additional attributes. This table also shows whether the attribute is present in row LCRs captured by capture processes and row LCRs captured by synchronous captures. These attributes are not present in explicitly captured row LCRs.

Table 2-2 Additional Attributes in Captured Row LCRs

Attribute | Description | In Capture Process Row LCRs? | In Synchronous Capture Row LCRs?

commit_scn

The commit system change number (SCN) of the transaction to which the LCR belongs.

Yes

No

commit_scn_from_position

The commit system change number (SCN) of a transaction determined by the input position, which is generated by an XStream outbound server.

Yes

No

commit_time

The commit time of the transaction to which the LCR belongs.

Yes

No

compatible

The minimal database compatibility required to support the LCR.

Yes

Yes

instance_number

The instance number of the database instance that made the change that is encapsulated in the LCR. Typically, the instance number is relevant in an Oracle Real Application Clusters (Oracle RAC) configuration.

Yes

Yes

lob_information

The LOB information for the column, such as NOT_A_LOB or LOB_CHUNK.

Yes

No

lob_offset

The LOB offset for the specified column in the number of characters for CLOB columns and the number of bytes for BLOB columns.

Yes

No

lob_operation_size

The operation size for the LOB column in the number of characters for CLOB columns and the number of bytes for BLOB columns.

Yes

No

long_information

The LONG information for the column, such as NOT_A_LONG or LONG_CHUNK.

Yes

No

row_text

The SQL statement for the change that is encapsulated in the row LCR.

Yes

Yes

scn_from_position

The commit system change number (SCN) of a transaction determined by the input position, which is generated by an XStream outbound server.

Yes

No

source_time

The time when the change in an LCR captured by a capture process was generated in the redo log of the source database, or the time when a persistent LCR was created.

Yes

Yes

xml_information

The XML information for the column, such as NOT_XML, XML_DOC, or XML_DIFF.

Yes

No


A row LCR captured by a capture process or synchronous capture can also contain transaction control statements. These row LCRs contain transaction control directives such as COMMIT and ROLLBACK. Such row LCRs are internal and are used by an apply process to maintain transaction consistency between a source database and a destination database.

DDL LCRs

A DDL LCR describes a data definition language (DDL) change. A DDL statement changes the structure of the database. For example, a DDL statement can create, alter, or drop a database object.

Each DDL LCR is encapsulated in an object of LCR$_DDL_RECORD type. Table 2-3 describes the attributes that are present in each DDL LCR.

Table 2-3 Attributes Present in All DDL LCRs

Attribute | Description

source_database_name

The name of the source database where the DDL change occurred.

command_type

The type of DDL statement that produced the change, for example ALTER TABLE or CREATE INDEX.

object_owner

The schema name of the user who owns the database object on which the DDL statement was run.

object_name

The name of the database object on which the DDL statement was run.

object_type

The type of database object on which the DDL statement was run, for example TABLE or PACKAGE.

ddl_text

The text of the DDL statement.

logon_user

The logon user, which is the user whose session executed the DDL statement.

current_schema

The schema that is used if no schema is specified for an object in the DDL text.

base_table_owner

The base table owner. If the DDL statement is dependent on a table, then the base table owner is the owner of the table on which it is dependent.

base_table_name

The base table name. If the DDL statement is dependent on a table, then the base table name is the name of the table on which it is dependent.

tag

A raw tag that you can use to track the LCR.

transaction_id

The identifier of the transaction in which the DDL statement was run.

scn

The system change number (SCN) at the time when the change was made.

position

A unique identifier of RAW data type for each LCR. The position is strictly increasing within a transaction and across transactions.

LCR position is commonly used in XStream configurations.

See Oracle Database XStream Guide.

edition_name

The name of the edition in which the DDL statement was executed.


DDL LCRs that were captured by a capture process contain additional attributes. Table 2-4 describes these additional attributes. Synchronous captures cannot capture DDL changes, and these attributes are not present in explicitly captured DDL LCRs.

Table 2-4 Additional Attributes in Captured DDL LCRs

Attribute | Description

commit_scn

The commit system change number (SCN) of the transaction to which the LCR belongs.

commit_scn_from_position

The commit system change number (SCN) of a transaction determined by the input position, which is generated by an XStream outbound server.

commit_time

The commit time of the transaction to which the LCR belongs.

compatible

The minimal database compatibility required to support the LCR.

instance_number

The instance number of the database instance that made the change that is encapsulated in the LCR. Typically, the instance number is relevant in an Oracle Real Application Clusters (Oracle RAC) configuration.

scn_from_position

The commit system change number (SCN) of a transaction determined by the input position, which is generated by an XStream outbound server.

source_time

The time when the change in an LCR captured by a capture process was generated in the redo log of the source database, or the time when a persistent LCR was created.



Note:

Both row LCRs and DDL LCRs contain the source database name of the database where a change originated. If captured LCRs will be propagated by a propagation or applied by an apply process, then, to avoid propagation and apply problems, Oracle recommends that you do not rename the source database after a capture process has started capturing changes.




Extra Information in LCRs

In addition to the information discussed in the previous sections, row LCRs and DDL LCRs optionally can include the extra information (or LCR attributes) described in Table 2-5.

Table 2-5 Extra Attributes in LCRs

Attribute | Description

row_id

The rowid of the row changed in a row LCR. This attribute is not included in DDL LCRs or row LCRs for index-organized tables.

serial#

The serial number of the session that performed the change captured in the LCR.

session#

The identifier of the session that performed the change captured in the LCR.

thread#

The thread number of the instance in which the change captured in the LCR was performed. Typically, the thread number is relevant only in an Oracle Real Application Clusters (Oracle RAC) environment.

tx_name

The name of the transaction that includes the LCR.

username

The name of the current user who performed the change captured in the LCR.


You can use the INCLUDE_EXTRA_ATTRIBUTE procedure in the DBMS_CAPTURE_ADM package to instruct a capture process or synchronous capture to capture one or more extra attributes.
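For example, the following sketch instructs a capture process (the capture process name is hypothetical) to capture the username extra attribute:

```sql
BEGIN
  DBMS_CAPTURE_ADM.INCLUDE_EXTRA_ATTRIBUTE(
    capture_name   => 'strm01_capture',
    attribute_name => 'username',
    include        => TRUE);
END;
/
```

Setting include to FALSE instructs the capture process to stop capturing the attribute.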

User Messages

Messages that do not contain LCRs are called user messages. User messages can be of any type (except an LCR type). User messages can be created by an application and consumed by an application. For example, a business application might create a user message for each order, and these messages might be processed by another application.

You can capture the following types of user messages with Oracle Streams:

  • A persistent user message is a non-LCR message of a user-defined type that is enqueued into a persistent queue. A persistent user message can be enqueued in one of the following ways:

    • Created explicitly by an application and enqueued

    • Dequeued by an apply process and enqueued by the same apply process using the SET_ENQUEUE_DESTINATION procedure in the DBMS_APPLY_ADM package

    A persistent user message can be enqueued into the persistent queue portion of an ANYDATA queue or a typed queue.

  • A buffered user message is a non-LCR message of a user-defined type that is created explicitly by an application and enqueued into a buffered queue. A buffered user message can be enqueued into the buffered queue portion of an ANYDATA queue or a typed queue.


Note:

Capture processes and synchronous captures never capture user messages.




Summary of Information Capture Options with Oracle Streams

Table 2-6 summarizes the capture options available with Oracle Streams.

Table 2-6 Information Capture Options with Oracle Streams

Capture Type | Capture Mechanism | Message Types | Enqueued Into | Use When

Implicit Capture with an Oracle Streams Capture Process


Mining of Redo Log

Captured LCRs

Buffered Queue

You want to capture changes to many tables.

You want to capture changes to schemas or an entire database.

You want to capture DDL changes.

You want to capture changes at a downstream database.

Implicit Capture with Synchronous Capture


Internal Mechanism

Persistent LCRs

Persistent Queue

You want to capture DML changes to a small number of tables.

You want to capture DML changes immediately after they occur.

Explicit Capture by Applications


Manual Message Creation and Enqueue

Buffered LCRs

Persistent LCRs

Buffered User Messages

Persistent User Messages

Buffered Queue or Persistent Queue

You want to capture user messages that will be consumed by applications.

You want to capture LCRs in a heterogeneous replication environment.

You want to construct LCRs by using an application instead of by using a capture process or a synchronous capture.



Note:

A single database can use any combination of the capture options summarized in the table.

Instantiation in an Oracle Streams Environment

An Oracle Streams environment can share a database object within a single database or between multiple databases. In an Oracle Streams environment that shares database objects and uses implicit capture to capture changes to the database object, the source database is the database where the change originated. The source database is one of the following depending on the type of implicit capture used:

  • If a capture process captures changes, then the source database is the database where changes to the object are generated in the redo log.

  • If synchronous capture captures changes, then the source database is the database where synchronous capture is configured.

After changes are captured, they can be applied locally or propagated to other databases and applied at destination databases.

In an Oracle Streams environment that shares database objects, you must instantiate the shared source database objects before changes to them can be dequeued and processed by an apply process. If a database where changes to the source database objects will be applied is a different database than the source database, then the destination database must have a copy of these database objects.

In Oracle Streams, the following general steps instantiate a database object:

  1. Prepare the object for instantiation at the source database.

  2. If a copy of the object does not exist at the destination database, then create an object physically at the destination database based on an object at the source database. You can use export/import, transportable tablespaces, or RMAN to copy database objects for instantiation. If the database objects already exist at the destination database, then this step is not necessary.

  3. Set the instantiation SCN for the database object at the destination database. An instantiation system change number (SCN) instructs an apply process at the destination database to apply only changes that committed at the source database after the specified SCN.

In some cases, Step 1 and Step 3 are completed automatically. For example, when you add rules for an object to the positive rule set for a capture process by running a procedure in the DBMS_STREAMS_ADM package, the procedure prepares the object for instantiation automatically. Also, when you use export/import or transportable tablespaces to copy database objects from a source database to a destination database, instantiation SCNs can be set for these objects automatically during import. Instantiation is required whenever an apply process dequeues captured LCRs, even if the apply process sends the LCRs to an apply handler that does not execute them.
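The instantiation steps above can be sketched in PL/SQL as follows. This is a hedged illustration only: the table name hr.employees, the database link source_db, and the global name source_db.example.com are assumptions, not part of this chapter.

```sql
-- At the source database: prepare the shared table for instantiation
-- (Step 1; also done automatically by DBMS_STREAMS_ADM procedures
-- that add rules for the object).
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name => 'hr.employees');
END;
/

-- At the destination database: set the instantiation SCN (Step 3) so
-- that the apply process applies only changes committed after this SCN.
-- The database link and global name here are illustrative assumptions.
DECLARE
  iscn NUMBER;
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@source_db;
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'hr.employees',
    source_database_name => 'source_db.example.com',
    instantiation_scn    => iscn);
END;
/
```

Step 2 (copying the object) is typically performed separately with Data Pump export/import, which can also set the instantiation SCN automatically during import.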


See Also:


Implicit Capture with an Oracle Streams Capture Process

This section explains the concepts related to the Oracle Streams capture process.

This section contains these topics:

Introduction to Capture Processes

Every Oracle database has a set of two or more redo log files. The redo log files for a database are collectively known as the database redo log. The primary function of the redo log is to record all of the changes made to the database.

Redo logs are used to guarantee recoverability in the event of human error or media failure. A capture process is an optional Oracle background process that scans the database redo log to capture data manipulation language (DML) and data definition language (DDL) changes made to database objects. When a capture process is configured to capture changes from a redo log, the database where the changes were generated is called the source database for the capture process.

When a capture process captures a database change, it converts it into a specific message format called a logical change record (LCR). After capturing an LCR, a capture process enqueues a message containing the LCR into a queue. A capture process is always associated with a single ANYDATA queue, and it enqueues messages into this queue only. For improved performance, captured LCRs always are stored in a buffered queue, which is System Global Area (SGA) memory associated with a queue. You can create multiple queues and associate a different capture process with each queue.

Captured LCRs can be sent to queues in the same database or other databases by propagations. Captured LCRs can also be dequeued by apply processes. In some situations, an optimization enables capture processes to send LCRs to apply processes more efficiently. This optimization is called combined capture and apply.

A capture process can run on its source database or on a remote database. When a capture process runs on its source database, the capture process is a local capture process. When a capture process runs on a remote database, the capture process is a downstream capture process, and the remote database is called the downstream database.

Figure 2-1 shows a capture process capturing LCRs.

Figure 2-1 Capture Process

Description of Figure 2-1 follows
Description of "Figure 2-1 Capture Process"


Note:

  • A capture process can be associated only with an ANYDATA queue, not with a typed queue.

  • A capture process and a synchronous capture should not capture changes made to the same table.


Capture Process Rules

A capture process either captures or discards changes based on rules that you define. Each rule specifies the database objects and types of changes for which the rule evaluates to TRUE. You can place these rules in a positive rule set or negative rule set for the capture process.

If a rule evaluates to TRUE for a change, and the rule is in the positive rule set for a capture process, then the capture process captures the change. If a rule evaluates to TRUE for a change, and the rule is in the negative rule set for a capture process, then the capture process discards the change. If a capture process has both a positive and a negative rule set, then the negative rule set is always evaluated first.

You can specify capture process rules at the following levels:

  • A table rule captures or discards either row changes resulting from DML changes or DDL changes to a particular table. Subset rules are table rules that include a subset of the row changes to a particular table.

  • A schema rule captures or discards either row changes resulting from DML changes or DDL changes to the database objects in a particular schema.

  • A global rule captures or discards either all row changes resulting from DML changes or all DDL changes in the database.
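As a hedged sketch of a table-level rule, the following call adds a DML table rule to the positive rule set of a capture process; the capture process, queue, and table names are assumptions used only for illustration:

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'hr.employees',
    streams_type   => 'capture',
    streams_name   => 'strm01_capture',
    queue_name     => 'strmadmin.strm01_queue',
    include_dml    => TRUE,   -- capture row changes resulting from DML
    include_ddl    => FALSE,  -- do not capture DDL changes
    inclusion_rule => TRUE);  -- add the rule to the positive rule set
END;
/
```

Setting inclusion_rule to FALSE instead would add the rule to the negative rule set, causing matching changes to be discarded.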


Note:

The capture process does not capture certain types of changes and changes to certain data types in table columns. Also, a capture process never captures changes in the SYS, SYSTEM, or CTXSYS schemas.

Data Types Captured by Capture Processes

When capturing the row changes resulting from DML changes made to tables, a capture process can capture changes made to columns of the following data types:

  • VARCHAR2

  • NVARCHAR2

  • FLOAT

  • NUMBER

  • LONG

  • DATE

  • BINARY_FLOAT

  • BINARY_DOUBLE

  • TIMESTAMP

  • TIMESTAMP WITH TIME ZONE

  • TIMESTAMP WITH LOCAL TIME ZONE

  • INTERVAL YEAR TO MONTH

  • INTERVAL DAY TO SECOND

  • RAW

  • LONG RAW

  • CHAR

  • NCHAR

  • UROWID

  • CLOB with BASICFILE or SECUREFILE storage

  • NCLOB with BASICFILE or SECUREFILE storage

  • BLOB with BASICFILE or SECUREFILE storage

  • XMLType stored as CLOB


Note:

  • Some of these data types might not be supported by Oracle Streams in earlier releases of Oracle Database. If your Oracle Streams environment includes one or more databases from an earlier release, then ensure that row LCRs do not flow into a database that does not support all of the data types in the row LCRs. See the Oracle Streams documentation for the earlier release for information about supported data types.

  • Capture processes can capture changes to SecureFile LOB columns only if the database compatibility level is set to 11.2.0 or higher.



See Also:


Types of DML Changes Captured by Capture Processes

When you specify that DML changes made to certain tables should be captured, a capture process captures the following types of DML changes made to these tables:

  • INSERT

  • UPDATE

  • DELETE

  • MERGE

  • Piecewise operations

A capture process converts each MERGE change into an INSERT or UPDATE change. MERGE is not a valid command type in a row LCR.


See Also:


Supplemental Logging in an Oracle Streams Environment

Supplemental logging places additional column data into a redo log whenever an operation is performed. A capture process captures this additional information and places it in LCRs. Supplemental logging is always configured at the source database, regardless of the location of the capture process that captures changes to the source database.

Typically, supplemental logging is required in Oracle Streams replication environments. In these environments, an apply process needs the additional information in the LCRs to properly apply changes that are replicated from a source database to a destination database. However, supplemental logging can also be required in environments where changes are not applied to database objects directly by an apply process. In such environments, an apply handler can process the changes without applying them to the database objects, and the supplemental information might be needed by the apply handlers.
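Supplemental logging is enabled with SQL statements such as the following sketch; the table, log group, and column names are assumptions:

```sql
-- Database-level supplemental logging of primary key columns
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

-- Table-level supplemental logging of specific columns
-- (hypothetical log group, table, and column names)
ALTER TABLE hr.employees
  ADD SUPPLEMENTAL LOG GROUP log_group_emp (employee_id, salary) ALWAYS;
```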


See Also:


Local Capture and Downstream Capture

You can configure a capture process to run locally on a source database or remotely on a downstream database. A single database can have one or more capture processes that capture local changes and other capture processes that capture changes from a remote source database. That is, you can configure a single database to perform both local capture and downstream capture.

Local Capture

Local capture means that a capture process runs on the source database. Figure 2-1 shows a database using local capture.

The Source Database Performs All Change Capture Actions

If you configure local capture, then the following actions are performed at the source database:

  • The DBMS_CAPTURE_ADM.BUILD procedure is run to extract (or build) the data dictionary to the redo log.

  • Supplemental logging at the source database places additional information in the redo log. This information might be needed when captured changes are applied by an apply process.

  • The first time a capture process is started at the database, Oracle Database uses the extracted data dictionary information in the redo log to create a LogMiner data dictionary, which is separate from the primary data dictionary for the source database. Additional capture processes can use this existing LogMiner data dictionary, or they can create new LogMiner data dictionaries.

  • A capture process scans the redo log for changes using LogMiner.

  • The rules engine evaluates changes based on the rules in one or more of the capture process rule sets.

  • The capture process enqueues changes that satisfy the rules in its rule sets into a local ANYDATA queue.

  • If the captured changes are shared with one or more other databases, then one or more propagations propagate these changes from the source database to the other databases.

  • If database objects at the source database must be instantiated at a destination database, then the objects must be prepared for instantiation, and a mechanism such as an Export utility must be used to make a copy of the database objects.
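A local capture process that performs the actions above might be created with a call like the following hedged sketch; the queue and capture names are assumptions, and in many configurations the capture process is instead created implicitly by DBMS_STREAMS_ADM procedures:

```sql
-- Hedged sketch: create a local capture process. Leaving source_database
-- at its default makes this a local capture process. Rules can then be
-- added with DBMS_STREAMS_ADM.ADD_TABLE_RULES and similar procedures.
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name   => 'strmadmin.strm01_queue',
    capture_name => 'strm01_capture');
END;
/
```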

Advantages of Local Capture

The following are the advantages of using local capture:

  • Configuration and administration of the capture process is simpler than when downstream capture is used. When you use local capture, you do not need to configure redo data copying to a downstream database, and you administer the capture process locally at the database where the captured changes originated.

  • A local capture process can scan changes in the online redo log before the database writes these changes to an archived redo log file. When you use an archived-log downstream capture process, archived redo log files are copied to the downstream database after the source database has finished writing changes to them, and some time is required to copy the redo log files to the downstream database. However, a real-time downstream capture process can capture changes in the online redo log sent from the source database.

  • The amount of data being sent over the network is reduced, because the redo data is not copied to the downstream database. Even if captured LCRs are propagated to other databases, the captured LCRs can be a subset of the total changes made to the database, and only the LCRs that satisfy the rules in the rule sets for a propagation are propagated.

  • Security might be improved because only the source (local) database can access the redo data. For example, if the capture process captures changes in the hr schema only, then, when you use local capture, only the source database can access the redo data to enqueue changes to the hr schema into the capture process queue. However, when you use downstream capture, the redo data is copied to the downstream database, and the redo data contains all of the changes made to the database, not just the changes made to the hr schema.

  • Some types of custom rule-based transformations are simpler to configure if the capture process is running at the local source database. For example, if you use local capture, then a custom rule-based transformation can use cached information in a PL/SQL session variable which is populated with data stored at the source database.

  • In an Oracle Streams environment where messages are captured and applied in the same database, it might be simpler, and use fewer resources, to configure local queries and computations that require information about captured changes and the local data.

Downstream Capture

Downstream capture means that a capture process runs on a database other than the source database. The following types of downstream capture configurations are possible: real-time downstream capture and archived-log downstream capture. The downstream_real_time_mine capture process parameter controls whether a downstream capture process performs real-time downstream capture or archived-log downstream capture. A real-time downstream capture process and one or more archived-log downstream capture processes can coexist at a downstream database.
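For example, an existing downstream capture process can be switched to real-time mining by setting this parameter; the capture process name below is an assumption:

```sql
-- Enable real-time downstream capture for an existing downstream
-- capture process (capture process name is illustrative)
BEGIN
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'strm02_capture',
    parameter    => 'downstream_real_time_mine',
    value        => 'Y');
END;
/
```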


Note:

  • References to "downstream capture processes" in this document apply to both real-time downstream capture processes and archived-log downstream capture processes. This document distinguishes between the two types of downstream capture processes when necessary.

  • A downstream capture process can capture changes from only a single source database. However, multiple downstream capture processes at a single downstream database can capture changes from a single source database or from multiple source databases.

  • To configure downstream capture, the source database must be an Oracle Database 10g Release 1 or later database.


Real-Time Downstream Capture

A real-time downstream capture configuration works in the following way:

  • Redo transport services use the log writer process (LGWR) at the source database to send redo data to the downstream database either synchronously or asynchronously. At the same time, the LGWR records redo data in the online redo log at the source database.

  • A remote file server process (RFS) at the downstream database receives the redo data over the network and stores the redo data in the standby redo log.

  • A log switch at the source database causes a log switch at the downstream database, and the ARCHn process at the downstream database archives the current standby redo log file.

  • The real-time downstream capture process captures changes from the standby redo log whenever possible and from the archived standby redo log files whenever necessary. A capture process can capture changes in the archived standby redo log files if it falls behind. When it catches up, it resumes capturing changes from the standby redo log.
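The redo transport step above is configured at the source database with initialization parameters along these lines. This is a hedged sketch: the service name DOWNSTRM, destination number 2, and DB_UNIQUE_NAME are assumptions specific to this illustration:

```sql
-- At the source database: ship redo to the downstream database service
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
  'SERVICE=DOWNSTRM ASYNC NOREGISTER
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=downstrm';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;
```

The downstream database must also have standby redo log files configured to receive this redo data.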

Figure 2-2 Real-Time Downstream Capture

Description of Figure 2-2 follows
Description of "Figure 2-2 Real-Time Downstream Capture"

The advantage of real-time downstream capture over archived-log downstream capture is that real-time downstream capture reduces the amount of time required to capture changes made at the source database. The time is reduced because the real-time downstream capture process does not need to wait for the redo log file to be archived before it can capture data from it.


Note:

You can configure more than one real-time downstream capture process that captures changes from the same source database, but you cannot configure real-time downstream capture for multiple source databases at one downstream database.

Archived-Log Downstream Capture

An archived-log downstream capture configuration means that archived redo log files from the source database are copied to the downstream database, and the capture process captures changes in these archived redo log files. You can copy the archived redo log files to the downstream database using redo transport services, the DBMS_FILE_TRANSFER package, file transfer protocol (FTP), or some other mechanism.

Figure 2-3 Archived-Log Downstream Capture

Description of Figure 2-3 follows
Description of "Figure 2-3 Archived-Log Downstream Capture"

The advantage of archived-log downstream capture over real-time downstream capture is that archived-log downstream capture allows downstream capture processes from multiple source databases at a downstream database. You can copy redo log files from multiple source databases to a single downstream database and configure multiple archived-log downstream capture processes to capture changes in these redo log files.
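When the DBMS_FILE_TRANSFER package is used as the copy mechanism, a single archived redo log file might be copied with a call like this hedged sketch; the directory objects, file name, and database link are assumptions:

```sql
-- Copy one archived redo log file from the source database to the
-- downstream database (all names here are illustrative)
BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'SOURCE_ARCH_DIR',
    source_file_name             => 'arch_1_100_123456789.arc',
    destination_directory_object => 'DOWNSTREAM_ARCH_DIR',
    destination_file_name        => 'arch_1_100_123456789.arc',
    destination_database         => 'downstrm.example.com');
END;
/
```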


See Also:

Oracle Data Guard Concepts and Administration for more information about redo transport services

The Downstream Database Performs Most Change Capture Actions

If you configure either real-time or archived-log downstream capture, then the following actions are performed at the downstream database:

  • The first time a downstream capture process is started at the downstream database, Oracle Database uses data dictionary information in the redo data from the source database to create a LogMiner data dictionary at the downstream database. The DBMS_CAPTURE_ADM.BUILD procedure is run at the source database to extract the source data dictionary information to the redo log at the source database. Next, the redo data is copied to the downstream database from the source database. Additional downstream capture processes for the same source database can use this existing LogMiner data dictionary, or they can create new LogMiner data dictionaries. Also, a real-time downstream capture process can share a LogMiner data dictionary with one or more archived-log downstream capture processes.

  • A capture process scans the redo data from the source database for changes using LogMiner.

  • The rules engine evaluates changes based on the rules in one or more of the capture process rule sets.

  • The capture process enqueues changes that satisfy the rules in its rule sets into a local ANYDATA queue. The capture process formats the changes as LCRs.

  • If the captured LCRs are shared with one or more other databases, then one or more propagations propagate these LCRs from the downstream database to the other databases.

In a downstream capture configuration, the following actions are performed at the source database:

  • The DBMS_CAPTURE_ADM.BUILD procedure is run at the source database to extract the data dictionary to the redo log.

  • Supplemental logging at the source database places additional information that might be needed for apply in the redo log.

  • If database objects at the source database must be instantiated at other databases in the environment, then the objects must be prepared for instantiation, and a mechanism such as an Export utility must be used to make a copy of the database objects.

In addition, the redo data must be copied from the computer system running the source database to the computer system running the downstream database. In a real-time downstream capture configuration, redo transport services use LGWR to send redo data to the downstream database. Typically, in an archived-log downstream capture configuration, redo transport services copy the archived redo log files to the downstream database.


See Also:

Chapter 5, "How Rules Are Used in Oracle Streams" for more information about rule sets for Oracle Streams clients and for information about how messages satisfy rule sets

Advantages of Downstream Capture

The following are the advantages of using downstream capture:

  • Capturing changes uses fewer resources at the source database because the downstream database performs most of the required work.

  • If you plan to capture changes originating at multiple source databases, then capture process administration can be simplified by running multiple archived-log downstream capture processes with different source databases at one downstream database. That is, one downstream database can act as the central location for change capture from multiple sources. In such a configuration, one real-time downstream capture process can run at the downstream database in addition to the archived-log downstream capture processes.

  • Copying redo data to one or more downstream databases provides improved protection against data loss. For example, redo log files at the downstream database can be used for recovery of the source database in some situations.

  • The ability to configure at one or more downstream databases multiple capture processes that capture changes from a single source database provides more flexibility and can improve scalability.

Optional Database Link From the Downstream Database to the Source Database

When you create or alter a downstream capture process, you can optionally specify the use of a database link from the downstream database to the source database. This database link must have the same name as the global name of the source database. Such a database link simplifies the creation and administration of a downstream capture process. You specify that a downstream capture process uses a database link by setting the use_database_link parameter to TRUE when you run the CREATE_CAPTURE or ALTER_CAPTURE procedure on the downstream capture process.

When a downstream capture process uses a database link to the source database, the capture process connects to the source database to perform the following administrative actions automatically:

  • In certain situations, runs the DBMS_CAPTURE_ADM.BUILD procedure at the source database to extract the data dictionary at the source database to the redo log when a capture process is created.

  • Prepares source database objects for instantiation.

  • Obtains the first SCN for the downstream capture process if the first system change number (SCN) is not specified during capture process creation. The first SCN is needed to create a capture process.

If a downstream capture process does not use a database link, then you must perform these actions manually.
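Putting the pieces together, a downstream capture process that uses a database link might be created as in this hedged sketch; the link name must match the global name of the source database, and all names and the password are assumptions:

```sql
-- At the downstream database, as the Oracle Streams administrator
CREATE DATABASE LINK source_db.example.com
  CONNECT TO strmadmin IDENTIFIED BY password
  USING 'source_db';

-- first_scn => NULL requires use_database_link => TRUE, because the
-- capture process must connect to the source database to obtain it
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name        => 'strmadmin.strm02_queue',
    capture_name      => 'strm02_capture',
    source_database   => 'source_db.example.com',
    use_database_link => TRUE,
    first_scn         => NULL);
END;
/
```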


Note:

During the creation of a downstream capture process, if the first_scn parameter is set to NULL in the CREATE_CAPTURE procedure, then the use_database_link parameter must be set to TRUE. Otherwise, an error is raised.


See Also:

Oracle Streams Replication Administrator's Guide for information about when the DBMS_CAPTURE_ADM.BUILD procedure is run automatically during capture process creation if the downstream capture process uses a database link

Operational Requirements for Downstream Capture

The following are operational requirements for using downstream capture:

  • The source database must be running Oracle Database 10g Release 1 or later, and the downstream capture database must be running the same release of Oracle Database as the source database or a later release.

  • The downstream database must be running Oracle Database 10g Release 2 or later to configure real-time downstream capture. In this case, the source database must be running Oracle Database 10g Release 1 or later.

  • The operating system on the source and downstream capture sites must be the same, but the operating system release does not need to be the same. In addition, the downstream sites can use a different directory structure than the source site.

  • The hardware architecture on the source and downstream capture sites must be the same. For example, a downstream capture configuration with a source database on a 32-bit Sun system must have a downstream database that is configured on a 32-bit Sun system. Other hardware elements, such as the number of CPUs, memory size, and storage configuration, can be different between the source and downstream sites.

SCN Values Related to a Capture Process

This section describes system change number (SCN) values that are important for a capture process. You can query the DBA_CAPTURE data dictionary view to display these values for one or more capture processes.

Captured SCN and Applied SCN

The captured SCN is the SCN that corresponds to the most recent change scanned in the redo log by a capture process. The applied SCN for a capture process is the SCN of the most recent message dequeued by the relevant apply processes. All messages lower than this SCN have been dequeued by all apply processes that apply changes captured by the capture process. The applied SCN for a capture process is equivalent to the low-watermark SCN for an apply process that applies changes captured by the capture process.
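These SCN values can be displayed with a query such as the following, in the style of the other queries in this chapter:

```sql
COLUMN CAPTURE_NAME HEADING 'Capture Name' FORMAT A15
COLUMN CAPTURED_SCN HEADING 'Captured SCN' FORMAT 999999999
COLUMN APPLIED_SCN  HEADING 'Applied SCN'  FORMAT 999999999

SELECT CAPTURE_NAME, CAPTURED_SCN, APPLIED_SCN
  FROM DBA_CAPTURE;
```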

First SCN and Start SCN

The following sections describe the first SCN and start SCN for a capture process:

First SCN

The first SCN is the lowest SCN in the redo log from which a capture process can capture changes. If you specify a first SCN during capture process creation, then the database must be able to access redo data from the SCN specified and higher.

The DBMS_CAPTURE_ADM.BUILD procedure extracts the source database data dictionary to the redo log. When you create a capture process, you can specify a first SCN that corresponds to this data dictionary build in the redo log. Specifically, the first SCN for the capture process being created can be set to any value returned by the following query:

COLUMN FIRST_CHANGE# HEADING 'First SCN' FORMAT 999999999
COLUMN NAME HEADING 'Log File Name' FORMAT A50

SELECT DISTINCT FIRST_CHANGE#, NAME FROM V$ARCHIVED_LOG
  WHERE DICTIONARY_BEGIN = 'YES';

The value returned for the NAME column is the name of the redo log file that contains the SCN corresponding to the first SCN. This redo log file, and all subsequent redo log files, must be available to the capture process. If this query returns multiple distinct values for FIRST_CHANGE#, then the DBMS_CAPTURE_ADM.BUILD procedure has been run more than once on the source database. In this case, choose the first SCN value that is most appropriate for the capture process you are creating.

In some cases, the DBMS_CAPTURE_ADM.BUILD procedure is run automatically when a capture process is created. When this happens, the first SCN for the capture process corresponds to this data dictionary build.

Start SCN

The start SCN is the SCN from which a capture process begins to capture changes. You can specify a start SCN that is different than the first SCN during capture process creation, or you can alter a capture process to set its start SCN. The start SCN does not need to be modified for normal operation of a capture process. Typically, you reset the start SCN for a capture process if point-in-time recovery must be performed on one of the destination databases that receive changes from the capture process. In these cases, the capture process can capture the changes made at the source database after the point-in-time of the recovery.


Note:

An existing capture process must be stopped before setting its start SCN.
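Resetting the start SCN therefore follows a stop-alter-start pattern, as in this hedged sketch; the capture process name and SCN value are assumptions:

```sql
-- Stop the capture process, set a new start SCN, and restart it
BEGIN
  DBMS_CAPTURE_ADM.STOP_CAPTURE(capture_name => 'strm01_capture');
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name => 'strm01_capture',
    start_scn    => 750338);  -- illustrative SCN value
  DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'strm01_capture');
END;
/
```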

Start SCN Must Be Greater Than or Equal to First SCN

If you specify a start SCN when you create or alter a capture process, then the start SCN specified must be greater than or equal to the first SCN for the capture process. A capture process always scans any unscanned redo log records that have higher SCN values than the first SCN, even if the redo log records have lower SCN values than the start SCN. So, if you specify a start SCN that is greater than the first SCN, then the capture process might scan redo log records for which it cannot capture changes, because these redo log records have a lower SCN than the start SCN.

Scanning redo log records before the start SCN should be avoided if possible because it can take some time. Therefore, Oracle recommends that the difference between the first SCN and start SCN be as small as possible during capture process creation to keep the initial capture process startup time to a minimum.


Caution:

When a capture process is started or restarted, it might need to scan redo log files with a FIRST_CHANGE# value that is lower than the start SCN. Removing required redo log files before they are scanned by a capture process causes the capture process to abort. You can query the DBA_CAPTURE data dictionary view to determine the first SCN, start SCN, and required checkpoint SCN for a capture process. A capture process needs the redo log file that includes the required checkpoint SCN, and all subsequent redo log files.


See Also:


A Start SCN Setting That Is Before Preparation for Instantiation

If you want to capture changes to a database object and apply these changes using an apply process, then only changes that occurred after the database object has been prepared for instantiation can be applied. Therefore, if you set the start SCN for a capture process lower than the SCN that corresponds to the time when a database object was prepared for instantiation, then any captured changes to this database object before the prepare SCN cannot be applied by an apply process.

This limitation can be important during capture process creation. If a database object was never prepared for instantiation before the time of capture process creation, then an apply process cannot apply any captured changes to the object from a time before capture process creation time.

In some cases, database objects might have been prepared for instantiation before a new capture process is created. For example, if you want to create a capture process for a source database whose changes are already being captured by one or more existing capture processes, then some or all of the database objects might have been prepared for instantiation before the new capture process is created. If you want to capture changes to a certain database object with a new capture process from a time before the new capture process was created, then the following conditions must be met for an apply process to apply these captured changes:

  • The database object must have been prepared for instantiation before the new capture process is created.

  • The start SCN for the new capture process must correspond to a time before the database object was prepared for instantiation.

  • The redo logs for the time corresponding to the specified start SCN must be available. Additional redo logs previous to the start SCN might be required as well.


See Also:


Oracle Streams Capture Processes and RESTRICTED SESSION

When you enable restricted session during system startup by issuing a STARTUP RESTRICT statement, capture processes do not start, even if they were running when the database shut down. When restricted session is disabled with an ALTER SYSTEM statement, each capture process that was running when the database shut down is started.

When restricted session is enabled in a running database with an ALTER SYSTEM ENABLE RESTRICTED SESSION statement, running capture processes are not affected. These capture processes continue to run and capture changes. If a stopped capture process is started in a restricted session, then the capture process does not actually start until the restricted session is disabled.

Capture Process Subcomponents

A capture process is an optional Oracle background process whose process name is CPnn, where nn can include letters and numbers. A capture process captures changes from the redo log by using the infrastructure of LogMiner. Oracle Streams configures LogMiner automatically. The underlying LogMiner process name is MSnn, where nn can include letters and numbers. You can create, alter, start, stop, and drop a capture process, and you can define capture process rules that control which changes a capture process captures.

A capture process consists of the following subcomponents:

  • One reader server that reads the redo log and divides the redo log into regions.

  • One or more preparer servers that scan the regions defined by the reader server in parallel and perform prefiltering of changes found in the redo log. Prefiltering involves sending partial information about changes, such as schema and object name for a change, to the rules engine for evaluation, and receiving the results of the evaluation. You can control the number of preparer servers using the parallelism capture process parameter.

  • One builder server that merges redo records from the preparer servers. These are the redo records that either evaluated to TRUE during partial evaluation or for which partial evaluation was inconclusive. The builder server preserves the system change number (SCN) order of these redo records and passes the merged redo records to the capture process.

  • The capture process (CPnn) performs the following actions for each change when it receives merged redo records from the builder server:

    • Formats the change into an LCR

    • If the partial evaluation performed by a preparer server was inconclusive for the change in the LCR, then sends the LCR to the rules engine for full evaluation

    • Receives the results of the full evaluation of the LCR if it was performed

    • Discards the LCR if it satisfies the rules in the negative rule set for the capture process or if it does not satisfy the rules in the positive rule set

    • Enqueues the LCR into the queue associated with the capture process if the LCR satisfies the rules in the positive rule set for the capture process

Each reader server, preparer server, and builder server is a process.

Capture User

Changes are captured in the security domain of the capture user for a capture process. The capture user captures all changes that satisfy the capture process rule sets. In addition, the capture user runs all custom rule-based transformations specified by the rules in these rule sets. The capture user must have the necessary privileges to perform these actions, including EXECUTE privilege on the rule sets used by the capture process, EXECUTE privilege on all custom rule-based transformation functions specified for rules in the positive rule set, and privileges to enqueue messages into the capture process queue. A capture process can be associated with only one user, but one user can be associated with many capture processes.



Capture Process States

The state of a capture process describes what the capture process is doing currently. You can view the state of a capture process by querying the STATE column in the V$STREAMS_CAPTURE dynamic performance view. The following capture process states are possible:

  • INITIALIZING - Starting up.

  • WAITING FOR DICTIONARY REDO - Waiting for redo log files containing the dictionary build related to the first SCN to be added to the capture process session. A capture process cannot begin to scan the redo log files until all of the log files containing the dictionary build have been added.

  • DICTIONARY INITIALIZATION - Processing a dictionary build.

  • MINING (PROCESSED SCN = scn_value) - Mining a dictionary build at the SCN scn_value.

  • LOADING (step X of Y) - Processing information from a dictionary build and currently at step X in a process that involves Y steps, where X and Y are numbers.

  • CAPTURING CHANGES - Scanning the redo log for changes that satisfy the capture process rule sets.

  • WAITING FOR REDO - Waiting for new redo log files to be added to the capture process session. The capture process has finished processing all of the redo log files added to its session. This state is possible if there is no activity at a source database. For a downstream capture process, this state is possible if the capture process is waiting for new log files to be added to its session.

  • EVALUATING RULE - Evaluating a change against a capture process rule set.

  • CREATING LCR - Converting a change into a logical change record (LCR).

  • ENQUEUING MESSAGE - Enqueuing an LCR that satisfies the capture process rule sets into the capture process queue.

  • PAUSED FOR FLOW CONTROL - Unable to enqueue LCRs either because of low memory or because propagations and apply processes are consuming messages slower than the capture process is creating them. This state indicates flow control that is used to reduce spilling of captured LCRs when propagation or apply has fallen behind or is unavailable.

  • WAITING FOR A SUBSCRIBER TO BE ADDED - Waiting for a subscriber to the capture process's queue to be added. A subscriber can be a propagation or an apply process.

  • WAITING FOR THE BUFFERED QUEUE TO SHRINK - Waiting for the buffered queue to change to a smaller size. The buffered queue shrinks when there is a memory limitation or when an administrator reduces its size.

  • WAITING FOR n SUBSCRIBER(S) INITIALIZING - Waiting for apply processes that receive LCRs from the capture process to start, where n is the number of apply processes.

  • WAITING FOR TRANSACTION - Waiting for LogMiner to provide more transactions.

  • WAITING FOR INACTIVE DEQUEUERS - Waiting for the capture process's queue subscribers to start. The capture process stops enqueuing LCRs if there are no active subscribers to the queue.

  • SUSPENDED FOR AUTO SPLIT/MERGE - Waiting for a merge operation to complete.

  • SHUTTING DOWN - Stopping.

  • ABORTING - Aborting.
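For example, a query similar to the following displays the current state of each capture process in the database (the column formatting is illustrative):

COLUMN CAPTURE_NAME HEADING 'Capture Name' FORMAT A30
COLUMN STATE        HEADING 'State'        FORMAT A45

SELECT CAPTURE_NAME, STATE FROM V$STREAMS_CAPTURE;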

Capture Process Parameters

After creation, a capture process is disabled so that you can set the capture process parameters for your environment before starting it for the first time. Capture process parameters control the way a capture process operates. For example, the parallelism capture process parameter controls the number of preparer servers used by a capture process, and the time_limit capture process parameter specifies the amount of time a capture process runs before it is shut down automatically. You set capture process parameters using the DBMS_CAPTURE_ADM.SET_PARAMETER procedure.
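For example, a call similar to the following sets the parallelism parameter to 3 for a capture process. The capture process name strm01_capture is illustrative; substitute the name of a capture process in your environment:

BEGIN
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'strm01_capture',
    parameter    => 'parallelism',
    value        => '3');
END;
/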



Persistent Capture Process Status Upon Database Restart

A capture process maintains a persistent status when the database running the capture process is shut down and restarted. For example, if a capture process is enabled when the database is shut down, then the capture process automatically starts when the database is restarted. Similarly, if a capture process is disabled or aborted when a database is shut down, then the capture process is not started and retains the disabled or aborted status when the database is restarted.

Implicit Capture with Synchronous Capture

This section explains the concepts related to synchronous capture.

This section discusses the following topics:



Introduction to Synchronous Capture

Synchronous capture is an optional Oracle Streams client that captures data manipulation language (DML) changes made to tables. Synchronous capture uses an internal mechanism to capture DML changes to specified tables. When synchronous capture is configured to capture changes to tables, the database that contains these tables is called the source database.

When a DML change is made to a table, it can result in changes to one or more rows in the table. Synchronous capture captures each row change and converts it into a specific message format called a row logical change record (row LCR). After capturing a row LCR, synchronous capture enqueues a message containing the row LCR into a queue. Row LCRs created by synchronous capture always contain values for all the columns in a row, even if some of the columns were not modified by the change.

Figure 2-4 shows a synchronous capture capturing LCRs.

Figure 2-4 Synchronous Capture



Note:

A synchronous capture and a capture process should not capture changes made to the same table.

Synchronous Capture and Queues

Synchronous capture is always associated with a single ANYDATA queue, and it enqueues messages into this queue only. The queue used by synchronous capture must be a commit-time queue. Commit-time queues ensure that messages are grouped into transactions, and that transaction groups are in commit system change number (CSCN) order. Synchronous capture always enqueues row LCRs into the persistent queue. The persistent queue is the portion of a queue that stores messages only on hard disk in a queue table, not in memory. You can create multiple queues and associate a different synchronous capture with each queue.

Although synchronous capture must enqueue messages into a commit-time queue, messages captured by synchronous capture can be propagated to queues that are not commit-time queues. Therefore, any intermediate queues that store messages captured by synchronous capture do not need to be commit-time queues. Also, apply processes that apply messages captured by synchronous capture can use queues that are not commit-time queues.


Note:

  • Synchronous capture can be associated only with an ANYDATA queue, not with a typed queue.

  • Synchronous capture should not enqueue messages into a queue that is used by a capture process.


Synchronous Capture Rules

Synchronous capture either captures or discards changes based on rules that you define. Each rule specifies the database objects and types of changes for which the rule evaluates to TRUE. You can place these rules in a positive rule set. If a rule evaluates to TRUE for a change, and the rule is in the positive rule set for synchronous capture, then synchronous capture captures the change. Synchronous capture does not use negative rule sets.

You can specify synchronous capture rules at the table level. A table rule captures or discards row changes resulting from DML changes to a particular table. Subset rules are table rules that include a subset of the row changes to a particular table. Synchronous capture does not use schema or global rules.

All synchronous capture rules must be created with one of the following procedures in the DBMS_STREAMS_ADM package:

  • ADD_TABLE_RULES

  • ADD_SUBSET_RULES

Synchronous capture does not capture changes based on the following types of rules:

  • Rules added to the synchronous capture rules set by any procedure other than ADD_TABLE_RULES or ADD_SUBSET_RULES in the DBMS_STREAMS_ADM package.

  • Rules created by the DBMS_RULE_ADM package.

If these types of rules are in a synchronous capture rule set, then synchronous capture ignores these rules.

A synchronous capture can use a rule set created by the CREATE_RULE_SET procedure in the DBMS_RULE_ADM package, but you must add rules to the rule set with the ADD_TABLE_RULES or ADD_SUBSET_RULES procedure.

If the specified synchronous capture does not exist when you run the ADD_TABLE_RULES or ADD_SUBSET_RULES procedure, then the procedure creates it automatically. You can also use the CREATE_SYNC_CAPTURE procedure in the DBMS_CAPTURE_ADM package to create a synchronous capture.
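For example, a call similar to the following creates a synchronous capture named sync_capture (if one does not already exist) and adds a rule that captures DML changes to the hr.employees table. The queue and synchronous capture names are illustrative:

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.employees',
    streams_type => 'sync_capture',
    streams_name => 'sync_capture',
    queue_name   => 'strmadmin.streams_queue');
END;
/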


Note:

  • Synchronous capture does not capture certain types of changes and changes to certain data types in table columns. Also, synchronous capture never captures changes in the SYS, SYSTEM, or CTXSYS schemas.

  • When a rule is in the rule set for a synchronous capture, do not change the following rule conditions: :dml.get_object_name and :dml.get_object_owner. Changing these conditions can cause the synchronous capture not to capture changes to the database object. You can change other conditions in synchronous capture rules.


Data Types Captured by Synchronous Capture

When capturing the row changes resulting from DML changes made to tables, synchronous capture can capture changes made to columns of the following data types:

  • VARCHAR2

  • NVARCHAR2

  • NUMBER

  • FLOAT

  • DATE

  • BINARY_FLOAT

  • BINARY_DOUBLE

  • TIMESTAMP

  • TIMESTAMP WITH TIME ZONE

  • TIMESTAMP WITH LOCAL TIME ZONE

  • INTERVAL YEAR TO MONTH

  • INTERVAL DAY TO SECOND

  • RAW

  • CHAR

  • NCHAR

  • UROWID



Types of DML Changes Captured by Synchronous Capture

When you specify that DML changes made to specific tables should be captured, synchronous capture captures the following types of DML changes made to these tables:

  • INSERT

  • UPDATE

  • DELETE

  • MERGE

Synchronous capture converts each MERGE change into an INSERT or UPDATE change. MERGE is not a valid command type in a row LCR.



Capture User for Synchronous Capture

Changes are captured in the security domain of the capture user for a synchronous capture. The capture user captures all changes that satisfy the synchronous capture rule set. In addition, the capture user runs all custom rule-based transformations specified by the rules in these rule sets. The capture user must have the necessary privileges to perform these actions, including EXECUTE privilege on the rule set used by synchronous capture, EXECUTE privilege on all custom rule-based transformation functions specified for rules in the rule set, and privileges to enqueue messages into the synchronous capture queue. A synchronous capture can be associated with only one user, but one user can be associated with many synchronous captures.


See Also:

Oracle Streams Replication Administrator's Guide for information about the required privileges

Multiple Synchronous Captures in a Single Database

Oracle recommends that each ANYDATA queue used by a synchronous capture, propagation, or apply process have messages from at most one synchronous capture from a particular source database. Therefore, use a separate queue for each synchronous capture that captures changes originating at a particular source database, and ensure that each queue has its own queue table. Also, messages from two or more synchronous captures in the same source database should not be propagated to the same destination queue.

Explicit Capture by Applications

When applications enqueue messages manually, it is called explicit capture. After enqueue, these messages can be propagated by Oracle Streams propagations within the same database or to a different database. These messages can also be consumed by applications, apply processes, and messaging clients. You can use either the DBMS_STREAMS_MESSAGING package or the DBMS_AQADM package to enqueue messages.
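For example, an anonymous block similar to the following enqueues a VARCHAR2 user message, wrapped in an ANYDATA object, using the DBMS_STREAMS_MESSAGING package. The queue name is illustrative:

BEGIN
  DBMS_STREAMS_MESSAGING.ENQUEUE(
    queue_name => 'strmadmin.streams_queue',
    payload    => ANYDATA.ConvertVarchar2('order 1001 shipped'));
  COMMIT;
END;
/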

The following sections describe conceptual information about enqueuing messages:



Types of Messages That Can Be Enqueued Explicitly

Applications can create and enqueue different types of messages for various purposes in an Oracle Streams environment. These messages can be messages of a user-defined type called user messages, or they can be LCRs.

This section contains these topics:

User Messages

An application can construct a message of a user-defined type and enqueue it. The queue can be a queue of the same type as the message, or it can be an ANYDATA queue. Typically, these user messages are consumed by applications or apply processes.

User messages enqueued into a buffered queue are called buffered user messages. Buffered user messages can be dequeued by an application only. An application processes the messages after it dequeues them.

User messages enqueued into a persistent queue are called persistent user messages. Persistent user messages can be dequeued by:

  • Messaging clients: A messaging client passes the messages to the application that invoked the messaging client for processing.

  • Applications: An application processes the messages after it dequeues them.

  • Apply processes: An apply process passes the messages to a message handler for processing. The queue must be an ANYDATA queue for an apply process to dequeue messages from it.

Logical Change Records (LCRs) and Messaging

An application can construct and enqueue LCRs into an ANYDATA queue. Row LCRs describe the results of DML changes, and DDL LCRs describe DDL changes. Typically, LCRs are consumed by apply processes, but they can also be consumed by messaging clients and applications. Heterogeneous replication environments can use explicit enqueue of LCRs to replicate database changes from a non-Oracle database to an Oracle database.

LCRs enqueued explicitly into a buffered queue are called buffered LCRs. Buffered LCRs can be dequeued only by applications. An application processes the buffered LCRs after it dequeues them.

LCRs enqueued explicitly into a persistent queue are called persistent LCRs. Persistent LCRs can be dequeued by:

  • Messaging clients: A messaging client passes the messages to the application that invoked the messaging client for processing.

  • Applications: An application processes the messages after it dequeues them.

  • Apply processes: An apply process can apply the LCRs directly or pass them to an apply handler for processing.

Enqueue Features

The enqueue features available with Oracle Streams Advanced Queuing include the following:

  • Enqueue into a buffered queue or a persistent queue

  • Ordering of messages by priority, enqueue time, or commit time

  • Array enqueue of messages

  • Correlation identifiers

  • Message grouping

  • Sender identification

  • Time specification and scheduling



Introduction to Oracle Streams Administration

14 Introduction to Oracle Streams Administration

Several tools are available for configuring, administering, and monitoring your Oracle Streams environment. Oracle-supplied PL/SQL packages are the primary configuration and management tools, and the Oracle Streams tool in Oracle Enterprise Manager provides some configuration, administration, and monitoring capabilities to help you manage your environment. Additionally, Oracle Streams data dictionary views keep you informed about your Oracle Streams environment.

The following topics describe the tools that you can use for Oracle Streams administration:

Oracle-Supplied PL/SQL Packages

The following Oracle-supplied PL/SQL packages contain procedures and functions for configuring and managing an Oracle Streams environment.

DBMS_APPLY_ADM Package

The DBMS_APPLY_ADM package provides an administrative interface for starting, stopping, and configuring an apply process. This package includes procedures that enable you to configure apply handlers, set enqueue destinations for messages, and specify execution directives for messages. This package also provides administrative procedures that set the instantiation SCN for objects at a destination database. This package also includes subprograms for configuring conflict detection and resolution and for managing apply errors.

DBMS_CAPTURE_ADM Package

The DBMS_CAPTURE_ADM package provides an administrative interface for starting, stopping, and configuring a capture process. It also provides an administrative interface for configuring a synchronous capture. This package also provides administrative procedures that prepare database objects at the source database for instantiation at a destination database.

DBMS_COMPARISON Package

The DBMS_COMPARISON package provides interfaces to compare and converge database objects at different databases.

DBMS_PROPAGATION_ADM Package

The DBMS_PROPAGATION_ADM package provides an administrative interface for configuring propagation from a source queue to a destination queue.

DBMS_RULE Package

The DBMS_RULE package contains the EVALUATE procedure, which evaluates a rule set. The goal of this procedure is to produce the list of satisfied rules, based on the data. This package also contains subprograms that enable you to use iterators during rule evaluation. Instead of returning all rules that evaluate to TRUE or MAYBE for an evaluation, iterators can return one rule at a time.

DBMS_RULE_ADM Package

The DBMS_RULE_ADM package provides an administrative interface for creating and managing rules, rule sets, and rule evaluation contexts. This package also contains subprograms for managing privileges related to rules.

DBMS_STREAMS Package

The DBMS_STREAMS package provides interfaces to convert ANYDATA objects into LCR objects, to return information about Oracle Streams attributes and Oracle Streams clients, and to annotate redo entries generated by a session with a tag. This tag can affect the behavior of a capture process, a synchronous capture, a propagation, an apply process, or a messaging client whose rules include specifications for these tags in redo entries or LCRs.

DBMS_STREAMS_ADM Package

The DBMS_STREAMS_ADM package provides an administrative interface for adding and removing simple rules for capture processes, propagations, and apply processes at the table, schema, and database level. This package also enables you to add rules that control which messages a propagation propagates and which messages a messaging client dequeues. This package also contains procedures for creating queues and for managing Oracle Streams metadata, such as data dictionary information. This package also contains procedures that enable you to configure and maintain an Oracle Streams replication environment. This package is provided as an easy way to complete common tasks in an Oracle Streams environment. You can use other packages, such as the DBMS_CAPTURE_ADM, DBMS_PROPAGATION_ADM, DBMS_APPLY_ADM, DBMS_RULE_ADM, and DBMS_AQADM packages, to complete these same tasks, as well as tasks that require additional customization.

DBMS_STREAMS_ADVISOR_ADM Package

The DBMS_STREAMS_ADVISOR_ADM package provides an interface to gather information about an Oracle Streams environment and advise database administrators based on the information gathered. This package is part of the Oracle Streams Performance Advisor.

DBMS_STREAMS_AUTH Package

The DBMS_STREAMS_AUTH package provides interfaces for granting privileges to and revoking privileges from Oracle Streams administrators.

DBMS_STREAMS_HANDLER_ADM Package

The DBMS_STREAMS_HANDLER_ADM package provides interfaces for managing statement DML handlers.

DBMS_STREAMS_MESSAGING Package

The DBMS_STREAMS_MESSAGING package provides interfaces to enqueue messages into and dequeue messages from an ANYDATA queue.

DBMS_STREAMS_TABLESPACE_ADM Package

The DBMS_STREAMS_TABLESPACE_ADM package provides administrative procedures for creating and managing a tablespace repository. This package also provides administrative procedures for copying tablespaces between databases and moving tablespaces from one database to another. This package uses transportable tablespaces, Data Pump, and the DBMS_FILE_TRANSFER package.

UTL_SPADV Package

The UTL_SPADV package provides subprograms to collect and analyze statistics for the Oracle Streams components in a distributed database environment. This package uses the Oracle Streams Performance Advisor to gather statistics.


See Also:

Oracle Database PL/SQL Packages and Types Reference for more information about these packages

Oracle Streams Data Dictionary Views

Every database in an Oracle Streams environment has Oracle Streams data dictionary views. These views maintain administrative information about local rules, objects, capture processes, propagations, apply processes, and messaging clients. You can use these views to monitor your Oracle Streams environment.



Oracle Streams Tool in Oracle Enterprise Manager

To help configure, administer, and monitor Oracle Streams environments, Oracle provides an Oracle Streams tool in Oracle Enterprise Manager. You can also use the Oracle Streams tool to generate Oracle Streams configuration scripts, which you can then modify and run to configure your Oracle Streams environment. The Oracle Streams tool online Help contains the primary documentation for this tool.

To display an overview of the replication components at a database:

  1. In Oracle Enterprise Manager, log in to the database as the Oracle Streams administrator.

  2. Go to the Database Home page.

  3. Under High Availability, click the number link in Streams Components.

    The Manage Replication page appears, showing the Overview subpage.

Figure 14-1 shows the Manage Replication page in Enterprise Manager.

Figure 14-1 Manage Replication Page in Enterprise Manager




Troubleshooting Propagation

32 Troubleshooting Propagation

The following topics describe identifying and resolving common propagation problems in an Oracle Streams environment:

Does the Propagation Use the Correct Source and Destination Queue?

If messages are not appearing in the destination queue for a propagation as expected, then the propagation might not be configured to propagate messages from the correct source queue to the correct destination queue.

For example, to check the source queue and destination queue for a propagation named dbs1_to_dbs2, run the following query:

COLUMN SOURCE_QUEUE HEADING 'Source Queue' FORMAT A35
COLUMN DESTINATION_QUEUE HEADING 'Destination Queue' FORMAT A35

SELECT 
  p.SOURCE_QUEUE_OWNER||'.'||
    p.SOURCE_QUEUE_NAME||'@'||
    g.GLOBAL_NAME SOURCE_QUEUE, 
  p.DESTINATION_QUEUE_OWNER||'.'||
    p.DESTINATION_QUEUE_NAME||'@'||
    p.DESTINATION_DBLINK DESTINATION_QUEUE 
  FROM DBA_PROPAGATION p, GLOBAL_NAME g
  WHERE p.PROPAGATION_NAME = 'DBS1_TO_DBS2';

Your output looks similar to the following:

Source Queue                        Destination Queue
----------------------------------- -----------------------------------
STRMADMIN.QUEUE1@DBS1.EXAMPLE.COM   STRMADMIN.QUEUE2@DBS2.EXAMPLE.COM

If the propagation is not using the correct queues, then create a different propagation. You might need to remove the existing propagation if it is not appropriate for your environment.



Is the Propagation Enabled?

For a propagation job to propagate messages, the propagation must be enabled. If messages are not being propagated by a propagation as expected, then the propagation might not be enabled.

You can find the following information about a propagation:

  • The database link used to propagate messages from the source queue to the destination queue

  • Whether the propagation is ENABLED, DISABLED, or ABORTED

  • The date of the last error, if there are any propagation errors

  • If there are any propagation errors, then the error number of the last error

  • The error message of the last error, if there are any propagation errors

For example, to check whether a propagation named streams_propagation is enabled, run the following query:

COLUMN DESTINATION_DBLINK HEADING 'Database|Link'      FORMAT A15
COLUMN STATUS             HEADING 'Status'             FORMAT A8
COLUMN ERROR_DATE         HEADING 'Error|Date'
COLUMN ERROR_MESSAGE      HEADING 'Error Message'      FORMAT A35
 
SELECT DESTINATION_DBLINK,
       STATUS,
       ERROR_DATE,
       ERROR_MESSAGE
  FROM DBA_PROPAGATION
  WHERE PROPAGATION_NAME = 'STREAMS_PROPAGATION';

If the propagation is disabled currently, then your output looks similar to the following:

Database                 Error
Link            Status   Date      Error Message
--------------- -------- --------- -----------------------------------
D2.EXAMPLE.COM  DISABLED 27-APR-05 ORA-25307: Enqueue rate too high, f
                                   low control enabled

If there is a problem, then try the following actions to correct it:

  • If a propagation is disabled, then you can enable it using the START_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package, if you have not done so already.

  • If the propagation is disabled or aborted, and the Error Date and Error Message fields are populated, then diagnose and correct the problem based on the error message.

  • If the propagation is disabled or aborted, then check the trace file for the propagation job process. The query in "Displaying Information About the Schedules for Propagation Jobs" displays the propagation job process.

  • If the propagation job is enabled, but is not propagating messages, then try stopping and restarting the propagation.
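For example, calls similar to the following stop and then restart a propagation named streams_propagation:

BEGIN
  DBMS_PROPAGATION_ADM.STOP_PROPAGATION(
    propagation_name => 'streams_propagation');
  DBMS_PROPAGATION_ADM.START_PROPAGATION(
    propagation_name => 'streams_propagation');
END;
/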

Is Security Configured Properly for the ANYDATA Queue?

ANYDATA queues are secure queues, and security must be configured properly for users to be able to perform operations on them. If you use the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package to configure a secure ANYDATA queue, then an error is raised if the agent that SET_UP_QUEUE tries to create already exists and is associated with a user other than the user specified by queue_user in this procedure. In this case, rename or remove the existing agent using the ALTER_AQ_AGENT or DROP_AQ_AGENT procedure, respectively, in the DBMS_AQADM package. Next, retry SET_UP_QUEUE.
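For example, a call similar to the following removes an existing agent named explicit_dq so that SET_UP_QUEUE can be retried. The agent name is illustrative; run this only if the agent is no longer needed by another user:

BEGIN
  DBMS_AQADM.DROP_AQ_AGENT(
    agent_name => 'explicit_dq');
END;
/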

In addition, you might encounter one of the following errors if security is not configured properly for an ANYDATA queue:


See Also:

"Secure Queues"

ORA-24093 AQ Agent not granted privileges of database user

Secure queue access must be granted to an Oracle Streams Advanced Queuing (AQ) agent explicitly for both enqueue and dequeue operations. You grant the agent these privileges using the ENABLE_DB_ACCESS procedure in the DBMS_AQADM package.

For example, to grant an agent named explicit_dq privileges of the database user oe, run the following procedure:

BEGIN
  DBMS_AQADM.ENABLE_DB_ACCESS(
    agent_name  => 'explicit_dq',
    db_username => 'oe');
END;
/

To check the privileges of the agents in a database, run the following query:

SELECT AGENT_NAME "Agent", DB_USERNAME "User" FROM DBA_AQ_AGENT_PRIVS;

Your output looks similar to the following:

Agent                          User
------------------------------ ------------------------------
EXPLICIT_ENQ                   OE
APPLY_OE                       OE
EXPLICIT_DQ                    OE

See Also:

"Enabling a User to Perform Operations on a Secure Queue" for a detailed example that grants privileges to an agent

ORA-25224 Sender name must be specified for enqueue into secure queues

To enqueue into a secure queue, the SENDER_ID must be set to an Oracle Streams Advanced Queuing (AQ) agent with secure queue privileges for the queue in the message properties.


See Also:

Oracle Streams Advanced Queuing User's Guide for an example that sets the SENDER_ID for enqueue


Oracle Legal Notices

Copyright Notice

Copyright © 1994-2012, Oracle and/or its affiliates. All rights reserved.

Trademark Notice

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.

License Restrictions Warranty/Consequential Damages Disclaimer

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.

Warranty Disclaimer

The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

Restricted Rights Notice

If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable:

U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, duplication, disclosure, modification, and adaptation shall be subject to the restrictions and license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License (December 2007). Oracle America, Inc., 500 Oracle Parkway, Redwood City, CA 94065.

Hazardous Applications Notice

This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.

Third-Party Content, Products, and Services Disclaimer

This software or hardware and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.

Alpha and Beta Draft Documentation Notice

If this document is in prerelease status:

This documentation is in prerelease status and is intended for demonstration and preliminary use only. It may not be specific to the hardware on which you are using the software. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to this documentation and will not be responsible for any loss, costs, or damages incurred due to the use of this documentation.

Oracle Logo


30 Identifying Problems in an Oracle Streams Environment

The following topics describe identifying and resolving common problems in an Oracle Streams environment:

Viewing Oracle Streams Alerts

An alert is a warning about a potential problem or an indication that a critical threshold has been crossed. There are two types of alerts:

  • Stateless: Alerts that indicate single events that are not necessarily tied to the system state. For example, an alert that indicates that a capture aborted with a specific error is a stateless alert.

  • Stateful: Alerts that are associated with a specific system state. Stateful alerts are usually based on a numeric value, with thresholds defined at warning and critical levels. For example, an alert on the current Oracle Streams pool memory usage percentage, with the warning level at 85% and the critical level at 95%, is a stateful alert.

An Oracle Database 11g Release 1 or later database generates a stateless Oracle Streams alert under the following conditions:

  • A capture process aborts.

  • A propagation aborts after 16 consecutive errors.

  • An apply process aborts.

  • An apply process with an empty error queue encounters an apply error.

An Oracle Database 11g Release 1 or later database generates a stateful Oracle Streams alert under the following condition:

  • Oracle Streams pool memory usage exceeds the percentage specified by the STREAMS_POOL_USED_PCT metric. You can manage this metric in Oracle Enterprise Manager or with the SET_THRESHOLD procedure in the DBMS_SERVER_ALERT package.
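For example, the following PL/SQL block sets the warning threshold for the STREAMS_POOL_USED_PCT metric to 85% and the critical threshold to 95% using the SET_THRESHOLD procedure. This is a sketch; adjust the threshold values, observation period, and occurrence count for your environment.

```sql
BEGIN
  DBMS_SERVER_ALERT.SET_THRESHOLD(
    metrics_id              => DBMS_SERVER_ALERT.STREAMS_POOL_USED_PCT,
    warning_operator        => DBMS_SERVER_ALERT.OPERATOR_GE,
    warning_value           => '85',
    critical_operator       => DBMS_SERVER_ALERT.OPERATOR_GE,
    critical_value          => '95',
    observation_period      => 1,   -- minutes over which the metric is computed
    consecutive_occurrences => 1,   -- occurrences before the alert is raised
    instance_name           => NULL,
    object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_SYSTEM,
    object_name             => NULL);
END;
/
```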

You can view alerts in Enterprise Manager, or you can query the following data dictionary views:

  • The DBA_OUTSTANDING_ALERTS view records current stateful alerts. The DBA_ALERT_HISTORY view records stateless alerts and stateful alerts that have been cleared. For example, if the memory usage in the Oracle Streams pool exceeds the specified threshold, then a stateful alert is recorded in the DBA_OUTSTANDING_ALERTS view.

  • The DBA_ALERT_HISTORY data dictionary view shows alerts that have been cleared from the DBA_OUTSTANDING_ALERTS view. For example, if the memory usage in the Oracle Streams pool falls below the specified threshold, then the alert recorded in the DBA_OUTSTANDING_ALERTS view is cleared and moved to the DBA_ALERT_HISTORY view.

For example, to list the current stateful Oracle Streams alerts, run the following query on the DBA_OUTSTANDING_ALERTS view:

COLUMN REASON HEADING 'Reason for Alert' FORMAT A35
COLUMN SUGGESTED_ACTION HEADING 'Suggested Response' FORMAT A35
 
SELECT REASON, SUGGESTED_ACTION 
   FROM DBA_OUTSTANDING_ALERTS
   WHERE MODULE_ID LIKE '%STREAMS%';

To list the Oracle Streams stateless alerts and cleared Oracle Streams stateful alerts, run the following query on the DBA_ALERT_HISTORY view:

COLUMN REASON HEADING 'Reason for Alert' FORMAT A35
COLUMN SUGGESTED_ACTION HEADING 'Suggested Response' FORMAT A35
 
SELECT REASON, SUGGESTED_ACTION 
   FROM DBA_ALERT_HISTORY 
   WHERE MODULE_ID LIKE '%STREAMS%';

The following is example output from a query on the DBA_ALERT_HISTORY view:

Reason for Alert                    Suggested Response
----------------------------------- -----------------------------------
STREAMS apply process "APPLY_EMP_DE Obtain the exact error message in d
P" aborted with ORA-26714           ba_apply, take the appropriate acti
                                    on for this error, and restart the
                                    apply process using dbms_apply_adm.
                                    start_apply.  If the error is an OR
                                    A-26714, consider setting the 'DISA
                                    BLE_ON_ERROR' apply parameter to 'N
                                    ' to avoid aborting on future user
                                    errors.
 
STREAMS error queue for apply proce Look at the contents of the error q
ss "APPLY_EMP_DEP" contains new tra ueue as well as dba_apply_error to
nsaction with ORA-26786             determine the cause of the error.
                                    Once the errors are resolved, reexe
                                    cute them using dbms_apply_adm.exec
                                    ute_error or dbms_apply_adm.execute
                                    _all_errors.

Note:

Oracle Streams alerts are informational only; they do not need to be managed. If you monitor your Oracle Streams environment regularly and address problems as they arise, then you might not need to monitor Oracle Streams alerts.


See Also:


Using the Streams Configuration Report and Health Check Script

The Streams Configuration Report and Health Check Script provides important information about the Oracle Streams components in an individual Oracle database. The report is useful to confirm that the prerequisites for Oracle Streams are met and to identify the database objects of interest for Oracle Streams. The report also analyzes the rules in the database to identify common problems with Oracle Streams rules.

The Streams Configuration Report and Health Check Script is available on the My Oracle Support (formerly OracleMetaLink) Web site. To run the script, complete the following steps:

  1. Using a Web browser, go to the My Oracle Support Web site:

    http://support.oracle.com/
    
  2. Log in to My Oracle Support.


    Note:

    If you are not a My Oracle Support registered user, then click Register Here and register.

  3. Find the database bulletin with the following title:

    Streams Configuration Report and Health Check Script
    

    The doc ID for this bulletin is 273674.1.

  4. Follow the instructions to download the script for your release, run the script, and analyze the results.

Handling Performance Problems Because of an Unavailable Destination

When a database in an Oracle Streams replication environment has one capture process that captures changes for multiple destination databases, performance problems can result when one of the destination databases becomes unavailable. If this happens, and the changes for the unavailable destination cannot be propagated, then these changes can build up in the capture process's queue and eventually spill to hard disk. Spilling messages to hard disk at the capture database can degrade the performance of the Oracle Streams replication environment. You can query the V$BUFFERED_QUEUES view to check the number of messages in a queue and how many have spilled to hard disk. Also, you can query the DBA_PROPAGATION and V$PROPAGATION_SENDER views to show the propagations in a database and the status of each propagation.
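For example, the following queries check for message spilling and for disabled or aborted propagations:

```sql
-- Number of messages in each buffered queue and how many have spilled to disk
SELECT QUEUE_SCHEMA, QUEUE_NAME, NUM_MSGS, SPILL_MSGS
  FROM V$BUFFERED_QUEUES;

-- Status of each propagation (ENABLED, DISABLED, or ABORTED)
SELECT PROPAGATION_NAME, STATUS
  FROM DBA_PROPAGATION;
```

A large or growing SPILL_MSGS count, combined with a DISABLED or ABORTED propagation, can indicate an unavailable destination.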

If you encounter this situation, then you can use the SPLIT_STREAMS and MERGE_STREAMS_JOB procedures in the DBMS_STREAMS_ADM package to address the problem. The SPLIT_STREAMS procedure splits the problem stream off from the other streams flowing from the capture process. By splitting the stream off, you can avoid performance problems while the destination is unavailable. After the problem at the destination is resolved, the MERGE_STREAMS_JOB procedure merges the stream back with the other streams flowing from the capture process.
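A split operation can be sketched as follows. The propagation name is hypothetical, and the cloned component names are chosen for illustration; see Oracle Streams Replication Administrator's Guide for complete instructions.

```sql
DECLARE
  schedule_name  VARCHAR2(30);
  job_name       VARCHAR2(30);
BEGIN
  DBMS_STREAMS_ADM.SPLIT_STREAMS(
    propagation_name        => 'prop_to_down_site',  -- the problem propagation
    cloned_propagation_name => 'cloned_prop',
    cloned_queue_name       => 'cloned_queue',
    cloned_capture_name     => 'cloned_capture',
    perform_actions         => TRUE,
    auto_merge_threshold    => NULL,  -- NULL means merge manually later
    schedule_name           => schedule_name,
    merge_job_name          => job_name);
END;
/
```

After the destination becomes available again, run the MERGE_STREAMS_JOB procedure to merge the cloned stream back into the original stream.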


See Also:

Oracle Streams Replication Administrator's Guide for more information about splitting and merging a destination

Checking the Trace Files and Alert Log for Problems

Messages about each capture process, propagation, and apply process are recorded in trace files for the database in which the process or propagation job is running. A local capture process runs on a source database, a downstream capture process runs on a downstream database, a propagation job runs on the database containing the source queue in the propagation, and an apply process runs on a destination database. These trace file messages can help you to identify and resolve problems in an Oracle Streams environment.

All trace files for background processes are written to the Automatic Diagnostic Repository. The names of trace files are operating system specific, but each file usually includes the name of the process writing the file.

For example, on some operating systems, the trace file name for a process is sid_xxxx_iiiii.trc, where:

  • sid is the system identifier for the database

  • xxxx is the name of the process

  • iiiii is the operating system process number

Also, you can set the write_alert_log parameter to y for both a capture process and an apply process. When this parameter is set to y, which is the default setting, the alert log for the database contains messages about why the capture process or apply process stopped.

You can control the information in the trace files by setting the trace_level capture process or apply process parameter using the SET_PARAMETER procedure in the DBMS_CAPTURE_ADM and DBMS_APPLY_ADM packages.
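For example, the following call raises the tracing level for a capture process. The capture process name is hypothetical, and nonzero trace_level settings should generally be used only under the guidance of Oracle Support Services.

```sql
BEGIN
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'my_capture',   -- hypothetical capture process name
    parameter    => 'trace_level',
    value        => '2');
END;
/
```

Set the value back to '0' when you have finished diagnosing the problem.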

Use the following checklist to check the trace files related to Oracle Streams:


See Also:


Does a Capture Process Trace File Contain Messages About Capture Problems?

A capture process is an Oracle background process named CPnn, where nn can include letters and numbers. For example, on some operating systems, if the system identifier for a database running a capture process is hqdb and the capture process number is 01, then the trace file for the capture process starts with hqdb_CP01.


See Also:

"Displaying Change Capture Information About Each Capture Process" for a query that displays the capture process number of a capture process

Do the Trace Files Related to Propagation Jobs Contain Messages About Problems?

Each propagation uses a propagation job that depends on one or more slave processes named jnnn, where nnn is the slave process number. For example, on some operating systems, if a slave process is 001, then the trace file for the slave process includes j001 in its name. You can check the process name by querying the PROCESS_NAME column in the DBA_QUEUE_SCHEDULES data dictionary view.
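For example, the following query lists the slave process used by each propagation schedule:

```sql
SELECT SCHEMA, QNAME, PROCESS_NAME
  FROM DBA_QUEUE_SCHEDULES;
```

The PROCESS_NAME value identifies which jnnn trace file to examine for a particular propagation.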


See Also:

"Is the Propagation Enabled?" for a query that displays the job slave used by a propagation job

Does an Apply Process Trace File Contain Messages About Apply Problems?

An apply process is an Oracle background process named APnn, where nn can include letters and numbers. For example, on some operating systems, if the system identifier for a database running an apply process is hqdb and the apply process number is 01, then the trace file for the apply process starts with hqdb_AP01.

An apply process also uses other processes. Information about an apply process might be recorded in the trace file for one or more of these processes. The process name of the reader server and apply servers is ASnn, where nn can include letters and numbers. So, on some operating systems, if the system identifier for a database running an apply process is hqdb and the process number is 01, then the trace file that contains information about a process used by an apply process starts with hqdb_AS01.


See Also:


Oracle Streams Staging and Propagation

3 Oracle Streams Staging and Propagation

The following topics contain conceptual information about staging messages in queues and propagating messages from one queue to another:

Introduction to Message Staging and Propagation

Oracle Streams uses queues to stage messages. Staged messages can be consumed or propagated, or both. Staged messages can be consumed by an apply process, a messaging client, or a user application. A running apply process implicitly dequeues messages, but messaging clients and user applications explicitly dequeue messages. Even after a message is consumed, it can remain in the queue if you also have configured an Oracle Streams propagation to propagate, or send, the message to one or more other queues or if message retention is specified for the queue. Message retention applies to messages captured by a synchronous capture or enqueued explicitly, but it does not apply to messages captured by a capture process.

Queues

A queue is an abstract storage unit used by a messaging system to store messages. This section includes the following topics:

ANYDATA Queues and Typed Queues

A queue of ANYDATA type can stage messages of almost any type and is called an ANYDATA queue. A typed queue can stage messages of a specific type. Oracle Streams clients always use ANYDATA queues.

In an Oracle Streams replication environment, logical change records (LCRs) must be staged in ANYDATA queues. In an Oracle Streams messaging environment, both ANYDATA queues and typed queues can stage messages. Publishing applications can enqueue messages into a single queue, and subscribing applications can dequeue these messages.

Two types of messages can be encapsulated into an ANYDATA object and staged in an ANYDATA queue: LCRs and user messages. An LCR is an object that contains information about a change to a database object. A user message is a message of a user-defined type created by users or applications. Both types of messages can be used for information sharing within a single database or between databases.

ANYDATA queues can stage user messages whose payloads are of ANYDATA type. An ANYDATA payload can be a wrapper for payloads of different data types.

By using ANYDATA wrappers for message payloads, publishing applications can enqueue messages of different types into a single queue, and subscribing applications can dequeue these messages, either explicitly using a messaging client or an application, or implicitly using an apply process. If the subscribing application is remote, then the messages can be propagated to the remote site, and the subscribing application can dequeue the messages from a local queue in the remote database. Alternatively, a remote subscribing application can dequeue messages directly from the source queue using a variety of standard protocols, such as PL/SQL and OCI.

You can wrap almost any type of payload in an ANYDATA payload. To do this, you use the Convertdata_type static functions of the ANYDATA type, where data_type is the type of object to wrap. These functions take the object as input and return an ANYDATA object.
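For example, the following block wraps a VARCHAR2 payload in an ANYDATA object using the ConvertVarchar2 static function; the message text is illustrative.

```sql
DECLARE
  msg  ANYDATA;
BEGIN
  msg := ANYDATA.ConvertVarchar2('Order 1001 shipped');
  -- msg can now be enqueued into an ANYDATA queue
END;
/
```

Similar static functions, such as ConvertNumber and ConvertObject, wrap other data types.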

Oracle Streams includes the features of Oracle Streams Advanced Queuing (AQ), which supports all the standard features of message queuing systems, including multiconsumer queues, publish and subscribe, content-based routing, internet propagation, transformations, and gateways to other messaging subsystems.


See Also:


Persistent Queues and Buffered Queues

Oracle Streams supports the following message modes:

  • Persistent messaging: Messages are always stored on disk in a database table called a queue table. This type of storage is sometimes called persistent queue storage.

  • Buffered messaging: Messages are stored in memory but can spill to a queue table under certain conditions. This type of storage is sometimes called buffered queue storage. The memory includes Oracle Streams pool memory that is associated with a queue that contains messages that were captured by a capture process or enqueued by applications.

Buffered queues enable Oracle to optimize messages by buffering them in the System Global Area (SGA) instead of always storing them in a queue table. Buffered messaging provides better performance, but it does not support some messaging features, such as message retention. Message retention lets you specify the amount of time a message is retained in the queue table after being dequeued.

If the size of the Oracle Streams pool is not managed automatically, then you should increase the size of the Oracle Streams pool by 10 MB for each buffered queue in a database. Buffered queues improve performance, but some of the information in a buffered queue can be lost if the instance containing the buffered queue shuts down normally or abnormally. Oracle Streams automatically recovers from these cases, assuming full database recovery is performed on the instance.

Messages in a buffered queue can spill from memory into the queue table if they have been staged in the buffered queue for a period of time without being dequeued, or if there is not enough space in memory to hold all of the messages. Messages that spill from memory are stored in the appropriate AQ$_queue_table_name_p table, where queue_table_name is the name of the queue table for the queue. Also, for each spilled message, information is stored in the AQ$_queue_table_name_d table about any propagations and apply processes that are eligible for processing the message.

LCRs that were captured by a capture process are always stored in a buffered queue, but LCRs that were captured by a synchronous capture are always stored in a persistent queue. Other types of messages might or might not be stored in a buffered queue. When an application enqueues a message, the enqueue operation specifies whether the enqueued message is stored in the buffered queue or in the persistent queue. The delivery_mode attribute in the enqueue_options parameter of the DBMS_AQ.ENQUEUE procedure determines whether a message is stored in the buffered queue or the persistent queue. Specifically, if the delivery_mode attribute is the default PERSISTENT, then the message is enqueued into the persistent queue. If it is set to BUFFERED, then the message is enqueued into the buffered queue. When a transaction is moved to the error queue, all messages in the transaction always are stored in a queue table, not in a buffered queue.
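For example, the following block enqueues a buffered message by setting the delivery_mode attribute. The queue name is hypothetical, and buffered enqueue requires the visibility attribute to be set to IMMEDIATE.

```sql
DECLARE
  enq_opts   DBMS_AQ.ENQUEUE_OPTIONS_T;
  msg_props  DBMS_AQ.MESSAGE_PROPERTIES_T;
  msg_id     RAW(16);
  payload    ANYDATA;
BEGIN
  payload := ANYDATA.ConvertVarchar2('buffered message');
  enq_opts.delivery_mode := DBMS_AQ.BUFFERED;   -- default is DBMS_AQ.PERSISTENT
  enq_opts.visibility    := DBMS_AQ.IMMEDIATE;  -- required for buffered enqueue
  DBMS_AQ.ENQUEUE(
    queue_name         => 'strmadmin.streams_queue',  -- hypothetical queue
    enqueue_options    => enq_opts,
    message_properties => msg_props,
    payload            => payload,
    msgid              => msg_id);
END;
/
```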


Note:

Although buffered and persistent messages can be stored in the same queue, it is sometimes more convenient to think of a queue having a buffered portion and a persistent portion, referred to here as "buffered queue" and "persistent queue." Also, both ANYDATA queues and typed queues can include both a buffered queue and a persistent queue.


See Also:


Queues and Oracle Streams Clients

Oracle Streams clients always use ANYDATA queues. The following sections discuss how queues interact with Oracle Streams clients:


See Also:


Queues and Capture Processes

A capture process can enqueue LCRs only into a buffered queue. LCRs enqueued into a buffered queue by a capture process can be dequeued only by an apply process. Captured LCRs cannot be dequeued by applications or users.

Queues and Synchronous Capture

A synchronous capture can only enqueue LCRs into a persistent queue. LCRs captured by synchronous capture can be dequeued by apply processes, messaging clients, applications, and users.

Queues and Propagations

A propagation propagates any messages in its source queue that satisfy its rule sets. These messages can be stored in a buffered queue or in a persistent queue. A propagation can propagate both types of messages if the messages satisfy the rule sets used by the propagation.

Queues and Apply Processes

A single apply process can dequeue messages from either a buffered queue or a persistent queue, but not both. Apply processes can dequeue and process captured LCRs in a buffered queue. To dequeue captured LCRs, the apply process must be configured with the apply_captured parameter set to TRUE. Apply processes cannot dequeue buffered LCRs or buffered user messages. To dequeue persistent LCRs or persistent user messages, the apply process must be configured with the apply_captured parameter set to FALSE.

Queues and Messaging Clients

A messaging client can dequeue messages only from a persistent queue. In addition, the DBMS_STREAMS_MESSAGING package cannot be used to enqueue messages into or dequeue messages from a buffered queue.


Note:

The DBMS_AQ and DBMS_AQADM packages support buffered messaging.


See Also:

Oracle Streams Advanced Queuing User's Guide for more information about using the DBMS_AQ and DBMS_AQADM packages

Message Propagation Between Queues

You can use Oracle Streams to configure message propagation between two queues. These queues can reside in the same database or in different databases. Oracle Streams uses Oracle Scheduler jobs to propagate messages.

A propagation is always between a source queue and a destination queue. Although propagation is always between two queues, a single queue can participate in many propagations. That is, a single source queue can propagate messages to multiple destination queues, and a single destination queue can receive messages from multiple source queues. Also, a single queue can be a destination queue for some propagations and a source queue for other propagations. However, only one propagation is allowed between a particular source queue and a particular destination queue.

Figure 3-1 shows propagation from a source queue to a destination queue.

Figure 3-1 Propagation from a Source Queue to a Destination Queue

Description of Figure 3-1 follows
Description of "Figure 3-1 Propagation from a Source Queue to a Destination Queue"

You can create, alter, and drop a propagation, and you can define propagation rules that control which messages are propagated. The user who owns the source queue is the user who propagates messages, and this user must have the necessary privileges to propagate messages. These privileges include the following:

  • EXECUTE privilege on the rule sets used by the propagation

  • EXECUTE privilege on all custom rule-based transformation functions used in the rule sets

  • Enqueue privilege on the destination queue if the destination queue is in the same database

If the propagation propagates messages to a destination queue in a remote database, then the owner of the source queue must be able to use the database link used by the propagation, and the user to which the database link connects at the remote database must have enqueue privilege on the destination queue.

A propagation can propagate all of the messages in a source queue to a destination queue, or a propagation can propagate only a subset of the messages. A single propagation can propagate messages in both the buffered queue portion and persistent queue portion of a queue. Also, a single propagation can propagate LCRs and user messages. You can use rules to control which messages in the source queue are propagated to the destination queue and which messages are discarded.

Depending on how you set up your Oracle Streams environment, changes could be sent back to the site where they originated. You must ensure that your environment is configured to avoid cycling a change in an endless loop. You can use Oracle Streams tags to avoid such a change cycling loop.

The following sections describe propagations in more detail:


See Also:


Propagation Rules

A propagation either propagates or discards messages based on rules that you define. For LCRs, each rule specifies the database objects and types of changes for which the rule evaluates to TRUE. For user messages, you can create rules to control propagation behavior for specific types of messages. You can place these rules in a positive rule set or a negative rule set used by the propagation.

If a rule evaluates to TRUE for a message, and the rule is in the positive rule set for a propagation, then the propagation propagates the change. If a rule evaluates to TRUE for a message, and the rule is in the negative rule set for a propagation, then the propagation discards the change. If a propagation has both a positive and a negative rule set, then the negative rule set is always evaluated first.

You can specify propagation rules for LCRs at the following levels:

  • A table rule propagates or discards either row changes resulting from DML changes or DDL changes to a particular table. Subset rules are table rules that include a subset of the row changes to a particular table.

  • A schema rule propagates or discards either row changes resulting from DML changes or DDL changes to the database objects in a particular schema.

  • A global rule propagates or discards either all row changes resulting from DML changes or all DDL changes in the source queue.
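For example, a table-level propagation rule can be added with the ADD_TABLE_PROPAGATION_RULES procedure in the DBMS_STREAMS_ADM package. The queue and propagation names below are illustrative.

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => 'hr.employees',
    streams_name           => 'hr_prop',            -- hypothetical propagation
    source_queue_name      => 'strmadmin.src_queue',
    destination_queue_name => 'strmadmin.dst_queue@dest.example.com',
    include_dml            => TRUE,
    include_ddl            => FALSE,
    inclusion_rule         => TRUE);  -- TRUE adds the rule to the positive rule set
END;
/
```

Setting inclusion_rule to FALSE would add the generated rule to the negative rule set instead, so that matching changes are discarded.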

A queue subscriber that specifies a condition causes the system to generate a rule. The rule sets for all subscribers to a queue are combined into a single system-generated rule set to make subscription more efficient.

Queue-to-Queue Propagations

A propagation can be queue-to-queue or queue-to-database link (queue-to-dblink). A queue-to-queue propagation always has its own exclusive propagation job to propagate messages from the source queue to the destination queue. Because each propagation job has its own propagation schedule, the propagation schedule of each queue-to-queue propagation can be managed separately. Even when multiple queue-to-queue propagations use the same database link, you can enable, disable, or set the propagation schedule for each queue-to-queue propagation separately. Propagation jobs are described in detail later in this chapter.

A single database link can be used by multiple queue-to-queue propagations. The database link must be created with the service name specified as the global name of the database that contains the destination queue.

In contrast, a queue-to-dblink propagation shares a propagation job with other queue-to-dblink propagations from the same source queue that use the same database link. Therefore, these propagations share the same propagation schedule, and any change to the propagation schedule affects all of the queue-to-dblink propagations from the same source queue that use the database link.
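A queue-to-queue propagation can be created with the DBMS_PROPAGATION_ADM package by setting the queue_to_queue parameter to TRUE. The names and database link below are illustrative.

```sql
BEGIN
  DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
    propagation_name   => 'q2q_prop',              -- hypothetical names
    source_queue       => 'strmadmin.src_queue',
    destination_queue  => 'strmadmin.dst_queue',
    destination_dblink => 'dest.example.com',
    queue_to_queue     => TRUE);  -- FALSE (the default) creates a
                                  -- queue-to-dblink propagation
END;
/
```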

Ensured Message Delivery

A captured LCR is propagated successfully to a destination queue when both of the following actions are completed:

  • The message is processed by all relevant apply processes associated with the destination queue.

  • The message is propagated successfully from the source queue to all of its relevant destination queues.

Any other type of message is propagated successfully to a destination queue when the enqueue into the destination queue is committed. Other types of messages include buffered LCRs, buffered user messages, persistent LCRs, and persistent user messages.

When a message is successfully propagated between two queues, the destination queue acknowledges successful propagation of the message. If the source queue is configured to propagate a message to multiple destination queues, then the message remains in the source queue until each destination queue has sent confirmation of message propagation to the source queue. When each destination queue acknowledges successful propagation of the message, and all local consumers in the source queue database have consumed the message, the source queue can drop the message.

This confirmation system ensures that messages are always propagated from the source queue to the destination queue, but, in some configurations, the source queue can become larger than an optimal size. When a source queue increases, it uses more System Global Area (SGA) memory and might use more disk space.

There are two common reasons for a source queue to become larger:

  • If a message cannot be propagated to a specified destination queue for some reason (such as a network problem), then the message remains in the source queue until the destination queue becomes available. This situation could cause the source queue to become large. So, you should monitor your queues regularly to detect problems early.

  • Suppose a source queue is propagating messages captured by a capture process or synchronous capture to multiple destination queues, and one or more destination databases acknowledge successful propagation of messages much more slowly than the other queues. In this case, the source queue can grow because the slower destination databases create a backlog of messages that have already been acknowledged by the faster destination databases. In such an environment, consider creating more than one capture process or synchronous capture to capture changes at the source database. Doing so lets you use one source queue for the slower destination databases and another source queue for the faster destination databases.

Directed Networks

A directed network is one in which propagated messages pass through one or more intermediate databases before arriving at a destination database. A message might or might not be processed by an apply process at an intermediate database. Using Oracle Streams, you can choose which messages are propagated to each destination database, and you can specify the route that messages will traverse on their way to a destination database. Figure 3-2 shows an example of a directed networks environment.

Figure 3-2 Example Directed Networks Environment

Description of Figure 3-2 follows
Description of "Figure 3-2 Example Directed Networks Environment"

The advantage of using a directed network is that a source database does not need to have a physical network connection with a destination database. So, if you want messages to propagate from one database to another, but there is no direct network connection between the computers running these databases, then you can still propagate the messages without reconfiguring your network, if one or more intermediate databases connect the source database to the destination database.

If you use directed networks, and an intermediate site goes down for an extended period of time or is removed, then you might need to reconfigure the network and the Oracle Streams environment.

Queue Forwarding and Apply Forwarding

An intermediate database in a directed network can propagate messages using either queue forwarding or apply forwarding. Queue forwarding means that the messages being forwarded at an intermediate database are the messages received by the intermediate database. The source database for a message is the database where the message originated.

Apply forwarding means that the messages being forwarded at an intermediate database are first processed by an apply process. These messages are then recaptured by a capture process or a synchronous capture at the intermediate database and forwarded. When you use apply forwarding, the intermediate database becomes the new source database for the messages. Either a capture process recaptures the messages from the redo log generated at the intermediate database, or a synchronous capture configured at the intermediate database recaptures the messages.

Consider the following differences between queue forwarding and apply forwarding when you plan your Oracle Streams environment:

  • With queue forwarding, a message is propagated through the directed network without being changed, assuming there are no capture or propagation transformations. With apply forwarding, messages are applied and recaptured at intermediate databases and can be changed by conflict resolution, apply handlers, or apply transformations.

  • With queue forwarding, a destination database must have a separate apply process to apply messages from each source database. With apply forwarding, fewer apply processes might be required at a destination database because recapturing of messages at intermediate databases can result in fewer source databases when changes reach a destination database.

  • With queue forwarding, one or more intermediate databases are in place between a source database and a destination database. With apply forwarding, because messages are recaptured at intermediate databases, the source database for a message can be the same as the intermediate database connected directly with the destination database.

A single Oracle Streams environment can use a combination of queue forwarding and apply forwarding.

Advantages of Queue Forwarding

Queue forwarding has the following advantages compared with apply forwarding:

  • Performance might be improved because a message is captured only once.

  • Less time might be required to propagate a message from the database where the message originated to the destination database, because the messages are not applied and recaptured at one or more intermediate databases. In other words, latency might be lower with queue forwarding.

  • The origin of a message can be determined easily by running the GET_SOURCE_DATABASE_NAME member procedure on the LCR contained in the message. If you use apply forwarding, then determining the origin of a message requires the use of Oracle Streams tags and apply handlers.

  • Parallel apply might scale better and provide more throughput when separate apply processes are used because there are fewer dependencies, and because there are multiple apply coordinators and apply reader processes to perform the work.

  • If one intermediate database goes down, then you can reroute the queues and reset the start SCN at the capture site to reconfigure end-to-end capture, propagation, and apply.

    If you use apply forwarding, then substantially more work might be required to reconfigure end-to-end capture, propagation, and apply of messages, because the destination databases downstream from the unavailable intermediate database were using the SCN information of that intermediate database. Without this SCN information, the destination databases cannot apply the changes properly.

Advantages of Apply Forwarding

Apply forwarding has the following advantages compared with queue forwarding:

  • An Oracle Streams environment might be easier to configure because each database can apply changes only from databases directly connected to it, rather than from multiple remote source databases.

  • In a large Oracle Streams environment where intermediate databases apply changes, the environment might be easier to monitor and manage because fewer apply processes might be required. An intermediate database that applies changes must have one apply process for each source database from which it receives changes. In an apply forwarding environment, the source databases of an intermediate database are only the databases to which it is directly connected. In a queue forwarding environment, the source databases of an intermediate database are all of the other source databases in the environment, whether they are directly connected to the intermediate database or not.


See Also:


Table of Contents

Contents

List of Figures

List of Tables

Title and Copyright Information

Preface

What's New in Oracle Streams?

Part I Essential Oracle Streams Concepts

1 Introduction to Oracle Streams

2 Oracle Streams Information Capture

3 Oracle Streams Staging and Propagation

4 Oracle Streams Information Consumption

5 How Rules Are Used in Oracle Streams

6 Rule-Based Transformations

Part II Advanced Oracle Streams Concepts

7 Advanced Capture Process Concepts

8 Advanced Queue Concepts

9 Advanced Propagation Concepts

10 Advanced Apply Process Concepts

11 Advanced Rule Concepts

12 Combined Capture and Apply Optimization

13 Oracle Streams High Availability Environments

Part III Oracle Streams Administration

14 Introduction to Oracle Streams Administration

15 Managing Oracle Streams Implicit Capture

16 Managing Staging and Propagation

17 Managing Oracle Streams Information Consumption

18 Managing Rules

19 Managing Rule-Based Transformations

20 Using Oracle Streams to Record Table Changes

21 Other Oracle Streams Management Tasks

Part IV Monitoring Oracle Streams

22 Monitoring an Oracle Streams Environment

23 Monitoring the Oracle Streams Topology and Performance

24 Monitoring Oracle Streams Implicit Capture

25 Monitoring Oracle Streams Queues and Propagations

26 Monitoring Oracle Streams Apply Processes

27 Monitoring Rules

28 Monitoring Rule-Based Transformations

29 Monitoring Other Oracle Streams Components

Part V Troubleshooting an Oracle Streams Environment

30 Identifying Problems in an Oracle Streams Environment

31 Troubleshooting Implicit Capture

32 Troubleshooting Propagation

33 Troubleshooting Apply

34 Troubleshooting Rules and Rule-Based Transformations

Part VI Oracle Streams Information Provisioning

35 Information Provisioning Concepts

36 Using Information Provisioning

37 Monitoring File Group and Tablespace Repositories

Part VII Appendixes

A How Oracle Streams Works with Other Database Components

B Oracle Streams Restrictions

C XML Schema for LCRs

D Online Database Upgrade and Maintenance with Oracle Streams

E Online Upgrade of a 10.1 or Earlier Database with Oracle Streams

Glossary

Index

Troubleshooting Implicit Capture

31 Troubleshooting Implicit Capture

The following topics describe identifying and resolving common problems with capture processes and synchronous captures in an Oracle Streams environment:

Troubleshooting Capture Process Problems

If a capture process is not capturing changes as expected, or if you are having other problems with a capture process, then use the following checklist to identify and resolve capture problems:

Is Capture Process Creation or Data Dictionary Build Taking a Long Time?

If capture process creation or a data dictionary build is taking an inordinately long time, then it might be because one or more in-flight transactions have not yet committed. An in-flight transaction is one that is active during capture process creation or a data dictionary build.

To determine whether there are in-flight transactions, check the alert log for the following messages:

wait for inflight txns at this scn
Done with waiting for inflight txns at this scn

If you see only the first message in the alert log, then the capture process creation or data dictionary build is waiting for the in-flight transactions and will complete after all of the in-flight transactions have committed.
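To see which transactions are still open, you can query the V$TRANSACTION and V$SESSION dynamic performance views. The following query is a sketch; it lists active transactions, their start SCNs, and the sessions that own them, so you can decide whether to wait for the transactions to commit or end the offending sessions:

```sql
-- List in-flight (active) transactions and the sessions that own them
COLUMN USERNAME HEADING 'User' FORMAT A15
COLUMN START_SCN HEADING 'Start SCN' FORMAT 9999999999

SELECT s.SID,
       s.SERIAL#,
       s.USERNAME,
       t.START_SCN,
       t.STATUS
  FROM V$TRANSACTION t, V$SESSION s
  WHERE t.SES_ADDR = s.SADDR;
```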

Is the Capture Process Enabled?

A capture process captures changes only when it is enabled.

You can check whether a capture process is enabled, disabled, or aborted by querying the DBA_CAPTURE data dictionary view. For example, to check whether a capture process named capture is enabled, run the following query:

SELECT STATUS FROM DBA_CAPTURE WHERE CAPTURE_NAME = 'CAPTURE';

If the capture process is disabled, then your output looks similar to the following:

STATUS
--------
DISABLED

If the capture process is disabled, then try restarting it. If the capture process is aborted, then you might need to correct an error before you can restart it successfully.
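For example, the following call restarts a capture process named capture (the name is illustrative; substitute your own capture process name):

```sql
-- Restart a disabled capture process
BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'capture');
END;
/
```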

To determine why the capture process aborted, query the DBA_CAPTURE data dictionary view or check the trace file for the capture process. The following query shows when the capture process aborted and the error that caused it to abort:

COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A10
COLUMN STATUS_CHANGE_TIME HEADING 'Abort Time'
COLUMN ERROR_NUMBER HEADING 'Error Number' FORMAT 99999999
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A40

SELECT CAPTURE_NAME, STATUS_CHANGE_TIME, ERROR_NUMBER, ERROR_MESSAGE
  FROM DBA_CAPTURE WHERE STATUS='ABORTED';

See Also:


Is the Capture Process Waiting for Redo?

If an enabled capture process is not capturing changes as expected, then the capture process might be in WAITING FOR REDO state.

To check the state of each capture process in a database, run the following query:

COLUMN CAPTURE_NAME HEADING 'Capture Name' FORMAT A30
COLUMN STATE HEADING 'State' FORMAT A30
 
SELECT CAPTURE_NAME, STATE FROM V$STREAMS_CAPTURE;

If the capture process state is WAITING FOR REDO, then the capture process is waiting for new redo log files to be added to the capture process session. This state is possible if a redo log file is missing or if there is no activity at a source database. For a downstream capture process, this state is possible if the capture process is waiting for new log files to be added to its session.

Additional information might be displayed along with the state information when you query the V$STREAMS_CAPTURE view. The additional information can help you to determine why the capture process is waiting for redo. For example, a statement similar to the following might appear for the STATE column when you query the view:

WAITING FOR REDO: LAST SCN MINED 8077284

In this case, the output only identifies the last system change number (SCN) scanned by the capture process. In other cases, the output might identify the redo log file name explicitly. Either way, the additional information can help you identify the redo log file for which the capture process is waiting. To correct the problem, make any missing redo log files available to the capture process.
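For example, if the state reports LAST SCN MINED 8077284, you can query the V$ARCHIVED_LOG view to find the archived redo log file whose SCN range covers that SCN. This query is a sketch; the SCN value is taken from the sample state above:

```sql
-- Find the archived redo log file that covers the last mined SCN
SELECT NAME, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#
  FROM V$ARCHIVED_LOG
  WHERE 8077284 BETWEEN FIRST_CHANGE# AND NEXT_CHANGE#;
```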

Is the Capture Process Paused for Flow Control?

If an enabled capture process is not capturing changes as expected, then the capture process might be in PAUSED FOR FLOW CONTROL state.

To check the state of each capture process in a database, run the following query:

COLUMN CAPTURE_NAME HEADING 'Capture Name' FORMAT A30
COLUMN STATE HEADING 'State' FORMAT A30
 
SELECT CAPTURE_NAME, STATE FROM V$STREAMS_CAPTURE;

If the capture process state is PAUSED FOR FLOW CONTROL, then the capture process cannot enqueue logical change records (LCRs) either because of low memory or because propagations and apply processes are consuming messages at a slower rate than the capture process is creating them. This state indicates flow control that is used to reduce the spilling of captured LCRs when propagation or apply has fallen behind or is unavailable.

If a capture process is in this state, then check for the following issues:

  • An apply process is disabled or is performing slowly.

  • A propagation is disabled or is performing poorly.

  • There is not enough memory in the Streams pool.

You can query the V$STREAMS_APPLY_READER view to monitor the LCRs being received by the apply process. You can also query the V$STREAMS_APPLY_SERVER view to determine whether all apply servers are applying LCRs and executing transactions.

Also, you can query the PUBLISHER_STATE column in the V$BUFFERED_PUBLISHERS view to determine the exact reason why the capture process is paused for flow control.
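For example, the following query is a sketch that shows each buffered queue publisher and the reason, if any, that it is paused:

```sql
COLUMN SENDER_NAME HEADING 'Sender' FORMAT A20
COLUMN QUEUE_NAME HEADING 'Queue' FORMAT A20
COLUMN PUBLISHER_STATE HEADING 'State' FORMAT A50

SELECT SENDER_NAME, QUEUE_NAME, PUBLISHER_STATE
  FROM V$BUFFERED_PUBLISHERS;
```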

To correct the problem, perform one or more of the following actions:

  • If any propagation or apply process is disabled, then enable the propagation or apply process.

  • If the apply reader is not receiving data fast enough, then try removing propagation and apply process rules or simplifying the rule conditions.

  • If there is not enough memory in the Streams pool at the capture process database, then try increasing the size of the Streams pool.

Is the Capture Process Current?

If a capture process has not captured recent changes, then the cause might be that the capture process has fallen behind. To check, you can query the V$STREAMS_CAPTURE dynamic performance view. If capture process latency is high, then you might be able to improve performance by adjusting the setting of the parallelism capture process parameter.
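For example, the following statements estimate capture latency and, if latency is consistently high, raise the parallelism capture process parameter. The capture process name and parallelism value are illustrative:

```sql
-- Estimate capture latency in seconds for each capture process
SELECT CAPTURE_NAME,
       (SYSDATE - CAPTURE_MESSAGE_CREATE_TIME) * 86400 LATENCY_SECONDS
  FROM V$STREAMS_CAPTURE;

-- Increase capture process parallelism (the value is illustrative)
BEGIN
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'capture',
    parameter    => 'parallelism',
    value        => '4');
END;
/
```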

Are Required Redo Log Files Missing?

When a capture process is started or restarted, it might need to scan redo log files that were generated before the log file that contains the start SCN. You can query the DBA_CAPTURE data dictionary view to determine the first SCN and start SCN for a capture process. Removing required redo log files before they are scanned by a capture process causes the capture process to abort and results in the following error in a capture process trace file:

ORA-01291: missing logfile

If you see this error, then try restoring any missing redo log files and restarting the capture process. You can check the V$LOGMNR_LOGS dynamic performance view to determine the missing SCN range, and add the relevant redo log files. A capture process needs the redo log file that includes the required checkpoint SCN and all subsequent redo log files. You can query the REQUIRED_CHECKPOINT_SCN column in the DBA_CAPTURE data dictionary view to determine the required checkpoint SCN for a capture process.
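For example, the following queries are a sketch that shows the required checkpoint SCN for each capture process and identifies the archived redo log files at or beyond that SCN:

```sql
-- Show the SCN from which each capture process must be able to mine redo
SELECT CAPTURE_NAME, REQUIRED_CHECKPOINT_SCN
  FROM DBA_CAPTURE;

-- List archived redo log files at or beyond a required checkpoint SCN
-- (replace 750338 with the value returned by the previous query)
SELECT NAME, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#
  FROM V$ARCHIVED_LOG
  WHERE NEXT_CHANGE# > 750338
  ORDER BY SEQUENCE#;
```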

If you are using the fast recovery area feature of Recovery Manager (RMAN) on a source database in an Oracle Streams environment, then RMAN might delete archived redo log files that are required by a capture process. RMAN might delete these files when the disk space used by the recovery-related files is nearing the specified disk quota for the fast recovery area. To prevent this problem in the future, complete one or more of the following actions:

  • Increase the disk quota for the fast recovery area. Increasing the disk quota makes it less likely that RMAN will delete a required archived redo log file, but it will not always prevent the problem.

  • Configure the source database to store archived redo log files in a location other than the fast recovery area. A local capture process will be able to use the log files in the other location if the required log files are missing in the fast recovery area. In this case, a database administrator must manage the log files manually in the other location.

RMAN always ensures that archived redo log files are backed up before it deletes them. If RMAN deletes an archived redo log file that is required by a capture process, then RMAN records this action in the alert log.
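For example, you might configure a second archive destination outside the fast recovery area. The following statements are a sketch; the directory path is illustrative and must already exist:

```sql
-- Archive redo log files to a second location outside the fast recovery area
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='LOCATION=/u02/archivelogs' SCOPE=BOTH;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=BOTH;
```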

Is a Downstream Capture Process Waiting for Redo Data?

If a downstream capture process is not capturing changes, then it might be waiting for redo data to scan. Redo log files can be registered implicitly or explicitly for a downstream capture process. Redo log files registered implicitly typically are registered in one of the following ways:

  • For a real-time downstream capture process, redo transport services use the log writer process (LGWR) to transfer the redo data from the source database to the standby redo log at the downstream database. Next, the archiver at the downstream database registers the redo log files with the downstream capture process when it archives them.

  • For an archived-log downstream capture process, redo transport services transfer the archived redo log files from the source database to the downstream database and register the archived redo log files with the downstream capture process.

If redo log files are registered explicitly for a downstream capture process, then you must manually transfer the redo log files to the downstream database and register them with the downstream capture process.

Regardless of whether the redo log files are registered implicitly or explicitly, the downstream capture process can capture changes made to the source database only if the appropriate redo log files are registered with the downstream capture process. You can query the V$STREAMS_CAPTURE dynamic performance view to determine whether a downstream capture process is waiting for a redo log file. For example, run the following query for a downstream capture process named strm05_capture:

SELECT STATE FROM V$STREAMS_CAPTURE WHERE CAPTURE_NAME='STRM05_CAPTURE';

If the capture process state is either WAITING FOR DICTIONARY REDO or WAITING FOR REDO, then verify that the redo log files have been registered with the downstream capture process by querying the DBA_REGISTERED_ARCHIVED_LOG and DBA_CAPTURE data dictionary views. For example, the following query lists the redo log files currently registered with the strm05_capture downstream capture process:

COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A15
COLUMN SEQUENCE# HEADING 'Sequence|Number' FORMAT 9999999
COLUMN NAME HEADING 'Archived Redo Log|File Name' FORMAT A30
COLUMN DICTIONARY_BEGIN HEADING 'Dictionary|Build|Begin' FORMAT A10
COLUMN DICTIONARY_END HEADING 'Dictionary|Build|End' FORMAT A10

SELECT r.SOURCE_DATABASE,
       r.SEQUENCE#, 
       r.NAME, 
       r.DICTIONARY_BEGIN, 
       r.DICTIONARY_END 
  FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
  WHERE c.CAPTURE_NAME = 'STRM05_CAPTURE' AND 
        r.CONSUMER_NAME = c.CAPTURE_NAME;

If this query does not return any rows, then no redo log files are registered with the capture process currently. If you configured redo transport services to transfer redo data from the source database to the downstream database for this capture process, then ensure that the redo transport services are configured correctly. If the redo transport services are configured correctly, then run the ALTER SYSTEM ARCHIVE LOG CURRENT statement at the source database to archive a log file. If you did not configure redo transport services to transfer redo data, then ensure that the method you are using for log file transfer and registration is working properly. You can register log files explicitly using an ALTER DATABASE REGISTER LOGICAL LOGFILE statement.
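For example, the following statements archive the current log at the source database and explicitly register a transferred archived redo log file with the strm05_capture downstream capture process (the file name is illustrative):

```sql
-- At the source database: force a log switch and archive the current log
ALTER SYSTEM ARCHIVE LOG CURRENT;

-- At the downstream database: register a transferred log file explicitly
-- (the file name is illustrative)
ALTER DATABASE REGISTER LOGICAL LOGFILE '/u01/transfer/1_45_123456789.arc'
  FOR 'strm05_capture';
```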

If the downstream capture process is waiting for redo, then it also is possible that there is a problem with the network connection between the source database and the downstream database. There also might be a problem with the log file transfer method. Check your network connection and log file transfer method to ensure that they are working properly.

If you configured a real-time downstream capture process, and no redo log files are registered with the capture process, then try switching the log file at the source database. You might need to switch the log file more than once if there is little or no activity at the source database.

Also, if you plan to use a downstream capture process to capture changes to historical data, then consider the following additional issues:

  • Both the source database that generates the redo log files and the database that runs a downstream capture process must be Oracle Database 10g or later databases.

  • The start of a data dictionary build must be present in the oldest redo log file added, and the capture process must be configured with a first SCN that matches the start of the data dictionary build.

  • The database objects for which the capture process will capture changes must be prepared for instantiation at the source database, not at the downstream database. In addition, you cannot specify a time in the past when you prepare objects for instantiation. Objects are always prepared for instantiation at the current database SCN, and only changes to a database object that occurred after the object was prepared for instantiation can be captured by a capture process.

Are You Trying to Configure Downstream Capture Incorrectly?

To create a downstream capture process, you must use one of the following procedures:

  • DBMS_CAPTURE_ADM.CREATE_CAPTURE

  • DBMS_STREAMS_ADM.MAINTAIN_GLOBAL

  • DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS

  • DBMS_STREAMS_ADM.MAINTAIN_SIMPLE_TTS

  • DBMS_STREAMS_ADM.MAINTAIN_TABLES

  • DBMS_STREAMS_ADM.MAINTAIN_TTS

  • PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP in the DBMS_STREAMS_ADM package

The procedures in the DBMS_STREAMS_ADM package can configure a downstream capture process as well as the other Oracle Streams components in an Oracle Streams replication environment.

If you try to create a downstream capture process without using one of these procedures, then Oracle returns the following error:

ORA-26678: Streams capture process must be created first

To correct the problem, use one of these procedures to create the downstream capture process.

If you are trying to create a local capture process using a procedure in the DBMS_STREAMS_ADM package, and you encounter this error, then make sure the database name specified in the source_database parameter of the procedure you are running matches the global name of the local database.
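You can check the global name of the local database with the following query, and compare the result with the value you passed in the source_database parameter:

```sql
SELECT * FROM GLOBAL_NAME;
```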


See Also:

Oracle Streams Replication Administrator's Guide for information about configuring a capture process

Are You Trying to Configure Downstream Capture without Proper Authentication?

If authentication is not configured properly between the source database and the downstream capture database, then redo data transfer fails with one of the following errors:

ORA-16191: Primary log shipping client not logged on standby
 
ORA-01017: invalid username/password; logon denied

Redo transport sessions are authenticated using either the Secure Sockets Layer (SSL) protocol or a remote login password file. The password file must be the same at the source database and the downstream capture database.

To correct the problem, perform one of the following actions:

  • If the source database has a remote login password file, then copy it to the appropriate directory on the downstream capture database system. After copying the file, you might need to restart both databases for the change to take effect.

  • Turn off the case sensitivity option by setting the initialization parameter SEC_CASE_SENSITIVE_LOGON to FALSE. Next, create the password file on the source and downstream capture database systems using ORAPWD. Make sure the password is the same on both systems, and set the ignorecase argument to Y.
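For example, the following operating system command is a sketch that creates a password file with case sensitivity disabled (the file name and password are illustrative; run the same command on both systems with the same password):

```shell
orapwd FILE=$ORACLE_HOME/dbs/orapwPROD PASSWORD=my_password IGNORECASE=Y
```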


See Also:

Oracle Data Guard Concepts and Administration for detailed information about authentication requirements for redo transport

Are More Actions Required for Downstream Capture without a Database Link?

When downstream capture is configured with a database link, the database link can be used to perform operations at the source database and obtain information from the source database automatically. When downstream capture is configured without a database link, these actions must be performed manually, and the information must be obtained manually. If you do not complete these actions manually, then errors result when you try to create the downstream capture process.

Specifically, the following actions must be performed manually when you configure downstream capture without a database link:

  • In certain situations, you must run the DBMS_CAPTURE_ADM.BUILD procedure at the source database to extract the data dictionary at the source database to the redo log before a capture process is created.

  • You must prepare the source database objects for instantiation.

  • You must obtain the first SCN for the downstream capture process and specify the first SCN using the first_scn parameter when you create the capture process with the CREATE_CAPTURE procedure in the DBMS_CAPTURE_ADM package.
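The manual steps above can be sketched in PL/SQL. The table name, queue name, capture process name, source database global name, and first SCN below are all illustrative:

```sql
-- At the source database: build the data dictionary in the redo log
-- and prepare the table for instantiation
SET SERVEROUTPUT ON
DECLARE
  scn NUMBER;
BEGIN
  DBMS_CAPTURE_ADM.BUILD(first_scn => scn);
  DBMS_OUTPUT.PUT_LINE('First SCN: ' || scn);
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name => 'hr.employees');
END;
/

-- At the downstream database: create the capture process, specifying
-- the first SCN returned by the BUILD procedure (the value is illustrative)
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name        => 'strmadmin.streams_queue',
    capture_name      => 'strm05_capture',
    source_database   => 'prod.example.com',
    use_database_link => FALSE,
    first_scn         => 750338);
END;
/
```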


See Also:

Oracle Streams Replication Administrator's Guide for information about configuring a capture process

Troubleshooting Synchronous Capture Problems

If a synchronous capture is not capturing changes as expected, then use this section to identify and resolve synchronous capture problems.

Is a Synchronous Capture Failing to Capture Changes to Tables?

If a synchronous capture is not capturing changes to tables as you expected, then the rules in the synchronous capture rule set might not be configured properly. To avoid problems, always use the ADD_TABLE_RULES or ADD_SUBSET_RULES procedure in the DBMS_STREAMS_ADM package to add rules to a synchronous capture rules set.

The following are common reasons why a synchronous capture is not capturing changes as expected:

  • Global rules or schema rules are being used to try to control the behavior of the synchronous capture. A synchronous capture ignores global rules and schema rules in its rule set. A synchronous capture only captures changes that satisfy table rules and subset rules.

  • The DBMS_RULE_ADM package was used to configure the rules for a synchronous capture. A synchronous capture does not behave correctly when either of the following is true:

    • The DBMS_RULE_ADM package is used to create rules that are added to a synchronous capture rule set.

    • The DBMS_RULE_ADM package is used to add rules to a synchronous capture rule set.

If a synchronous capture is not capturing changes to tables as expected, then complete the following steps to identify and correct problems:

  1. Query the DBA_SYNC_CAPTURE_TABLES data dictionary view to determine the tables for which a synchronous capture is capturing changes. The synchronous capture captures changes to a table only if the ENABLED column is set to YES for the table.

  2. If the DBA_SYNC_CAPTURE_TABLES view does not list tables for which a synchronous capture should capture changes, then use the ADD_TABLE_RULES or ADD_SUBSET_RULES procedure in the DBMS_STREAMS_ADM package to add rules for the tables.

If the DBA_SYNC_CAPTURE_TABLES view shows ENABLED for a table, and a synchronous capture still does not capture changes to the table, then there might be a problem with the rule condition in the rule for the table. In this case, check the rule condition and correct any errors, or drop the rule and re-create it using the ADD_TABLE_RULES or ADD_SUBSET_RULES procedure.
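For example, the following statements check which tables are enabled for synchronous capture and add a table rule for a hypothetical hr.employees table (the synchronous capture and queue names are illustrative):

```sql
-- Check which tables the synchronous capture can capture changes for
SELECT TABLE_OWNER, TABLE_NAME, ENABLED
  FROM DBA_SYNC_CAPTURE_TABLES;

-- Add a table rule to the synchronous capture rule set
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.employees',
    streams_type => 'sync_capture',
    streams_name => 'sync_capture',
    queue_name   => 'strmadmin.streams_queue');
END;
/
```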


Note:

Oracle recommends that you use the REMOVE_RULE procedure in the DBMS_STREAMS_ADM package to remove a rule from a synchronous capture rule set or drop a rule used by synchronous capture. However, you can also use the REMOVE_RULE or DROP_RULE procedure in the DBMS_RULE_ADM package to perform these actions.

Oracle Streams High Availability Environments

13 Oracle Streams High Availability Environments

The following topics contain information about Oracle Streams high availability environments:

Overview of Oracle Streams High Availability Environments

Configuring a high availability solution requires careful planning and analysis of failure scenarios. Database backups and physical standby databases provide physical copies of a source database for failover protection. Oracle Data Guard, in SQL apply mode, implements a logical standby database in a high availability environment. Because Oracle Data Guard is designed for a high availability environment, it handles most failure scenarios. However, some environments might require the flexibility available in Oracle Streams, so that they can take advantage of the extended feature set offered by Oracle Streams.

This chapter discusses some of the scenarios that can benefit from an Oracle Streams-based solution and explains Oracle Streams-specific issues that arise in high availability environments.

Protection from Failures

Oracle Real Application Clusters (Oracle RAC) is the preferred method for protecting from an instance or system failure. After a failure, services are provided by a surviving node in the cluster. However, clustering does not protect from user error, media failure, or disasters. These types of failures require redundant copies of the database. You can make both physical and logical copies of a database.

Physical copies are identical, block for block, with the source database, and are the preferred means of protecting data. There are three types of physical copies: database backup, mirrored or multiplexed database files, and a physical standby database.

Logical copies contain the same information as the source database, but the information can be stored differently within the database. Creating a logical copy of your database offers many advantages. However, you should always create a logical copy in addition to a physical copy, not instead of a physical copy.

A logical copy has the following benefits:

  • A logical copy can be open while being updated. This ability makes the logical copy useful for near real-time reporting.

  • A logical copy can have a different physical layout that is optimized for its own purpose. For example, it can contain additional indexes, and thereby improve the performance of reporting applications that use the logical copy.

  • A logical copy provides better protection from corruptions. Because data is logically captured and applied, it is very unlikely that a physical corruption can propagate to the logical copy of the database.

There are three types of logical copies of a database:

  • Logical standby databases

  • Oracle Streams replica databases

  • Application-maintained copies

Logical standby databases are best maintained using Oracle Data Guard in SQL apply mode. The rest of this chapter discusses Oracle Streams replica databases and application-maintained copies.


See Also:


Oracle Streams Replica Database

Like Oracle Data Guard in SQL apply mode, Oracle Streams can capture database changes, propagate them to destinations, and apply the changes at these destinations. Oracle Streams is optimized for replicating data. Oracle Streams can capture changes at a source database, and the captured changes can be propagated asynchronously to replica databases. This optimization can reduce the latency and can enable the replicas to lag the primary database by no more than a few seconds.

Nevertheless, you might choose to use Oracle Streams to configure and maintain a logical copy of your production database. Although using Oracle Streams might require additional work, it offers increased flexibility that might be required to meet specific business requirements. A logical copy configured and maintained using Oracle Streams is called a replica, not a logical standby, because it provides many capabilities that are beyond the scope of the normal definition of a standby database. Some of the requirements that can best be met using an Oracle Streams replica are listed in the following sections.


See Also:

Oracle Streams Replication Administrator's Guide for more information about replicating database changes with Oracle Streams

Updates at the Replica Database

The greatest difference between a replica database and a standby database is that a replica database can be updated and a standby database cannot. Applications that must update data can run against the replica, including jobs and reporting applications that log reporting activity. Replica databases also allow local applications to operate autonomously, protecting local applications from WAN failures and reducing latency for database operations.

Heterogeneous Platform Support

The production database and the replica do not need to run on the same platform. This flexibility makes better use of computing assets and facilitates migration between platforms.

Multiple Character Sets

Oracle Streams replicas can use different character sets than the production database. Data is automatically converted from one character set to another before being applied. This ability is extremely important if you have global operations and you must distribute data in multiple countries.

Mining the Online Redo Logs to Minimize Latency

If the replica is used for near real-time reporting, Oracle Streams can lag the production database by no more than a few seconds, providing up-to-date and accurate queries. Changes can be read from the online redo logs as the logs are written, rather than from the redo logs after archiving.

Fast Failover

Oracle Streams replicas can be open to read/write operations at all times. If the primary database fails, then Oracle Streams replicas can resume processing immediately. A small window of data might be left at the primary database, but this data is applied automatically when the primary database recovers. This ability can be important if you value fast recovery time over guaranteed zero data loss. Assuming the primary database can eventually be recovered, the data is only temporarily unavailable.

Single Capture for Multiple Destinations

In a complex environment, changes need only be captured once. These changes can then be sent to multiple destinations. When a capture process is used to capture changes, this ability enables more efficient use of the resources needed to mine the redo logs for changes.

When Not to Use Oracle Streams

As mentioned previously, there are scenarios in which you might choose to use Oracle Streams to meet some of your high availability requirements. One of the rules of high availability is to keep it simple. Oracle Data Guard is designed for high availability and is easier to implement than an Oracle Streams-based high availability solution. If you decide to leverage the flexibility offered by Oracle Streams, then you must be prepared to invest in the expertise and planning required to make an Oracle Streams-based solution robust. You might need to write scripts to implement much of the automation and management tools provided with Oracle Data Guard.

Application-Maintained Copies

The best availability can be achieved by designing the maintenance of logical copies of data directly into an application. The application knows what data is valuable and must be immediately moved off-site to guarantee no data loss. It can also synchronously replicate truly critical data, while asynchronously replicating less critical data. Applications maintain copies of data by either synchronously or asynchronously sending data to other applications that manage another logical copy of the data. Synchronous operations are performed using the distributed SQL or remote procedure features of the database. Asynchronous operations are performed using Advanced Queuing. Advanced Queuing is a database message queuing feature that is part of Oracle Streams.

Although the highest levels of availability can be achieved with application-maintained copies of data, great care is required to realize these results. Typically, a great amount of custom development is required. Many of the difficult boundary conditions that have been analyzed and solved with solutions such as Oracle Data Guard and Oracle Streams replication must be reanalyzed and solved by the custom application developers. In addition, standard solutions like Oracle Data Guard and Oracle Streams replication undergo stringent testing both by Oracle and its customers. It will take a great deal of effort before a custom-developed solution can exhibit the same degree of maturity. For these reasons, only organizations with substantial patience and expertise should attempt to build a high availability solution with application-maintained copies.


See Also:

Oracle Streams Advanced Queuing User's Guide for more information about developing applications with Advanced Queuing

Best Practices for Oracle Streams High Availability Environments

Implementing Oracle Streams in a high availability environment requires consideration of possible failure and recovery scenarios, and the implementation of procedures to ensure Oracle Streams continues to capture, propagate, and apply changes after a failure. Some of the issues that must be examined include the following:

Configuring Oracle Streams for High Availability

When configuring a solution using Oracle Streams, it is important to anticipate failures and design availability into the architecture. You must examine every database in the distributed system, and design a recovery plan in case of failure of that database. In some situations, failure of a database affects only services accessing data on that database. In other situations, a failure is multiplied, because it can affect other databases.

This section contains these topics:

Directly Connecting Every Database to Every Other Database

A configuration where each database is directly connected to every other database in the distributed system is the most resilient to failures, because a failure of one database will not prevent any other databases from operating or communicating. Assuming all data is replicated, services that were using the failed database can connect to surviving replicas.


Creating Hub-and-Spoke Configurations

Although configurations where each database is directly connected to every other database provide the best high availability characteristics, they can become difficult to manage when the number of databases becomes large. Hub-and-spoke configurations solve this manageability issue by funneling changes from many databases into a hub database, and then to other hub databases, or to other spoke databases. To add a new source or destination, you simply connect it to a hub database, rather than establishing connections to every other database.

A hub, however, becomes a very important node in your distributed environment. Should it fail, all communications flowing through the hub will fail. Due to the asynchronous nature of the messages propagating through the hub, it can be very difficult to redirect a stream from one hub to another. A better approach is to make the hub resilient to failures.

The same techniques used to make a single database resilient to failures also apply to distributed hub databases. Oracle recommends Oracle Real Application Clusters (Oracle RAC) to provide protection from instance and node failures. This configuration should be combined with a "no loss" physical standby database, to protect from disasters and data errors. Oracle does not recommend using an Oracle Streams replica as the only means to protect from disasters or data errors.


Local or Downstream Capture with Oracle Streams Capture Processes

Oracle Streams capture processes support capturing changes from the redo log on the local source database or at a downstream database at a different site. The choice of local capture or downstream capture has implications for availability. When a failure occurs at a source database, some changes might not have been captured. With local capture, those changes might not be available until the source database is recovered. In the event of a catastrophic failure, those changes might be lost.

Downstream capture at a remote database reduces the window of potential data loss in the event of a failure. Depending on the configuration, downstream capture enables you to guarantee all changes committed at the source database are safely copied to a remote site, where they can be captured and propagated to other databases and applications. Oracle Streams uses the same mechanism as Oracle Data Guard to copy redo data or log files to remote destinations, and supports the same operational modes, including maximum protection, maximum availability, and maximum performance.


Note:

Synchronous capture is always configured at the source database.

Recovering from Failures

The following sections provide best practices for recovering from failures.

This section contains these topics:

Automatic Capture Process Restart After a Failover

After a failure and restart of a single-node database, or a failure and restart of a database on another node in a cold failover cluster, the capture process automatically returns to the status it was in at the time of the failure. That is, if it was running at the time of the failure, then the capture process restarts automatically.
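After the instance restarts, you can confirm that each capture process returned to its prior status. A minimal monitoring query, run as the Oracle Streams administrator:

```sql
-- STATUS shows ENABLED, DISABLED, or ABORTED for each capture process;
-- STATUS_CHANGE_TIME shows when the status last changed.
SELECT capture_name, status, status_change_time
  FROM dba_capture;
```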


Database Links Reestablishment After a Failover

It is important to ensure that a propagation continues to function after a failure of a destination database instance. After a failure, a propagation job retries its database link sixteen times, with an increasing delay between retries, until the connection is reestablished. If the connection is not reestablished after sixteen attempts, then the propagation schedule is aborted.

If the database is restarted on the same node, or on a different node in a cold failover cluster, then the connection should be reestablished. In some circumstances, the database link could be waiting on a read or write, and will not detect the failure until a lengthy timeout expires. The timeout is controlled by the TCP_KEEPALIVE_INTERVAL TCP/IP parameter. In such circumstances, you should drop and re-create the database link to ensure that communication is reestablished quickly.

In a high availability environment, you can prepare scripts that will drop and re-create all necessary database links. After a failover, you can execute these scripts so that Oracle Streams can resume propagation.
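Such a script can be as simple as the following sketch. The link name, credentials, and connect string are placeholders for your environment:

```sql
-- Drop and re-create the propagation's database link so that it
-- reconnects immediately instead of waiting on a TCP timeout.
DROP DATABASE LINK dest.example.com;

CREATE DATABASE LINK dest.example.com
  CONNECT TO strmadmin IDENTIFIED BY password
  USING 'dest.example.com';
```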


Propagation Job Restart After a Failover

For messages to be propagated from a source queue to a destination queue, a propagation job must run on the instance owning the source queue. In a single-node database, or cold failover cluster, propagation resumes when the single database instance is restarted.

Automatic Apply Process Restart After a Failover

After a failure and restart of a single-node database, or a failure and restart of a database on another node in a cold failover cluster, the apply process automatically returns to the status it was in at the time of the failure. That is, if it was running at the time of the failure, then the apply process restarts automatically.


Introduction to Oracle Streams

1 Introduction to Oracle Streams

This chapter briefly describes the basic concepts and terminology related to Oracle Streams. These concepts are described in more detail in other chapters in this book and in the Oracle Streams Replication Administrator's Guide.

This chapter contains these topics:

Overview of Oracle Streams

Oracle Streams enables information sharing. In Oracle Streams, each unit of shared information is called a message, and you can share these messages in a stream. The stream can propagate information within a database or from one database to another. You specify which information is routed and the destinations to which it is routed. The result is a feature that provides greater functionality and flexibility than traditional solutions for capturing and managing messages, and sharing the messages with other databases and applications. Oracle Streams provides the capabilities needed to build and operate distributed enterprises and applications, data warehouses, and high availability solutions. You can use all of the capabilities of Oracle Streams at the same time. If your needs change, then you can implement a new capability of Oracle Streams without sacrificing existing capabilities.

Using Oracle Streams, you control what information is put into a stream, how the stream flows or is routed from database to database, what happens to messages in the stream as they flow into each database, and how the stream terminates. By configuring specific capabilities of Oracle Streams, you can address specific requirements. Based on your specifications, Oracle Streams can capture, stage, and manage messages in the database automatically, including, but not limited to, data manipulation language (DML) changes and data definition language (DDL) changes. You can also put user-defined messages into a stream, and Oracle Streams can propagate the information to other databases or applications automatically. When messages reach a destination, Oracle Streams can consume them based on your specifications.

Figure 1-1 shows the Oracle Streams information flow.

Figure 1-1 Oracle Streams Information Flow

Description of Figure 1-1 follows
Description of "Figure 1-1 Oracle Streams Information Flow"

What Can Oracle Streams Do?

The following sections provide an overview of what Oracle Streams can do:

Capture Messages at a Database

Oracle Streams provides two ways to capture database changes implicitly: capture processes and synchronous captures. A capture process can capture DML changes made to tables, schemas, or an entire database, and DDL changes. A synchronous capture can capture DML changes made to tables. Rules determine which changes are captured by a capture process or synchronous capture.

Database changes are recorded in the redo log for the database. A capture process captures changes from the redo log and formats each captured change into a message called a logical change record (LCR). The messages captured by a capture process are called captured LCRs.

A synchronous capture uses an internal mechanism to capture changes and format each captured change into an LCR. The messages captured by a synchronous capture are called persistent LCRs.

The rules used by a capture process or a synchronous capture determine which changes it captures. When changes are captured by a capture process, the database where changes are generated in the redo log is the source database. When changes are captured by a synchronous capture, the database where the synchronous capture is configured is the source database.

A capture process can capture changes locally at the source database, or it can capture changes remotely at a downstream database. A synchronous capture can only capture changes locally at the source database. Both a capture process and a synchronous capture enqueue logical change records (LCRs) into a queue. When a capture process or a synchronous capture captures changes, it is referred to as implicit capture.
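A minimal sketch of configuring implicit capture for a single table with the DBMS_STREAMS_ADM package. The capture, queue, and table names are examples only; this assumes the queue already exists:

```sql
-- Creates the capture process capture_hr (if it does not exist) and adds
-- a positive rule that captures DML changes to hr.employees.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'hr.employees',
    streams_type   => 'capture',
    streams_name   => 'capture_hr',
    queue_name     => 'strmadmin.streams_queue',
    include_dml    => TRUE,
    include_ddl    => FALSE,
    inclusion_rule => TRUE);
END;
/
```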

Users and applications can also enqueue messages manually. These messages can be LCRs, or they can be messages of a user-defined type called user messages. When users and applications enqueue messages manually, it is referred to as explicit capture.

Stage Messages in a Queue

Messages are stored (or staged) in a queue. These messages can be logical change records (LCRs) or user messages. Capture processes and synchronous captures enqueue messages into an ANYDATA queue, which can stage messages of different types. Users and applications can enqueue messages into an ANYDATA queue or into a typed queue. A typed queue can stage messages of one specific type only.
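An ANYDATA queue suitable for staging captured LCRs can be created with a single procedure call. This is a sketch; the queue table, queue, and user names are illustrative:

```sql
-- SET_UP_QUEUE creates the queue table and the ANYDATA queue, starts the
-- queue, and grants the specified user privileges to use it.
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.streams_queue_table',
    queue_name  => 'strmadmin.streams_queue',
    queue_user  => 'strmadmin');
END;
/
```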

Propagate Messages from One Queue to Another

Oracle Streams propagations can propagate messages from one queue to another. These queues can be in the same database or in different databases. Rules determine which messages are propagated by a propagation.

Oracle Streams enables you to configure an environment in which changes are shared through directed networks. In a directed network, propagated messages pass through one or more intermediate databases before arriving at a destination database where they are consumed. The messages might or might not be consumed at an intermediate database in addition to the destination database. Using Oracle Streams, you can choose which messages are propagated to each destination database, and you can specify the route messages will traverse on their way to a destination database.
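A propagation between two ANYDATA queues can be configured as in the following sketch. The queue names, database link, and source database are placeholders:

```sql
-- Creates the propagation propagation_hr (if it does not exist) and adds
-- a rule that propagates DML LCRs for hr.employees from the local queue
-- to the queue at dest.example.com over a database link of that name.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => 'hr.employees',
    streams_name           => 'propagation_hr',
    source_queue_name      => 'strmadmin.streams_queue',
    destination_queue_name => 'strmadmin.streams_queue@dest.example.com',
    include_dml            => TRUE,
    include_ddl            => FALSE,
    source_database        => 'src.example.com',
    queue_to_queue         => TRUE);
END;
/
```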

Consume Messages

A message is consumed when it is dequeued from a queue. An apply process can dequeue messages implicitly. A user, application, or messaging client can dequeue messages explicitly. The database where messages are consumed is called the destination database. In some configurations, the source database and the destination database can be the same.

Rules determine which messages are dequeued and processed by an apply process. An apply process can apply messages directly to database objects or pass messages to custom PL/SQL subprograms for processing.

Rules determine which messages are dequeued by a messaging client. A messaging client dequeues messages when it is invoked by an application or a user.
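Apply-side rules are created with the same procedure used for capture, with a different streams_type. A sketch, with illustrative names:

```sql
-- Creates the apply process apply_hr (if it does not exist) and adds a
-- rule so that it dequeues and applies DML LCRs for hr.employees that
-- originated at src.example.com.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'hr.employees',
    streams_type    => 'apply',
    streams_name    => 'apply_hr',
    queue_name      => 'strmadmin.streams_queue',
    include_dml     => TRUE,
    include_ddl     => FALSE,
    source_database => 'src.example.com');
END;
/
```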

Detect and Resolve Conflicts

An apply process detects conflicts automatically when directly applying LCRs in a replication environment. A conflict is a mismatch between the old values in an LCR and the expected data in a table. Typically, a conflict results when the same row in the source database and destination database is changed at approximately the same time.

When a conflict occurs, you need a mechanism to ensure that the conflict is resolved in accordance with your business rules. Oracle Streams offers a variety of prebuilt conflict handlers. Using these prebuilt handlers, you can define a conflict resolution system for each of your databases that resolves conflicts in accordance with your business rules. If you have a unique situation that prebuilt conflict resolution handlers cannot resolve, then you can build your own conflict resolution handlers.

If a conflict is not resolved, or if a handler procedure raises an error, then all messages in the transaction that raised the error are saved in the error queue for later analysis and possible reexecution.
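A prebuilt update conflict handler can be declared as in this sketch: when an update conflict occurs on hr.employees, keep the row whose time_stamp column is greatest. The resolution column and column list are illustrative and assume the table carries a time_stamp column:

```sql
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  -- Columns governed by this handler; the resolution column decides
  -- which side wins (MAXIMUM keeps the newer time_stamp).
  cols(1) := 'salary';
  cols(2) := 'time_stamp';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.employees',
    method_name       => 'MAXIMUM',
    resolution_column => 'time_stamp',
    column_list       => cols);
END;
/
```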

Transform Messages

A rule-based transformation is any modification to a message that results when a rule in a positive rule set evaluates to TRUE. There are two types of rule-based transformations: declarative and custom.

Declarative rule-based transformations cover a set of common transformation scenarios for row LCRs, including renaming a schema, renaming a table, adding a column, renaming a column, keeping columns, and deleting a column. You specify (or declare) such a transformation using Oracle Enterprise Manager or a procedure in the DBMS_STREAMS_ADM package. Oracle Streams performs declarative transformations internally, without invoking PL/SQL.
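For example, a rename-table transformation can be declared as in this sketch. The rule name and table names are placeholders; the rule is assumed to already exist in a positive rule set:

```sql
-- Row LCRs that satisfy rule strmadmin.employees12 are modified so that
-- changes to hr.employees are applied to hr.employees_hist instead.
BEGIN
  DBMS_STREAMS_ADM.RENAME_TABLE(
    rule_name       => 'strmadmin.employees12',
    from_table_name => 'hr.employees',
    to_table_name   => 'hr.employees_hist',
    step_number     => 0,
    operation       => 'ADD');
END;
/
```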

A custom rule-based transformation requires a user-defined PL/SQL function to perform the transformation. Oracle Streams invokes the PL/SQL function to perform the transformation. A custom rule-based transformation can modify either LCRs or user messages. For example, a custom rule-based transformation can change the data type of a particular column in an LCR.

Either type of rule-based transformation can occur at the following times:

  • During enqueue of a message by a capture process, which can be useful for formatting a message in a manner appropriate for all destination databases

  • During propagation of a message, which can be useful for transforming a message before it is sent to a specific remote site

  • During dequeue of a message by an apply process or messaging client, which can be useful for formatting a message in a manner appropriate for a specific destination database

When a transformation is performed during apply, an apply process can apply the transformed message directly or send the transformed message to an apply handler for processing.


Note:

  • A rule must be in a positive rule set for its rule-based transformation to be invoked. A rule-based transformation specified for a rule in a negative rule set is ignored by capture processes, propagations, apply processes, and messaging clients.

  • Throughout this document, "rule-based transformation" is used when the text applies to both declarative and custom rule-based transformations. This document distinguishes between the two types of rule-based transformations when necessary.


Track Messages with Oracle Streams Tags

Every redo entry in the redo log has a tag associated with it. The data type of the tag is RAW. By default, when a user or application generates redo entries, the value of the tag is NULL for each redo entry, and a NULL tag consumes no space in the redo entry. The size limit for a tag value is 2000 bytes.

In Oracle Streams, rules can have conditions relating to tag values to control the behavior of Oracle Streams clients. For example, you can use a tag to determine whether an LCR contains a change that originated in the local database or at a different database, so that you can avoid change cycling (sending an LCR back to the database where it originated). Also, you can use a tag to specify the set of destination databases for each LCR. Tags can be used for other LCR tracking purposes as well.

You can specify Oracle Streams tags for redo entries generated by a certain session or by an apply process. These tags then become part of the LCRs captured by a capture process or synchronous capture. Typically, tags are used in Oracle Streams replication environments, but you can use them whenever it is necessary to track database changes and LCRs.
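A session tag is set with the DBMS_STREAMS package. The tag value here is an arbitrary example; what matters is that rule conditions at other databases can test for it, for instance to avoid change cycling:

```sql
-- Mark redo generated by this session with a non-NULL tag.
BEGIN
  DBMS_STREAMS.SET_TAG(tag => HEXTORAW('1D'));
END;
/

-- The current session tag can be checked with:
SELECT DBMS_STREAMS.GET_TAG() FROM DUAL;
```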

Share Information with Non-Oracle Databases

In addition to information sharing between Oracle databases, Oracle Streams supports heterogeneous information sharing between Oracle databases and non-Oracle databases.

What Are the Uses of Oracle Streams?

The following topics briefly describe some of the reasons for using Oracle Streams:

In some cases, Oracle Streams components provide an infrastructure for various features of Oracle.

Data Replication

Oracle Streams can capture data manipulation language (DML) and data definition language (DDL) changes made to database objects and replicate those changes to one or more other databases. An Oracle Streams capture process or synchronous capture captures changes made to source database objects and formats them into LCRs, which can be propagated to destination databases and then applied by Oracle Streams apply processes.

The destination databases can allow DML and DDL changes to the same database objects, and these changes might or might not be propagated to the other databases in the environment. In other words, you can configure an Oracle Streams environment with one database that propagates changes, or you can configure an environment where changes are propagated between databases bidirectionally. Also, the tables for which data is shared do not need to be identical copies at all databases. Both the structure and the contents of these tables can differ at different databases, and the information in these tables can be shared between these databases.


Data Warehouse Loading

Data warehouse loading is a special case of data replication. Some of the most critical tasks in creating and maintaining a data warehouse include refreshing existing data, and adding new data from the operational databases. Oracle Streams components can capture changes made to a production system and send those changes to a staging database or directly to a data warehouse or operational data store. Oracle Streams capture of redo data with a capture process avoids unnecessary overhead on the production systems. Oracle Streams provides a "one-step" procedure (DBMS_STREAMS_ADM.MAINTAIN_CHANGE_TABLE) that configures Oracle Streams to record the changes made to a table. Support for data transformations and user-defined apply procedures enables the necessary flexibility to reformat data or update warehouse-specific data fields as data is loaded. In addition, Change Data Capture uses some of the components of Oracle Streams to identify data that has changed so that this data can be loaded into a data warehouse.
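A hedged sketch of the "one-step" procedure follows. The change table name, column list, and other values are illustrative, and the procedure accepts many additional parameters (capture, propagation, and apply names, source and destination databases, and so on) that default when omitted; see the package documentation for the full list:

```sql
-- Configures Oracle Streams to record changes to hr.employees in a
-- change table, capturing both old and new column values.
BEGIN
  DBMS_STREAMS_ADM.MAINTAIN_CHANGE_TABLE(
    change_table_name => 'strmadmin.emp_change_table',
    source_table_name => 'hr.employees',
    column_type_list  => 'employee_id NUMBER(6), salary NUMBER(8,2)',
    capture_values    => 'both');
END;
/
```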

Database Availability During Upgrade and Maintenance Operations

You can use the features of Oracle Streams to achieve little or no database down time during database upgrade and maintenance operations. Maintenance operations include migrating a database to a different platform, migrating a database to a different character set, modifying database schema objects to support upgrades to user-created applications, and applying an Oracle software patch.

Message Queuing

Oracle Streams Advanced Queuing (AQ) enables user applications to enqueue messages into a queue, propagate messages to subscribing queues, notify user applications that messages are ready for consumption, and dequeue messages at the destination. A queue can be configured to stage messages of a particular type only, or a queue can be configured as an ANYDATA queue. Messages of almost any type can be wrapped in an ANYDATA wrapper and staged in ANYDATA queues. Oracle Streams AQ supports all the standard features of message queuing systems, including multiconsumer queues, publish and subscribe, content-based routing, Internet propagation, transformations, and gateways to other messaging subsystems.

You can create a queue at a database, and applications can enqueue messages into the queue explicitly. Subscribing applications or messaging clients can dequeue messages directly from this queue. If an application is remote, then a queue can be created in a remote database that subscribes to messages published in the source queue. The destination application can dequeue messages from the remote queue. Alternatively, the destination application can dequeue messages directly from the source queue using a variety of standard protocols.
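A destination application can dequeue an ANYDATA message explicitly as in this sketch. The queue name and consumer name are placeholders:

```sql
DECLARE
  deq_opts  DBMS_AQ.DEQUEUE_OPTIONS_T;
  msg_props DBMS_AQ.MESSAGE_PROPERTIES_T;
  msg_id    RAW(16);
  payload   ANYDATA;
  txt       VARCHAR2(4000);
  rc        PLS_INTEGER;
BEGIN
  -- Dequeue on behalf of a named subscriber without blocking.
  deq_opts.consumer_name := 'order_app';
  deq_opts.wait          := DBMS_AQ.NO_WAIT;
  DBMS_AQ.DEQUEUE(
    queue_name         => 'strmadmin.order_queue',
    dequeue_options    => deq_opts,
    message_properties => msg_props,
    payload            => payload,
    msgid              => msg_id);
  -- Extract the VARCHAR2 wrapped in the ANYDATA payload.
  rc := payload.GetVarchar2(txt);
  COMMIT;
END;
/
```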


Event Management and Notification

Business events are valuable communications between applications or organizations. An application can enqueue messages that represent events into a queue explicitly, or an Oracle Streams capture process or synchronous capture can capture database events and encapsulate them into messages called LCRs. These messages can be the results of DML or DDL changes. Propagations can propagate messages in a stream through multiple queues. Finally, a user application can dequeue messages explicitly, or an Oracle Streams apply process can dequeue messages implicitly. An apply process can reenqueue these messages explicitly into the same queue or a different queue if necessary.

You can configure queues to retain explicitly-enqueued messages after consumption for a specified period of time. This capability enables you to use Oracle Streams Advanced Queuing (AQ) as a business event management system. Oracle Streams AQ stores all messages in the database in a transactional manner, where they can be automatically audited and tracked. You can use this audit trail to extract intelligence about the business operations.

Oracle Streams capture processes, synchronous captures, propagations, apply processes, and messaging clients perform actions based on rules. You specify which events are captured, propagated, applied, and dequeued using rules, and a built-in rules engine evaluates events based on these rules. The ability to capture events and propagate them to relevant consumers based on rules means that you can use Oracle Streams for event notification. Messages representing events can be staged in a queue and dequeued explicitly by a messaging client or an application, and then actions can be taken based on these events, which can include an e-mail notification, or passing the message to a wireless gateway for transmission to a cell phone or pager.

Data Protection

One solution for data protection is to create a local or remote copy of a production database. In the event of human error or a catastrophe, you can use the copy to resume processing.

You can use Oracle Data Guard SQL Apply, a data protection feature that uses some of the same infrastructure as Oracle Streams, to create and maintain a logical standby database, which is a logically equivalent standby copy of a production database. As in Oracle Streams replication, a capture process captures changes in the redo log and formats these changes into LCRs. These LCRs are applied at the standby databases. The standby databases are open for read/write and can include specialized indexes or other database objects. Therefore, these standby databases can be queried as updates are applied.

With a logical standby database, it is important to move the updates to the remote site as soon as possible. Doing so ensures that, in the event of a failure, few or no transactions are lost. By directly and synchronously writing the redo logs at the remote database, you can achieve no data loss in the event of a disaster. At the standby system, the changes are captured and directly applied to the standby database with an apply process.

Sample Oracle Streams Configurations

Each of the following sections provides an overview of a sample Oracle Streams configuration:

Sample Hub-and-Spoke Replication Configuration

Figure 1-2 shows a sample hub-and-spoke replication configuration. A hub-and-spoke replication configuration typically is used to distribute information to multiple target databases and to consolidate information from multiple databases to a single database.

A hub-and-spoke replication configuration is one in which a central database, or hub, communicates with one or more secondary databases, or spokes. The spokes do not communicate directly with each other. In a hub-and-spoke replication configuration, the spokes might or might not allow changes to the replicated database objects.

In the sample hub-and-spoke replication configuration shown in Figure 1-2, there is one hub database and two spoke databases. The spoke databases allow changes to the replicated database objects.

Figure 1-2 Sample Hub-and-Spoke Replication Configuration

Description of Figure 1-2 follows
Description of "Figure 1-2 Sample Hub-and-Spoke Replication Configuration"

For more information about this configuration, see Oracle Database 2 Day + Data Replication and Integration Guide.

Sample Replication Configuration with Downstream Capture

Figure 1-3 shows a sample replication configuration that uses a downstream capture process. Downstream capture means that the capture process runs on a remote database instead of the source database. Using downstream capture removes the capture workload from the production database.

In the sample replication configuration shown in Figure 1-3, the downstream capture process runs at the remote database dest.example.com, and the redo data is sent from the source database src.example.com to the remote database. At the remote database, a downstream capture process captures the changes in the redo data sent from the source database and an apply process applies these changes to the local database objects.

Figure 1-3 Sample Replication Configuration with Downstream Capture

Description of Figure 1-3 follows
Description of "Figure 1-3 Sample Replication Configuration with Downstream Capture"

For more information about this configuration, see Oracle Database 2 Day + Data Replication and Integration Guide.

Sample Replication Configuration That Uses Synchronous Captures

Figure 1-4 shows a sample replication configuration that uses synchronous captures to capture changes instead of capture processes. You can use a synchronous capture replication configuration to replicate changes to tables with infrequent data changes in a highly active database or in situations where capturing changes from the redo logs is not possible.

Figure 1-4 Sample Replication Configuration with Synchronous Captures

Description of Figure 1-4 follows
Description of "Figure 1-4 Sample Replication Configuration with Synchronous Captures"

For more information about this configuration, see Oracle Database 2 Day + Data Replication and Integration Guide.

Sample N-Way Replication Configuration

Figure 1-5 shows a sample n-way replication configuration. An n-way replication configuration typically is used in an environment with several peer databases and each database must replicate data with each of the other databases. An n-way replication configuration can provide load balancing, and it can provide failover protection if a single database becomes unavailable.

An n-way replication configuration is one in which each database communicates directly with each other database in the environment. The changes made to replicated database objects at one database are captured and sent directly to each of the other databases in the environment, where they are applied.

In the sample n-way replication configuration shown in Figure 1-5, each of the three databases captures changes to the replicated database objects and sends these changes to the other two databases in the configuration. Apply processes at each database apply the changes sent from the other two databases.

Figure 1-5 Sample N-Way Replication Configuration

Description of Figure 1-5 follows
Description of "Figure 1-5 Sample N-Way Replication Configuration"

For more information about this configuration, see Oracle Streams Extended Examples.

Sample Configuration That Performs Capture and Apply in a Single Database

Figure 1-6 shows a sample configuration that captures database changes with a capture process and applies these changes with an apply process in a single database. In this configuration, the apply process reenqueues the changes into the queue for processing by an application. Also, a procedure DML handler inserts rows that were deleted from the hr.employees table into an hr.emp_del table.

Figure 1-6 Sample Single Database Capture and Apply Configuration

Description of Figure 1-6 follows
Description of "Figure 1-6 Sample Single Database Capture and Apply Configuration"

For more information about this configuration, see Oracle Streams Extended Examples.

Sample Messaging Configuration

Figure 1-7 shows a sample messaging configuration. A messaging configuration sends messages from one queue to another queue. The two queues can be in the same database or in different databases. The messages can be dequeued and processed by applications in a customized way.

In the sample messaging configuration shown in Figure 1-7, a trigger at one database creates and enqueues messages. A propagation sends the messages to another database, where a PL/SQL procedure dequeues the messages and processes them.

Figure 1-7 Sample Messaging Configuration

Description of Figure 1-7 follows
Description of "Figure 1-7 Sample Messaging Configuration"

For more information about this configuration, see Oracle Database 2 Day + Data Replication and Integration Guide.

Oracle Streams Documentation Roadmap

Oracle Streams provides many options for setting up, managing, and monitoring information-sharing environments. This section provides a documentation roadmap to help you find the documentation you need.

The Oracle Streams documentation set includes the following documents:

  • Oracle Database 2 Day + Data Replication and Integration Guide contains the essential concepts related to Oracle Streams, examples that set up the most common replication and messaging environments, and basic instructions for managing and monitoring Oracle Streams components. The instructions in this document show you how to complete tasks using Oracle Enterprise Manager when possible. Some instructions show you how to complete tasks using SQL*Plus and Oracle-supplied packages.

  • Oracle Streams Concepts and Administration contains detailed conceptual information about Oracle Streams, detailed instructions for managing Oracle Streams components using Oracle-supplied packages, and detailed instructions for monitoring Oracle Streams components with data dictionary views.

  • Oracle Streams Replication Administrator's Guide contains conceptual information that relates to Oracle Streams replication environments, information about configuring an Oracle Streams replication environment using Oracle-supplied packages, and information about managing an Oracle Streams replication environment using Oracle-supplied packages.

  • Oracle Streams Extended Examples contains detailed examples that configure different types of Oracle Streams environments, including replication environments, using Oracle-supplied packages.

  • Oracle Streams Advanced Queuing User's Guide contains conceptual information about Oracle Streams messaging (Advanced Queuing) environments, information about configuring a messaging environment, and information about managing a messaging environment using Oracle-supplied packages and other administrative interfaces.

  • Oracle Database PL/SQL Packages and Types Reference contains reference information about the Oracle-supplied packages and types related to Oracle Streams.

  • Oracle Database Reference contains reference information about the data dictionary views related to Oracle Streams.

  • The Oracle Streams online help in Oracle Enterprise Manager contains instructions for setting up, managing, and monitoring an Oracle Streams environment using Oracle Enterprise Manager.

This documentation roadmap is intended to guide you to the information you need in these documents.

This section contains the following topics:

Documentation for Learning About Oracle Streams

Before setting up an Oracle Streams environment, it is best to understand the features of Oracle Streams and how you can use them. Table 1-1 helps you find conceptual information about Oracle Streams.

Table 1-1 Documentation for Learning About Oracle Streams

For conceptual information about | See

apply processes

Oracle Database 2 Day + Data Replication and Integration Guide for essential information about applying database changes and other types of messages with apply processes

"Implicit Consumption with an Apply Process" for general apply process concepts

Chapter 10, "Advanced Apply Process Concepts" for advanced apply process concepts, such as information about applying changes with dependencies and applying DML and DDL changes

capture processes

Oracle Database 2 Day + Data Replication and Integration Guide for essential information about capturing database changes that were recorded in the redo log using capture processes

"Implicit Capture with an Oracle Streams Capture Process" for general capture process concepts

Chapter 7, "Advanced Capture Process Concepts" for advanced capture process concepts, such as information about multiple capture processes in a single database and capture process checkpoints

Oracle Streams Replication Administrator's Guide for conceptual information about supplemental logging

capturing messages with applications (explicit capture)

"Explicit Capture by Applications" for an overview of capturing messages with applications

Oracle Database 2 Day + Data Replication and Integration Guide for essential information about capturing messages with applications

Oracle Streams Advanced Queuing User's Guide for detailed information about capturing messages with applications

combined capture and apply optimization

Chapter 12, "Combined Capture and Apply Optimization" for information about improving performance by sending database changes more efficiently from capture processes to apply processes in a replication environment

comparing and converging data

Oracle Database 2 Day + Data Replication and Integration Guide for essential information about how to compare database objects at two different databases and about how to converge differences in these database objects

Oracle Streams Replication Administrator's Guide for detailed information about comparing database objects at two different databases and converging differences in these database objects

conflicts and conflict resolution

Oracle Database 2 Day + Data Replication and Integration Guide for essential information about conflicts that result when changes are made to the same row in two or more replicated tables at nearly the same time, and for essential information about resolving these conflicts automatically

Oracle Streams Replication Administrator's Guide for detailed information about conflicts and conflict resolution

consuming messages with applications (explicit consumption)

"Explicit Consumption with Manual Dequeue" for an overview of consuming messages with applications

Oracle Database 2 Day + Data Replication and Integration Guide for essential information about consuming messages with applications

Oracle Streams Advanced Queuing User's Guide for detailed information about consuming messages with applications

heterogeneous information sharing


Oracle Database 2 Day + Data Replication and Integration Guide for essential information about working with non-Oracle databases

Oracle Database XStream Guide for information about using XStream for heterogeneous information sharing

Oracle Streams Replication Administrator's Guide for detailed information about working with non-Oracle databases

high availability

Chapter 13, "Oracle Streams High Availability Environments"

Oracle Database High Availability Overview for information about your high availability options

information provisioning

Chapter 35, "Information Provisioning Concepts" for information about moving or copying large amounts of information efficiently

instantiation


"Instantiation in an Oracle Streams Environment" for essential information about preparing database objects for replication at two or more databases

Oracle Streams Replication Administrator's Guide for detailed information about instantiation

logical change records (LCRs)

"Logical Change Records (LCRs)" for information about how Oracle Streams uses messages that describe database changes

messaging clients

"Explicit Consumption with a Messaging Client"


Oracle Streams best practices

Oracle Streams Replication Administrator's Guide


Oracle Streams capabilities

"What Can Oracle Streams Do?"


Oracle Streams interoperability with other Oracle Database components

Appendix A, "How Oracle Streams Works with Other Database Components"


Oracle Streams restrictions

Appendix B, "Oracle Streams Restrictions"


Oracle Streams uses

"What Are the Uses of Oracle Streams?"


propagations

Oracle Database 2 Day + Data Replication and Integration Guide for essential information about sending messages between queues

"Message Propagation Between Queues" for general propagation concepts

Chapter 9, "Advanced Propagation Concepts" for advanced propagation concepts

queues

"Queues" for essential information about how queues store messages

Chapter 8, "Advanced Queue Concepts" for advanced queue concepts

Oracle Streams Advanced Queuing User's Guide for detailed information about queues

rules

Oracle Database 2 Day + Data Replication and Integration Guide for essential information about rules and how Oracle Streams uses them

Chapter 5, "How Rules Are Used in Oracle Streams" for information about the ways in which rules determine the behavior of Oracle Streams clients

Chapter 11, "Advanced Rule Concepts" for advanced rule concepts

rule-based transformations

Oracle Database 2 Day + Data Replication and Integration Guide for essential information about how rule-based transformations support non-identical replicas of database objects

Chapter 6, "Rule-Based Transformations" for detailed information about rule-based transformations

synchronous captures

Oracle Database 2 Day + Data Replication and Integration Guide for essential information about capturing database changes using synchronous captures

"Implicit Capture with Synchronous Capture" for detailed information about synchronous captures

tags

Oracle Database 2 Day + Data Replication and Integration Guide for essential information about how tags can add additional information to captured database changes

Oracle Streams Replication Administrator's Guide for detailed information about tags

user messages

"User Messages" for essential information about messages that are created and enqueued by users and applications

Oracle Streams Advanced Queuing User's Guide for detailed information about user messages


Documentation About Setting Up or Extending an Oracle Streams Environment

You can set up many different types of Oracle Streams environments, and you have several options for setting them up. Table 1-2 helps you find the documentation you need to set up an Oracle Streams environment.

Table 1-2 Documentation About Setting Up or Extending an Oracle Streams Environment

For instructions about | See

setting up an Oracle Streams replication environment using Oracle Enterprise Manager

Oracle Database 2 Day + Data Replication and Integration Guide for examples that use Oracle Enterprise Manager to set up the most common types of Oracle Streams replication environments

Online help for the Setup Streams Replication Wizard in Oracle Enterprise Manager

Oracle Streams Replication Administrator's Guide for instructions about opening the Setup Streams Replication Wizard in Oracle Enterprise Manager

setting up an Oracle Streams replication environment using a one-step procedure

Oracle Streams Replication Administrator's Guide for detailed instructions about using the one-step procedures in the DBMS_STREAMS_ADM package, including information about decisions to make and tasks to complete before running a procedure

Oracle Database PL/SQL Packages and Types Reference for reference information about the one-step procedures in the DBMS_STREAMS_ADM package

setting up an Oracle Streams replication environment by configuring components individually

Oracle Streams Replication Administrator's Guide for step-by-step instructions to set up an Oracle Streams replication environment by configuring individual components in the correct order

Oracle Database 2 Day + Data Replication and Integration Guide for an example that provides step-by-step instructions for setting up an Oracle Streams replication environment that uses synchronous captures

Oracle Streams Extended Examples for the following examples:

  • An example that provides step-by-step instructions for setting up a simple replication environment that replicates changes to a single table

  • An example that provides step-by-step instructions for setting up a heterogeneous replication environment that includes a rule-based transformation

  • An example that provides step-by-step instructions for setting up an n-way replication environment with conflict resolution

Oracle Database PL/SQL Packages and Types Reference for reference information about the packages that can set up an Oracle Streams replication environment. These packages are described in "Oracle-Supplied PL/SQL Packages".

extending an Oracle Streams replication environment using Oracle Enterprise Manager

Oracle Database 2 Day + Data Replication and Integration Guide for examples that use Oracle Enterprise Manager to extend the most common types of Oracle Streams replication environments by adding databases and tables

extending an Oracle Streams replication environment using a one-step procedure

Oracle Streams Replication Administrator's Guide for examples that use the one-step procedures in the DBMS_STREAMS_ADM package to extend the most common types of Oracle Streams replication environments by adding databases and tables

Oracle Database PL/SQL Packages and Types Reference for reference information about the one-step procedures that can extend an Oracle Streams replication environment

extending an Oracle Streams replication environment by configuring components individually

Oracle Streams Replication Administrator's Guide for step-by-step instructions to extend an Oracle Streams replication environment by configuring individual components in the correct order

Oracle Streams Extended Examples for an example that provides step-by-step instructions for extending a heterogeneous replication environment

Oracle Database PL/SQL Packages and Types Reference for reference information about the packages that can extend an Oracle Streams replication environment. These packages are described in "Oracle-Supplied PL/SQL Packages".

setting up an Oracle Streams messaging environment

Oracle Database 2 Day + Data Replication and Integration Guide for the following examples:

  • An example that provides step-by-step instructions for setting up a messaging environment that sends messages between databases

  • An example that provides step-by-step instructions for setting up message notifications that inform applications when new messages are in a queue

Oracle Streams Advanced Queuing User's Guide for detailed instructions about setting up messaging environments

Oracle Database PL/SQL Packages and Types Reference for reference information about the packages used to set up messaging environments, including DBMS_STREAMS_ADM, DBMS_STREAMS_MESSAGING, DBMS_AQADM, and DBMS_AQ

Oracle Streams best practices

Oracle Streams Replication Administrator's Guide for information about the best practices to follow when setting up an Oracle Streams environment

setting up a tablespace repository

"Using a Tablespace Repository"


setting up a file group repository

"Using a File Group Repository"



Documentation About Managing an Oracle Streams Environment

You can use Oracle-supplied PL/SQL packages and Oracle Enterprise Manager to manage an Oracle Streams environment. Table 1-3 helps you find the documentation you need to manage an Oracle Streams environment.

Table 1-3 Documentation About Managing an Oracle Streams Environment

For instructions about managing | See

apply processes

Oracle Database 2 Day + Data Replication and Integration Guide for information about starting an apply process, stopping an apply process, and setting apply process parameters using Oracle Enterprise Manager

Oracle Enterprise Manager online help for information about managing apply handlers and apply tags using Oracle Enterprise Manager, and about dropping apply processes using Oracle Enterprise Manager

Chapter 17, "Managing Oracle Streams Information Consumption" for information about managing apply processes using Oracle-supplied packages

capture processes

Oracle Database 2 Day + Data Replication and Integration Guide for information about starting a capture process, stopping a capture process, and setting capture process parameters using Oracle Enterprise Manager

Oracle Enterprise Manager online help for information about setting a first SCN or start SCN for a capture process using Oracle Enterprise Manager, and about dropping a capture process using Oracle Enterprise Manager

"Managing a Capture Process" for information about managing capture processes using Oracle-supplied packages

Oracle Streams Replication Administrator's Guide for information about managing supplemental logging

changing the DBID or global name of an Oracle Streams database

Oracle Streams Replication Administrator's Guide


comparing and converging data

Oracle Database 2 Day + Data Replication and Integration Guide for essential information about using the DBMS_COMPARISON package and its related data dictionary views

Oracle Streams Replication Administrator's Guide for detailed information about using the DBMS_COMPARISON package and its related data dictionary views

conflicts and conflict resolution

Oracle Streams Replication Administrator's Guide for information about avoiding conflicts and configuring conflict resolution

export/import and Oracle Streams

"Performing Full Database Export/Import in an Oracle Streams Environment"


information provisioning

Chapter 36, "Using Information Provisioning"


instantiation


Oracle Streams Replication Administrator's Guide for information about performing instantiations

logical change records (LCRs)

"Managing Extra Attributes in Captured LCRs"

Oracle Streams Replication Administrator's Guide


Oracle Streams best practices

Oracle Streams Replication Administrator's Guide for information about the best practices to follow when managing an Oracle Streams environment

Oracle Streams replication environments

Oracle Streams Replication Administrator's Guide


Oracle-supplied packages related to Oracle Streams

Oracle Database PL/SQL Packages and Types Reference for reference information about the packages that you can use to manage an Oracle Streams environment. These packages are briefly described in "Oracle-Supplied PL/SQL Packages".

point-in-time recovery and Oracle Streams

Oracle Streams Replication Administrator's Guide


propagations

Oracle Database 2 Day + Data Replication and Integration Guide for information about enabling and disabling propagations using Oracle Enterprise Manager

Oracle Enterprise Manager online help for information about scheduling, unscheduling, and dropping propagations using Oracle Enterprise Manager

"Managing Oracle Streams Propagations and Propagation Jobs" for information about managing propagations using Oracle-supplied packages

Oracle Streams Advanced Queuing User's Guide for information about managing propagations using Oracle-supplied packages and other administrative interfaces

queues

Oracle Database 2 Day + Data Replication and Integration Guide for information about modifying queues and queue tables using Oracle Enterprise Manager

Oracle Enterprise Manager online help for information about managing queues, queue tables, and Advanced Queuing transformations using Oracle Enterprise Manager

Oracle Streams Advanced Queuing User's Guide for information about managing queues using Oracle-supplied packages and other administrative interfaces

"Managing Queues"

removing an Oracle Streams configuration

"Removing an Oracle Streams Configuration"


resynchronizing a source database

Oracle Streams Replication Administrator's Guide


rules

Oracle Enterprise Manager online help for information about managing rules using Oracle Enterprise Manager

Chapter 18, "Managing Rules" for information about managing rules using Oracle-supplied packages

rule-based transformations

Oracle Enterprise Manager online help for information about managing rule-based transformations using Oracle Enterprise Manager

Chapter 19, "Managing Rule-Based Transformations" for information about managing rule-based transformations using Oracle-supplied packages

synchronous captures

"Managing a Synchronous Capture" for information about managing synchronous captures using Oracle-supplied packages

tags

Oracle Streams Replication Administrator's Guide


troubleshooting

Oracle Database 2 Day + Data Replication and Integration Guide for information about responding to Oracle Streams alerts and managing apply errors using Oracle Enterprise Manager

Oracle Enterprise Manager online help for information about troubleshooting an Oracle Streams environment using Oracle Enterprise Manager

Part V, "Troubleshooting an Oracle Streams Environment" for information about troubleshooting an Oracle Streams environment

unavailable destination database

Oracle Streams Replication Administrator's Guide for information about splitting off an unavailable destination database from a replication environment and merging the database back into the replication environment when it becomes available again


Documentation About Monitoring an Oracle Streams Environment

You primarily use Oracle-supplied PL/SQL packages, data dictionary views, and Oracle Enterprise Manager to monitor an Oracle Streams environment. Table 1-4 helps you find the documentation you need to monitor an Oracle Streams environment.

Table 1-4 Documentation About Monitoring an Oracle Streams Environment

For instructions about monitoring | See

apply processes

Oracle Database 2 Day + Data Replication and Integration Guide for information about monitoring apply process properties and statistics using Oracle Enterprise Manager

Oracle Enterprise Manager online help for information about monitoring apply process parameters, apply handlers, and apply errors using Oracle Enterprise Manager

Chapter 26, "Monitoring Oracle Streams Apply Processes" for information about monitoring apply processes using data dictionary views

capture processes

Oracle Database 2 Day + Data Replication and Integration Guide for information about monitoring capture process properties and statistics using Oracle Enterprise Manager

Oracle Enterprise Manager online help for information about monitoring capture process parameters using Oracle Enterprise Manager

"Monitoring a Capture Process" for information about monitoring capture processes using data dictionary views

combined capture and apply optimization

"Determining Which Capture Processes Use Combined Capture and Apply"

"Determining Which Apply Processes Use Combined Capture and Apply"


compatibility

"Monitoring Compatibility in an Oracle Streams Environment" for information about listing database objects that are not compatible with Oracle Streams clients

conflicts and conflict resolution

Oracle Database 2 Day + Data Replication and Integration Guide for information about viewing update conflict handlers using data dictionary views

Oracle Streams Replication Administrator's Guide for information about monitoring conflict detection and update conflict handlers using data dictionary views

data dictionary views related to Oracle Streams

Chapter 22, "Oracle Streams Static Data Dictionary Views"

Oracle Database Reference


information provisioning

Chapter 37, "Monitoring File Group and Tablespace Repositories"


instantiation


Oracle Streams Replication Administrator's Guide


logical change records (LCRs)

Oracle Streams Replication Administrator's Guide for information about tracking LCRs through a stream

messaging

Oracle Database 2 Day + Data Replication and Integration Guide for information about viewing the messages in a queue, queue statistics, and queue subscribers using Oracle Enterprise Manager

Oracle Streams Advanced Queuing User's Guide for information about monitoring messaging environments using data dictionary views

"Monitoring Queues and Messaging"

"Monitoring Buffered Queues"

Oracle Streams administrators

"Monitoring Oracle Streams Administrators and Other Oracle Streams Users"


Oracle Streams pool


"Monitoring the Oracle Streams Pool"


Oracle Streams topology and performance statistics

Chapter 23, "Monitoring the Oracle Streams Topology and Performance"


propagations

Oracle Database 2 Day + Data Replication and Integration Guide for information about monitoring propagation properties and statistics using Oracle Enterprise Manager

"Monitoring Oracle Streams Propagations and Propagation Jobs" for information about monitoring propagations using data dictionary views

rules

Oracle Enterprise Manager online help for information about monitoring rules using Oracle Enterprise Manager

Chapter 27, "Monitoring Rules" for information about monitoring rules using data dictionary views

rule-based transformations

Chapter 28, "Monitoring Rule-Based Transformations" for information about monitoring rule-based transformations using data dictionary views

synchronous captures

"Monitoring a Synchronous Capture" for information about monitoring synchronous captures using data dictionary views

Note: Oracle Enterprise Manager currently does not support monitoring synchronous captures.

tags

Oracle Streams Replication Administrator's Guide for information about monitoring tags using data dictionary views


Documentation About Using Oracle Streams for Upgrade and Maintenance

You can use Oracle Streams to achieve little or no down time for one-time operations, such as upgrading a database. Table 1-5 helps you find the documentation you need to perform one-time operations with Oracle Streams.

Table 1-5 Documentation About Data Upgrade and Maintenance with Oracle Streams

For instructions about | See

performing database upgrade and maintenance operations and using Oracle Streams to achieve little or no down time

Appendix D, "Online Database Upgrade and Maintenance with Oracle Streams" for information about using Oracle Streams to perform a database upgrade from a 10.2 or later database to the current release with little or no down time, and for information about using Oracle Streams to perform database maintenance operations with little or no down time. These database maintenance operations include migrating a database to a different platform, migrating a database to a different character set, modifying database schema objects to support upgrades to user-created applications, and applying an Oracle Database software patch or patch set.

upgrading a database and using Oracle Streams to achieve little or no down time

Appendix E, "Online Upgrade of a 10.1 or Earlier Database with Oracle Streams" for information about using Oracle Streams to perform a database upgrade from a 10.1 or earlier database to the current release with little or no down time



5 How Rules Are Used in Oracle Streams

The following topics contain information about how rules are used in Oracle Streams:

Overview of How Rules Are Used in Oracle Streams

In Oracle Streams, each of the following mechanisms is called an Oracle Streams client because each one is a client of a rules engine (when the mechanism is associated with one or more rule sets):

  • Capture process

  • Synchronous capture

  • Propagation

  • Apply process

  • Messaging client

Except for synchronous capture, each of these clients can be associated with at most two rule sets: a positive rule set and a negative rule set. A synchronous capture can be associated with at most one positive rule set. A synchronous capture cannot be associated with a negative rule set.

A single rule set can be used by multiple capture processes, synchronous captures, propagations, apply processes, and messaging clients within the same database. Also, a single rule set can be a positive rule set for one Oracle Streams client and a negative rule set for another Oracle Streams client.

Figure 5-1 illustrates how multiple clients of a rules engine can use one rule set.

Figure 5-1 One Rule Set Can Be Used by Multiple Clients of a Rules Engine

Description of Figure 5-1 follows
Description of "Figure 5-1 One Rule Set Can Be Used by Multiple Clients of a Rules Engine"

An Oracle Streams client performs a task if a message satisfies its rule sets. In general, a message satisfies the rule sets for an Oracle Streams client if no rules in the negative rule set evaluate to TRUE for the message, and at least one rule in the positive rule set evaluates to TRUE for the message.

"Rule Sets and Rule Evaluation of Messages" contains more detailed information about how a message satisfies the rule sets for an Oracle Streams client, including information about Oracle Streams client behavior when one or more rule sets are not specified.

You use rule sets in Oracle Streams in the following ways:

  • Specify the changes that a capture process captures from the redo log or discards. That is, if a change found in the redo log satisfies the rule sets for a capture process, then the capture process captures the change. If a change found in the redo log does not satisfy the rule sets for a capture process, then the capture process discards the change.

  • Specify the changes that a synchronous capture captures. That is, if a DML change satisfies the rule set for a synchronous capture, then the synchronous capture captures the change immediately after the change is committed. If a DML change made to a table does not satisfy the rule set for a synchronous capture, then the synchronous capture does not capture the change.

  • Specify the messages that a propagation propagates from one queue to another or discards. That is, if a message in a queue satisfies the rule sets for a propagation, then the propagation propagates the message. If a message in a queue does not satisfy the rule sets for a propagation, then the propagation discards the message.

  • Specify the messages that an apply process dequeues or discards. That is, if a message in a queue satisfies the rule sets for an apply process, then the message is dequeued and processed by the apply process. If a message in a queue does not satisfy the rule sets for an apply process, then the apply process discards the message.

  • Specify the persistent LCRs or persistent user messages that a messaging client dequeues or discards. That is, if a message in a persistent queue satisfies the rule sets for a messaging client, then the user or application that is using the messaging client dequeues the message. If a message in a persistent queue does not satisfy the rule sets for a messaging client, then the user or application that is using the messaging client discards the message.

For a propagation, the messages evaluated against the rule sets can be any type of message, including captured LCRs, persistent LCRs, buffered LCRs, persistent user messages or buffered user messages.

For an apply process, the messages evaluated against the rule sets can be captured LCRs, persistent LCRs, or persistent user messages.

If there are conflicting rules in the positive rule set associated with a client, then the client performs the task if either rule evaluates to TRUE. For example, if the positive rule set for a capture process contains one rule that instructs the capture process to capture the results of data manipulation language (DML) changes to the hr.employees table, but another rule in the rule set instructs the capture process not to capture the results of DML changes to the hr.employees table, then the capture process captures these changes.

Similarly, if there are conflicting rules in the negative rule set associated with a client, then the client discards a message if either rule evaluates to TRUE for the message. For example, if the negative rule set for a capture process contains one rule that instructs the capture process to discard the results of DML changes to the hr.departments table, but another rule in the rule set instructs the capture process not to discard the results of DML changes to the hr.departments table, then the capture process discards these changes.

Rule Sets and Rule Evaluation of Messages

Oracle Streams clients perform the following tasks based on rules:

These Oracle Streams clients are all clients of the rules engine. An Oracle Streams client performs its task for a message when the message satisfies the rule sets used by the Oracle Streams client. An Oracle Streams client can have no rule set, only a positive rule set, only a negative rule set, or both a positive and a negative rule set.

The following sections explain how rule evaluation works in each of these cases:

Oracle Streams Client with No Rule Set

An Oracle Streams client with no rule set performs its task for all of the messages it encounters. An empty rule set is not the same as no rule set at all.

A capture process should always have at least one rule set because it must not try to capture changes to unsupported database objects. If a propagation should always propagate all messages in its source queue, or if an apply process should always dequeue all messages in its queue, then removing all rule sets from the propagation or apply process might improve performance. A synchronous capture must have a positive rule set. A synchronous capture cannot be configured without a rule set.

Oracle Streams Client with a Positive Rule Set Only

An Oracle Streams client with a positive rule set, but no negative rule set, performs its task for a message if any rule in the positive rule set evaluates to TRUE for the message. However, if all of the rules in a positive rule set evaluate to FALSE for the message, then the Oracle Streams client discards the message.

Oracle Streams Client with a Negative Rule Set Only

An Oracle Streams client with a negative rule set, but no positive rule set, discards a message if any rule in the negative rule set evaluates to TRUE for the message. However, if all of the rules in a negative rule set evaluate to FALSE for the message, then the Oracle Streams client performs its task for the message. A synchronous capture cannot have a negative rule set.

Oracle Streams Client with Both a Positive and a Negative Rule Set

If an Oracle Streams client has both a positive and a negative rule set, then the negative rule set is evaluated first for a message. If any rule in the negative rule set evaluates to TRUE for the message, then the message is discarded, and the message is never evaluated against the positive rule set.

However, if all of the rules in the negative rule set evaluate to FALSE for the message, then the message is evaluated against the positive rule set. At this point, the behavior is the same as when the Oracle Streams client only has a positive rule set. That is, the Oracle Streams client performs its task for a message if any rule in the positive rule set evaluates to TRUE for the message. If all of the rules in a positive rule set evaluate to FALSE for the message, then the Oracle Streams client discards the message.

A synchronous capture cannot have a negative rule set.

Oracle Streams Client with One or More Empty Rule Sets

An Oracle Streams client can have one or more empty rule sets. In this case, the client behaves in the following ways:

  • If an Oracle Streams client has no positive rule set, and its negative rule set is empty, then the Oracle Streams client performs its task for all messages.

  • If an Oracle Streams client has both a positive and a negative rule set, and the negative rule set is empty but its positive rule set contains rules, then the Oracle Streams client performs its task based on the rules in the positive rule set.

  • If an Oracle Streams client has a positive rule set that is empty, then the Oracle Streams client discards all messages, regardless of the state of its negative rule set.
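
The decision logic described in the preceding sections, including the empty rule set cases, can be sketched as follows. This function is illustrative only and is not part of any Oracle-supplied package; it models each rule set as either absent or present, and as either matching the message or not:

```sql
-- Illustrative sketch of the decision an Oracle Streams client makes for
-- a single message. Not part of any Oracle-supplied package.
CREATE OR REPLACE FUNCTION performs_task(
  has_negative_set  IN BOOLEAN,  -- does a negative rule set exist?
  negative_matches  IN BOOLEAN,  -- does any negative rule evaluate to TRUE?
  has_positive_set  IN BOOLEAN,  -- does a positive rule set exist?
  positive_matches  IN BOOLEAN)  -- does any positive rule evaluate to TRUE?
  RETURN BOOLEAN
IS
BEGIN
  -- The negative rule set is always evaluated first.
  IF has_negative_set AND negative_matches THEN
    RETURN FALSE;  -- the message is discarded
  END IF;
  -- With no positive rule set, the client performs its task for all
  -- remaining messages. An empty positive rule set matches no messages,
  -- so every message is discarded.
  IF NOT has_positive_set THEN
    RETURN TRUE;
  END IF;
  RETURN positive_matches;
END performs_task;
/
```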

Summary of Rule Sets and Oracle Streams Client Behavior

Table 5-1 summarizes the Oracle Streams client behavior described in the previous sections.

Table 5-1 Rule Sets and Oracle Streams Client Behavior

Negative Rule Set   | Positive Rule Set   | Oracle Streams Client Behavior
None                | None                | Performs its task for all messages
None                | Exists with rules   | Performs its task for messages that evaluate to TRUE against the positive rule set
Exists with rules   | None                | Discards messages that evaluate to TRUE against the negative rule set, and performs its task for all other messages
Exists with rules   | Exists with rules   | Discards messages that evaluate to TRUE against the negative rule set, and performs its task for remaining messages that evaluate to TRUE against the positive rule set. The negative rule set is evaluated first.
Exists but is empty | None                | Performs its task for all messages
Exists but is empty | Exists with rules   | Performs its task for messages that evaluate to TRUE against the positive rule set
None                | Exists but is empty | Discards all messages
Exists but is empty | Exists but is empty | Discards all messages
Exists with rules   | Exists but is empty | Discards all messages


System-Created Rules

An Oracle Streams client performs its task for a message if the message satisfies its rule sets. A system-created rule is created by the DBMS_STREAMS_ADM package and can specify one of the following levels of granularity: table, schema, or global. This section describes each of these levels. You can specify more than one level for a particular task. For example, you can instruct a single apply process to perform table-level apply for specific tables in the oe schema and schema-level apply for the entire hr schema. In addition, a single rule pertains to either the results of data manipulation language (DML) changes or data definition language (DDL) changes. So, for example, you must use at least two system-created rules to include all of the changes to a particular table: one rule for the results of DML changes and another rule for DDL changes. The results of a DML change are the row changes that result from the DML change, or the row LCRs in a queue that encapsulate each row change.

Table 5-2 shows what each level of rule means for each Oracle Streams task. Remember that a negative rule set is evaluated before a positive rule set.

Table 5-2 Types of Tasks and Rule Levels

Capture with a capture process

  • Table rule: If the table rule is in a negative rule set, then discard the changes in the redo log for the specified table. If the table rule is in a positive rule set, then capture all or a subset of the changes in the redo log for the specified table, convert them into logical change records (LCRs), and enqueue them.

  • Schema rule: If the schema rule is in a negative rule set, then discard the changes in the redo log for the schema itself and for the database objects in the specified schema. If the schema rule is in a positive rule set, then capture the changes in the redo log for the schema itself and for the database objects in the specified schema, convert them into LCRs, and enqueue them.

  • Global rule: If the global rule is in a negative rule set, then discard the changes to all of the database objects in the database. If the global rule is in a positive rule set, then capture the changes to all of the database objects in the database, convert them into LCRs, and enqueue them.

Capture with a synchronous capture

  • Table rule: If the table rule is in a positive rule set, then capture all or a subset of the changes made to the specified table, convert them into logical change records (LCRs), and enqueue them. A synchronous capture cannot have a negative rule set.

  • Schema rule: A synchronous capture cannot use schema rules.

  • Global rule: A synchronous capture cannot use global rules.

Propagate with a propagation

  • Table rule: If the table rule is in a negative rule set, then discard the LCRs relating to the specified table in the source queue. If the table rule is in a positive rule set, then propagate all or a subset of the LCRs relating to the specified table in the source queue to the destination queue.

  • Schema rule: If the schema rule is in a negative rule set, then discard the LCRs related to the specified schema itself and the LCRs related to database objects in the schema in the source queue. If the schema rule is in a positive rule set, then propagate the LCRs related to the specified schema itself and the LCRs related to database objects in the schema in the source queue to the destination queue.

  • Global rule: If the global rule is in a negative rule set, then discard all of the LCRs in the source queue. If the global rule is in a positive rule set, then propagate all of the LCRs in the source queue to the destination queue.

Apply with an apply process

  • Table rule: If the table rule is in a negative rule set, then discard the LCRs in the queue relating to the specified table. If the table rule is in a positive rule set, then apply all or a subset of the LCRs in the queue relating to the specified table.

  • Schema rule: If the schema rule is in a negative rule set, then discard the LCRs in the queue relating to the specified schema itself and the database objects in the schema. If the schema rule is in a positive rule set, then apply the LCRs in the queue relating to the specified schema itself and the database objects in the schema.

  • Global rule: If the global rule is in a negative rule set, then discard all of the LCRs in the queue. If the global rule is in a positive rule set, then apply all of the LCRs in the queue.

Dequeue with a messaging client

  • Table rule: If the table rule is in a negative rule set, then, when the messaging client is invoked, discard the persistent LCRs relating to the specified table in the queue. If the table rule is in a positive rule set, then, when the messaging client is invoked, dequeue all or a subset of the persistent LCRs relating to the specified table in the queue.

  • Schema rule: If the schema rule is in a negative rule set, then, when the messaging client is invoked, discard the persistent LCRs relating to the specified schema itself and the database objects in the schema in the queue. If the schema rule is in a positive rule set, then, when the messaging client is invoked, dequeue the persistent LCRs relating to the specified schema itself and the database objects in the schema in the queue.

  • Global rule: If the global rule is in a negative rule set, then, when the messaging client is invoked, discard all of the persistent LCRs in the queue. If the global rule is in a positive rule set, then, when the messaging client is invoked, dequeue all of the persistent LCRs in the queue.


You can use procedures in the DBMS_STREAMS_ADM package to create rules at each of these levels. A system-created rule can include conditions that modify the Oracle Streams client behavior beyond the descriptions in Table 5-2. For example, some rules can specify a particular source database for LCRs, and, in this case, the rule evaluates to TRUE only if an LCR originated at the specified source database. Table 5-3 lists the types of system-created rule conditions that can be specified in the rules created by the DBMS_STREAMS_ADM package.

Table 5-3 System-Created Rule Conditions Generated by DBMS_STREAMS_ADM Package

Rule Condition Evaluates to TRUE for | Oracle Streams Client | Create Using Procedure
All row changes recorded in the redo log because of DML changes to any of the tables in a particular database | Capture Process | ADD_GLOBAL_RULES
All DDL changes recorded in the redo log to any of the database objects in a particular database | Capture Process | ADD_GLOBAL_RULES
All row changes recorded in the redo log because of DML changes to any of the tables in a particular schema | Capture Process | ADD_SCHEMA_RULES
All DDL changes recorded in the redo log to a particular schema and any of the database objects in the schema | Capture Process | ADD_SCHEMA_RULES
All row changes recorded in the redo log because of DML changes to a particular table | Capture Process | ADD_TABLE_RULES
All DDL changes recorded in the redo log to a particular table | Capture Process | ADD_TABLE_RULES
All row changes recorded in the redo log because of DML changes to a subset of rows in a particular table | Capture Process | ADD_SUBSET_RULES
All row changes made to a particular table resulting from DML statements | Synchronous Capture | ADD_TABLE_RULES
All row changes made to a subset of rows in a particular table resulting from DML statements | Synchronous Capture | ADD_SUBSET_RULES
All row LCRs in the source queue | Propagation | ADD_GLOBAL_PROPAGATION_RULES
All DDL LCRs in the source queue | Propagation | ADD_GLOBAL_PROPAGATION_RULES
All row LCRs in the source queue relating to the tables in a particular schema | Propagation | ADD_SCHEMA_PROPAGATION_RULES
All DDL LCRs in the source queue relating to a particular schema and any of the database objects in the schema | Propagation | ADD_SCHEMA_PROPAGATION_RULES
All row LCRs in the source queue relating to a particular table | Propagation | ADD_TABLE_PROPAGATION_RULES
All DDL LCRs in the source queue relating to a particular table | Propagation | ADD_TABLE_PROPAGATION_RULES
All row LCRs in the source queue relating to a subset of rows in a particular table | Propagation | ADD_SUBSET_PROPAGATION_RULES
All user messages in the source queue of the specified type that satisfy the user-specified rule condition | Propagation | ADD_MESSAGE_PROPAGATION_RULE
All row LCRs in the queue used by the apply process | Apply Process | ADD_GLOBAL_RULES
All DDL LCRs in the queue used by the apply process | Apply Process | ADD_GLOBAL_RULES
All row LCRs in the queue used by the apply process relating to the tables in a particular schema | Apply Process | ADD_SCHEMA_RULES
All DDL LCRs in the queue used by the apply process relating to a particular schema and any of the database objects in the schema | Apply Process | ADD_SCHEMA_RULES
All row LCRs in the queue used by the apply process relating to a particular table | Apply Process | ADD_TABLE_RULES
All DDL LCRs in the queue used by the apply process relating to a particular table | Apply Process | ADD_TABLE_RULES
All row LCRs in the queue used by the apply process relating to a subset of rows in a particular table | Apply Process | ADD_SUBSET_RULES
All persistent user messages in the queue used by the apply process of the specified type that satisfy the user-specified rule condition | Apply Process | ADD_MESSAGE_RULE
All persistent row LCRs in the queue used by the messaging client | Messaging Client | ADD_GLOBAL_RULES
All persistent DDL LCRs in the queue used by the messaging client | Messaging Client | ADD_GLOBAL_RULES
All persistent row LCRs in the queue used by the messaging client relating to the tables in a particular schema | Messaging Client | ADD_SCHEMA_RULES
All persistent DDL LCRs in the queue used by the messaging client relating to a particular schema and any of the database objects in the schema | Messaging Client | ADD_SCHEMA_RULES
All persistent row LCRs in the queue used by the messaging client relating to a particular table | Messaging Client | ADD_TABLE_RULES
All persistent DDL LCRs in the queue used by the messaging client relating to a particular table | Messaging Client | ADD_TABLE_RULES
All persistent row LCRs in the queue used by the messaging client relating to a subset of rows in a particular table | Messaging Client | ADD_SUBSET_RULES
All persistent messages in the queue used by the messaging client of the specified type that satisfy the user-specified rule condition | Messaging Client | ADD_MESSAGE_RULE


Each procedure listed in Table 5-3 does the following:

  • Creates a capture process, synchronous capture, propagation, apply process, or messaging client if it does not already exist.

  • Creates a rule set for the specified capture process, synchronous capture, propagation, apply process, or messaging client if a rule set does not already exist for it. For a capture process, propagation, apply process, or messaging client, the rule set can be a positive rule set or a negative rule set. You can create each type of rule set by running the procedure at least twice. For a synchronous capture, the rule set must be a positive rule set.

  • Creates zero or more rules and adds the rules to the rule set for the specified capture process, synchronous capture, propagation, apply process, or messaging client. Based on your specifications when you run one of these procedures, the procedure adds the rules either to the positive rule set or to the negative rule set.
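
After running one of these procedures, you can check which rules were created and which rule set each rule belongs to by querying the DBA_STREAMS_RULES data dictionary view. The following query is a sketch, assuming you are connected as a user with access to the DBA views:

```sql
SELECT streams_name,
       streams_type,
       rule_set_type,      -- POSITIVE or NEGATIVE
       streams_rule_type,  -- rule level, such as GLOBAL, SCHEMA, or TABLE
       rule_name
  FROM dba_streams_rules
 ORDER BY streams_name, rule_set_type, rule_name;
```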

Except for the ADD_MESSAGE_RULE and ADD_MESSAGE_PROPAGATION_RULE procedures, these procedures create rule sets that use the SYS.STREAMS$_EVALUATION_CONTEXT evaluation context, which is an Oracle-supplied evaluation context for Oracle Streams environments.

Global, schema, table, and subset rules use the SYS.STREAMS$_EVALUATION_CONTEXT evaluation context. However, when you create a rule using either the ADD_MESSAGE_RULE or the ADD_MESSAGE_PROPAGATION_RULE procedure, the rule uses a system-generated evaluation context that is customized specifically for each message type. Rule sets created by the ADD_MESSAGE_RULE or the ADD_MESSAGE_PROPAGATION_RULE procedure do not have an evaluation context.

Except for ADD_SUBSET_RULES, ADD_SUBSET_PROPAGATION_RULES, ADD_MESSAGE_RULE, and ADD_MESSAGE_PROPAGATION_RULE, these procedures create either zero, one, or two rules. If you want to perform the Oracle Streams task only for the row changes resulting from DML changes, or only for DDL changes, then only one rule is created. If, however, you want to perform the Oracle Streams task for both the results of DML changes and DDL changes, then a rule is created for each. If you create a DML rule for a table now, then you can create a DDL rule for the same table in the future without modifying the DML rule, and vice versa.

The ADD_SUBSET_RULES and ADD_SUBSET_PROPAGATION_RULES procedures always create three rules for three different types of DML operations on a table: INSERT, UPDATE, and DELETE. These procedures do not create rules for DDL changes to a table. You can use the ADD_TABLE_RULES or ADD_TABLE_PROPAGATION_RULES procedure to create a DDL rule for a table. In addition, you can add subset rules to positive rule sets only, not to negative rule sets.
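
For example, the following call is a sketch of creating subset rules for an apply process. The table, queue, and apply process names are illustrative, and the dml_condition shown (rows for department 50) is a hypothetical subset condition:

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
    table_name     =>  'hr.employees',
    dml_condition  =>  'department_id = 50',  -- hypothetical subset condition
    streams_type   =>  'apply',
    streams_name   =>  'apply',
    queue_name     =>  'streams_queue');
END;
/
```

This call creates three rules, one each for INSERT, UPDATE, and DELETE operations, in the positive rule set for the apply process.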

The ADD_MESSAGE_RULE and ADD_MESSAGE_PROPAGATION_RULE procedures always create one rule with a user-specified rule condition. These procedures create rules for user messages. They do not create rules for the results of DML changes or DDL changes to a table.

When you create propagation rules for captured LCRs, Oracle recommends that you specify a source database for the changes. An apply process uses transaction control messages to assemble captured LCRs into committed transactions. These transaction control messages, such as COMMIT and ROLLBACK, contain the name of the source database where the message occurred. To avoid unintended cycling of these messages, propagation rules should contain a condition specifying the source database, and you accomplish this by specifying the source database when you create the propagation rules.

The following sections describe system-created rules in more detail:


Note:

  • To create rules with more complex rule conditions, such as rules that use the NOT or OR logical conditions, either use the and_condition parameter, which is available with some of the procedures in the DBMS_STREAMS_ADM package, or use the DBMS_RULE_ADM package.

  • Each example in the sections that follow should be completed by an Oracle Streams administrator that has been granted the appropriate privileges, unless specified otherwise.

  • Some of the examples in this section have additional prerequisites. For example, a queue specified by a procedure parameter must exist.



Global Rules

When you use a rule to specify an Oracle Streams task that is relevant either to an entire database or to an entire queue, you are specifying a global rule. You can specify a global rule for DML changes, a global rule for DDL changes, or a global rule for each type of change (two rules total).

A single global rule in the positive rule set for a capture process means that the capture process captures the results of either all DML changes or all DDL changes to the source database. A single global rule in the negative rule set for a capture process means that the capture process discards the results of either all DML changes or all DDL changes to the source database.

A single global rule in the positive rule set for a propagation means that the propagation propagates either all row LCRs or all DDL LCRs in the source queue to the destination queue. A single global rule in the negative rule set for a propagation means that the propagation discards either all row LCRs or all DDL LCRs in the source queue.

A single global rule in the positive rule set for an apply process means that the apply process applies either all row LCRs or all DDL LCRs in its queue for a specified source database. A single global rule in the negative rule set for an apply process means that the apply process discards either all row LCRs or all DDL LCRs in its queue for a specified source database.

If you want to use global rules, but you are concerned about changes to database objects that are not supported by Oracle Streams, then you can create rules using the DBMS_RULE_ADM package to discard unsupported changes.

Global Rules Example

Suppose you use the ADD_GLOBAL_RULES procedure in the DBMS_STREAMS_ADM package to instruct an Oracle Streams capture process to capture all DML changes and DDL changes in a database.

Run the ADD_GLOBAL_RULES procedure to create the rules:

BEGIN 
  DBMS_STREAMS_ADM.ADD_GLOBAL_RULES(
    streams_type        =>  'capture',
    streams_name        =>  'capture',
    queue_name          =>  'streams_queue',
    include_dml         =>  TRUE,
    include_ddl         =>  TRUE,
    include_tagged_lcr  =>  FALSE,
    source_database     =>  NULL,
    inclusion_rule      =>  TRUE);
END;
/

Notice that the inclusion_rule parameter is set to TRUE. This setting means that the system-created rules are added to the positive rule set for the capture process.

NULL can be specified for the source_database parameter because rules are being created for a local capture process. You can also specify the global name of the local database. When creating rules for a downstream capture process or apply process using ADD_GLOBAL_RULES, specify a source database name.

The ADD_GLOBAL_RULES procedure creates two rules: one for row LCRs (which contain the results of DML changes) and one for DDL LCRs.

Here is the rule condition used by the row LCR rule:

(:dml.is_null_tag() = 'Y' )

Notice that the condition in the DML rule begins with the variable :dml. The value is determined by a call to the specified member function for the row LCR being evaluated. So, :dml.is_null_tag() is a call to the IS_NULL_TAG member function for the row LCR being evaluated.

Here is the rule condition used by the DDL LCR rule:

(:ddl.is_null_tag() = 'Y' )

Notice that the condition in the DDL rule begins with the variable :ddl. The value is determined by a call to the specified member function for the DDL LCR being evaluated. So, :ddl.is_null_tag() is a call to the IS_NULL_TAG member function for the DDL LCR being evaluated.

For a capture process, these conditions indicate that the tag must be NULL in a redo record for the capture process to capture a change. For a propagation, these conditions indicate that the tag must be NULL in an LCR for the propagation to propagate the LCR. For an apply process, these conditions indicate that the tag must be NULL in an LCR for the apply process to apply the LCR.

Given the rules created by this example in the positive rule set for the capture process, the capture process captures all supported DML and DDL changes made to the database.


Caution:

If you add global rules to the positive rule set for a capture process, then ensure that you add rules to the negative capture process rule set to exclude database objects that are not supported by capture processes. Query the DBA_STREAMS_UNSUPPORTED data dictionary view to determine which database objects are not supported by capture processes. If unsupported database objects are not excluded, then capture errors will result.

If you add global rules to the positive rule set for an apply process, then ensure that the apply process does not attempt to apply changes to unsupported columns. To do so, you can add rules to the negative apply process rule set to exclude the table that contains the column, or you can exclude the column with a rule-based transformation or DML handler. Query the DBA_STREAMS_COLUMNS data dictionary view to determine which columns are not supported by apply processes. If unsupported columns are not excluded, then apply errors will result.
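
For example, the following queries, run as a user with access to the DBA views, list the database objects and columns to consider excluding; the exact column lists of these views vary by release, so check Oracle Database Reference for your version:

```sql
-- Database objects whose changes a capture process cannot capture
SELECT owner, table_name, reason
  FROM dba_streams_unsupported
 ORDER BY owner, table_name;

-- Columns that apply processes cannot support
SELECT owner, table_name, column_name
  FROM dba_streams_columns
 ORDER BY owner, table_name, column_name;
```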


System-Created Global Rules Avoid Empty Rule Conditions Automatically

You can omit the is_null_tag condition in system-created rules by specifying TRUE for the include_tagged_lcr parameter when you run a procedure in the DBMS_STREAMS_ADM package. For example, the following ADD_GLOBAL_RULES procedure creates rules without the is_null_tag condition:

BEGIN 
  DBMS_STREAMS_ADM.ADD_GLOBAL_RULES(
    streams_type        =>  'capture',
    streams_name        =>  'capture_002',
    queue_name          =>  'streams_queue',
    include_dml         =>  TRUE,
    include_ddl         =>  TRUE,
    include_tagged_lcr  =>  TRUE,
    source_database     =>  NULL,
    inclusion_rule      =>  TRUE);
END;
/

When you set the include_tagged_lcr parameter to TRUE for a global rule, and the source_database parameter is set to NULL, the rule condition used by the row LCR rule is the following:

(( :dml.get_source_database_name()>=' ' OR 
:dml.get_source_database_name()<=' ') )

Here is the rule condition used by the DDL LCR rule:

(( :ddl.get_source_database_name()>=' ' OR 
:ddl.get_source_database_name()<=' ') )

The system-created global rules contain these conditions to enable all row and DDL LCRs to evaluate to TRUE.

These rule conditions are specified to avoid NULL rule conditions for these rules. NULL rule conditions are not supported. In this case, if you want to capture all DML and DDL changes to a database, and you do not want to use any rule-based transformations for these changes upon capture, then you can choose to run the capture process without a positive rule set instead of specifying global rules.


Note:

  • When you create a capture process using a procedure in the DBMS_STREAMS_ADM package and generate one or more rules for the capture process, the objects for which changes are captured are prepared for instantiation automatically, unless it is a downstream capture process and there is no database link from the downstream database to the source database.

  • The capture process does not capture some types of DML and DDL changes, and it does not capture changes made in the SYS, SYSTEM, or CTXSYS schemas.



Schema Rules

When you use a rule to specify an Oracle Streams task that is relevant to a schema, you are specifying a schema rule. You can specify a schema rule for DML changes, a schema rule for DDL changes, or a schema rule for each type of change to the schema (two rules total).

A single schema rule in the positive rule set for a capture process means that the capture process captures either the DML changes or the DDL changes to the schema. A single schema rule in the negative rule set for a capture process means that the capture process discards either the DML changes or the DDL changes to the schema.

A single schema rule in the positive rule set for a propagation means that the propagation propagates either the row LCRs or the DDL LCRs in the source queue that contain changes to the schema. A single schema rule in the negative rule set for a propagation means that the propagation discards either the row LCRs or the DDL LCRs in the source queue that contain changes to the schema.

A single schema rule in the positive rule set for an apply process means that the apply process applies either the row LCRs or the DDL LCRs in its queue that contain changes to the schema. A single schema rule in the negative rule set for an apply process means that the apply process discards either the row LCRs or the DDL LCRs in its queue that contain changes to the schema.

If you want to use schema rules, but you are concerned about changes to database objects in a schema that are not supported by Oracle Streams, then you can create rules using the DBMS_RULE_ADM package to discard unsupported changes.

Schema Rule Example

Suppose you use the ADD_SCHEMA_PROPAGATION_RULES procedure in the DBMS_STREAMS_ADM package to instruct an Oracle Streams propagation to propagate row LCRs and DDL LCRs relating to the hr schema from a queue at the dbs1.example.com database to a queue at the dbs2.example.com database.

Run the ADD_SCHEMA_PROPAGATION_RULES procedure at dbs1.example.com to create the rules:

BEGIN 
  DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name              =>  'hr',
    streams_name             =>  'dbs1_to_dbs2',
    source_queue_name        =>  'streams_queue',
    destination_queue_name   =>  'streams_queue@dbs2.example.com',
    include_dml              =>  TRUE,
    include_ddl              =>  TRUE,
    include_tagged_lcr       =>  FALSE,
    source_database          =>  'dbs1.example.com',
    inclusion_rule           =>  TRUE);
END;
/

Notice that the inclusion_rule parameter is set to TRUE. This setting means that the system-created rules are added to the positive rule set for the propagation.

The ADD_SCHEMA_PROPAGATION_RULES procedure creates two rules: one for row LCRs (which contain the results of DML changes) and one for DDL LCRs.

Here is the rule condition used by the row LCR rule:

((:dml.get_object_owner() = 'HR') and :dml.is_null_tag() = 'Y' 
and :dml.get_source_database_name() = 'DBS1.EXAMPLE.COM' )

Here is the rule condition used by the DDL LCR rule:

((:ddl.get_object_owner() = 'HR' or :ddl.get_base_table_owner() = 'HR') 
and :ddl.is_null_tag() = 'Y' and :ddl.get_source_database_name() = 
'DBS1.EXAMPLE.COM' )

The GET_BASE_TABLE_OWNER member function is used in the DDL LCR rule because the GET_OBJECT_OWNER function can return NULL if a user who does not own an object performs a DDL change on the object.

Given these rules in the positive rule set for the propagation, the following list provides examples of changes propagated by the propagation:

  • A row is inserted into the hr.countries table.

  • The hr.loc_city_ix index is altered.

  • The hr.employees table is truncated.

  • A column is added to the hr.countries table.

  • The hr.update_job_history trigger is altered.

  • A new table named candidates is created in the hr schema.

  • Twenty rows are inserted into the hr.candidates table.

The propagation propagates the LCRs that contain all of the changes previously listed from the source queue to the destination queue.

Now, given the same rules, suppose a row is inserted into the oe.inventories table. This change is ignored because the oe schema was not specified in a schema rule, and the oe.inventories table was not specified in a table rule.


Caution:

If you add schema rules to the positive rule set for a capture process, then ensure that you add rules to the negative capture process rule set to exclude database objects in the schema that are not supported by capture processes. Query the DBA_STREAMS_UNSUPPORTED data dictionary view to determine which database objects are not supported by capture processes. If unsupported database objects are not excluded, then capture errors will result.

If you add schema rules to the positive rule set for an apply process, then ensure that the apply process does not attempt to apply changes to unsupported columns. To do so, you can add rules to the negative apply process rule set to exclude the table that contains the column, or you can exclude the column with a rule-based transformation or DML handler. Query the DBA_STREAMS_COLUMNS data dictionary view to determine which columns are not supported by apply processes. If unsupported columns are not excluded, then apply errors will result.


Table Rules

When you use a rule to specify an Oracle Streams task that is relevant only for an individual table, you are specifying a table rule. You can specify a table rule for DML changes, a table rule for DDL changes, or a table rule for each type of change to a specific table (two rules total).

A single table rule in the positive rule set for a capture process means that the capture process captures the results of either the DML changes or the DDL changes to the table. A single table rule in the negative rule set for a capture process means that the capture process discards the results of either the DML changes or the DDL changes to the table.

A single table rule in the positive rule set for a synchronous capture means that the synchronous capture captures the results of the DML changes to the table. A synchronous capture cannot have a negative rule set.

A single table rule in the positive rule set for a propagation means that the propagation propagates either the row LCRs or the DDL LCRs in the source queue that contain changes to the table. A single table rule in the negative rule set for a propagation means that the propagation discards either the row LCRs or the DDL LCRs in the source queue that contain changes to the table.

A single table rule in the positive rule set for an apply process means that the apply process applies either the row LCRs or the DDL LCRs in its queue that contain changes to the table. A single table rule in the negative rule set for an apply process means that the apply process discards either the row LCRs or the DDL LCRs in its queue that contain changes to the table.

Table Rules Example

Suppose you use the ADD_TABLE_RULES procedure in the DBMS_STREAMS_ADM package to instruct an Oracle Streams apply process to behave in the following ways:

  • Apply all row LCRs related to the hr.locations table

  • Apply all DDL LCRs related to the hr.countries table

Apply All Row LCRs Related to the hr.locations Table

The changes in these row LCRs originated at the dbs1.example.com source database.

Run the ADD_TABLE_RULES procedure to create this rule:

BEGIN 
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name          =>  'hr.locations',
    streams_type        =>  'apply',
    streams_name        =>  'apply',
    queue_name          =>  'streams_queue',
    include_dml         =>  TRUE,
    include_ddl         =>  FALSE,
    include_tagged_lcr  =>  FALSE,
    source_database     =>  'dbs1.example.com',
    inclusion_rule      =>  TRUE);
END;
/

Notice that the inclusion_rule parameter is set to TRUE. This setting means that the system-created rule is added to the positive rule set for the apply process.

The ADD_TABLE_RULES procedure creates a rule with a rule condition similar to the following:

(((:dml.get_object_owner() = 'HR' and :dml.get_object_name() = 'LOCATIONS')) 
and :dml.is_null_tag() = 'Y' and :dml.get_source_database_name() = 
'DBS1.EXAMPLE.COM' )
Apply All DDL LCRs Related to the hr.countries Table

The changes in these DDL LCRs originated at the dbs1.example.com source database.

Run the ADD_TABLE_RULES procedure to create this rule:

BEGIN 
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name          =>  'hr.countries',
    streams_type        =>  'apply',
    streams_name        =>  'apply',
    queue_name          =>  'streams_queue',
    include_dml         =>  FALSE,
    include_ddl         =>  TRUE,
    include_tagged_lcr  =>  FALSE,
    source_database     =>  'dbs1.example.com',
    inclusion_rule      =>  TRUE);
END;
/

Notice that the inclusion_rule parameter is set to TRUE. This setting means that the system-created rule is added to the positive rule set for the apply process.

The ADD_TABLE_RULES procedure creates a rule with a rule condition similar to the following:

(((:ddl.get_object_owner() = 'HR' and :ddl.get_object_name() = 'COUNTRIES')
or (:ddl.get_base_table_owner() = 'HR' 
and :ddl.get_base_table_name() = 'COUNTRIES')) and :ddl.is_null_tag() = 'Y' 
and :ddl.get_source_database_name() = 'DBS1.EXAMPLE.COM' )

The GET_BASE_TABLE_OWNER and GET_BASE_TABLE_NAME member functions are used in the DDL LCR rule because the GET_OBJECT_OWNER and GET_OBJECT_NAME functions can return NULL if a user who does not own an object performs a DDL change on the object.

The generated DDL table rule evaluates to TRUE for any DDL change that operates on the table or on an object that is part of the table, such as an index or trigger on the table. The rule evaluates to FALSE for any DDL change that either does not refer to the table or refers to the table in a subordinate way. For example, the rule evaluates to FALSE for changes that create synonyms or views based on the table. The rule also evaluates to FALSE for a change to a PL/SQL subprogram that refers to the table.

Summary of Rules

In this example, the following table rules were defined:

  • A table rule that evaluates to TRUE if a row LCR contains a row change that results from a DML operation on the hr.locations table.

  • A table rule that evaluates to TRUE if a DDL LCR contains a DDL change performed on the hr.countries table.

Given these rules, the following list provides examples of changes applied by an apply process:

  • A row is inserted into the hr.locations table.

  • Five rows are deleted from the hr.locations table.

  • A column is added to the hr.countries table.

The apply process dequeues the LCRs containing these changes from its associated queue and applies them to the database objects at the destination database.

Given these rules, the following list provides examples of changes that are ignored by the apply process:

  • A row is inserted into the hr.employees table. This change is not applied because a change to the hr.employees table does not satisfy any of the rules.

  • A row is updated in the hr.countries table. This change is a DML change, not a DDL change. This change is not applied because the rule on the hr.countries table is for DDL changes only.

  • A column is added to the hr.locations table. This change is a DDL change, not a DML change. This change is not applied because the rule on the hr.locations table is for DML changes only.


Caution:

Do not add table rules to the positive rule set of a capture process for tables that are not supported by capture processes. Query the DBA_STREAMS_UNSUPPORTED data dictionary view to determine which tables are not supported by capture processes. If unsupported tables are not excluded, then capture errors will result.

If you add table rules to the positive rule set for a synchronous capture or an apply process, then ensure that these Oracle Streams clients do not attempt to process changes to unsupported columns. If a table includes an unsupported column, then you can exclude the column with a rule-based transformation or, for an apply process, with a DML handler. Query the DBA_STREAMS_COLUMNS data dictionary view to determine which columns are not supported by synchronous captures and apply processes. If unsupported columns are not excluded, then errors will result.
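For example, the following query (a sketch, assuming the table of interest is hr.locations) lists the columns in the table that are not supported, so that they can be excluded with a rule-based transformation or a DML handler:

SELECT COLUMN_NAME
  FROM DBA_STREAMS_COLUMNS
  WHERE OWNER = 'HR'
  AND TABLE_NAME = 'LOCATIONS';

If this query returns no rows, then every column in the table is supported and no exclusions are needed.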


Subset Rules

A subset rule is a special type of table rule for DML changes that is relevant only to a subset of the rows in a table. You can create subset rules for capture processes, synchronous captures, apply processes, and messaging clients using the ADD_SUBSET_RULES procedure. You can create subset rules for propagations using the ADD_SUBSET_PROPAGATION_RULES procedure. These procedures enable you to use a condition similar to a WHERE clause in a SELECT statement to specify the following:

  • That a capture process only captures a subset of the row changes resulting from DML changes to a particular table

  • That a synchronous capture only captures a subset of the row changes resulting from DML changes to a particular table

  • That a propagation only propagates a subset of the row LCRs relating to a particular table

  • That an apply process only applies a subset of the row LCRs relating to a particular table

  • That a messaging client only dequeues a subset of the row LCRs relating to a particular table

The ADD_SUBSET_RULES procedure and the ADD_SUBSET_PROPAGATION_RULES procedure can add subset rules to the positive rule set only of an Oracle Streams client. You cannot add subset rules to the negative rule set for an Oracle Streams client using these procedures.

The following sections describe subset rules in more detail:

Subset Rules Example

This example instructs an Oracle Streams apply process to apply a subset of row LCRs relating to the hr.regions table where the region_id is 2. These changes originated at the dbs1.example.com source database.

Run the ADD_SUBSET_RULES procedure to create three rules:

BEGIN 
  DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
    table_name               =>  'hr.regions',
    dml_condition            =>  'region_id=2',
    streams_type             =>  'apply',
    streams_name             =>  'apply',
    queue_name               =>  'streams_queue',
    include_tagged_lcr       =>  FALSE,
    source_database          =>  'dbs1.example.com');
END;
/

The ADD_SUBSET_RULES procedure creates three rules: one for INSERT operations, one for UPDATE operations, and one for DELETE operations.

Here is the rule condition used by the insert rule:

:dml.get_object_owner()='HR' AND :dml.get_object_name()='REGIONS' 
AND :dml.is_null_tag()='Y' AND :dml.get_source_database_name()='DBS1.EXAMPLE.COM'
AND :dml.get_command_type() IN ('UPDATE','INSERT') 
AND (:dml.get_value('NEW','"REGION_ID"') IS NOT NULL) 
AND (:dml.get_value('NEW','"REGION_ID"').AccessNumber()=2) 
AND (:dml.get_command_type()='INSERT' 
OR ((:dml.get_value('OLD','"REGION_ID"') IS NOT NULL) 
AND (((:dml.get_value('OLD','"REGION_ID"').AccessNumber() IS NOT NULL) 
AND NOT (:dml.get_value('OLD','"REGION_ID"').AccessNumber()=2)) 
OR ((:dml.get_value('OLD','"REGION_ID"').AccessNumber() IS NULL) 
AND NOT EXISTS (SELECT 1 FROM SYS.DUAL 
WHERE (:dml.get_value('OLD','"REGION_ID"').AccessNumber()=2))))))

Based on this rule condition, row LCRs are evaluated in the following ways:

  • For an insert, if the new value in the row LCR for region_id is 2, then the insert is applied.

  • For an insert, if the new value in the row LCR for region_id is not 2 or is NULL, then the insert is filtered out.

  • For an update, if the old value in the row LCR for region_id is not 2 or is NULL and the new value in the row LCR for region_id is 2, then the update is converted into an insert and applied. This automatic conversion is called row migration. See "Row Migration and Subset Rules" for more information.

Here is the rule condition used by the update rule:

:dml.get_object_owner()='HR' AND :dml.get_object_name()='REGIONS' 
AND :dml.is_null_tag()='Y' AND :dml.get_source_database_name()='DBS1.EXAMPLE.COM'
AND :dml.get_command_type()='UPDATE' 
AND (:dml.get_value('NEW','"REGION_ID"') IS NOT NULL) 
AND (:dml.get_value('OLD','"REGION_ID"') IS NOT NULL) 
AND (:dml.get_value('OLD','"REGION_ID"').AccessNumber()=2) 
AND (:dml.get_value('NEW','"REGION_ID"').AccessNumber()=2)

Based on this rule condition, row LCRs are evaluated in the following ways:

  • For an update, if both the old value and the new value in the row LCR for region_id are 2, then the update is applied as an update.

  • For an update, if either the old value or the new value in the row LCR for region_id is not 2 or is NULL, then the update does not satisfy the update rule. The LCR can satisfy the insert rule, the delete rule, or neither rule.

Here is the rule condition used by the delete rule:

:dml.get_object_owner()='HR' AND :dml.get_object_name()='REGIONS' 
AND :dml.is_null_tag()='Y' AND :dml.get_source_database_name()='DBS1.EXAMPLE.COM'
AND :dml.get_command_type() IN ('UPDATE','DELETE') 
AND (:dml.get_value('OLD','"REGION_ID"') IS NOT NULL) 
AND (:dml.get_value('OLD','"REGION_ID"').AccessNumber()=2) 
AND (:dml.get_command_type()='DELETE' 
OR ((:dml.get_value('NEW','"REGION_ID"') IS NOT NULL) 
AND (((:dml.get_value('NEW','"REGION_ID"').AccessNumber() IS NOT NULL) 
AND NOT (:dml.get_value('NEW','"REGION_ID"').AccessNumber()=2)) 
OR ((:dml.get_value('NEW','"REGION_ID"').AccessNumber() IS NULL) 
AND NOT EXISTS (SELECT 1 FROM SYS.DUAL 
WHERE (:dml.get_value('NEW','"REGION_ID"').AccessNumber()=2))))))

Based on this rule condition, row LCRs are evaluated in the following ways:

  • For a delete, if the old value in the row LCR for region_id is 2, then the delete is applied.

  • For a delete, if the old value in the row LCR for region_id is not 2 or is NULL, then the delete is filtered out.

  • For an update, if the old value in the row LCR for region_id is 2 and the new value in the row LCR for region_id is not 2 or is NULL, then the update is converted into a delete and applied. This automatic conversion is called row migration. See "Row Migration and Subset Rules" for more information.

Given these subset rules, the following list provides examples of changes applied by an apply process:

  • A row is updated in the hr.regions table where the old region_id is 4 and the new value of region_id is 2. This update is transformed into an insert.

  • A row is updated in the hr.regions table where the old region_id is 2 and the new value of region_id is 1. This update is transformed into a delete.

The apply process dequeues row LCRs containing these changes from its associated queue and applies them to the hr.regions table at the destination database.

Given these subset rules, the following list provides examples of changes that are ignored by the apply process:

  • A row is inserted into the hr.employees table. This change is not applied because a change to the hr.employees table does not satisfy the subset rules.

  • A row is updated in the hr.regions table where the region_id was 1 before the update and remains 1 after the update. This change is not applied because the subset rules for the hr.regions table evaluate to TRUE only when the old value, the new value, or both values for region_id are 2.


Caution:

Do not add subset rules to the positive rule set of a capture process for tables that are not supported by capture processes. Query the DBA_STREAMS_UNSUPPORTED data dictionary view to determine which tables are not supported by capture processes. If unsupported tables are not excluded, then capture errors will result.

If you add subset rules to the positive rule set for a synchronous capture or an apply process, then ensure that these Oracle Streams clients do not attempt to process changes to unsupported columns. If a table includes an unsupported column, then you can exclude the column with a rule-based transformation or, for an apply process, with a DML handler. Query the DBA_STREAMS_COLUMNS data dictionary view to determine which columns are not supported by synchronous captures and apply processes. If unsupported columns are not excluded, then errors will result.


Row Migration and Subset Rules

When you use subset rules, an update operation can be converted into an insert or delete operation when it is captured, propagated, applied, or dequeued. This automatic conversion is called row migration and is performed by an internal transformation specified automatically in the action context for a subset rule. The following sections describe row migration during capture, propagation, apply, and dequeue.

This section contains these topics:


Caution:

Subset rules should reside only in positive rule sets. Do not add subset rules to negative rule sets. Doing so can have unpredictable results, because row migration is not performed on LCRs that are discarded when they evaluate to TRUE against a negative rule set.

Row Migration During Capture

When a subset rule is in the rule set for a capture process or synchronous capture, an update that satisfies the subset rule can be converted into an insert or delete when it is captured.

For example, suppose you use a subset rule to specify that a capture process or a synchronous capture captures changes to the hr.employees table where the employee's department_id is 50 using the following subset condition: department_id = 50. Assume that the table at the source database contains records for employees from all departments. If a DML operation changes an employee's department_id from 80 to 50, then the subset rule converts the update operation into an insert operation and captures the change. Therefore, a row LCR that contains an INSERT is enqueued into the queue. Figure 5-2 illustrates this example with a subset rule for a capture process.

Figure 5-2 Row Migration During Capture by a Capture Process

Description of Figure 5-2 follows
Description of "Figure 5-2 Row Migration During Capture by a Capture Process"

Similarly, if a captured update changes an employee's department_id from 50 to 20, then a capture process or synchronous capture with this subset rule converts the update operation into a DELETE operation.

Row Migration During Propagation

When a subset rule is in the rule set for a propagation, an update operation can be converted into an insert or delete operation when a row LCR is propagated.

For example, suppose you use a subset rule to specify that a propagation propagates changes to the hr.employees table where the employee's department_id is 50 using the following subset condition: department_id = 50. If the source queue for the propagation contains a row LCR with an update operation on the hr.employees table that changes an employee's department_id from 50 to 80, then the propagation with the subset rule converts the update operation into a delete operation and propagates the row LCR to the destination queue. Therefore, a row LCR that contains a DELETE is enqueued into the destination queue. Figure 5-3 illustrates this example.

Figure 5-3 Row Migration During Propagation

Description of Figure 5-3 follows
Description of "Figure 5-3 Row Migration During Propagation"

Similarly, if a captured update changes an employee's department_id from 80 to 50, then a propagation with this subset rule converts the update operation into an INSERT operation.

Row Migration During Apply

When a subset rule is in the rule set for an apply process, an update operation can be converted into an insert or delete operation when a row LCR is applied.

For example, suppose you use a subset rule to specify that an apply process applies changes to the hr.employees table where the employee's department_id is 50 using the following subset condition: department_id = 50. Assume that the table at the destination database is a subset table that only contains records for employees whose department_id is 50. If a source database captures a change to an employee that changes the employee's department_id from 80 to 50, then the apply process with the subset rule at a destination database applies this change by converting the update operation into an insert operation. This conversion is needed because the employee's row does not exist in the destination table. Figure 5-4 illustrates this example.

Figure 5-4 Row Migration During Apply

Description of Figure 5-4 follows
Description of "Figure 5-4 Row Migration During Apply"

Similarly, if a captured update changes an employee's department_id from 50 to 20, then an apply process with this subset rule converts the update operation into a DELETE operation.

Row Migration During Dequeue by a Messaging Client

When a subset rule is in the rule set for a messaging client, an update operation can be converted into an insert or delete operation when a row LCR is dequeued.

For example, suppose you use a subset rule to specify that a messaging client dequeues changes to the hr.employees table when the employee's department_id is 50 using the following subset condition: department_id = 50. If the queue for a messaging client contains a persistent row LCR with an update operation on the hr.employees table that changes an employee's department_id from 50 to 90, then when a user or application invokes a messaging client with this subset rule, the messaging client converts the update operation into a delete operation and dequeues the row LCR. Therefore, a row LCR that contains a DELETE is dequeued. The messaging client can process this row LCR in any customized way. For example, it can send the row LCR to a custom application. Figure 5-5 illustrates this example.

Figure 5-5 Row Migration During Dequeue by a Messaging Client

Description of Figure 5-5 follows
Description of "Figure 5-5 Row Migration During Dequeue by a Messaging Client"

Similarly, if a persistent row LCR contains an update that changes an employee's department_id from 90 to 50, then a messaging client with this subset rule converts the UPDATE operation into an INSERT operation during dequeue.

Subset Rules and Supplemental Logging

Supplemental logging is required when you specify the following types of subset rules:

  • Subset rules for a capture process

  • Subset rules for a propagation that will propagate LCRs captured by a capture process

  • Subset rules for an apply process that will apply LCRs captured by a capture process

In any of these cases, an unconditional supplemental log group must be specified at the source database for all the columns in the subset condition and all of the columns in the table(s) at the destination database(s) that will apply these changes. In some cases, when a subset rule is specified, an update can be converted to an insert, and, in these cases, supplemental information might be needed for some or all of the columns.

For example, if you specify a subset rule for an apply process that will apply captured LCRs at database dbs2.example.com on the postal_code column in the hr.locations table, and the source database for changes to this table is dbs1.example.com, then specify supplemental logging at dbs1.example.com for all of the columns that exist in the hr.locations table at dbs2.example.com, and for the postal_code column, even if this column does not exist in the table at the destination database.
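The required supplemental log group in this scenario might be specified at dbs1.example.com with a statement like the following (a sketch, assuming the hr.locations table at the destination contains the columns location_id and city in addition to postal_code; the log group name is arbitrary):

ALTER TABLE hr.locations
  ADD SUPPLEMENTAL LOG GROUP log_group_locations
    (location_id, city, postal_code) ALWAYS;

The ALWAYS keyword makes the log group unconditional, so the listed columns are logged in the redo whenever any column in the table is updated.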


Note:

Supplemental logging is not required when subset rules are used by a synchronous capture. Also, supplemental logging is not required when propagations or apply processes process LCRs captured by a synchronous capture.

Guidelines for Using Subset Rules

The following sections provide guidelines for using subset rules:

Use Capture Subset Rules When All Destinations Need Only a Subset of Changes

Use subset rules with a capture process or a synchronous capture when all destination databases of the captured changes need only row changes that satisfy the subset condition for the table. In this case, a capture process or a synchronous capture captures a subset of the DML changes to the table, and one or more propagations propagate these changes in the form of row LCRs to one or more destination databases. At each destination database, an apply process applies these row LCRs to a subset table in which all of the rows satisfy the subset condition in the subset rules for the capture process. None of the destination databases need all of the DML changes made to the table. When you use subset rules for a local capture process or a synchronous capture, some additional overhead is incurred to perform row migrations at the site running the source database.

Use Propagation or Apply Subset Rules When Some Destinations Need Subsets

Use subset rules with a propagation or an apply process when some destinations in an environment need only a subset of captured DML changes. The following are examples of such an environment:

  • Most of the destination databases for captured DML changes to a table need a different subset of these changes.

  • Most of the destination databases need all of the captured DML changes to a table, but some destination databases need only a subset of these changes.

In these types of environments, the capture process or synchronous capture must capture all of the changes to the table, but you can use subset rules with propagations and apply processes to ensure that subset tables at destination databases only apply the correct subset of captured DML changes.

Consider these factors when you decide to use subset rules with a propagation in this type of environment:

  • You can reduce network traffic because fewer row LCRs are propagated over the network.

  • The site that contains the source queue for the propagation incurs some additional overhead to perform row migrations.

Consider these factors when you decide to use subset rules with an apply process in this type of environment:

  • The queue used by the apply process can contain all row LCRs for the subset table. In a directed networks environment, propagations can propagate any of the row LCRs for the table to destination queues as appropriate, regardless of whether the apply process applies these row LCRs.

  • The site that is running the apply process incurs some additional overhead to perform row migrations.

Ensure That the Table Where Subset Row LCRs Are Applied Is a Subset Table

If an apply process might apply row LCRs that have been transformed by a row migration, then Oracle recommends that the table at the destination database be a subset table where each row matches the condition in the subset rule. If the table is not such a subset table, then apply errors might result.

For example, consider a scenario in which a subset rule for a capture process has the condition department_id = 50 for DML changes to the hr.employees table. If the hr.employees table at a destination database of this capture process contains rows for employees in all departments, not just in department 50, then a constraint violation might result during apply:

  1. At the source database, a DML change updates the hr.employees table and changes the department_id for the employee with an employee_id of 100 from 90 to 50.

  2. A capture process using the subset rule captures the change and converts the update into an insert and enqueues the change into the capture process's queue as a row LCR.

  3. A propagation propagates the row LCR to the destination database without modifying it.

  4. An apply process attempts to apply the row LCR as an insert at the destination database, but an employee with an employee_id of 100 already exists in the hr.employees table, and an apply error results.

In this case, if the table at the destination database were a subset of the hr.employees table and only contained rows of employees whose department_id was 50, then the insert would have been applied successfully.

Similarly, if an apply process might apply row LCRs that have been transformed by a row migration to a table, and you allow users or applications to perform DML operations on the table, then Oracle recommends that all DML changes satisfy the subset condition. If you allow local changes to the table, then the apply process cannot ensure that all rows in the table meet the subset condition. For example, suppose the condition is department_id = 50 for the hr.employees table. If a user or an application inserts a row for an employee whose department_id is 30, then this row remains in the table and is not removed by the apply process. Similarly, if a user or an application updates a row locally and changes the department_id to 30, then this row also remains in the table.

Message Rules

When you use a rule to specify an Oracle Streams task that is relevant only for a user message of a specific, non-LCR message type, you are specifying a message rule. You can specify message rules for propagations, apply processes, and messaging clients.

A single message rule in the positive rule set for a propagation means that the propagation propagates the user messages of the message type in the source queue that satisfy the rule condition. A single message rule in the negative rule set for a propagation means that the propagation discards the user messages of the message type in the source queue that satisfy the rule condition.

A single message rule in the positive rule set for an apply process means that the apply process dequeues user messages of the message type that satisfy the rule condition. The apply process then sends these user messages to its message handler. A single message rule in the negative rule set for an apply process means that the apply process discards user messages of the message type in its queue that satisfy the rule condition.

A single message rule in the positive rule set for a messaging client means that a user or an application can use the messaging client to dequeue user messages of the message type that satisfy the rule condition. A single message rule in the negative rule set for a messaging client means that the messaging client discards user messages of the message type in its queue that satisfy the rule condition. Unlike propagations and apply processes, which propagate or apply messages automatically when they are running, a messaging client does not automatically dequeue or discard messages. Instead, a messaging client must be invoked by a user or application to dequeue or discard messages.

Message Rule Example

Suppose you use the ADD_MESSAGE_RULE procedure in the DBMS_STREAMS_ADM package to instruct an Oracle Streams client to behave in the following ways:

  • Dequeue user messages if region is EUROPE and priority is 1

  • Send user messages to a message handler if region is AMERICAS and priority is 2

The first instruction in the previous list pertains to a messaging client, while the second instruction pertains to an apply process.

The rules created in these examples are for messages of the following type:

CREATE TYPE strmadmin.region_pri_msg AS OBJECT(
  region         VARCHAR2(100),
  priority       NUMBER,
  message        VARCHAR2(3000))
/
Dequeue User Messages If region Is EUROPE and priority Is 1

Run the ADD_MESSAGE_RULE procedure to create a rule for messages of region_pri_msg type:

BEGIN
  DBMS_STREAMS_ADM.ADD_MESSAGE_RULE (
    message_type    =>  'strmadmin.region_pri_msg',
    rule_condition  =>  ':msg.region = ''EUROPE'' AND  ' ||
                        ':msg.priority = ''1'' ',
    streams_type    =>  'dequeue',
    streams_name    =>  'msg_client',
    queue_name      =>  'streams_queue',
    inclusion_rule  =>  TRUE);
END;
/

Notice that dequeue is specified for the streams_type parameter. Therefore, this procedure creates a messaging client named msg_client if it does not already exist. If this messaging client already exists, then this procedure adds the message rule to its rule set. Also, notice that the inclusion_rule parameter is set to TRUE. This setting means that the system-created rule is added to the positive rule set for the messaging client. The user who runs this procedure is granted the privileges to dequeue from the queue using the messaging client.

The ADD_MESSAGE_RULE procedure creates a rule with a rule condition similar to the following:

:"VAR$_52".region = 'EUROPE' AND  :"VAR$_52".priority = '1'

The variables in the rule condition that begin with VAR$ are variables that are specified in the system-generated evaluation context for the rule.

Send User Messages to a Message Handler If region Is AMERICAS and priority Is 2

Run the ADD_MESSAGE_RULE procedure to create a rule for messages of region_pri_msg type:

BEGIN
  DBMS_STREAMS_ADM.ADD_MESSAGE_RULE (
    message_type    =>  'strmadmin.region_pri_msg',
    rule_condition  =>  ':msg.region = ''AMERICAS'' AND  ' ||
                        ':msg.priority = ''2'' ',
    streams_type    =>  'apply',
    streams_name    =>  'apply_msg',
    queue_name      =>  'streams_queue',
    inclusion_rule  =>  TRUE);
END;
/

Notice that apply is specified for the streams_type parameter. Therefore, this procedure creates an apply process named apply_msg if it does not already exist. If this apply process already exists, then this procedure adds the message rule to its rule set. Also, notice that the inclusion_rule parameter is set to TRUE. This setting means that the system-created rule is added to the positive rule set for the apply process.

The ADD_MESSAGE_RULE procedure creates a rule with a rule condition similar to the following:

:"VAR$_56".region = 'AMERICAS' AND  :"VAR$_56".priority = '2'

The variables in the rule condition that begin with VAR$ are variables that are specified in the system-generated evaluation context for the rule.

Summary of Rules

In this example, the following message rules were defined:

  • A message rule for a messaging client named msg_client that evaluates to TRUE if a message has EUROPE for its region and 1 for its priority. Given this rule, a user or application can use the messaging client to dequeue messages of region_pri_msg type that satisfy the rule condition.

  • A message rule for an apply process named apply_msg that evaluates to TRUE if a message has AMERICAS for its region and 2 for its priority. Given this rule, the apply process dequeues messages of region_pri_msg type that satisfy the rule condition and sends these messages to its message handler or reenqueues the messages into a specified queue.

System-Created Rules and Negative Rule Sets

You add system-created rules to a negative rule set to specify that you do not want an Oracle Streams client to perform its task for changes that satisfy these rules. Specifically, a system-created rule in a negative rule set means the following for each type of Oracle Streams client:


Note:

A synchronous capture cannot have a negative rule set.

If an Oracle Streams client does not have a negative rule set, then you can create a negative rule set and add rules to it by running one of the following procedures and setting the inclusion_rule parameter to FALSE:

If a negative rule set already exists for the Oracle Streams client when you run one of these procedures, then the procedure adds the system-created rules to the existing negative rule set.

Alternatively, you can create a negative rule set when you create an Oracle Streams client by running one of the following procedures and specifying a non-NULL value for the negative_rule_set_name parameter:

Also, you can specify a negative rule set for an existing Oracle Streams client by altering the client. For example, to specify a negative rule set for an existing capture process, use the DBMS_CAPTURE_ADM.ALTER_CAPTURE procedure. After an Oracle Streams client has a negative rule set, you can use the procedures in the DBMS_STREAMS_ADM package listed previously to add system-created rules to it.
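For example, a call along the following lines assigns a negative rule set to an existing capture process. This is a sketch only: the capture process name strm01_capture and the rule set name strmadmin.neg_rules are hypothetical, and the rule set must already exist.

BEGIN
  -- Hypothetical names; assumes the rule set strmadmin.neg_rules exists
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name           => 'strm01_capture',
    negative_rule_set_name => 'strmadmin.neg_rules');
END;
/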

Instead of adding rules to a negative rule set, you can also exclude changes to certain tables or schemas in the following ways:

  • Do not add system-created rules for the table or schema to a positive rule set for an Oracle Streams client. For example, to capture DML changes to all of the tables in a particular schema except for one table, add a DML table rule for each table in the schema, except for the excluded table, to the positive rule set for the capture process. The disadvantages of this approach are that there can be many tables in a schema and each one requires a separate DML rule, and, if a new table is added to the schema, and you want to capture changes to this new table, then a new DML rule must be added for this table to the positive rule set for the capture process.

  • Use the NOT logical condition in the rule condition of a complex rule in the positive rule set for an Oracle Streams client. For example, to capture DML changes to all of the tables in a particular schema except for one table, use the DBMS_STREAMS_ADM.ADD_SCHEMA_RULES procedure to add a system-created DML schema rule to the positive rule set for the capture process that instructs the capture process to capture changes to the schema, and use the and_condition parameter to exclude the table with the NOT logical condition. The disadvantages to this approach are that it involves manually specifying parts of rule conditions, which can be error prone, and rule evaluation is not as efficient for complex rules as it is for unmodified system-created rules.

Given the goal of capturing DML changes to all of the tables in a particular schema except for one table, you can add a DML schema rule to the positive rule set for the capture process and a DML table rule for the excluded table to the negative rule set for the capture process.

This approach has the following advantages over the alternatives described previously:

  • You add only two rules to achieve the goal.

  • If a new table is added to the schema, and you want to capture DML changes to the table, then the capture process captures these changes without requiring modifications to existing rules or additions of new rules.

  • You do not need to specify or edit rule conditions manually.

  • Rule evaluation is more efficient because you avoid using complex rules.

Negative Rule Set Example

Suppose you want to apply row LCRs that contain the results of DML changes to all of the tables in the hr schema except for the job_history table. To do so, you can use the ADD_SCHEMA_RULES procedure in the DBMS_STREAMS_ADM package to instruct an Oracle Streams apply process to apply row LCRs that contain the results of DML changes to the tables in the hr schema. In this case, the procedure creates a schema rule and adds the rule to the positive rule set for the apply process.

You can use the ADD_TABLE_RULES procedure in the DBMS_STREAMS_ADM package to instruct the Oracle Streams apply process to discard row LCRs that contain the results of DML changes to the hr.job_history table. In this case, the procedure creates a table rule and adds the rule to the negative rule set for the apply process.

The following sections explain how to run these procedures:

Apply All DML Changes to the Tables in the hr Schema

These changes originated at the dbs1.example.com source database.

Run the ADD_SCHEMA_RULES procedure to create this rule:

BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name         =>  'hr',   
    streams_type        =>  'apply',
    streams_name        =>  'apply',
    queue_name          =>  'streams_queue',
    include_dml         =>  TRUE,
    include_ddl         =>  FALSE,
    include_tagged_lcr  =>  FALSE,
    source_database     =>  'dbs1.example.com',
    inclusion_rule      =>  TRUE);
END;
/

Notice that the inclusion_rule parameter is set to TRUE. This setting means that the system-created rule is added to the positive rule set for the apply process.

The ADD_SCHEMA_RULES procedure creates a rule with a rule condition similar to the following:

((:dml.get_object_owner() = 'HR') and :dml.is_null_tag() = 'Y' 
and :dml.get_source_database_name() = 'DBS1.EXAMPLE.COM' )
Discard Row LCRs Containing DML Changes to the hr.job_history Table

These changes originated at the dbs1.example.com source database.

Run the ADD_TABLE_RULES procedure to create this rule:

BEGIN 
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name          =>  'hr.job_history',
    streams_type        =>  'apply',
    streams_name        =>  'apply',
    queue_name          =>  'streams_queue',
    include_dml         =>  TRUE,
    include_ddl         =>  FALSE,
    include_tagged_lcr  =>  TRUE,
    source_database     =>  'dbs1.example.com',
    inclusion_rule      =>  FALSE);
END;
/

Notice that the inclusion_rule parameter is set to FALSE. This setting means that the system-created rule is added to the negative rule set for the apply process.

Also notice that the include_tagged_lcr parameter is set to TRUE. This setting means that all changes for the table, including tagged LCRs that satisfy all of the other rule conditions, will be discarded. In most cases, specify TRUE for the include_tagged_lcr parameter if the inclusion_rule parameter is set to FALSE.

The ADD_TABLE_RULES procedure creates a rule with a rule condition similar to the following:

(((:dml.get_object_owner() = 'HR' and :dml.get_object_name() = 'JOB_HISTORY')) 
and :dml.get_source_database_name() = 'DBS1.EXAMPLE.COM' )
Summary of Rules

In this example, the following rules were defined:

  • A schema rule that evaluates to TRUE if a DML operation is performed on the tables in the hr schema. This rule is in the positive rule set for the apply process.

  • A table rule that evaluates to TRUE if a DML operation is performed on the hr.job_history table. This rule is in the negative rule set for the apply process.

Given these rules, the following list provides examples of changes applied by the apply process:

  • A row is inserted into the hr.departments table.

  • Five rows are updated in the hr.employees table.

  • A row is deleted from the hr.countries table.

The apply process dequeues these changes from its associated queue and applies them to the database objects at the destination database.

Given these rules, the following list provides examples of changes that are ignored by the apply process:

  • A row is inserted into the hr.job_history table.

  • A row is updated in the hr.job_history table.

  • A row is deleted from the hr.job_history table.

These changes are not applied because they satisfy a rule in the negative rule set for the apply process.

System-Created Rules with Added User-Defined Conditions

Some of the procedures that create rules in the DBMS_STREAMS_ADM package include an and_condition parameter. This parameter enables you to add conditions to system-created rules. The condition specified by the and_condition parameter is appended to the system-created rule condition using an AND clause in the following way:

(system_condition) AND (and_condition)

The variable in the specified condition must be :lcr. For example, to specify that the table rules generated by the ADD_TABLE_RULES procedure evaluate to TRUE only if the table is hr.departments, the source database is dbs1.example.com, and the Oracle Streams tag is the hexadecimal equivalent of '02', run the following procedure:

BEGIN 
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name          =>  'hr.departments',
    streams_type        =>  'apply',
    streams_name        =>  'apply_02',
    queue_name          =>  'streams_queue',
    include_dml         =>  TRUE,
    include_ddl         =>  TRUE,
    include_tagged_lcr  =>  TRUE,
    source_database     =>  'dbs1.example.com',
    inclusion_rule      =>  TRUE,
    and_condition       =>  ':lcr.get_tag() = HEXTORAW(''02'')');
END;
/

The ADD_TABLE_RULES procedure creates a DML rule with the following condition:

(((((:dml.get_object_owner() = 'HR' and :dml.get_object_name() = 'DEPARTMENTS'))
 and :dml.get_source_database_name() = 'DBS1.EXAMPLE.COM' )) 
and (:dml.get_tag() = HEXTORAW('02')))

It creates a DDL rule with the following condition:

(((((:ddl.get_object_owner() = 'HR' and :ddl.get_object_name() = 'DEPARTMENTS')
or (:ddl.get_base_table_owner() = 'HR' 
and :ddl.get_base_table_name() = 'DEPARTMENTS')) 
and :ddl.get_source_database_name() = 'DBS1.EXAMPLE.COM' )) 
and (:ddl.get_tag() = HEXTORAW('02')))

Notice that the :lcr in the specified condition is converted to :dml or :ddl, depending on the rule that is being generated. If you are specifying an LCR member subprogram that is dependent on the LCR type (row or DDL), then ensure that this procedure only generates the appropriate rule. Specifically, if you specify an LCR member subprogram that is valid only for row LCRs, then specify TRUE for the include_dml parameter and FALSE for the include_ddl parameter. If you specify an LCR member subprogram that is valid only for DDL LCRs, then specify FALSE for the include_dml parameter and TRUE for the include_ddl parameter.

For example, the GET_OBJECT_TYPE member function only applies to DDL LCRs. Therefore, if you use this member function in an and_condition, then specify FALSE for the include_dml parameter and TRUE for the include_ddl parameter.
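As a sketch of this guideline (the apply process name apply_ddl is hypothetical, and the queue name is taken from the earlier examples), the following call creates only a DDL rule and uses the GET_OBJECT_TYPE member function in the added condition to restrict the rule to DDL changes on tables:

BEGIN
  -- include_dml is FALSE because get_object_type() applies only to DDL LCRs
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'hr.departments',
    streams_type   => 'apply',
    streams_name   => 'apply_ddl',
    queue_name     => 'streams_queue',
    include_dml    => FALSE,
    include_ddl    => TRUE,
    and_condition  => ':lcr.get_object_type() = ''TABLE''');
END;
/

In the generated rule condition, the :lcr variable is converted to :ddl, as described previously.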


See Also:


PK8GVV=VPK&AOEBPS/strms_apply.htm Oracle Streams Information Consumption

4 Oracle Streams Information Consumption

The following topics contain information about consuming information with Oracle Streams.

Overview of Information Consumption with Oracle Streams

Consuming information with Oracle Streams means dequeuing a message that contains the information from a queue and either processing or discarding the message. The consumed information can describe a database change, or it can be any other type of information. A dequeued message might have originated at the same database where it is dequeued, or it might have originated at a different database.

This section contains these topics:

Ways to Consume Information with Oracle Streams

The following are ways to consume information with Oracle Streams:

Implicit Consumption

With implicit consumption, an apply process automatically dequeues either captured LCRs, persistent LCRs, or persistent user messages. The queue must be an ANYDATA queue. If a message contains a logical change record (LCR), then the apply process can either apply it directly or call a user-specified procedure for processing. If the message does not contain an LCR, then the apply process can invoke a user-specified procedure called a message handler to process it.


Note:

Captured LCRs must be dequeued by an apply process. However, if an apply process or a user procedure called by an apply process re-enqueues a captured LCR, then the LCR becomes a persistent LCR and can be explicitly dequeued.

Explicit Consumption

With explicit consumption, messages are dequeued in one of the following ways:

Types of Information Consumed with Oracle Streams

The following types of information can be consumed with Oracle Streams:

Captured LCRs

A captured LCR is a logical change record (LCR) that was captured implicitly by a capture process and enqueued into the buffered queue portion of an ANYDATA queue.

Only an apply process can dequeue captured LCRs. After dequeue, an apply process can apply the captured LCR directly to make a database change, discard the captured LCR, send the captured LCR to an apply handler for processing, or re-enqueue the captured LCR into a persistent queue.

Persistent LCRs

A persistent LCR is a logical change record (LCR) that was enqueued into the persistent queue portion of an ANYDATA queue. A persistent LCR can be enqueued in one of the following ways:

  • Captured implicitly by a synchronous capture and enqueued

  • Constructed explicitly by an application and enqueued

  • Dequeued by an apply process and enqueued by the same apply process using the SET_ENQUEUE_DESTINATION procedure in the DBMS_APPLY_ADM package

Persistent LCRs can be dequeued by an apply process, a messaging client, or an application.
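For example, the third case in the list above is configured with the SET_ENQUEUE_DESTINATION procedure. In this sketch, the rule name and the destination queue name are hypothetical; messages that satisfy the named rule in the apply process's positive rule set are enqueued into the specified queue:

BEGIN
  -- Hypothetical rule and queue names
  DBMS_APPLY_ADM.SET_ENQUEUE_DESTINATION(
    rule_name              => 'strmadmin.departments15',
    destination_queue_name => 'strmadmin.streams_queue');
END;
/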

Buffered LCRs

A buffered LCR is a logical change record (LCR) that was constructed explicitly by an application and enqueued into the buffered queue portion of an ANYDATA queue. Only an application can dequeue buffered LCRs.

Persistent User Messages

A persistent user message is a non-LCR message of a user-defined type that was enqueued into a persistent queue. A persistent user message can be enqueued in one of the following ways:

  • Created explicitly by an application and enqueued

  • Dequeued by an apply process and enqueued by the same apply process using the SET_ENQUEUE_DESTINATION procedure in the DBMS_APPLY_ADM package

Apply processes and messaging clients can only dequeue persistent user messages that are in an ANYDATA queue. Applications can dequeue persistent user messages that are in an ANYDATA queue or a typed queue.

Buffered User Messages

A buffered user message is a non-LCR message of a user-defined type that was created explicitly by an application and enqueued into a buffered queue. A buffered user message can be enqueued into the buffered queue portion of an ANYDATA queue or a typed queue. Only an application can dequeue buffered user messages.

Summary of Information Consumption Options

Table 4-1 summarizes the information consumption options available with Oracle Streams.

Table 4-1 Information Consumption Options with Oracle Streams

Consumption TypeDequeues MessagesMessage TypesUse When

Implicit Consumption with an Apply Process


Continually and automatically when enabled

Captured LCRs

Persistent LCRs

Persistent user messages

You want to dequeue and process captured LCRs.

You want to dequeue persistent LCRs or persistent user messages continually and automatically from the persistent queue portion of an ANYDATA queue.

You want to dequeue LCRs that must be applied directly to database objects to make database changes.

You want to dequeue messages and process them with an apply handler.

Explicit Consumption with a Messaging Client


When invoked by an application

Persistent LCRs

Persistent user messages

You want to use a simple method for dequeuing on demand persistent LCRs or persistent user messages from the persistent queue portion of an ANYDATA queue.

You want to send messages to an application for processing after dequeue.

Explicit Consumption with Manual Dequeue


Manually according to application logic

Persistent LCRs

Buffered LCRs

Persistent user messages

Buffered user messages

You want an application to dequeue manually persistent LCRs or buffered LCRs from an ANYDATA queue and process them.

You want an application to dequeue manually persistent user messages or buffered user messages from an ANYDATA queue or a typed queue and process them.



Note:

A single database can use any combination of the information consumption options summarized in the table.


See Also:


Implicit Consumption with an Apply Process

This section explains the concepts related to Oracle Streams apply processes.

This section contains these topics:

Introduction to the Apply Process

An apply process is an optional Oracle background process that dequeues messages from a specific queue and either applies each message directly, discards it, passes it as a parameter to an apply handler, or re-enqueues it. These messages can be logical change records (LCRs) or user messages.


Note:

An apply process can only dequeue messages from an ANYDATA queue, not a typed queue.

Apply Process Rules

An apply process applies messages based on rules that you define. For LCRs, each rule specifies the database objects and types of changes for which the rule evaluates to TRUE. For user messages, you can create rules to control apply process behavior for specific types of messages. You can place these rules in the positive rule set or negative rule set for the apply process.

If a rule evaluates to TRUE for a message, and the rule is in the positive rule set for an apply process, then the apply process dequeues and processes the message. If a rule evaluates to TRUE for a message, and the rule is in the negative rule set for an apply process, then the apply process discards the message. If an apply process has both a positive and a negative rule set, then the negative rule set is always evaluated first.

You can specify apply process rules for LCRs at the following levels:

  • A table rule applies or discards either row changes resulting from DML changes or DDL changes to a particular table. A subset rule is a table rule that includes a subset of the row changes to a particular table.

  • A schema rule applies or discards either row changes resulting from DML changes or DDL changes to the database objects in a particular schema.

  • A global rule applies or discards either all row changes resulting from DML changes or all DDL changes in the queue associated with an apply process.

Types of Messages That Can Be Processed with an Apply Process

Apply processes can dequeue the following types of messages:

  • Captured LCRs: A logical change record (LCR) that was captured implicitly by a capture process and enqueued into the buffered queue portion of an ANYDATA queue. In some situations, an optimization enables capture processes to send LCRs to apply processes more efficiently. This optimization is called combined capture and apply.

  • Persistent LCRs: An LCR that was captured implicitly by a synchronous capture, constructed and enqueued persistently by an application, or enqueued by an apply process. A persistent LCR is enqueued into the persistent queue portion of an ANYDATA queue.

  • Persistent user messages: A non-LCR message of a user-defined type that was enqueued explicitly by an application or an apply process. A persistent user message is enqueued into the persistent queue portion of an ANYDATA queue. In addition, a user message can be enqueued into an ANYDATA queue or a typed queue, but an apply process can dequeue only user messages in an ANYDATA queue.

A single apply process cannot dequeue messages from both the buffered queue and the persistent queue portions of a queue. If messages in both the buffered queue and the persistent queue must be processed, then the destination database must have at least two apply processes to process the messages.

Message Processing Options for an Apply Process

An apply process can either apply messages directly or send messages to an apply handler for processing. Your options for message processing depend on whether the message received by an apply process is a row logical change record (row LCR), a DDL logical change record (DDL LCR), or a user message.

Figure 4-1 shows the message processing options for an apply process and which options can be used for different types of messages.

Figure 4-1 Apply Process Message Processing Options

Description of Figure 4-1 follows
Description of "Figure 4-1 Apply Process Message Processing Options"

By default, an apply process applies LCRs directly. The apply process executes the change in the LCR on the database object identified in the LCR. The apply process either successfully applies the change in the LCR or, if a conflict or an apply error is encountered, tries to resolve the error with a conflict handler or a user-specified procedure called an error handler.

If a conflict handler can resolve the conflict, then it either applies the LCR or it discards the change in the LCR. If an error handler can resolve the error, then it should apply the LCR, if appropriate. An error handler can resolve an error by modifying the LCR before applying it. If the conflict handler or error handler cannot resolve the error, then the apply process places the transaction, and all LCRs associated with the transaction, into the error queue.

Instead of applying LCRs directly, you can process LCRs in a customized way with apply handlers. When you use an apply handler, an apply process passes a message to a collection of SQL statements or to a user-defined PL/SQL procedure for processing. An apply handler can process the message in a customized way.

An apply process cannot apply user messages directly. An apply process that dequeues user messages must have a message handler to process the user messages.

There are several types of apply handlers. This section uses the following categories to describe apply handlers:

Table 4-2 Characteristics of Apply Handlers

CategoryDescription

Mechanism

The means by which the apply handler processes messages. The mechanism for an apply handler is either SQL statements or a user-defined PL/SQL procedure.

Type of message

The type of message processed by the apply handler. The message type is either row logical change record (row LCR), DDL logical change record (DDL LCR), persistent user message, or transaction control directive.

Message creator

The component that creates the messages processed by the apply handler. The message creator is either a capture process, a synchronous capture, or an application.

Scope

The level at which the apply handler is set. The scope is either one operation on one table or all operations on all database objects.

Number allowed for each apply process

The number of apply handlers of a specific type allowed for each apply process. The number allowed is either one or many.


The following sections describe different types of apply handlers:


Note:

An apply process cannot apply non-LCR messages directly. Each user message dequeued by an apply process must be processed with a message handler.

DML Handlers

DML handlers process row logical change records (row LCRs) dequeued by an apply process. There are two types of DML handlers: statement DML handlers and procedure DML handlers. A statement DML handler uses a collection of SQL statements to process row LCRs, while a procedure DML handler uses a PL/SQL procedure to process row LCRs.

The following sections describe DML handlers and error handlers:

Statement DML Handlers

A statement DML handler has the following characteristics:

  • Mechanism: A collection of SQL statements

  • Type of message: Row LCR

  • Message creator: Capture process, synchronous capture, or application

  • Scope: One operation on one table

  • Number allowed for each apply process: Many, and many can be specified for the same operation on the same table

Each SQL statement included in a statement DML handler has a unique execution sequence number. When a statement DML handler is invoked, it executes its statements in order from the statement with the lowest execution sequence number to the statement with the highest execution sequence number. An execution sequence number can be a positive number, a negative number, or a decimal number.

For each table associated with an apply process, you can set a separate statement DML handler to process each of the following types of operations in row LCRs:

  • INSERT

  • UPDATE

  • DELETE

A statement DML handler is invoked when the apply process dequeues a row LCR that performs the specified operation on the specified table. For example, the hr.employees table can have one statement DML handler to process INSERT operations and a different statement DML handler to process UPDATE operations. Alternatively, the hr.employees table can use the same statement DML handler for each type of operation.

You can specify multiple statement DML handlers for the same operation on the same table. In this case, these statement DML handlers can execute in any order, and each statement DML handler receives a copy of the original row LCR that was dequeued by the apply process.

A SQL statement in a statement DML handler can include the following types of operations in row LCRs:

  • INSERT

  • UPDATE

  • DELETE

  • MERGE

For example, a SQL statement in a statement DML handler can process a row LCR that updates the hr.employees table, and this statement can include an INSERT operation that inserts a row into a different table.

Statement DML handlers can run valid DML statements on row LCRs, but statement DML handlers cannot modify the column values in row LCRs. However, statement DML handlers can use SQL to insert a row or update a row with column values that are different than the ones in the row LCR. Also, statement DML handlers should never commit and never roll back.

To execute a row LCR in a statement DML handler, invoke the EXECUTE member procedure for the row LCR. A statement that runs the EXECUTE member procedure can be placed anywhere in the execution sequence order of the statement DML handler. It is not necessary to execute a row LCR unless the goal is to apply the changes in the row LCR to a table in addition to performing any other SQL statements in the statement DML handler.

To add a statement to a statement DML handler, use the ADD_STMT_TO_HANDLER procedure in the DBMS_STREAMS_HANDLER_ADM package. To add a statement DML handler to an apply process, use the ADD_STMT_HANDLER procedure in the DBMS_APPLY_ADM package. You can either add a statement DML handler to a specific apply process, or you can add a statement DML handler as a general statement DML handler that is used by all apply processes in the database. If a statement DML handler for an operation on a table is used by a specific apply process, and another statement DML handler is a general handler for the same operation on the same table, then both handlers are invoked when an apply process dequeues a row LCR with the operation on the table. Each statement DML handler receives the original row LCR, and the statement DML handlers can execute in any order.

Statement DML handlers are often used to record the changes made to tables. Statement DML handlers can also perform changes that do not modify column values. For example, statement DML handlers can change the data type of a column.
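The following is a sketch of the change-recording technique just described. All names are hypothetical: the handler track_emp_inserts, the audit table strmadmin.emp_audit, and the apply process name apply are assumptions for illustration. The first statement records each insert, and the second executes the row LCR so the change is still applied:

DECLARE
  stmt CLOB;
BEGIN
  DBMS_STREAMS_HANDLER_ADM.CREATE_STMT_HANDLER(
    handler_name => 'track_emp_inserts');
  -- Record the inserted employee_id in a hypothetical audit table;
  -- :new binds reference new column values in the row LCR
  stmt := 'INSERT INTO strmadmin.emp_audit (employee_id, change_time) ' ||
          'VALUES (:new.employee_id, SYSTIMESTAMP)';
  DBMS_STREAMS_HANDLER_ADM.ADD_STMT_TO_HANDLER(
    handler_name       => 'track_emp_inserts',
    statement          => stmt,
    execution_sequence => 10);
  -- Execute the row LCR so the insert is also applied to hr.employees
  DBMS_STREAMS_HANDLER_ADM.ADD_STMT_TO_HANDLER(
    handler_name       => 'track_emp_inserts',
    statement          => ':lcr.execute(true)',
    execution_sequence => 20);
  -- Associate the handler with INSERT operations on hr.employees
  DBMS_APPLY_ADM.ADD_STMT_HANDLER(
    object_name    => 'hr.employees',
    operation_name => 'INSERT',
    handler_name   => 'track_emp_inserts',
    apply_name     => 'apply');
END;
/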


Note:

  • When you run the ADD_STMT_HANDLER procedure, you specify the object for which the handler is used. This object does not need to exist at the destination database when you run the procedure.

  • A change handler is a special type of statement DML handler that tracks table changes and was created by either the DBMS_STREAMS_ADM.MAINTAIN_CHANGE_TABLE procedure or the DBMS_APPLY_ADM.SET_CHANGE_HANDLER procedure.


Procedure DML Handlers

A procedure DML handler has the following characteristics:

  • Mechanism: A user-defined PL/SQL procedure

  • Type of message: Row LCR

  • Message creator: Capture process, synchronous capture, or application

  • Scope: One operation on one table

  • Number allowed for each apply process: Many, but only one can be specified for the same operation on the same table

For each table associated with an apply process, you can set a separate procedure DML handler to process each of the following types of operations in row LCRs:

  • INSERT

  • UPDATE

  • DELETE

  • LOB_UPDATE

A procedure DML handler is invoked when the apply process dequeues a row LCR that performs the specified operation on the specified table. For example, the hr.employees table can have one procedure DML handler to process INSERT operations and a different procedure DML handler to process UPDATE operations. Alternatively, the hr.employees table can use the same procedure DML handler for each type of operation.

The PL/SQL procedure can perform any customized processing of row LCRs. For example, if you want each insert into a particular table at the source database to result in inserts into multiple tables at the destination database, then you can create a user-defined PL/SQL procedure that processes INSERT operations on the table to accomplish this. Unlike statement DML handlers, procedure DML handlers can modify the column values in row LCRs.

A procedure DML handler should never commit and never roll back, except to a named savepoint that the user-defined PL/SQL procedure has established. To execute a row LCR inside a procedure DML handler, invoke the EXECUTE member procedure for the row LCR. Also, a procedure DML handler should handle any errors that might occur during processing.

To set a procedure DML handler, use the SET_DML_HANDLER procedure in the DBMS_APPLY_ADM package. You can either set a procedure DML handler for a specific apply process, or you can set a procedure DML handler to be a general procedure DML handler that is used by all apply processes in the database. If a procedure DML handler for an operation on a table is set for a specific apply process, and another procedure DML handler is a general handler for the same operation on the same table, then the specific procedure DML handler takes precedence over the general procedure DML handler.

Typically, procedure DML handlers are used in Oracle Streams replication environments to perform custom processing of row LCRs, but procedure DML handlers can be used in nonreplication environments as well. For example, you can use such handlers to record changes made to database objects without replicating these changes.
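A minimal sketch of setting such a handler follows. The user-defined procedure name strmadmin.emp_insert_handler is hypothetical; it must already exist and accept a single ANYDATA argument containing the row LCR. The apply process name apply matches the earlier examples:

BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'hr.employees',
    object_type    => 'TABLE',
    operation_name => 'INSERT',
    error_handler  => FALSE,
    user_procedure => 'strmadmin.emp_insert_handler',
    apply_name     => 'apply');
END;
/

Setting user_procedure to NULL in a later call unsets the handler for the specified operation on the specified table.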


Note:

When you run the SET_DML_HANDLER procedure, you specify the object for which the handler is used. This object does not need to exist at the destination database when you run the procedure.

Error Handlers

An error handler has the following characteristics:

  • Mechanism: A user-defined PL/SQL procedure

  • Type of message: Row LCR

  • Message creator: Capture process, synchronous capture, or application

  • Scope: One operation on one table

  • Number allowed for each apply process: Many, but only one can be specified for the same operation on the same table

An error handler is similar to a procedure DML handler. The difference between the two is that an error handler is invoked only if an apply error results when an apply process tries to apply a row LCR for the specified operation on the specified table.

You create an error handler in the same way that you create a procedure DML handler, except that you set the error_handler parameter to TRUE when you run the SET_DML_HANDLER procedure.

An error handler cannot coexist with a procedure DML handler for the same operation on the same table. However, an error handler can coexist with a statement DML handler for the same operation on the same table.
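Setting an error handler can be sketched as follows; the call differs from the procedure DML handler example only in the error_handler parameter. The procedure name strmadmin.emp_error_handler is hypothetical, and an error handler procedure must declare the additional error-information parameters documented for SET_DML_HANDLER:

BEGIN
  -- error_handler => TRUE invokes the procedure only when applying
  -- an INSERT row LCR on hr.employees raises an error
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'hr.employees',
    object_type    => 'TABLE',
    operation_name => 'INSERT',
    error_handler  => TRUE,
    user_procedure => 'strmadmin.emp_error_handler',
    apply_name     => 'apply');
END;
/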


Note:

Statement DML handlers cannot be used as error handlers.

DDL Handlers

A DDL handler has the following characteristics:

  • Mechanism: A user-defined PL/SQL procedure

  • Type of message: DDL LCR

  • Message creator: Capture process or application

  • Scope: All DDL LCRs dequeued by the apply process

  • Number allowed for each apply process: One

The user-defined PL/SQL procedure can perform any customized processing of DDL LCRs. For example, to log DDL changes before applying them, you can create a procedure that processes DDL operations to accomplish this.

To execute a DDL LCR inside a DDL handler, invoke the EXECUTE member procedure for the DDL LCR. To associate a DDL handler with a particular apply process, use the ddl_handler parameter in the CREATE_APPLY or the ALTER_APPLY procedure in the DBMS_APPLY_ADM package.
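For example, associating a DDL handler with an existing apply process might be sketched as follows. The apply process name strm01_apply and the handler procedure strmadmin.history_ddl_lcrs are hypothetical:

BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name  => 'strm01_apply',                 -- hypothetical apply process
    ddl_handler => 'strmadmin.history_ddl_lcrs');  -- hypothetical DDL handler procedure
END;
/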

Typically, DDL handlers are used in Oracle Streams replication environments to perform custom processing of DDL LCRs, but these handlers can be used in nonreplication environments as well. For example, you can use such handlers to record changes made to database objects without replicating these changes.


Message Handlers

A message handler has the following characteristics:

  • Mechanism: A user-defined PL/SQL procedure

  • Type of message: Persistent user message (non-LCR)

  • Message creator: Application

  • Scope: All user messages dequeued by the apply process

  • Number allowed for each apply process: One

A message handler offers advantages in any environment that has applications that must update one or more remote databases or perform some other remote action. These applications can enqueue persistent user messages into a queue at the local database, and Oracle Streams can propagate each persistent user message to the appropriate queues at destination databases. If there are multiple destinations, then Oracle Streams provides the infrastructure for automatic propagation and processing of these messages at these destinations. If there is only one destination, then Oracle Streams still provides a layer between the application at the source database and the application at the destination database, so that the application at the source database can continue to function normally if the application at the remote database becomes unavailable.

For example, a message handler can convert a persistent user message into an electronic mail message. In this case, the persistent user message can contain the attributes you would expect in an electronic mail message, such as from, to, subject, text_of_message, and so on. After converting a message into an electronic mail message, the message handler can send it out through an electronic mail gateway.

You can specify a message handler for an apply process using the message_handler parameter in the CREATE_APPLY or the ALTER_APPLY procedure in the DBMS_APPLY_ADM package. An Oracle Streams apply process always assumes that a non-LCR message has no dependencies on any other messages in the queue. If parallelism is greater than 1 for an apply process that applies persistent user messages, then these messages can be dequeued by a message handler in any order. Therefore, if dependencies exist between these messages in your environment, then Oracle recommends that you set apply process parallelism to 1.
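As a sketch, setting a message handler and serializing message processing might look like the following. The apply process name strm01_apply and the procedure strmadmin.mes_handler are hypothetical:

BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name      => 'strm01_apply',            -- hypothetical apply process
    message_handler => 'strmadmin.mes_handler');  -- hypothetical message handler procedure

  -- Set parallelism to 1 when dependencies exist between user messages
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'strm01_apply',
    parameter  => 'parallelism',
    value      => '1');
END;
/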

Precommit Handlers

A precommit handler has the following characteristics:

  • Mechanism: A user-defined PL/SQL procedure

  • Type of message: Commit directive for transactions that include row LCRs or persistent user messages

  • Message creator: Capture process, synchronous capture, or application

  • Scope: All row LCRs with commit directives dequeued by the apply process

  • Number allowed for each apply process: One

You can use a precommit handler to audit commit directives for captured LCRs and transaction boundaries for persistent LCRs and persistent user messages. A commit directive is a transaction control directive that contains a COMMIT. A precommit handler is a user-defined PL/SQL procedure that can receive the commit information for a transaction and process the commit information in any customized way. A precommit handler can work with a statement DML handler, procedure DML handler, or message handler.

For example, a precommit handler can improve performance by caching data for the length of a transaction. This data can include cursors, temporary LOBs, data from a message, and so on. The precommit handler can release or execute the objects cached by the handler when a transaction completes.

A precommit handler executes when the apply process commits a transaction. You can use the commit_serialization apply process parameter to control the commit order for an apply process.
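For example, a sketch of associating a precommit handler with an apply process follows. The apply process name strm01_apply and the procedure strmadmin.history_commit are hypothetical:

BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name        => 'strm01_apply',               -- hypothetical apply process
    precommit_handler => 'strmadmin.history_commit');  -- hypothetical precommit handler procedure
END;
/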

The following list describes commit directives and transaction boundaries:

  • Commit Directives for Captured LCRs: When you are using a capture process, and a user commits a transaction, the capture process captures an internal commit directive for the transaction if the transaction contains row LCRs that were captured by the capture process. The capture process also records the transaction identifier in each captured LCR in a transaction.

    Once enqueued, these commit directives can be propagated to destination queues, along with the LCRs in a transaction. A precommit handler receives each commit SCN for these internal commit directives in the queue of an apply process before they are processed by the apply process.

  • Transaction Boundaries for Persistent LCRs Enqueued by Synchronous Captures: When you are using a synchronous capture, and a user commits a transaction, the persistent LCRs that were enqueued by the synchronous capture are organized into a message group. The synchronous capture records the transaction identifier in each persistent LCR in a transaction.

    After persistent LCRs are enqueued by a synchronous capture, the persistent LCRs in the message group can be propagated to other queues. When an apply process is configured to process these persistent LCRs, it generates a commit SCN for all of the persistent LCRs in a message group. The commit SCN values generated by an individual apply process have no relation to the source transaction, or to the values generated by any other apply process. A precommit handler configured for such an apply process receives the commit SCN supplied by the apply process.

  • Transaction Boundaries for Messages Enqueued by Applications: An application can enqueue persistent LCRs and persistent user messages, as well as other types of messages. When the user performing these enqueue operations issues a COMMIT statement to end the transaction, the enqueued persistent LCRs and persistent user messages are organized into a message group.

    When messages that were enqueued by an application are organized into a message group, the messages in the message group can be propagated to other queues. When an apply process is configured to process these messages, it generates a single transaction identifier and commit SCN for all the messages in a message group. Transaction identifiers and commit SCN values generated by an individual apply process have no relation to the source transaction, or to the values generated by any other apply process. A precommit handler configured for such an apply process receives the commit SCN supplied by the apply process.

Considerations for Apply Handlers

The following are considerations for using apply handlers:

  • Both statement DML handlers and procedure DML handlers process row LCRs. Procedure DML handlers require PL/SQL processing while statement DML handlers do not. Therefore, statement DML handlers typically perform better than procedure DML handlers. Statement DML handlers also are usually easier to configure than procedure DML handlers. However, procedure DML handlers can perform operations that are not possible with a statement DML handler, such as controlling program flow and trapping errors. In addition, procedure DML handlers can modify column values in row LCRs while statement DML handlers cannot.

  • Statement DML handlers, procedure DML handlers, error handlers, DDL handlers, and message handlers can execute an LCR by calling the LCR's EXECUTE member procedure.

  • All applied DDL LCRs commit automatically. Therefore, if a DDL handler calls the EXECUTE member procedure of a DDL LCR, then a commit is performed automatically.

  • An apply handler that uses a PL/SQL procedure can set an Oracle Streams session tag. Statement DML handlers cannot set an Oracle Streams session tag.

  • An apply handler that uses a user-defined PL/SQL procedure can call a Java stored procedure that is published (or wrapped) in a PL/SQL procedure. Statement DML handlers cannot call a Java stored procedure.

  • If an apply process tries to invoke an apply handler that does not exist or is invalid, then the apply process aborts.

  • If an apply handler that uses a PL/SQL procedure invokes a procedure or function in an Oracle-supplied package, then the user who runs the apply handler must have direct EXECUTE privilege on the package. It is not sufficient to grant this privilege through a role. The DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE procedure grants EXECUTE privilege on all Oracle Streams packages, and other privileges relevant to Oracle Streams. A statement DML handler cannot invoke a procedure or function.


Summary of Message Processing Options

The table in this section summarizes the message processing options available when you are using one or more of the apply handlers described in the previous sections. Apply handlers are optional for row LCRs and DDL LCRs because an apply process can apply these messages directly. However, a message handler is required for processing persistent user messages. In addition, an apply process dequeues a message only if the message satisfies the rule sets for the apply process. In general, a message satisfies the rule sets for an apply process if no rules in the negative rule set evaluate to TRUE for the message, and at least one rule in the positive rule set evaluates to TRUE for the message.

Table 4-3 summarizes the message processing options for an apply process.

Table 4-3 Summary of Message Processing Options

Apply Message Directly

  • Mechanism: Not applicable

  • Type of message: Row LCR or DDL LCR

  • Message creator: Capture process, synchronous capture, or application

  • Default apply process behavior: Execute DML or DDL

  • Scope of handler: Not applicable

  • Number allowed for each apply process: Not applicable

Statement DML Handler

  • Mechanism: SQL statements

  • Type of message: Row LCR

  • Message creator: Capture process, synchronous capture, or application

  • Default apply process behavior: Execute DML

  • Scope of handler: One operation on one table

  • Number allowed for each apply process: Many, and many can be specified for the same operation on the same table

Procedure DML Handler or Error Handler

  • Mechanism: User-defined PL/SQL procedure

  • Type of message: Row LCR

  • Message creator: Capture process, synchronous capture, or application

  • Default apply process behavior: Execute DML

  • Scope of handler: One operation on one table

  • Number allowed for each apply process: Many, but only one can be specified for the same operation on the same table

DDL Handler

  • Mechanism: User-defined PL/SQL procedure

  • Type of message: DDL LCR

  • Message creator: Capture process or application

  • Default apply process behavior: Execute DDL

  • Scope of handler: Entire apply process

  • Number allowed for each apply process: One

Message Handler

  • Mechanism: User-defined PL/SQL procedure

  • Type of message: Persistent user message

  • Message creator: Application

  • Default apply process behavior: Create error transaction (if no message handler exists)

  • Scope of handler: Entire apply process

  • Number allowed for each apply process: One

Precommit Handler

  • Mechanism: User-defined PL/SQL procedure

  • Type of message: Commit directive for transactions that include row LCRs or user messages

  • Message creator: Capture process, synchronous capture, or application

  • Default apply process behavior: Commit transaction

  • Scope of handler: Entire apply process

  • Number allowed for each apply process: One


In addition to the message processing options described in this section, you can use the SET_ENQUEUE_DESTINATION procedure in the DBMS_APPLY_ADM package to instruct an apply process to enqueue messages into the persistent queue portion of a specified destination queue. Also, you can control message execution using the SET_EXECUTE procedure in the DBMS_APPLY_ADM package.
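For illustration, sketches of these two calls might look like the following. The rule name strmadmin.departments5 and the queue name strmadmin.streams_queue are hypothetical:

BEGIN
  -- Enqueue messages that satisfy the rule into a destination queue
  DBMS_APPLY_ADM.SET_ENQUEUE_DESTINATION(
    rule_name              => 'strmadmin.departments5',    -- hypothetical rule
    destination_queue_name => 'strmadmin.streams_queue');  -- hypothetical queue

  -- Do not execute the LCRs that satisfy the rule
  DBMS_APPLY_ADM.SET_EXECUTE(
    rule_name => 'strmadmin.departments5',
    execute   => FALSE);
END;
/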

The Source of Messages Applied by an Apply Process

The following list describes the source database for different types of messages that are processed by an apply process:

  • For a captured LCR, the source database is the database where the change encapsulated in the LCR was generated in the redo log.

  • For a persistent LCR captured by a synchronous capture, the source database is the database where the synchronous capture that captured the row LCR is configured.

  • For a persistent LCR constructed and enqueued by an application, the source database is the database where the message was first enqueued.

  • For a user message, the source database is the database where the message was first enqueued.

A single apply process can apply user messages that originated at multiple databases. However, a single apply process can apply captured LCRs from only one source database. Similarly, a single apply process can apply persistent LCRs captured by a synchronous capture from only one source database. Applying these LCRs requires knowledge of the dependencies, meaningful transaction ordering, and transactional boundaries at the source database.

Captured LCRs from multiple databases can be sent to a single destination queue. The same is true for persistent LCRs captured by a synchronous capture. However, if a single queue contains these LCRs from multiple source databases, then there must be multiple apply processes retrieving these LCRs. Each of these apply processes should be configured to receive messages from exactly one source database using rules. Oracle recommends that you use a separate ANYDATA queue for messages from each source database.

Also, each apply process can apply captured LCRs from only one capture process. If multiple capture processes are running on a source database, and LCRs from more than one of these capture processes are applied at a destination database, then there must be one apply process to apply changes from each capture process. In such an environment, Oracle recommends that each ANYDATA queue used by a capture process, propagation, or apply process have captured LCRs from at most one capture process from a particular source database. A queue can contain LCRs from more than one capture process if each capture process is capturing changes that originated at a different source database.

The same restriction applies to persistent LCRs captured by multiple synchronous captures at the same source database. Store these LCRs in separate ANYDATA queues, and use a separate apply process to apply the LCRs from each synchronous capture.


Note:

Captured LCRs are in the buffered queue portion of a queue while persistent LCRs are in the persistent queue portion of a queue. Therefore, a single apply process cannot apply both captured LCRs and persistent LCRs.

Data Types Applied

When applying row LCRs resulting from DML changes to tables, an apply process applies changes made to columns of the following data types:

  • VARCHAR2

  • NVARCHAR2

  • NUMBER

  • FLOAT

  • LONG

  • DATE

  • BINARY_FLOAT

  • BINARY_DOUBLE

  • TIMESTAMP

  • TIMESTAMP WITH TIME ZONE

  • TIMESTAMP WITH LOCAL TIME ZONE

  • INTERVAL YEAR TO MONTH

  • INTERVAL DAY TO SECOND

  • RAW

  • LONG RAW

  • CHAR

  • NCHAR

  • CLOB with BASICFILE or SECUREFILE storage

  • NCLOB with BASICFILE or SECUREFILE storage

  • BLOB with BASICFILE or SECUREFILE storage

  • UROWID

  • XMLType stored as CLOB, object relationally, or as binary XML


Note:

Oracle Streams capture processes can only capture changes to XMLType columns that are stored as CLOBs. However, apply processes can apply these captured LCRs to XMLType columns that are stored as CLOBs, object relationally, or as binary XML.

Automatic Data Type Conversion During Apply

During apply, an apply process automatically converts certain data types when there is a mismatch between the data type of a column in the row logical change record (row LCR) and the data type of the corresponding column in a table.

Table 4-4 shows which data type combinations are converted automatically during apply.

Table 4-4 Data Type Combinations Converted Automatically During Apply

From \ To   | CHAR | NCHAR | VARCHAR2 | NVARCHAR2 | CLOB | BLOB | DATE | TIMESTAMP
------------|------|-------|----------|-----------|------|------|------|----------
CHAR        | N/A  | Yes   | Yes      | Yes       | Yes  | No   | No   | No
NCHAR       | Yes  | N/A   | Yes      | Yes       | Yes  | No   | No   | No
VARCHAR2    | Yes  | Yes   | N/A      | Yes       | Yes  | No   | No   | No
NVARCHAR2   | Yes  | Yes   | Yes      | N/A       | Yes  | No   | No   | No
NUMBER      | Yes  | Yes   | Yes      | Yes       | No   | No   | No   | No
LONG        | No   | No    | No       | No        | Yes  | No   | No   | No
LONG RAW    | No   | No    | No       | No        | No   | Yes  | Yes  | No
RAW         | No   | No    | No       | No        | No   | Yes  | Yes  | No
DATE        | No   | No    | No       | No        | No   | No   | N/A  | Yes
TIMESTAMP   | No   | No    | No       | No        | No   | No   | Yes  | N/A

(N/A = Not Applicable)


An apply process automatically performs data type conversion for a data type combination when Table 4-4 specifies "Yes" for the combination. An apply process does not perform data type conversion for a data type combination when Table 4-4 specifies "No" for the combination. For example, an apply process automatically converts a CHAR to an NCHAR, but it does not convert a CHAR to a BLOB.

Also, if the corresponding table column is not large enough to hold the converted string from a row LCR column, then the apply process raises an error.

The following sections provide more information about automatic data type conversion during apply:


Note:

An apply process must be part of Oracle Database 11g Release 1 (11.1.0.7) or later to perform automatic data type conversion. However, an apply process can convert columns in row LCRs that were captured or constructed on an earlier Oracle Database release.




See Also:

Oracle Database SQL Language Reference for more information about data types

Automatic Trimming of Character Data Types During Apply

The rtrim_on_implicit_conversion apply process parameter determines whether the apply process trims data when it converts a CHAR or NCHAR to a VARCHAR2, NVARCHAR2, or CLOB. When this parameter is set to Y, the apply process automatically removes blank padding from the right end of a column during data type conversion. When this parameter is set to N, the apply process preserves blank padding during data type conversion.

Consider the following example:

  • A row LCR contains 'abc' for a CHAR(10) column.

  • The corresponding table column for the row LCR is NVARCHAR2(10).

If the rtrim_on_implicit_conversion apply process parameter is set to Y, the apply process inserts 'abc' into the table column and trims the padding after these characters. If the rtrim_on_implicit_conversion apply process parameter is set to N, then the apply process inserts 'abc' into the table column, and the remaining space in the column is filled with blanks.
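For example, a sketch of setting this parameter follows. The apply process name strm01_apply is hypothetical:

BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'strm01_apply',                  -- hypothetical apply process
    parameter  => 'rtrim_on_implicit_conversion',
    value      => 'Y');                            -- trim blank padding during conversion
END;
/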

Automatic Conversion and LOB Data Types

Procedure DML handlers and error handlers can use LOB assembly for data that has been converted from LONG to CLOB or from LONG RAW to BLOB.

SQL Generation

SQL generation is the ability to generate the SQL statement required to perform the change encapsulated in a row logical change record (row LCR). Apply processes can generate the SQL statement necessary to perform the insert, update, or delete operation in a row LCR.

This section contains these topics:


Note:

This section describes using SQL generation with the PL/SQL interface. You can also use SQL generation with XStream interfaces.


Interfaces for Performing SQL Generation

You can use the GET_ROW_TEXT and GET_WHERE_CLAUSE member procedures for row LCRs to perform SQL generation. The PL/SQL interface generates SQL in a CLOB data type.
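As a sketch, a procedure DML handler that generates the SQL text for each row LCR might look like the following. The procedure name is hypothetical, and the logging step is left as a comment:

CREATE OR REPLACE PROCEDURE strmadmin.log_row_text(in_any IN ANYDATA) IS
  lcr      SYS.LCR$_ROW_RECORD;
  rc       PLS_INTEGER;
  row_text CLOB;
BEGIN
  rc := in_any.GETOBJECT(lcr);              -- extract the row LCR from the ANYDATA wrapper
  DBMS_LOB.CREATETEMPORARY(row_text, TRUE);
  lcr.GET_ROW_TEXT(row_text);               -- generated SQL statement, inline values format
  -- ... record or inspect row_text here ...
  DBMS_LOB.FREETEMPORARY(row_text);
  lcr.EXECUTE(TRUE);                        -- apply the row change
END;
/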

SQL Generation Formats

SQL statements can be generated in one of two formats: inline values or bind variables. Use inline values when the returned SQL statement is relatively small. For larger SQL statements, use bind variables. With bind variables, the variables are passed in a separate list that includes pointers to both old and new column values.

For information about using bind variables with each interface, see the documentation about the GET_ROW_TEXT and GET_WHERE_CLAUSE row LCR member procedures in Oracle Database PL/SQL Packages and Types Reference.


Note:

For generated SQL statements with the values inline, SQL injection is possible. SQL injection is a technique for maliciously exploiting applications that use client-supplied data in SQL statements, thereby gaining unauthorized access to a database in order to view or manipulate restricted data. Oracle strongly recommends using bind variables if you plan to execute the generated SQL statement. See Oracle Database PL/SQL Language Reference for more information about SQL injection.

SQL Generation and Data Types

SQL generation supports the following data types:

  • VARCHAR2

  • NVARCHAR2

  • NUMBER

  • FLOAT

  • DATE

  • BINARY_FLOAT

  • BINARY_DOUBLE

  • LONG

  • TIMESTAMP

  • TIMESTAMP WITH TIME ZONE

  • TIMESTAMP WITH LOCAL TIME ZONE

  • INTERVAL YEAR TO MONTH

  • INTERVAL DAY TO SECOND

  • RAW

  • LONG RAW

  • CHAR

  • NCHAR

  • CLOB with BASICFILE storage

  • NCLOB with BASICFILE storage

  • BLOB with BASICFILE storage

  • XMLType stored as CLOB

SQL Generation and Automatic Data Type Conversion

An apply process performs implicit data type conversion where it is possible, and the generated SQL follows ANSI standards where it is possible. The following are considerations for automatic data type conversions:

  • NULL is specified as "NULL".

  • Single quotation marks are converted into double quotation marks for the following data types when they are inline values: CHAR, VARCHAR2, NVARCHAR2, NCHAR, CLOB, and NCLOB.

  • LONG data is converted into CLOB data.

  • LONG RAW data is converted into BLOB data.

SQL Generation and LOB, LONG, LONG RAW, and XMLType Data Types

For INSERT and UPDATE operations on LOB columns, an apply process automatically assembles the LOB chunks using LOB assembly. For these operations, the generated SQL includes a non-NULL empty value. The actual values of the chunked columns arrive in subsequent LCRs. For each chunk, you must perform the correct SQL operation on the correct column.

Similarly, for LONG, LONG RAW, and XMLType data types, an apply process generates a non-NULL empty value, and the actual values of the column arrive in chunks in subsequent LCRs. For each chunk, you must perform the correct SQL operation on the correct column.

In the inline version of the generated SQL, for LOB, LONG, LONG RAW, and XMLType data type columns, the following SQL is generated for inserts and updates:

  • For CLOB, NCLOB, and LONG data type columns:

    EMPTY_CLOB()
    
  • For BLOB and LONG RAW data type columns:

    EMPTY_BLOB()
    
  • For XMLType columns:

    XMLTYPE.CREATEXML('<xml/>')
    

    where <xml/> is the XML chunk.

After the LCR that contains the DML statement arrives, the data for these changes arrive in separate chunks. You can generate the WHERE clause for such a change and use the generated WHERE clause to identify the row for the modifications contained in the chunks. For example, in PL/SQL you can use the GET_WHERE_CLAUSE row LCR member procedure to generate the WHERE clause for a row change.
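For example, inside a procedure DML handler, generating the WHERE clause might be sketched as follows, assuming lcr is a SYS.LCR$_ROW_RECORD extracted from the handler's ANYDATA parameter and where_clause is a CLOB variable:

DBMS_LOB.CREATETEMPORARY(where_clause, TRUE);
lcr.GET_WHERE_CLAUSE(where_clause);  -- WHERE clause identifying the changed row
-- ... use where_clause to locate the row for the LOB chunks that follow ...
DBMS_LOB.FREETEMPORARY(where_clause);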

For INSERT and UPDATE operations, the generated WHERE clause identifies the row after the insert or update. For example, consider the following update to the hr.departments table:

UPDATE hr.departments SET department_name='Management' 
  WHERE department_name='Administration';

The generated WHERE clause for this change is the following:

WHERE "DEPARTMENT_NAME"='Management'

For piecewise LOB operation performed by subprograms in the DBMS_LOB package (including the WRITE, TRIM, and ERASE procedures), the generated SQL includes a SELECT FOR UPDATE statement.

For example, a LOB_WRITE operation on a clob_col results in generated SQL similar to the following:

SELECT "CLOB_COL" FROM "HR"."LOB_TAB" WHERE "N1"=2 FOR UPDATE

The selected clob_col must be defined. You can use the LOB locator to perform piecewise LOB operations with the LOB chunks that follow the row LCR.

SQL Generation and Character Sets

When you use the LCR methods, the generated SQL is in the database character set. SQL keywords, such as INSERT, UPDATE, and INTO, do not change with the character set.


Sample Generated SQL Statements

This section provides examples of generated SQL statements:

Sample Generated SQL Statements for the hr.employees Table

This section provides examples of SQL statements generated by an apply process for changes made to the hr.employees table.

This section includes these examples:


Note:

Generated SQL is in a single line and is not formatted.

Example 4-1 Generated Insert

Assume the following insert is executed:

INSERT INTO hr.employees (employee_id, 
                           last_name, 
                           email, 
                           hire_date, 
                           job_id, 
                           salary, 
                           commission_pct) 
                   VALUES (207, 
                           'Gregory', 
                           'pgregory@example.com', 
                           SYSDATE, 
                           'PU_CLERK', 
                           9000, 
                           NULL);

The following is the generated SQL with inline values:

INSERT INTO "HR"."EMPLOYEES"("EMPLOYEE_ID","FIRST_NAME","LAST_NAME",
"EMAIL","PHONE_NUMBER","HIRE_DATE","JOB_ID","SALARY","COMMISSION_PCT",
"MANAGER_ID","DEPARTMENT_ID" ) VALUES ( 207, NULL,'Gregory',
'pgregory@example.com', NULL , TO_DATE(' 2009-04-15','syyyy-mm-dd'),
'PU_CLERK',9000, NULL , NULL , NULL )

The following is the generated SQL with bind variables:

INSERT INTO "HR"."EMPLOYEES"("EMPLOYEE_ID","FIRST_NAME","LAST_NAME",
"EMAIL","PHONE_NUMBER","HIRE_DATE","JOB_ID","SALARY",
"COMMISSION_PCT","MANAGER_ID","DEPARTMENT_ID" ) VALUES ( :1   ,:2   ,:3   
,:4   ,:5   ,:6   ,:7   ,:8   ,:9   ,:10  ,:11  )

Example 4-2 Generated Update

Assume the following update is executed:

UPDATE hr.employees SET salary=10000 WHERE employee_id=207;

The following is the generated SQL with inline values:

UPDATE "HR"."EMPLOYEES" SET "SALARY"=10000 WHERE "EMPLOYEE_ID"=207 
AND "SALARY"=9000

The following is the generated SQL with bind variables:

UPDATE "HR"."EMPLOYEES" SET "SALARY"=:1    WHERE "EMPLOYEE_ID"=:2    
AND "SALARY"=:3

Example 4-3 Generated Delete

Assume the following delete is executed:

DELETE FROM hr.employees WHERE employee_id=207;

The following is the generated SQL with inline values:

DELETE  FROM "HR"."EMPLOYEES" WHERE "EMPLOYEE_ID"=207 AND "FIRST_NAME" IS NULL 
AND "LAST_NAME"='Gregory' AND "EMAIL"='pgregory@example.com' AND 
"PHONE_NUMBER" IS NULL  AND "HIRE_DATE"= TO_DATE(' 2009-04-15','syyyy-mm-dd') 
AND "JOB_ID"='PU_CLERK' AND "SALARY"=10000 AND "COMMISSION_PCT" IS NULL  
AND "MANAGER_ID" IS NULL  AND "DEPARTMENT_ID" IS NULL 

The following is the generated SQL with bind variables:

DELETE  FROM "HR"."EMPLOYEES" WHERE "EMPLOYEE_ID"=:1    AND "FIRST_NAME"=:2    
AND "LAST_NAME"=:3    AND "EMAIL"=:4    AND "PHONE_NUMBER"=:5    AND 
"HIRE_DATE"=:6    AND "JOB_ID"=:7    AND "SALARY"=:8    AND 
"COMMISSION_PCT"=:9   AND "MANAGER_ID"=:10   AND "DEPARTMENT_ID"=:11 
Sample Generated SQL Statements for a Table With LOB Columns

This section provides examples of SQL statements generated by an apply process for changes made to the following table:

CREATE TABLE hr.lob_tab(
   n1        number primary key,
   clob_col  CLOB,
   nclob_col NCLOB,
   blob_col  BLOB);

This section includes these examples:


Note:

Generated SQL is in a single line and is not formatted.

Example 4-4 Generated Insert for a Table with LOB Columns

Assume the following insert is executed:

INSERT INTO hr.lob_tab VALUES (2, 'test insert', NULL, NULL);

The following is the generated SQL with inline values:

INSERT INTO "HR"."LOB_TAB"("N1","BLOB_COL","CLOB_COL","NCLOB_COL" ) 
VALUES ( 2,, EMPTY_CLOB() ,)

The following is the generated SQL with bind variables:

INSERT INTO "HR"."LOB_TAB"("N1","BLOB_COL","CLOB_COL","NCLOB_COL" ) 
VALUES ( :1   ,:2   ,:3   ,:4   )

The GET_WHERE_CLAUSE member procedure generates the following WHERE clause for this insert:

  • Inline:

    WHERE "N1"=2
    
  • Bind variables:

    WHERE "N1"=:1
    

You can use the WHERE clause to identify the row that was inserted when the subsequent chunks arrive for the LOB column change.

Example 4-5 Generated Update for a Table with LOB Columns

Assume the following update is executed:

UPDATE hr.lob_tab SET clob_col='test update' WHERE n1=2;

The following is the generated SQL with inline values:

UPDATE "HR"."LOB_TAB" SET "CLOB_COL"= EMPTY_CLOB()  WHERE "N1"=2

The following is the generated SQL with bind variables:

UPDATE "HR"."LOB_TAB" SET "CLOB_COL"=:1    WHERE "N1"=:2

Example 4-6 Generated Delete for a Table with LOB Columns

Assume the following delete is executed:

DELETE FROM hr.lob_tab WHERE n1=2;

The following is the generated SQL with inline values:

DELETE  FROM "HR"."LOB_TAB" WHERE "N1"=2

The following is the generated SQL with bind variables:

DELETE  FROM "HR"."LOB_TAB" WHERE "N1"=:1

Oracle Streams Apply Processes and RESTRICTED SESSION

When restricted session is enabled during system startup by issuing a STARTUP RESTRICT statement, apply processes do not start, even if they were running when the database shut down. When the restricted session is disabled, each apply process that was not stopped is started.

When restricted session is enabled in a running database by the SQL statement ALTER SYSTEM ENABLE RESTRICTED SESSION, it does not affect any running apply processes. These apply processes continue to run and apply messages. If a stopped apply process is started in a restricted session, then the apply process does not actually start until the restricted session is disabled.

Apply Process Subcomponents

An apply process consists of the following subcomponents:

  • A reader server that dequeues messages. The reader server is a process that computes dependencies between logical change records (LCRs) and assembles messages into transactions. The reader server then returns the assembled transactions to the coordinator process.

  • A coordinator process that gets transactions from the reader server and passes them to apply servers. The coordinator process name is APnn, where nn can include letters and numbers. The coordinator process is an Oracle background process.

  • One or more apply servers that apply LCRs to database objects as DML or DDL statements or that pass the LCRs to their appropriate apply handlers. For non-LCR messages, the apply servers pass the messages to the message handler. Apply servers can also enqueue LCR and non-LCR messages into the persistent queue portion of a queue specified by the DBMS_APPLY_ADM.SET_ENQUEUE_DESTINATION procedure. Each apply server is a process. If an apply server encounters an error, then it tries to resolve the error with a user-specified conflict handler or error handler. If an apply server cannot resolve an error, then it rolls back the transaction and places the entire transaction, including all of its messages, in the error queue.

    When an apply server commits a completed transaction, this transaction has been applied. When an apply server places a transaction in the error queue and commits, this transaction also has been applied.

The reader server and the apply server process names are ASnn, where nn can include letters and numbers. If a transaction being handled by an apply server has a dependency on another transaction that is not known to have been applied, then the apply server contacts the coordinator process and waits for instructions. The coordinator process monitors all of the apply servers to ensure that transactions are applied and committed in the correct order.

The following sections describe the possible states for each apply process subcomponent:

Reader Server States

The state of a reader server describes what the reader server is doing currently. You can view the state of the reader server for an apply process by querying the V$STREAMS_APPLY_READER dynamic performance view. The following reader server states are possible:

  • INITIALIZING - Starting up

  • IDLE - Performing no work

  • DEQUEUE MESSAGES - Dequeuing messages from the apply process's queue

  • SCHEDULE MESSAGES - Computing dependencies between messages and assembling messages into transactions

  • SPILLING - Spilling unapplied messages from memory to hard disk

  • PAUSED - WAITING FOR DDL TO COMPLETE - Paused while waiting for a DDL LCR to be applied
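
For example, a query in the monitoring style used elsewhere in this book (the column formats are illustrative) shows the current reader server state for each apply process:

```sql
COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A25
COLUMN STATE HEADING 'Reader Server State' FORMAT A30

SELECT APPLY_NAME, STATE FROM V$STREAMS_APPLY_READER;
```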


Coordinator Process States

The state of a coordinator process describes what the coordinator process is doing currently. You can view the state of a coordinator process by querying the V$STREAMS_APPLY_COORDINATOR dynamic performance view. The following coordinator process states are possible:

  • INITIALIZING - Starting up

  • IDLE - Performing no work

  • APPLYING - Passing transactions to apply servers

  • SHUTTING DOWN CLEANLY - Stopping without an error

  • ABORTING - Stopping because of an apply error
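
For example, a query in the same monitoring style (column formats are illustrative) shows the current coordinator process state for each apply process:

```sql
COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A25
COLUMN STATE HEADING 'Coordinator State' FORMAT A25

SELECT APPLY_NAME, STATE FROM V$STREAMS_APPLY_COORDINATOR;
```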


Apply Server States

The state of an apply server describes what the apply server is doing currently. You can view the state of each apply server for an apply process by querying the V$STREAMS_APPLY_SERVER dynamic performance view. The following apply server states are possible:

  • INITIALIZING - Starting up.

  • IDLE - Performing no work.

  • RECORD LOW-WATERMARK - Performing an administrative action that maintains information about the apply progress, which is used in the ALL_APPLY_PROGRESS and DBA_APPLY_PROGRESS data dictionary views.

  • ADD PARTITION - Performing an administrative action that adds a partition that is used for recording information about in-progress transactions.

  • DROP PARTITION - Performing an administrative action that drops a partition that was used to record information about in-progress transactions.

  • EXECUTE TRANSACTION - Applying a transaction.

  • WAIT COMMIT - Waiting to commit a transaction until all other transactions with a lower commit SCN are applied. This state is possible only if the commit_serialization apply process parameter is set to a value other than DEPENDENT_TRANSACTIONS and the parallelism apply process parameter is set to a value greater than 1.

  • WAIT DEPENDENCY - Waiting to apply an LCR in a transaction until another transaction, on which it has a dependency, is applied. This state is possible only if the parallelism apply process parameter is set to a value greater than 1.

  • WAIT FOR CLIENT - Waiting for an XStream In client application to request more logical change records (LCRs).

  • WAIT FOR NEXT CHUNK - Waiting for the next set of LCRs for a large transaction.

  • ROLLBACK TRANSACTION - Rolling back a transaction.

  • TRANSACTION CLEANUP - Cleaning up an applied transaction, which includes removing LCRs from the apply process's queue.
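
For example, a query in the same monitoring style (column formats are illustrative) shows the state of each apply server for each apply process:

```sql
COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A25
COLUMN SERVER_ID HEADING 'Apply|Server ID' FORMAT 99999
COLUMN STATE HEADING 'Apply Server State' FORMAT A25

SELECT APPLY_NAME, SERVER_ID, STATE FROM V$STREAMS_APPLY_SERVER;
```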


Apply User

An apply process applies messages in the security domain of its apply user. The apply user dequeues all messages that satisfy the apply process rule sets. The apply user can apply messages directly to database objects. In addition, the apply user runs all custom rule-based transformations specified by the rules in these rule sets. The apply user also runs user-defined apply handlers.

The apply user must have the necessary privileges to apply changes, including the following privileges:

  • EXECUTE privilege on the rule sets used by the apply process

  • EXECUTE privilege on all custom rule-based transformation functions specified for rules in the positive rule set

  • EXECUTE privilege on any apply handlers

  • Privileges to dequeue messages from the apply process's queue

An apply process can be associated with only one user, but one user can be associated with many apply processes.
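
For example, the following query (column formats are illustrative) lists the apply user for each apply process in the database:

```sql
COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A25
COLUMN APPLY_USER HEADING 'Apply User' FORMAT A20

SELECT APPLY_NAME, APPLY_USER FROM DBA_APPLY;
```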


Apply Process Parameters

After creation, an apply process is disabled so that you can set the apply process parameters for your environment before starting the process for the first time. Apply process parameters control the way an apply process operates. For example, the parallelism apply process parameter specifies the number of apply servers that can concurrently apply transactions, and the time_limit apply process parameter specifies the amount of time an apply process runs before it is shut down automatically. After you set the apply process parameters, you can start the apply process.
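
For example, the parallelism parameter can be changed with the DBMS_APPLY_ADM.SET_PARAMETER procedure. In this sketch, the apply process name strm01_apply is hypothetical:

```sql
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'strm01_apply',  -- hypothetical apply process name
    parameter  => 'parallelism',
    value      => '4');
END;
/
```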


Persistent Apply Process Status Upon Database Restart

An apply process maintains a persistent status when the database running the apply process is shut down and restarted. For example, if an apply process is enabled when the database is shut down, then the apply process automatically starts when the database is restarted. Similarly, if an apply process is disabled or aborted when a database is shut down, then the apply process is not started and retains the disabled or aborted status when the database is restarted.

The Error Queue

The error queue contains all of the current apply errors for a database. If there are multiple apply processes in a database, then the error queue contains the apply errors for each apply process. To view information about apply errors, query the DBA_APPLY_ERROR data dictionary view or use Enterprise Manager.
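
For example, a query in the monitoring style used elsewhere in this book (column formats are illustrative) lists the current apply errors:

```sql
COLUMN APPLY_NAME HEADING 'Apply|Process' FORMAT A15
COLUMN LOCAL_TRANSACTION_ID HEADING 'Local|Transaction ID' FORMAT A15
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A40

SELECT APPLY_NAME, LOCAL_TRANSACTION_ID, ERROR_MESSAGE
  FROM DBA_APPLY_ERROR;
```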

The error queue stores information about transactions that could not be applied successfully by the apply processes running in a database. A transaction can include many messages. When an unhandled error occurs during apply, an apply process automatically moves all of the messages in the transaction that satisfy the apply process rule sets to the error queue.

You can correct the condition that caused an error and then reexecute the transaction that caused the error. For example, you might modify a row in a table to correct the condition that caused an error.

When the condition that caused the error has been corrected, you can either reexecute the transaction in the error queue using the EXECUTE_ERROR or EXECUTE_ALL_ERRORS procedure, or you can delete the transaction from the error queue using the DELETE_ERROR or DELETE_ALL_ERRORS procedure. These procedures are in the DBMS_APPLY_ADM package.
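
For example, the following sketch reexecutes a single error transaction. The transaction identifier shown is hypothetical; an actual value would come from the LOCAL_TRANSACTION_ID column of the DBA_APPLY_ERROR view:

```sql
BEGIN
  DBMS_APPLY_ADM.EXECUTE_ERROR(
    local_transaction_id => '1.17.2845',  -- hypothetical; query DBA_APPLY_ERROR for real values
    execute_as_user      => FALSE);       -- FALSE: execute in the original user's security context
END;
/
```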

When you reexecute a transaction in the error queue, you can specify that the transaction be executed either by the user who originally placed the error in the error queue or by the user who is reexecuting the transaction. Also, the current Oracle Streams tag for the apply process is used when you reexecute a transaction in the error queue.

A reexecuted transaction uses any relevant apply handlers and conflict resolution handlers. If, to resolve the error, a row LCR in an error queue must be modified before it is executed, then you can configure a procedure DML handler to process the row LCR that caused the error in the error queue. In this case, the DML handler can modify the row LCR to avoid a repetition of the same error. The row LCR is passed to the DML handler when you reexecute the error containing the row LCR. For example, a statement DML handler might insert different values than the ones present in an insert row LCR, while a procedure DML handler might modify one or more columns in the row LCR to avoid a repetition of the same error.

The error queue contains information about errors encountered at the local destination database only. It does not contain information about errors for apply processes running in other databases in an Oracle Streams environment.

The error queue uses the exception queues in the database. When you create an ANYDATA queue using the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package, the procedure creates a queue table for the queue if one does not already exist. When a queue table is created, an exception queue is created automatically for the queue table. Multiple queues can use a single queue table, and each queue table has one exception queue. Therefore, a single exception queue can store errors for multiple queues and multiple apply processes.

An exception queue only contains the apply errors for its queue table, but the Oracle Streams error queue contains information about all of the apply errors in each exception queue in a database. You should use the procedures in the DBMS_APPLY_ADM package to manage Oracle Streams apply errors. You should not dequeue apply errors from an exception queue directly.


Note:

If a messaging client encounters an error when it is dequeuing messages, then the messaging client moves these messages to the exception queue associated with its queue table. However, information about messaging client errors is not stored in the error queue. Only information about apply process errors is stored in the error queue.

Explicit Consumption with a Messaging Client

A messaging client dequeues messages from its persistent queue when it is invoked by an application or a user. You use rules to specify which messages in the queue are dequeued by a messaging client. These messages can be persistent LCRs or persistent user messages.

You can create a messaging client by specifying dequeue for the streams_type parameter when you run one of the following procedures in the DBMS_STREAMS_ADM package:

When you create a messaging client, you specify the name of the messaging client and the ANYDATA queue from which the messaging client dequeues messages. These procedures can also add rules to the positive rule set or negative rule set of a messaging client. You specify the message type for each rule, and a single messaging client can dequeue messages of different types.

The user who creates a messaging client is granted the privileges to dequeue from the queue using the messaging client. This user is the messaging client user. The messaging client user can dequeue messages that satisfy the messaging client rule sets. A messaging client can be associated with only one user, but one user can be associated with many messaging clients.
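
As a sketch, a messaging client can be created by running the ADD_MESSAGE_RULE procedure with dequeue for the streams_type parameter. The message type, rule condition, client name, and queue name in this example are hypothetical:

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_MESSAGE_RULE(
    message_type   => 'strmadmin.order_typ',          -- hypothetical message type
    rule_condition => ':msg.order_status = ''NEW''',  -- hypothetical rule condition
    streams_type   => 'dequeue',                      -- creates a messaging client
    streams_name   => 'oe_msg_client',                -- hypothetical messaging client name
    queue_name     => 'strmadmin.streams_queue');     -- hypothetical queue
END;
/
```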

Figure 4-2 shows a messaging client dequeuing messages.

Figure 4-2 Messaging Client

Description of Figure 4-2 follows
Description of "Figure 4-2 Messaging Client"

Explicit Consumption with Manual Dequeue

With explicit consumption with manual dequeue, an application explicitly dequeues buffered LCRs, persistent LCRs, buffered user messages, or persistent user messages and processes them. The queue from which the messages are dequeued can be an ANYDATA queue or a typed queue. You can use either the DBMS_STREAMS_MESSAGING package or the DBMS_AQ package to dequeue messages.
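
For example, the following sketch dequeues one message from an ANYDATA queue using the DBMS_STREAMS_MESSAGING package. The queue name and consumer name are hypothetical:

```sql
DECLARE
  msg SYS.ANYDATA;
BEGIN
  DBMS_STREAMS_MESSAGING.DEQUEUE(
    queue_name   => 'strmadmin.streams_queue',       -- hypothetical queue
    streams_name => 'oe_msg_client',                 -- hypothetical consumer name
    payload      => msg,
    navigation   => 'NEXT MESSAGE',
    wait         => DBMS_STREAMS_MESSAGING.NO_WAIT); -- do not block if the queue is empty
END;
/
```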

The dequeue features available with Oracle Streams Advanced Queuing include the following:

  • Dequeue from a buffered queue or a persistent queue

  • Concurrent dequeues

  • Dequeue methods

  • Dequeue modes

  • Dequeue an array of messages

  • Message states

  • Navigation of messages in dequeuing

  • Waiting for messages

  • Retries with delays

  • Optional transaction protection

  • Exception queues



29 Monitoring Other Oracle Streams Components

This chapter provides sample queries that you can use to monitor various Oracle Streams components.

The following topics describe monitoring various Oracle Streams components:


Note:

The Oracle Streams tool in Oracle Enterprise Manager is also an excellent way to monitor an Oracle Streams environment. See the online Help for the Oracle Streams tool for more information.


See Also:

Oracle Database Reference for information about the data dictionary views described in this chapter

Monitoring Oracle Streams Administrators and Other Oracle Streams Users

The following sections contain queries that you can run to list Oracle Streams administrators and other users who allow access to remote Oracle Streams administrators:


See Also:

Oracle Database PL/SQL Packages and Types Reference for more information about configuring Oracle Streams administrators and other Oracle Streams users using the DBMS_STREAMS_AUTH package

Listing Local Oracle Streams Administrators

You can grant privileges to a local Oracle Streams administrator by running the GRANT_ADMIN_PRIVILEGE procedure in the DBMS_STREAMS_AUTH package. The DBA_STREAMS_ADMINISTRATOR data dictionary view contains only the local Oracle Streams administrators created with the grant_privileges parameter set to TRUE when the GRANT_ADMIN_PRIVILEGE procedure was run for the user. If you created an Oracle Streams administrator using generated scripts and set the grant_privileges parameter to FALSE when the GRANT_ADMIN_PRIVILEGE procedure was run for the user, then the DBA_STREAMS_ADMINISTRATOR data dictionary view does not list the user as an Oracle Streams administrator.

To list the local Oracle Streams administrators created with the grant_privileges parameter set to TRUE when running the GRANT_ADMIN_PRIVILEGE procedure, run the following query:

COLUMN USERNAME HEADING 'Local Streams Administrator' FORMAT A30

SELECT USERNAME FROM DBA_STREAMS_ADMINISTRATOR
  WHERE LOCAL_PRIVILEGES = 'YES';

Your output looks similar to the following:

Local Streams Administrator
------------------------------
STRMADMIN

The GRANT_ADMIN_PRIVILEGE procedure might not have been run on a user who is an Oracle Streams administrator. Such administrators are not returned by the query in this section. Also, you can change the privileges for the listed users after the GRANT_ADMIN_PRIVILEGE procedure has been run for them. The DBA_STREAMS_ADMINISTRATOR view does not track these changes unless they are performed by the DBMS_STREAMS_AUTH package. For example, you can revoke the privileges granted by the GRANT_ADMIN_PRIVILEGE procedure for a particular user using the REVOKE SQL statement, but this user would still be listed when you query the DBA_STREAMS_ADMINISTRATOR view.

Oracle recommends using the REVOKE_ADMIN_PRIVILEGE procedure in the DBMS_STREAMS_AUTH package to revoke privileges from a user listed by the query in this section. When you revoke privileges from a user using this procedure, the user is removed from the DBA_STREAMS_ADMINISTRATOR view.
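
For example, the following sketch revokes the Oracle Streams administrator privileges from a hypothetical user named strmadmin:

```sql
BEGIN
  DBMS_STREAMS_AUTH.REVOKE_ADMIN_PRIVILEGE(
    grantee => 'strmadmin');  -- hypothetical user name
END;
/
```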


See Also:

Oracle Streams Replication Administrator's Guide for information about creating an Oracle Streams administrator

Listing Users Who Allow Access to Remote Oracle Streams Administrators

You can configure a user to allow access to remote Oracle Streams administrators by running the GRANT_REMOTE_ADMIN_ACCESS procedure in the DBMS_STREAMS_AUTH package. Such a user allows the remote Oracle Streams administrator to perform administrative actions in the local database using a database link.

Typically, you configure such a user at a local source database if a downstream capture process captures changes originating at the local source database. The Oracle Streams administrator at a downstream capture database administers the source database using this connection.
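
For example, the following sketch configures a hypothetical user named strmremote to allow access to remote Oracle Streams administrators:

```sql
BEGIN
  DBMS_STREAMS_AUTH.GRANT_REMOTE_ADMIN_ACCESS(
    grantee => 'strmremote');  -- hypothetical user name
END;
/
```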

To list the users who allow access to remote Oracle Streams administrators, run the following query:

COLUMN USERNAME HEADING 'Users Who Allow Remote Access' FORMAT A30

SELECT USERNAME FROM DBA_STREAMS_ADMINISTRATOR
  WHERE ACCESS_FROM_REMOTE = 'YES'; 

Your output looks similar to the following:

Users Who Allow Remote Access
------------------------------
STRMREMOTE

Monitoring the Oracle Streams Pool

The Oracle Streams pool is a portion of memory in the System Global Area (SGA) that is used by Oracle Streams. The Oracle Streams pool stores enqueued messages in memory, and it provides memory for capture processes and apply processes. The Oracle Streams pool always stores LCRs captured by a capture process, and it can store other types of messages that are enqueued manually into a buffered queue.

The Oracle Streams pool size is managed automatically when the MEMORY_TARGET, MEMORY_MAX_TARGET, or SGA_TARGET initialization parameter is set to a nonzero value. If these parameters are all set to 0 (zero), then you can specify the size of the Oracle Streams pool in bytes using the STREAMS_POOL_SIZE initialization parameter. In this case, the V$STREAMS_POOL_ADVICE dynamic performance view provides information about an appropriate setting for the STREAMS_POOL_SIZE initialization parameter.
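
For example, when automatic memory management is not used, the Oracle Streams pool could be sized manually with a statement such as the following (256M is an arbitrary illustrative value):

```sql
-- Takes effect only when MEMORY_TARGET, MEMORY_MAX_TARGET, and SGA_TARGET are all 0
ALTER SYSTEM SET STREAMS_POOL_SIZE = 256M;
```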

This section contains example queries that show when you should increase, retain, or decrease the size of the Oracle Streams pool. Each query shows the following information about the Oracle Streams pool:

  • STREAMS_POOL_SIZE_FOR_ESTIMATE shows the size, in megabytes, of the Oracle Streams pool for the estimate. The size ranges from values smaller than the current Oracle Streams pool size to values larger than the current Oracle Streams pool size, with a separate row for each increment. There is always an entry that shows the current Oracle Streams pool size, and there are always 20 increments. The range and the size of the increments depend on the current size of the Oracle Streams pool.

  • STREAMS_POOL_SIZE_FACTOR shows the size factor of an estimate as it relates to the current size of the Oracle Streams pool. For example, a size factor of .2 means that the estimate is for 20% of the current size of the Oracle Streams pool, while a size factor of 1.6 means that the estimate is for 160% of the current size of the Oracle Streams pool. The row with a size factor of 1.0 shows the current size of the Oracle Streams pool.

  • ESTD_SPILL_COUNT shows the estimated number of messages that will spill from memory to the queue table for each STREAMS_POOL_SIZE_FOR_ESTIMATE and STREAMS_POOL_SIZE_FACTOR returned by the query.

  • ESTD_SPILL_TIME shows the estimated elapsed time, in seconds, spent spilling messages from memory to the queue table for each STREAMS_POOL_SIZE_FOR_ESTIMATE and STREAMS_POOL_SIZE_FACTOR returned by the query.

  • ESTD_UNSPILL_COUNT shows the estimated number of messages that will unspill from the queue table back into memory for each STREAMS_POOL_SIZE_FOR_ESTIMATE and STREAMS_POOL_SIZE_FACTOR returned by the query.

  • ESTD_UNSPILL_TIME shows the estimated elapsed time, in seconds, spent unspilling messages from the queue table back into memory for each STREAMS_POOL_SIZE_FOR_ESTIMATE and STREAMS_POOL_SIZE_FACTOR returned by the query.

If any capture processes, propagations, or apply processes are disabled when you query the V$STREAMS_POOL_ADVICE view, and you plan to enable them in the future, then ensure that you consider the memory resources required by these Oracle Streams clients before you decrease the size of the Oracle Streams pool.


Tips:

  • In general, the best size for the Oracle Streams pool is the smallest size for which spilled and unspilled messages and times are close to zero.

  • For the most accurate results, you should run a query on the V$STREAMS_POOL_ADVICE view when there is a typical amount of dequeue activity by propagations and apply processes in a database. If dequeue activity is far lower than typical, or far higher than typical, then the query results might not be a good guide for adjusting the size of the Oracle Streams pool.



Query Result that Advises Increasing the Oracle Streams Pool Size

Consider the following results returned by the V$STREAMS_POOL_ADVICE view:

COLUMN STREAMS_POOL_SIZE_FOR_ESTIMATE HEADING 'Oracle Streams Pool Size|for Estimate(MB)'
  FORMAT 999999999999
COLUMN STREAMS_POOL_SIZE_FACTOR HEADING 'Oracle Streams Pool|Size|Factor' FORMAT 99.9
COLUMN ESTD_SPILL_COUNT HEADING 'Estimated|Spill|Count' FORMAT 99999999
COLUMN ESTD_SPILL_TIME HEADING 'Estimated|Spill|Time' FORMAT 99999999.99
COLUMN ESTD_UNSPILL_COUNT HEADING 'Estimated|Unspill|Count' FORMAT 99999999
COLUMN ESTD_UNSPILL_TIME HEADING 'Estimated|Unspill|Time' FORMAT 99999999.99

SELECT STREAMS_POOL_SIZE_FOR_ESTIMATE,
       STREAMS_POOL_SIZE_FACTOR, 
       ESTD_SPILL_COUNT, 
       ESTD_SPILL_TIME, 
       ESTD_UNSPILL_COUNT,
       ESTD_UNSPILL_TIME
  FROM V$STREAMS_POOL_ADVICE;

                         Oracle Streams Pool Estimated    Estimated Estimated    Estimated
Oracle Streams Pool Size                Size     Spill        Spill   Unspill      Unspill
        for Estimate(MB)              Factor     Count         Time     Count         Time
------------------------ ------------------- --------- ------------ --------- ------------
               24           .1       158        62.00         0          .00
               48           .2       145        59.00         0          .00
               72           .3       137        53.00         0          .00
               96           .4       122        50.00         0          .00
              120           .5       114        48.00         0          .00
              144           .6       103        45.00         0          .00
              168           .7        95        39.00         0          .00
              192           .8        87        32.00         0          .00
              216           .9        74        26.00         0          .00
              240          1.0        61        21.00         0          .00
              264          1.1        56        17.00         0          .00
              288          1.2        43        15.00         0          .00
              312          1.3        36        11.00         0          .00
              336          1.4        22         8.00         0          .00
              360          1.5         9         2.00         0          .00
              384          1.6         0          .00         0          .00
              408          1.7         0          .00         0          .00
              432          1.8         0          .00         0          .00
              456          1.9         0          .00         0          .00
              480          2.0         0          .00         0          .00

Based on these results, 384 megabytes, or 160% of the size of the current Oracle Streams pool, is the optimal size for the Oracle Streams pool. That is, this size is the smallest size for which the estimated number of spilled and unspilled messages is zero.


Note:

After you adjust the size of the Oracle Streams pool, it might take some time for the new size to result in new output for the V$STREAMS_POOL_ADVICE view.

Query Result that Advises Retaining the Current Oracle Streams Pool Size

Consider the following results returned by the V$STREAMS_POOL_ADVICE view:

COLUMN STREAMS_POOL_SIZE_FOR_ESTIMATE  HEADING 'Oracle Streams Pool|Size for Estimate'
  FORMAT 999999999999
COLUMN STREAMS_POOL_SIZE_FACTOR HEADING 'Oracle Streams Pool|Size|Factor' FORMAT 99.9
COLUMN ESTD_SPILL_COUNT HEADING 'Estimated|Spill|Count' FORMAT 99999999
COLUMN ESTD_SPILL_TIME HEADING 'Estimated|Spill|Time' FORMAT 99999999.99
COLUMN ESTD_UNSPILL_COUNT HEADING 'Estimated|Unspill|Count' FORMAT 99999999
COLUMN ESTD_UNSPILL_TIME HEADING 'Estimated|Unspill|Time' FORMAT 99999999.99
 
SELECT STREAMS_POOL_SIZE_FOR_ESTIMATE,
       STREAMS_POOL_SIZE_FACTOR, 
       ESTD_SPILL_COUNT, 
       ESTD_SPILL_TIME, 
       ESTD_UNSPILL_COUNT,
       ESTD_UNSPILL_TIME
  FROM V$STREAMS_POOL_ADVICE;

                         Oracle Streams Pool Estimated    Estimated Estimated    Estimated
Oracle Streams Pool Size                Size     Spill        Spill   Unspill      Unspill
        for Estimate(MB)              Factor     Count         Time     Count         Time
------------------------ ------------------- --------- ------------ --------- ------------
               24           .1        89        52.00         0          .00
               48           .2        78        48.00         0          .00
               72           .3        71        43.00         0          .00
               96           .4        66        37.00         0          .00
              120           .5        59        32.00         0          .00
              144           .6        52        26.00         0          .00
              168           .7        39        20.00         0          .00
              192           .8        27        12.00         0          .00
              216           .9        15         5.00         0          .00
              240          1.0         0          .00         0          .00
              264          1.1         0          .00         0          .00
              288          1.2         0          .00         0          .00
              312          1.3         0          .00         0          .00
              336          1.4         0          .00         0          .00
              360          1.5         0          .00         0          .00
              384          1.6         0          .00         0          .00
              408          1.7         0          .00         0          .00
              432          1.8         0          .00         0          .00
              456          1.9         0          .00         0          .00
              480          2.0         0          .00         0          .00

Based on these results, the current size of the Oracle Streams pool is the optimal size. That is, this size is the smallest size for which the estimated number of spilled and unspilled messages is zero.

Query Result that Advises Decreasing the Oracle Streams Pool Size

Consider the following results returned by the V$STREAMS_POOL_ADVICE view:

COLUMN STREAMS_POOL_SIZE_FOR_ESTIMATE  HEADING 'Oracle Streams Pool|Size for Estimate'
  FORMAT 999999999999
COLUMN STREAMS_POOL_SIZE_FACTOR HEADING 'Oracle Streams Pool|Size|Factor' FORMAT 99.9
COLUMN ESTD_SPILL_COUNT HEADING 'Estimated|Spill|Count' FORMAT 99999999
COLUMN ESTD_SPILL_TIME HEADING 'Estimated|Spill|Time' FORMAT 99999999.99
COLUMN ESTD_UNSPILL_COUNT HEADING 'Estimated|Unspill|Count' FORMAT 99999999
COLUMN ESTD_UNSPILL_TIME HEADING 'Estimated|Unspill|Time' FORMAT 99999999.99
 
SELECT STREAMS_POOL_SIZE_FOR_ESTIMATE,
       STREAMS_POOL_SIZE_FACTOR, 
       ESTD_SPILL_COUNT, 
       ESTD_SPILL_TIME, 
       ESTD_UNSPILL_COUNT,
       ESTD_UNSPILL_TIME
  FROM V$STREAMS_POOL_ADVICE;

                         Oracle Streams Pool Estimated    Estimated Estimated    Estimated
Oracle Streams Pool Size                Size     Spill        Spill   Unspill      Unspill
        for Estimate(MB)              Factor     Count         Time     Count         Time
------------------------ ------------------- --------- ------------ --------- ------------
               24           .1       158        62.00         0          .00
               48           .2       145        59.00         0          .00
               72           .3       137        53.00         0          .00
               96           .4       122        50.00         0          .00
              120           .5       114        48.00         0          .00
              144           .6       103        45.00         0          .00
              168           .7         0          .00         0          .00
              192           .8         0          .00         0          .00
              216           .9         0          .00         0          .00
              240          1.0         0          .00         0          .00
              264          1.1         0          .00         0          .00
              288          1.2         0          .00         0          .00
              312          1.3         0          .00         0          .00
              336          1.4         0          .00         0          .00
              360          1.5         0          .00         0          .00
              384          1.6         0          .00         0          .00
              408          1.7         0          .00         0          .00
              432          1.8         0          .00         0          .00
              456          1.9         0          .00         0          .00
              480          2.0         0          .00         0          .00

Based on these results, 168 megabytes, or 70% of the size of the current Oracle Streams pool, is the optimal size for the Oracle Streams pool. That is, this size is the smallest size for which the estimated number of spilled and unspilled messages is zero.


Note:

After you adjust the size of the Oracle Streams pool, it might take some time for the new size to result in new output for the V$STREAMS_POOL_ADVICE view.

Monitoring Compatibility in an Oracle Streams Environment

Some database objects and data types are not compatible with Oracle Streams capture processes, synchronous captures, and apply processes. If one of these Oracle Streams clients tries to process an unsupported database object or data type, errors result.

The queries in the following sections show Oracle Streams compatibility for database objects and columns in the local database:

Monitoring Compatibility for Capture Processes

This section contains these topics:

Listing the Database Objects That Are Not Compatible with Capture Processes

A database object is not compatible with capture processes if capture processes cannot capture changes to it. The query in this section displays the following information about database objects that are not compatible with capture processes:

  • The object owner

  • The object name

  • The reason why the object is not compatible with capture processes

  • Whether capture processes automatically filter out changes to the database object (AUTO_FILTERED column)

If capture processes automatically filter out changes to a database object, then the rule sets used by the capture processes do not need to filter them out explicitly. For example, capture processes automatically filter out changes to domain indexes. However, if changes to incompatible database objects are not filtered out automatically, then the rule sets used by the capture process must filter them out to avoid errors.

For example, suppose the rule sets for a capture process instruct the capture process to capture all of the changes made to a specific schema. Also suppose that the query in this section shows that one object in this schema is not compatible with capture processes, and that changes to the object are not filtered out automatically. In this case, you can add a rule to the negative rule set for the capture process to filter out changes to the incompatible database object.

Run the following query to list the database objects in the local database that are not compatible with capture processes:

COLUMN OWNER HEADING 'Object|Owner' FORMAT A8
COLUMN TABLE_NAME HEADING 'Object Name' FORMAT A30
COLUMN REASON HEADING 'Reason' FORMAT A30
COLUMN AUTO_FILTERED HEADING 'Auto|Filtered?' FORMAT A9

SELECT OWNER, TABLE_NAME, REASON, AUTO_FILTERED FROM DBA_STREAMS_UNSUPPORTED;

Your output looks similar to the following:

Object                                                                 Auto
Owner    Object Name                    Reason                         Filtered?
-------- ------------------------------ ------------------------------ ---------
IX       AQ$_ORDERS_QUEUETABLE_G        column with user-defined type  NO
IX       AQ$_ORDERS_QUEUETABLE_H        unsupported column exists      NO
IX       AQ$_ORDERS_QUEUETABLE_I        unsupported column exists      NO
IX       AQ$_ORDERS_QUEUETABLE_L        AQ queue table                 NO
IX       AQ$_ORDERS_QUEUETABLE_S        AQ queue table                 NO
IX       AQ$_ORDERS_QUEUETABLE_T        AQ queue table                 NO
IX       AQ$_STREAMS_QUEUE_TABLE_C      AQ queue table                 NO
IX       AQ$_STREAMS_QUEUE_TABLE_G      column with user-defined type  NO
IX       AQ$_STREAMS_QUEUE_TABLE_H      unsupported column exists      NO
IX       AQ$_STREAMS_QUEUE_TABLE_I      unsupported column exists      NO
IX       AQ$_STREAMS_QUEUE_TABLE_L      AQ queue table                 NO
IX       AQ$_STREAMS_QUEUE_TABLE_S      AQ queue table                 NO
IX       AQ$_STREAMS_QUEUE_TABLE_T      AQ queue table                 NO
IX       ORDERS_QUEUETABLE              column with user-defined type  NO
IX       STREAMS_QUEUE_TABLE            column with user-defined type  NO
OE       ACTION_TABLE                   column with user-defined type  NO
OE       CATEGORIES_TAB                 column with user-defined type  NO
.
.
.

Notice that the Auto Filtered? column is YES for the sh.dr$sup_text_indx$i domain index. A capture process automatically filters out data manipulation language (DML) changes to this database object, even if the rule sets for a capture process instruct the capture process to capture changes to it. By default, a capture process also filters out data definition language (DDL) changes to these database objects. However, if you want to capture these DDL changes, then use the DBMS_CAPTURE_ADM.SET_PARAMETER procedure to set the set_autofiltered_table_ddl capture process parameter to N and configure the capture process rule sets to capture these DDL changes.

Because the Auto Filtered? column is NO for the other database objects listed in the example output, capture processes do not automatically filter out changes to these database objects. If a capture process attempts to process changes to these unsupported database objects, then the capture process raises an error. However, you can avoid these errors by configuring rule sets that instruct the capture process not to capture changes to these unsupported objects.


Note:

  • The results of the query in this section depend on the compatibility level of the database. More database objects are incompatible with capture processes at lower compatibility levels. The COMPATIBLE initialization parameter controls the compatibility level of the database.

  • For capture processes, you cannot use rule-based transformations to exclude a column of an unsupported data type. The entire database object must be excluded to avoid capture errors.

  • The DBA_STREAMS_UNSUPPORTED view only pertains to capture processes in Oracle Database 11g Release 1 (11.1) and later databases. This view does not pertain to synchronous captures and apply processes.



Listing the Database Objects Recently Compatible with Capture Processes

The query in this section displays the following information about database objects that have become compatible with capture processes in a recent release of Oracle Database:

  • The object owner

  • The object name

  • The reason why the object was not compatible with capture processes in previous releases of Oracle Database

  • The Oracle Database release in which the object became compatible with capture processes

Run the following query to display this information for the local database:

COLUMN OWNER HEADING 'Owner' FORMAT A10
COLUMN TABLE_NAME HEADING 'Object Name' FORMAT A20
COLUMN REASON HEADING 'Reason' FORMAT A30
COLUMN COMPATIBLE HEADING 'Compatible' FORMAT A10

SELECT OWNER, TABLE_NAME, REASON, COMPATIBLE FROM DBA_STREAMS_NEWLY_SUPPORTED;

The following is a sample of the output from this query:

Owner      Object Name          Reason                         Compatible
---------- -------------------- ------------------------------ ----------
HR         COUNTRIES            IOT                            10.1
OE         WAREHOUSES           table with XMLType column      11.1
SH         CAL_MONTH_SALES_MV   materialized view              10.1
SH         FWEEK_PSCAT_SALES_MV materialized view              10.1

The Compatible column shows the minimum database compatibility for capture processes to support the database object. If the local database compatibility is equal to or higher than the value in the Compatible column for a database object, then capture processes can capture changes to the database object successfully. You control the compatibility of a database using the COMPATIBLE initialization parameter.
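For example, you can check the current compatibility level of the local database with the following query:

```sql
SELECT name, value
  FROM V$PARAMETER
 WHERE name = 'compatible';
```

Compare the value returned with the Compatible column in the query output above to determine whether capture processes can capture changes to a particular database object.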

If your Oracle Streams environment includes databases that are running different versions of the Oracle Database, then you can configure rules that use the GET_COMPATIBLE member function for LCRs to filter out LCRs that are not compatible with particular databases. These rules can be added to the rule sets of capture processes, synchronous captures, propagations, and apply processes to filter out incompatible LCRs wherever necessary in a stream.
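For example, a rule condition along the following lines could be placed in a negative rule set to discard LCRs that require a compatibility level higher than Oracle Database 10g Release 2. This is a sketch; it assumes the DBMS_STREAMS package constants, such as COMPATIBLE_10_2, are used for the comparison.

```sql
-- Sketch of a negative rule set condition: discard any LCR that
-- requires database compatibility higher than 10.2.
:lcr.GET_COMPATIBLE() > DBMS_STREAMS.COMPATIBLE_10_2
```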


Note:

The DBA_STREAMS_NEWLY_SUPPORTED view only pertains to capture processes in Oracle Database 11g Release 1 (11.1) and later databases. This view does not pertain to synchronous captures and apply processes.


Listing Database Objects and Columns Not Compatible with Synchronous Captures

A database object or a column in a table is not compatible with synchronous captures if synchronous captures cannot capture changes to it. For example, synchronous captures cannot capture changes to object tables. Synchronous captures can capture changes to relational tables, but they cannot capture changes to columns of some data types.

The query in this section displays the following information about database objects and columns that are not compatible with synchronous captures:

  • The object owner

  • The object name

  • The column name

  • The reason why the column is not compatible with synchronous captures

To list the columns that are not compatible with synchronous captures in the local database, run the following query:

COLUMN OWNER HEADING 'Object|Owner' FORMAT A8
COLUMN TABLE_NAME HEADING 'Object Name' FORMAT A20
COLUMN COLUMN_NAME HEADING 'Column Name' FORMAT A20
COLUMN SYNC_CAPTURE_REASON HEADING 'Synchronous|Capture Reason' FORMAT A25
 
SELECT OWNER,
       TABLE_NAME,
       COLUMN_NAME,
       SYNC_CAPTURE_REASON
 FROM DBA_STREAMS_COLUMNS
 WHERE SYNC_CAPTURE_VERSION IS NULL;

When a query on the DBA_STREAMS_COLUMNS view returns NULL for SYNC_CAPTURE_VERSION, it means that synchronous captures do not support the column. The WHERE clause in the query ensures that the query only returns columns that are not supported by synchronous captures.

The following is a sample of the output from this query:

Object                                             Synchronous
Owner    Object Name          Column Name          Capture Reason
-------- -------------------- -------------------- -------------------------
.
.
.
SH       SALES_TRANSACTIONS_E UNIT_COST            external table
         XT
OE       LINEITEM_TABLE       SYS_XDBPD$           object table
OE       LINEITEM_TABLE       ITEMNUMBER           object table
PM       PRINT_MEDIA          AD_FINALTEXT         table with nested table c
                                                   olumn
.
.
.

To avoid synchronous capture errors, configure the synchronous capture rule set to ensure that the synchronous capture does not try to capture changes to an unsupported database object, such as an object table. To avoid synchronous capture errors while capturing changes to relational tables, you have the following options:

  • Configure the synchronous capture rule set to ensure that the synchronous capture does not try to capture changes to a table that contains one or more unsupported columns.

  • Configure rule-based transformations to exclude columns that are not supported by synchronous captures.


Note:

Synchronous capture is available in Oracle Database 11g Release 1 (11.1) and later databases. It is not available in previous releases of Oracle Database.

Monitoring Compatibility for Apply Processes

This section contains these topics:

Listing Database Objects and Columns Not Compatible with Apply Processes

A database object or a column in a table is not compatible with apply processes if apply processes cannot apply changes to it. For example, apply processes cannot apply changes to object tables. Apply processes can apply changes to relational tables, but they cannot apply changes to columns of some data types.

The query in this section displays the following information about database objects and columns that are not compatible with apply processes:

  • The object owner

  • The object name

  • The column name

  • The reason why the column is not compatible with apply processes

To list the columns that are not compatible with apply processes in the local database, run the following query:

COLUMN OWNER HEADING 'Object|Owner' FORMAT A8
COLUMN TABLE_NAME HEADING 'Object Name' FORMAT A20
COLUMN COLUMN_NAME HEADING 'Column Name' FORMAT A20
COLUMN APPLY_REASON HEADING 'Apply Process Reason' FORMAT A25
 
SELECT OWNER,
       TABLE_NAME,
       COLUMN_NAME,
       APPLY_REASON
 FROM DBA_STREAMS_COLUMNS
 WHERE APPLY_VERSION IS NULL;

When a query on the DBA_STREAMS_COLUMNS view returns NULL for APPLY_VERSION, it means that apply processes do not support the column. The WHERE clause in the query ensures that the query only returns columns that are not supported by apply processes.

The following is a sample of the output from this query:

Object
Owner    Object Name          Column Name          Apply Process Reason
-------- -------------------- -------------------- -------------------------
.
.
.
SH       SALES_TRANSACTIONS_E CHANNEL_ID           external table
         XT
OE       ACTION_TABLE         ACTIONED_BY          object table
OE       LINEITEM_TABLE       PART                 object table
PM       ONLINE_MEDIA         PRODUCT_AUDIO        ADT column
OE       CATEGORIES_TAB       CATEGORY_DESCRIPTION object table
.
.
.

To avoid apply errors, configure the apply process rule sets to ensure that the apply process does not try to apply changes to an unsupported database object, such as an object table. To avoid apply errors while applying changes to relational tables, you have the following options:

  • Configure the apply process rule sets to ensure that the apply process does not try to apply changes to a table that contains one or more unsupported columns.

  • Configure rule-based transformations to exclude columns that are not supported by apply processes.

  • Configure procedure DML handlers to exclude columns that are not supported by apply processes.
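As a sketch of the rule-based transformation option, a declarative transformation created with the DBMS_STREAMS_ADM.DELETE_COLUMN procedure can remove an unsupported column from row LCRs before the apply process processes them. The rule name strmadmin.warehouses12 below is hypothetical:

```sql
BEGIN
  -- Remove the warehouse_spec column from row LCRs that satisfy
  -- the specified rule (the rule name is hypothetical).
  DBMS_STREAMS_ADM.DELETE_COLUMN(
    rule_name   => 'strmadmin.warehouses12',
    table_name  => 'oe.warehouses',
    column_name => 'warehouse_spec',
    value_type  => '*',
    step_number => 0,
    operation   => 'ADD');
END;
/
```

With this transformation in place, the apply process never sees the excluded column, so it does not raise an error for that column's data type.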

Listing Columns That Have Become Compatible with Apply Processes Recently

The query in this section displays the following information about database objects and columns that have become compatible with apply processes in a recent release of Oracle Database:

  • The object owner

  • The object name

  • The column name

  • The reason why the object was not compatible with apply processes in previous releases of Oracle Database

  • The Oracle Database release in which the object became compatible with apply processes

Run the following query to display this information for the local database:

COLUMN OWNER HEADING 'Object|Owner' FORMAT A8
COLUMN TABLE_NAME HEADING 'Object Name' FORMAT A15
COLUMN COLUMN_NAME HEADING 'Column Name' FORMAT A15
COLUMN APPLY_VERSION HEADING 'Apply|Process|Version' FORMAT 99.9
COLUMN APPLY_REASON HEADING 'Apply|Process Reason' FORMAT A25
 
SELECT OWNER,
       TABLE_NAME,
       COLUMN_NAME,
       APPLY_VERSION,
       APPLY_REASON
 FROM DBA_STREAMS_COLUMNS
 WHERE APPLY_VERSION > 11;

When a query on the DBA_STREAMS_COLUMNS view returns a non-NULL value for APPLY_VERSION, it means that apply processes support the column. The WHERE clause in the query ensures that the query only returns columns that are supported by apply processes. This query returns the columns that have become supported by apply processes in Oracle Database 11g Release 1 and later.

The following is a sample of the output from this query:

                                           Apply
Object                                   Process Apply
Owner    Object Name     Column Name     Version Process Reason
-------- --------------- --------------- ------- -------------------------
OE       WAREHOUSES      WAREHOUSE_SPEC     11.1 XMLType column

The Apply Process Version column shows the minimum database compatibility for apply processes to support the column. If the local database compatibility is equal to or higher than the value in the Apply Process Version column for a column, then apply processes can apply changes to the column successfully. You control the compatibility of a database using the COMPATIBLE initialization parameter.

If your Oracle Streams environment includes databases that are running different versions of the Oracle Database, then you can configure rules that use the GET_COMPATIBLE member function for LCRs to filter out LCRs that are not compatible with particular databases. These rules can be added to the rule sets of capture processes, synchronous captures, propagations, and apply processes to filter out incompatible LCRs wherever necessary in a stream.


Note:

When this query returns NULL for Apply Process Reason, it means that the column has always been supported by apply processes since the first Oracle Database release that included Oracle Streams.


Monitoring Oracle Streams Performance Using AWR and Statspack

You can use Automatic Workload Repository (AWR) to monitor performance statistics related to Oracle Streams. If AWR is not available on your database, then you can use the Statspack package to monitor performance statistics related to Oracle Streams. The most current instructions and information about installing and using the Statspack package are contained in the spdoc.txt file installed with your database. Refer to that file for Statspack information. On UNIX systems, the file is located in the ORACLE_HOME/rdbms/admin directory. On Windows systems, the file is located in the ORACLE_HOME\rdbms\admin directory.
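For example, assuming AWR is available and the Diagnostics Pack is licensed, you can take AWR snapshots around an Oracle Streams workload and then generate a report between them. This is a sketch run from SQL*Plus:

```sql
-- Take a snapshot before and after the workload of interest.
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
-- ... run the Oracle Streams workload ...
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();

-- Generate an AWR report between two snapshots (prompts interactively
-- for the snapshot IDs and report format).
@?/rdbms/admin/awrrpt.sql
```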



17 Managing Oracle Streams Information Consumption

An apply process implicitly consumes information in an Oracle Streams environment. An apply process dequeues logical change records (LCRs) and user messages from a specific queue and either applies each one directly or passes it as a parameter to a user-defined procedure.

The following topics describe managing Oracle Streams apply processes:

Each task described in this chapter should be completed by an Oracle Streams administrator who has been granted the appropriate privileges, unless specified otherwise.


Starting an Apply Process

You run the START_APPLY procedure in the DBMS_APPLY_ADM package to start an existing apply process. For example, the following procedure starts an apply process named strm01_apply:

BEGIN
  DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'strm01_apply');
END;
/

See Also:

Oracle Database 2 Day + Data Replication and Integration Guide for instructions about starting an apply process with Oracle Enterprise Manager

Stopping an Apply Process

You run the STOP_APPLY procedure in the DBMS_APPLY_ADM package to stop an existing apply process. For example, the following procedure stops an apply process named strm01_apply:

BEGIN
  DBMS_APPLY_ADM.STOP_APPLY(
    apply_name => 'strm01_apply');
END;
/

See Also:

Oracle Database 2 Day + Data Replication and Integration Guide for instructions about stopping an apply process with Oracle Enterprise Manager

Managing the Rule Set for an Apply Process

This section contains instructions for completing the following tasks:

Specifying the Rule Set for an Apply Process

You can specify one positive rule set and one negative rule set for an apply process. The apply process applies a message if it evaluates to TRUE for at least one rule in the positive rule set and discards a message if it evaluates to TRUE for at least one rule in the negative rule set. The negative rule set is evaluated before the positive rule set.

Specifying a Positive Rule Set for an Apply Process

You specify an existing rule set as the positive rule set for an existing apply process using the rule_set_name parameter in the ALTER_APPLY procedure. This procedure is in the DBMS_APPLY_ADM package.

For example, the following procedure sets the positive rule set for an apply process named strm01_apply to strm02_rule_set.

BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name    => 'strm01_apply',
    rule_set_name => 'strmadmin.strm02_rule_set');
END;
/

Specifying a Negative Rule Set for an Apply Process

You specify an existing rule set as the negative rule set for an existing apply process using the negative_rule_set_name parameter in the ALTER_APPLY procedure. This procedure is in the DBMS_APPLY_ADM package.

For example, the following procedure sets the negative rule set for an apply process named strm01_apply to strm03_rule_set.

BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name             => 'strm01_apply',
    negative_rule_set_name => 'strmadmin.strm03_rule_set');
END;
/

Adding Rules to the Rule Set for an Apply Process

To add rules to the rule set for an apply process, you can run one of the following procedures:

Except for the ADD_SUBSET_RULES procedure, each of these procedures can add rules to either the positive rule set or the negative rule set for an apply process. The ADD_SUBSET_RULES procedure can add rules only to the positive rule set for an apply process.
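For example, a sketch of running the ADD_SUBSET_RULES procedure so that the strm01_apply apply process applies only the rows of the hr.employees table that satisfy a condition (the dml_condition shown is illustrative):

```sql
BEGIN
  -- Add subset rules to the positive rule set of strm01_apply so that
  -- only rows with department_id = 50 are applied (illustrative condition).
  DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
    table_name      => 'hr.employees',
    dml_condition   => 'department_id = 50',
    streams_type    => 'apply',
    streams_name    => 'strm01_apply',
    queue_name      => 'streams_queue',
    source_database => 'dbs1.example.com');
END;
/
```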

Adding Rules to the Positive Rule Set for an Apply Process

The following example runs the ADD_TABLE_RULES procedure in the DBMS_STREAMS_ADM package to add rules to the positive rule set of an apply process named strm01_apply:

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name       => 'hr.departments',
    streams_type     => 'apply',
    streams_name     => 'strm01_apply',
    queue_name       => 'streams_queue',
    include_dml      => TRUE,
    include_ddl      => TRUE,
    source_database  => 'dbs1.example.com',
    inclusion_rule   => TRUE);
END;
/

Running this procedure performs the following actions:

  • Creates one rule that evaluates to TRUE for row LCRs that contain the results of DML changes to the hr.departments table. The rule name is system generated.

  • Creates one rule that evaluates to TRUE for DDL LCRs that contain DDL changes to the hr.departments table. The rule name is system generated.

  • Specifies that both rules evaluate to TRUE only for LCRs whose changes originated at the dbs1.example.com source database.

  • Adds the rules to the positive rule set associated with the apply process because the inclusion_rule parameter is set to TRUE.

Adding Rules to the Negative Rule Set for an Apply Process

The following example runs the ADD_TABLE_RULES procedure in the DBMS_STREAMS_ADM package to add rules to the negative rule set of an apply process named strm01_apply:

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name       => 'hr.regions',
    streams_type     => 'apply',
    streams_name     => 'strm01_apply',
    queue_name       => 'streams_queue',
    include_dml      => TRUE,
    include_ddl      => TRUE,
    source_database  => 'dbs1.example.com',
    inclusion_rule   => FALSE);
END;
/

Running this procedure performs the following actions:

  • Creates one rule that evaluates to TRUE for row LCRs that contain the results of DML changes to the hr.regions table. The rule name is system generated.

  • Creates one rule that evaluates to TRUE for DDL LCRs that contain DDL changes to the hr.regions table. The rule name is system generated.

  • Specifies that both rules evaluate to TRUE only for LCRs whose changes originated at the dbs1.example.com source database.

  • Adds the rules to the negative rule set associated with the apply process because the inclusion_rule parameter is set to FALSE.

Removing a Rule from the Rule Set for an Apply Process

You remove a rule from a rule set for an existing apply process by running the REMOVE_RULE procedure in the DBMS_STREAMS_ADM package. For example, the following procedure removes a rule named departments3 from the positive rule set of an apply process named strm01_apply.

BEGIN
  DBMS_STREAMS_ADM.REMOVE_RULE(
    rule_name        => 'departments3',
    streams_type     => 'apply',
    streams_name     => 'strm01_apply',
    drop_unused_rule => TRUE,
    inclusion_rule   => TRUE);
END;
/

In this example, the drop_unused_rule parameter in the REMOVE_RULE procedure is set to TRUE, which is the default setting. Therefore, if the rule being removed is not in any other rule set, then it will be dropped from the database. If the drop_unused_rule parameter is set to FALSE, then the rule is removed from the rule set, but it is not dropped from the database even if it is not in any other rule set.

If the inclusion_rule parameter is set to FALSE, then the REMOVE_RULE procedure removes the rule from the negative rule set for the apply process, not from the positive rule set.

To remove all of the rules in a rule set for the apply process, specify NULL for the rule_name parameter when you run the REMOVE_RULE procedure.
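For example, the following sketch removes all of the rules from the positive rule set of the strm01_apply apply process and drops any removed rule that is not in another rule set:

```sql
BEGIN
  -- NULL for rule_name removes all rules from the specified rule set.
  DBMS_STREAMS_ADM.REMOVE_RULE(
    rule_name        => NULL,
    streams_type     => 'apply',
    streams_name     => 'strm01_apply',
    drop_unused_rule => TRUE,
    inclusion_rule   => TRUE);
END;
/
```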

Removing a Rule Set for an Apply Process

You remove a rule set from an existing apply process using the ALTER_APPLY procedure in the DBMS_APPLY_ADM package. This procedure can remove the positive rule set, negative rule set, or both. Specify TRUE for the remove_rule_set parameter to remove the positive rule set for the apply process. Specify TRUE for the remove_negative_rule_set parameter to remove the negative rule set for the apply process.

For example, the following procedure removes both the positive and negative rule sets from an apply process named strm01_apply.

BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name               => 'strm01_apply',
    remove_rule_set          => TRUE,
    remove_negative_rule_set => TRUE);
END;
/

Note:

If an apply process that dequeues messages from a buffered queue does not have a positive or negative rule set, then the apply process dequeues all captured LCRs in its queue. Similarly, if an apply process that dequeues messages from a persistent queue does not have a positive or negative rule set, then the apply process dequeues all persistent LCRs and persistent user messages in its queue.

Setting an Apply Process Parameter

Set an apply process parameter using the SET_PARAMETER procedure in the DBMS_APPLY_ADM package. Apply process parameters control the way an apply process operates.

For example, the following procedure sets the commit_serialization parameter for an apply process named strm01_apply to DEPENDENT_TRANSACTIONS. With this setting, the apply process can commit nondependent transactions in any order, while dependent transactions are still applied in the correct order.

BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name   => 'strm01_apply',
    parameter    => 'commit_serialization',
    value        => 'DEPENDENT_TRANSACTIONS');
END;
/

Note:

  • The value parameter is always entered as a VARCHAR2 value, even if the parameter value is a number.

  • If the value parameter is set to NULL or is not specified, then the parameter is set to its default value.

  • If you set the parallelism apply process parameter to a value greater than 1, then you must specify a conditional supplemental log group at the source database for all of the unique key and foreign key columns in the tables for which an apply process applies changes. Supplemental logging might be required for other columns in these tables as well, depending on your configuration.
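As the notes above indicate, numeric parameter values are supplied as strings. For example, the following sketch sets the parallelism parameter of the strm01_apply apply process to 4:

```sql
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'strm01_apply',
    parameter  => 'parallelism',
    value      => '4');   -- numeric value supplied as a VARCHAR2
END;
/
```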



Setting the Apply User for an Apply Process

The apply user is the user who applies all DML changes and DDL changes that satisfy the apply process rule sets and who runs user-defined apply handlers. Set the apply user for an apply process using the apply_user parameter in the ALTER_APPLY procedure in the DBMS_APPLY_ADM package.

To change the apply user, the user who invokes the ALTER_APPLY procedure must be granted the DBA role. Only the SYS user can set the apply_user to SYS.

For example, the following procedure sets the apply user for an apply process named strm03_apply to hr.

BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'strm03_apply',
    apply_user => 'hr');
END;
/

Running this procedure grants the new apply user dequeue privilege on the queue used by the apply process and configures the user as a secure queue user of the queue. In addition, ensure that the apply user has the following privileges:

  • The necessary privileges to perform DML and DDL changes on the apply objects

  • EXECUTE privilege on the rule sets used by the apply process

  • EXECUTE privilege on all custom rule-based transformation functions used in the rule set

  • EXECUTE privilege on all apply handler procedures

These privileges can be granted to the apply user directly or through roles.

In addition, the apply user must be granted EXECUTE privilege on all packages, including Oracle-supplied packages, that are invoked in subprograms run by the apply process. These privileges must be granted directly to the apply user. They cannot be granted through roles.
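For example, a sketch of granting EXECUTE privilege on an Oracle-supplied package directly to a hypothetical apply user named hr:

```sql
-- The grant must be made directly to the apply user, not through a role.
GRANT EXECUTE ON SYS.DBMS_LOB TO hr;
```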


Note:

If Oracle Database Vault is installed, follow the steps outlined in "Oracle Streams and Oracle Data Vault" to ensure the correct privileges and roles have been granted.

Managing a DML Handler

DML handlers process row logical change records (row LCRs) dequeued by an apply process. There are two types of DML handlers: statement DML handlers and procedure DML handlers. A statement DML handler uses a collection of SQL statements to process row LCRs, while a procedure DML handler uses a PL/SQL procedure to process row LCRs.

This section contains instructions for managing a DML handler:

Managing a Statement DML Handler

This section contains the following instructions for managing a statement DML handler:

Creating a Statement DML Handler and Adding It to an Apply Process

There are two ways to create a statement DML handler and add it to an apply process:

  • One way creates the statement DML handler, adds one statement to it, and adds the statement DML handler to an apply process all in one step.

  • The other way uses distinct steps to create the statement DML handler, add one or more statements to it, and add the statement DML handler to an apply process.

Typically, the one-step method is best when a statement DML handler will have only one statement. The multiple-step method is best when a statement DML handler will have several statements.

The following sections include examples that illustrate each method in detail:

Creating a Statement DML Handler With One Statement

In some Oracle Streams replication environments, a replicated table is not exactly the same at the databases that share the table. In such environments, a statement DML handler can modify the DML change performed by row LCRs. Statement DML handlers cannot change the values of the columns in a row LCR. However, statement DML handlers can use SQL to insert a row or update a row with column values that are different from the ones in the row LCR.

The example in this section makes the following assumptions:

  • An Oracle Streams replication environment is configured to replicate changes to the oe.orders table between a source database and a destination database. Changes to the oe.orders table are captured by a capture process or a synchronous capture at the source database, sent to the destination database by a propagation, and applied by an apply process at the destination database.

  • At the source database, the oe.orders table includes an order_status column. Assume that when an insert with an order_status of 1 is applied at the destination database, the order_status should be changed to 2. The statement DML handler in this example makes this change. For inserts with an order_status that is not equal to 1, the statement DML handler applies the original change in the row LCR without changing the order_status value.

To create a statement DML handler that modifies inserts to the oe.orders table, complete the following steps:

  1. For the purposes of this example, specify the required supplemental logging at the source database:

    1. Connect to the source database as the Oracle Streams administrator.

      See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

    2. Specify an unconditional supplemental log group that includes the order_status column in the oe.orders table:

      ALTER TABLE oe.orders ADD SUPPLEMENTAL LOG GROUP log_group_ord_stat  (order_status) ALWAYS;
      

      Any columns used by a statement DML handler at a destination database must be in an unconditional log group at the source database.

  2. Connect to the destination database as the Oracle Streams administrator.

  3. Create the statement DML handler and add it to the apply process:

    DECLARE
      stmt CLOB;
    BEGIN
      stmt := 'INSERT INTO oe.orders(
                 order_id,
                 order_date, 
                 order_mode,
                 customer_id,
                 order_status,
                 order_total,
                 sales_rep_id,
                 promotion_id) 
               VALUES(
                 :new.order_id,
                 :new.order_date, 
                 :new.order_mode,
                 :new.customer_id,
                 DECODE(:new.order_status, 1, 2, :new.order_status),
                 :new.order_total,
                 :new.sales_rep_id,
                 :new.promotion_id)';
      DBMS_APPLY_ADM.ADD_STMT_HANDLER(
        object_name        => 'oe.orders',
        operation_name     => 'INSERT',
        handler_name       => 'modify_orders',
        statement          => stmt,
        apply_name         => 'apply$_sta_2',
        comment            => 'Modifies inserts into the orders table');
    END;
    /
    

    Notice that the DECODE function changes an order_status of 1 to 2. If the order_status in the row LCR is not 1, then the DECODE function uses the original order_status value by specifying :new.order_status for the default in the DECODE function.

    The ADD_STMT_HANDLER procedure creates the modify_orders statement DML handler and adds it to the apply$_sta_2 apply process. The statement DML handler is invoked when this apply process dequeues a row LCR that performs an insert on the oe.orders table. To modify row LCRs that perform updates or deletes on this table, separate statement DML handlers are required.


Note:

  • This statement in the modify_orders statement DML handler performs the row change on the destination table. Therefore, you do not need to add an execute statement to the statement DML handler. The row change performed by the statement is committed when the apply process dequeues a commit directive for the row LCR's transaction.

  • The ADD_STMT_HANDLER procedure in this example adds the statement DML handler to the apply$_sta_2 apply process. To add a general statement DML handler that is used by all of the apply processes in the database, omit the apply_name parameter in this procedure or set the apply_name parameter to NULL.


Creating a Statement DML Handler With More Than One Statement

A statement DML handler can track the changes made to a table. The statement DML handler in this example tracks the updates made to the hr.jobs table.

The example in this section makes the following assumptions:

  • An Oracle Streams replication environment is configured to replicate changes to the hr.jobs table between a source database and a destination database. Changes to the hr.jobs table are captured by a capture process or a synchronous capture at the source database, sent to the destination database by a propagation, and applied by an apply process at the destination database. The hr.jobs table contains the minimum and maximum salary for various jobs at an organization.

  • The goal is to track the updates to the salary information and when these updates were made. To accomplish this goal, the statement DML handler inserts rows into the hr.track_jobs table.

  • The apply process must also execute the row LCRs to replicate the changes to the hr.jobs table.

To create a statement DML handler that tracks updates to the hr.jobs table, complete the following steps:

  1. Connect to the source database as the Oracle Streams administrator.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  2. Specify an unconditional supplemental log group that includes all of the columns in the hr.jobs table. For example:

    ALTER TABLE hr.jobs ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
    

    Any columns used by a statement DML handler at a destination database must be in an unconditional log group at the source database.

  3. Connect to the destination database as the hr user.

  4. Create a sequence for the tracking table:

    CREATE SEQUENCE hr.track_jobs_seq 
       START WITH 1
       INCREMENT BY 1;
    
  5. Create the table that will track the changes to the hr.jobs table:

    CREATE TABLE hr.track_jobs( 
       change_id       NUMBER  CONSTRAINT track_jobs_pk PRIMARY KEY,
       job_id          VARCHAR2(10), 
       job_title       VARCHAR2(35),
       min_salary_old  NUMBER(6),
       min_salary_new  NUMBER(6),
       max_salary_old  NUMBER(6),
       max_salary_new  NUMBER(6),
       timestamp       TIMESTAMP);
    

    The statement DML handler will use the sequence created in Step 4 to insert a unique value for each change that it tracks into the change_id column of the hr.track_jobs table.

  6. Connect to the destination database as the Oracle Streams administrator.

  7. Create the statement DML handler:

    BEGIN
      DBMS_STREAMS_HANDLER_ADM.CREATE_STMT_HANDLER(
        handler_name => 'track_jobs',
        comment      => 'Tracks updates to the jobs table');
    END;
    /
    
  8. Add a statement to the statement DML handler that executes the row LCR:

    DECLARE
      stmt CLOB;
    BEGIN
      stmt := ':lcr.execute TRUE';
      DBMS_STREAMS_HANDLER_ADM.ADD_STMT_TO_HANDLER(
        handler_name       => 'track_jobs',
        statement          => stmt,
        execution_sequence => 10);
    END;
    /
    

    The TRUE argument is for the conflict_resolution parameter in the EXECUTE member procedure for the LCR$_ROW_RECORD type. The TRUE argument indicates that any conflict resolution defined for the table is used when the row LCR is executed. Specify FALSE if you do not want conflict resolution to be used when the row LCR is executed.


    Tip:

    If you want to track the changes to a table without replicating them, then do not include an execute statement in the statement DML handler.

  9. Add a statement to the statement DML handler that tracks the changes in the row LCR:

    DECLARE
      stmt CLOB;
    BEGIN
      stmt := 'INSERT INTO hr.track_jobs(
                 change_id,
                 job_id, 
                 job_title,
                 min_salary_old,
                 min_salary_new,
                 max_salary_old,
                 max_salary_new,
                 timestamp) 
               VALUES(
                 hr.track_jobs_seq.NEXTVAL,
                 :new.job_id,
                 :new.job_title,
                 :old.min_salary,
                 :new.min_salary,
                 :old.max_salary,
                 :new.max_salary,
                 :source_time)';
      DBMS_STREAMS_HANDLER_ADM.ADD_STMT_TO_HANDLER(
        handler_name       => 'track_jobs',
        statement          => stmt,
        execution_sequence => 20);
    END;
    /
    

    This statement inserts a row into the hr.track_jobs table for each row LCR that updates a row in the hr.jobs table. Notice that the values inserted into the hr.track_jobs table use the old and new values in the row LCR to track the old and new value for each salary column. Also, notice that the source_time attribute in the row LCR is used to populate the timestamp column.

  10. Add the statement DML handler to the apply process. For example, the following procedure adds the statement DML handler to an apply process named apply$_sta_2:

    BEGIN
      DBMS_APPLY_ADM.ADD_STMT_HANDLER(
        object_name    => 'hr.jobs',
        operation_name => 'UPDATE',
        handler_name   => 'track_jobs',
        apply_name     => 'apply$_sta_2');
    END;
    /
    

    Note:

    The ADD_STMT_HANDLER procedure in this example adds the statement DML handler to the apply$_sta_2 apply process. To add a general statement DML handler that is used by all of the apply processes in the database, omit the apply_name parameter in this procedure or set the apply_name parameter to NULL.

Adding Statements to a Statement DML Handler

To add statements to a statement DML handler, run the ADD_STMT_TO_HANDLER procedure in the DBMS_STREAMS_HANDLER_ADM package and specify an execution sequence number that has not been specified for the statement DML handler.

The example in this section adds a statement to the modify_orders statement DML handler. This statement DML handler is created in "Creating a Statement DML Handler With One Statement". It modifies inserts into the oe.orders table.

For the example in this section, assume that the destination database should discount orders by 10% for a specific customer. This customer has a customer_id value of 118 in the oe.orders table. To do this, the SQL statement in the statement DML handler multiplies the order_total value by .9 for inserts into the oe.orders table with a customer_id value of 118.

Complete the following steps to add a statement to the modify_orders statement DML handler:

  1. Connect to the destination database where the apply process is configured as the Oracle Streams administrator.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  2. Check the execution sequence numbers that are already used by the statements in the statement DML handler:

    COLUMN HANDLER_NAME HEADING 'Statement|Handler' FORMAT A15
    COLUMN EXECUTION_SEQUENCE HEADING 'Execution|Sequence' FORMAT 999999
    COLUMN STATEMENT HEADING 'Statement' FORMAT A50
    
    SET LONG  8000
    SET PAGES 8000
    SELECT HANDLER_NAME,
           EXECUTION_SEQUENCE,
           STATEMENT
      FROM DBA_STREAMS_STMTS
      WHERE HANDLER_NAME = 'MODIFY_ORDERS'
      ORDER BY EXECUTION_SEQUENCE;
    

    Your output is similar to the following:

    Statement       Execution
    Handler          Sequence Statement
    --------------- --------- --------------------------------------------------
    MODIFY_ORDERS           1 INSERT INTO oe.orders(
                                           order_id,
                                           order_date,
                                           order_mode,
                                           customer_id,
                                           order_status,
                                           order_total,
                                           sales_rep_id,
                                           promotion_id)
                                         VALUES(
                                           :new.order_id,
                                           :new.order_date,
                                           :new.order_mode,
                                           :new.customer_id,
                                           DECODE(:new.order_status, 1, 2, :new.
                              order_status),
                                           :new.order_total,
                                           :new.sales_rep_id,
                                           :new.promotion_id)
    

    This output shows that the statement DML handler has only one statement, and this one statement is at execution sequence number 1.

  3. Add a statement to the statement DML handler that discounts orders for customer 118 by 10%:

    DECLARE
      stmt CLOB;
    BEGIN
      stmt := 'UPDATE oe.orders SET order_total=order_total*.9
                 WHERE order_id=:new.order_id AND :new.customer_id=118';
      DBMS_STREAMS_HANDLER_ADM.ADD_STMT_TO_HANDLER(
        handler_name       => 'modify_orders',
        statement          => stmt,
        execution_sequence => 10);
    END;
    /
    

    This statement updates the row that was inserted by the statement with execution sequence number 1. Therefore, this statement must have an execution sequence number that is greater than 1. This example specifies 10 for the execution sequence number of the added statement.


    Tip:

    When the execution_sequence parameter is set to NULL in the ADD_STMT_TO_HANDLER procedure, the statement is added to the statement DML handler with an execution sequence number that is larger than the execution sequence number for any statement in the statement DML handler. Therefore, in this example, the execution_sequence parameter can be omitted or set to NULL.

After completing these steps, the output for the query in Step 2 shows:

Statement       Execution
Handler          Sequence Statement
--------------- --------- --------------------------------------------------
MODIFY_ORDERS           1 INSERT INTO oe.orders(
                                       order_id,
                                       order_date,
                                       order_mode,
                                       customer_id,
                                       order_status,
                                       order_total,
                                       sales_rep_id,
                                       promotion_id)
                                     VALUES(
                                       :new.order_id,
                                       :new.order_date,
                                       :new.order_mode,
                                       :new.customer_id,
                                       DECODE(:new.order_status, 1, 2, :new.
                          order_status),
                                       :new.order_total,
                                       :new.sales_rep_id,
                                       :new.promotion_id)
 
MODIFY_ORDERS          10 UPDATE oe.orders SET order_total=order_total*.9
                                       WHERE order_id=:new.order_id AND :new.
                          customer_id=118

This output shows that the new statement with execution sequence number 10 is added to the statement DML handler.

Modifying a Statement in a Statement DML Handler

To modify a statement in a statement DML handler, run the ADD_STMT_TO_HANDLER procedure in the DBMS_STREAMS_HANDLER_ADM package and specify the execution sequence number of the statement you are modifying.

The example in this section modifies the statement with execution sequence number 20 in the track_jobs statement DML handler. This statement DML handler is created in "Creating a Statement DML Handler With More Than One Statement". It uses the hr.track_jobs table to track changes to the hr.jobs table.

For the example in this section, assume that you also want to track which user updated the hr.jobs table. To do this, you must add this information to the row LCRs captured at the source database, add a user_name column to the hr.track_jobs table, and modify the statement in the statement DML handler to track the user.

Complete the following steps to modify the statement in the statement DML handler:

  1. Connect to the source database as the Oracle Streams administrator.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  2. Add the username to the row LCR information captured at the source database:

    BEGIN
      DBMS_CAPTURE_ADM.INCLUDE_EXTRA_ATTRIBUTE(
        capture_name   => 'sta$cap',
        attribute_name => 'username',
        include        => TRUE);
    END;
    /
    

    In the capture_name parameter, specify the capture process or synchronous capture that captures the changes that will be processed by the statement DML handler.

  3. Connect to the destination database as the Oracle Streams administrator.

  4. Add the user_name column to the hr.track_jobs table:

    ALTER TABLE hr.track_jobs
      ADD (user_name VARCHAR2(30));
    
  5. Modify the statement with execution sequence number 20 in the track_jobs statement DML handler:

    DECLARE
      stmt CLOB;
    BEGIN
      stmt := 'INSERT INTO hr.track_jobs(
                 change_id,
                 job_id, 
                 job_title,
                 min_salary_old,
                 min_salary_new,
                 max_salary_old,
                 max_salary_new,
                 timestamp,
                 user_name) 
               VALUES(
                 hr.track_jobs_seq.NEXTVAL,
                 :new.job_id,
                 :new.job_title,
                 :old.min_salary,
                 :new.min_salary,
                 :old.max_salary,
                 :new.max_salary,
                 :source_time,
                 :extra_attribute.username)';
      DBMS_STREAMS_HANDLER_ADM.ADD_STMT_TO_HANDLER(
        handler_name       => 'track_jobs',
        statement          => stmt,
        execution_sequence => 20);
    END;
    /
    

    The modified statement adds user tracking by inserting the username information in the row LCR into the user_name column in the hr.track_jobs table. Notice that username is an extra LCR attribute and must be specified using the following syntax:

    :extra_attribute.username
    

Removing Statements from a Statement DML Handler

To remove a statement from a statement DML handler, run the REMOVE_STMT_FROM_HANDLER procedure in the DBMS_STREAMS_HANDLER_ADM package and specify the execution sequence number of the statement you are removing.

The example in this section removes the statement with execution sequence number 10 from the track_jobs statement DML handler. This statement DML handler is created in "Creating a Statement DML Handler With More Than One Statement". It uses the hr.track_jobs table to track changes to the hr.jobs table.

For the example in this section, assume that you no longer want to execute the row LCRs with updates to the hr.jobs table. To do this, you must remove the statement that executes the row LCRs, and this statement uses execution sequence number 10 in the track_jobs statement DML handler.

Complete the following steps to remove the statement from the statement DML handler:

  1. Connect to the database that contains the statement DML handler as the Oracle Streams administrator.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  2. Remove the statement from the statement DML handler:

    BEGIN
      DBMS_STREAMS_HANDLER_ADM.REMOVE_STMT_FROM_HANDLER(
        handler_name       => 'track_jobs',
        execution_sequence => 10);
    END;
    /
    

Removing a Statement DML Handler from an Apply Process

To remove a statement DML handler from an apply process, run the REMOVE_STMT_HANDLER procedure in the DBMS_APPLY_ADM package.

The example in this section removes the track_jobs statement DML handler from the apply$_sta_2 apply process. This statement DML handler is created in "Creating a Statement DML Handler With More Than One Statement". It uses the hr.track_jobs table to track changes to the hr.jobs table.

Complete the following steps to remove the statement DML handler from the apply process:

  1. Connect to the database that contains the apply process as the Oracle Streams administrator.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  2. Remove the statement DML handler from the apply process:

    BEGIN
      DBMS_APPLY_ADM.REMOVE_STMT_HANDLER(
        object_name    => 'hr.jobs',
        operation_name => 'UPDATE',
        handler_name   => 'track_jobs',
        apply_name     => 'apply$_sta_2');
    END;
    /
    

After the statement DML handler is removed from the apply process, the statement DML handler still exists in the database.

Dropping a Statement DML Handler

To drop a statement DML handler from a database, run the DROP_STMT_HANDLER procedure in the DBMS_STREAMS_HANDLER_ADM package.

The example in this section drops the track_jobs statement DML handler. This statement DML handler is created in "Creating a Statement DML Handler With More Than One Statement". It uses the hr.track_jobs table to track changes to the hr.jobs table.

Complete the following steps to drop the statement DML handler:

  1. Connect to the database that contains the statement DML handler as the Oracle Streams administrator.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  2. Drop the statement DML handler:

    exec DBMS_STREAMS_HANDLER_ADM.DROP_STMT_HANDLER('track_jobs');
    

Managing a Procedure DML Handler

This section contains the following instructions for managing a procedure DML handler:

Creating a Procedure DML Handler

A procedure DML handler must have the following signature:

PROCEDURE user_procedure (
   parameter_name   IN  ANYDATA);

Here, user_procedure stands for the name of the procedure and parameter_name stands for the name of the parameter passed to the procedure. The parameter passed to the procedure is an ANYDATA encapsulation of a row logical change record (row LCR).

The following restrictions apply to the user procedure:

  • Do not execute COMMIT or ROLLBACK statements. Doing so can endanger the consistency of the transaction that contains the row LCR.

  • If you are manipulating a row using the EXECUTE member procedure for the row LCR, then do not attempt to manipulate more than one row in a row operation. Any DML statement that manipulates more than one row must be constructed and executed manually.

  • If the command type is UPDATE or DELETE, then row operations resubmitted using the EXECUTE member procedure for the LCR must include the entire key in the list of old values. The key is the primary key or the smallest unique key that has at least one NOT NULL column, unless a substitute key has been specified by the SET_KEY_COLUMNS procedure. If there is no specified key, then the key consists of all table columns, except for columns of the following data types: LOB, LONG, LONG RAW, user-defined types (including object types, REFs, varrays, nested tables), and Oracle-supplied types (including Any types, XML types, spatial types, and media types).

  • If the command type is INSERT, then row operations resubmitted using the EXECUTE member procedure for the LCR should include the entire key in the list of new values. Otherwise, duplicate rows are possible. The key is the primary key or the smallest unique key that has at least one NOT NULL column, unless a substitute key has been specified by the SET_KEY_COLUMNS procedure. If there is no specified key, then the key consists of all non-LOB, non-LONG, and non-LONG RAW columns.
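
If the replicated rows lack a usable key, you can designate a substitute key at the destination database before relying on the EXECUTE member procedure. The following call is a minimal sketch that assumes the hr.jobs table from the earlier example, with job_id as its identifying column:

```sql
-- Designate job_id as the substitute key for hr.jobs at the
-- destination database (table and column are from the earlier example)
BEGIN
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name => 'hr.jobs',
    column_list => 'job_id');
END;
/
```

To remove a substitute key, run the SET_KEY_COLUMNS procedure again with the column_list parameter set to NULL.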

A procedure DML handler can be used for any customized processing of row LCRs. For example, the handler can modify an LCR and then execute it using the EXECUTE member procedure for the LCR. When you execute a row LCR in a procedure DML handler, the apply process applies the LCR without calling the procedure DML handler again.

You can also use SQL generation in a procedure DML handler to record the DML changes made to a table. You can record these changes in a table or in a file. For example, the sample procedure DML handler in this section uses SQL generation to record each UPDATE SQL statement made to the hr.departments table using the GET_ROW_TEXT member procedure. The procedure DML handler also applies the row LCR using the EXECUTE member procedure.

To create the procedure used in this procedure DML handler, complete the following steps:

  1. In SQL*Plus, connect to the database as the Oracle Streams administrator.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  2. Create the directory object for the directory that contains the text file.

    In this example, the apply process writes the UPDATE SQL statements performed on the hr.departments table to the text file in this directory.

    For example, to create a directory object named SQL_GEN_DIR for the /usr/sql_gen directory, enter the following SQL statement:

    CREATE DIRECTORY SQL_GEN_DIR AS '/usr/sql_gen';
    
  3. Ensure that the text file to which the SQL statements will be written exists in the directory specified in Step 2.

    In this example, ensure that the sql_gen_file.txt file exists in the /usr/sql_gen directory on the file system.

  4. Create the procedure for the procedure DML handler:

    CREATE OR REPLACE PROCEDURE strmadmin.sql_gen_dep(lcr_anydata IN SYS.ANYDATA) IS
      lcr          SYS.LCR$_ROW_RECORD;
      int          PLS_INTEGER;
      row_txt_clob CLOB;
      fp           UTL_FILE.FILE_TYPE;
    BEGIN
      int   := lcr_anydata.GETOBJECT(lcr);
      DBMS_LOB.CREATETEMPORARY(row_txt_clob, TRUE);
      -- Generate SQL from row LCR and save to file
      lcr.GET_ROW_TEXT(row_txt_clob);
      fp := UTL_FILE.FOPEN (
         location     => 'SQL_GEN_DIR',
         filename     => 'sql_gen_file.txt',
         open_mode    => 'a',
         max_linesize => 5000);
      UTL_FILE.PUT_LINE(
         file      => fp,
         buffer    => row_txt_clob,
         autoflush => TRUE);
      DBMS_LOB.TRIM(row_txt_clob, 0);
      UTL_FILE.FCLOSE(fp); 
      -- Free temporary LOB space
      DBMS_LOB.FREETEMPORARY(row_txt_clob);
      --  Apply row LCR
      lcr.EXECUTE(TRUE);
    END;
    /
    

After you create the procedure, you can set it as a procedure DML handler by following the instructions in "Setting a Procedure DML Handler".
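
To try the procedure outside the context of an apply process, you can construct a row LCR manually and pass it to the procedure directly. The following anonymous block is a sketch only: the source database name and column values are illustrative, and because sql_gen_dep calls the EXECUTE member procedure, running the block actually inserts the row and appends to the SQL generation file, so run it only in a test environment:

```sql
DECLARE
  lcr      SYS.LCR$_ROW_RECORD;
  newvals  SYS.LCR$_ROW_LIST;
BEGIN
  -- Build the new column values for an illustrative INSERT row LCR
  newvals := SYS.LCR$_ROW_LIST(
    SYS.LCR$_ROW_UNIT('DEPARTMENT_ID',
                      ANYDATA.ConvertNumber(999),
                      DBMS_LCR.NOT_A_LOB, NULL, NULL),
    SYS.LCR$_ROW_UNIT('DEPARTMENT_NAME',
                      ANYDATA.ConvertVarchar2('Test Department'),
                      DBMS_LCR.NOT_A_LOB, NULL, NULL));
  -- Construct the row LCR (the source database name is illustrative)
  lcr := SYS.LCR$_ROW_RECORD.CONSTRUCT(
    source_database_name => 'src.example.com',
    command_type         => 'INSERT',
    object_owner         => 'HR',
    object_name          => 'DEPARTMENTS',
    new_values           => newvals);
  -- Invoke the handler directly, outside the context of an apply process
  strmadmin.sql_gen_dep(ANYDATA.ConvertObject(lcr));
  COMMIT;
END;
/
```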


Note:

  • You must specify an unconditional supplemental log group at the source database for any columns needed by a procedure DML handler at the destination database. This sample procedure DML handler does not require any additional supplemental logging because it records the SQL statement and does not manipulate the row LCR in any other way.

  • To test a procedure DML handler before using it, or to debug a procedure DML handler, you can construct row LCRs and run the procedure DML handler procedure outside the context of an apply process.



Setting a Procedure DML Handler

A procedure DML handler processes each row LCR dequeued by any apply process that contains a specific operation on a specific table. You can specify multiple procedure DML handlers on the same table to handle different operations on the table. All apply processes that apply changes to the specified table in the local database use the specified procedure DML handler.

Set the procedure DML handler using the SET_DML_HANDLER procedure in the DBMS_APPLY_ADM package. For example, the following procedure sets the procedure DML handler for UPDATE operations on the hr.departments table. Therefore, when any apply process that applies changes locally dequeues a row LCR containing an UPDATE operation on the hr.departments table, the apply process sends the row LCR to the sql_gen_dep PL/SQL procedure in the strmadmin schema for processing. The apply process does not apply a row LCR containing such a change directly.

In this example, the apply_name parameter is set to NULL. Therefore, the procedure DML handler is a general procedure DML handler that is used by all of the apply processes in the database.

BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name         => 'hr.departments',
    object_type         => 'TABLE',
    operation_name      => 'UPDATE',
    error_handler       => FALSE,
    user_procedure      => 'strmadmin.sql_gen_dep',
    apply_database_link => NULL,
    apply_name          => NULL);
END;
/

Note:

  • To specify the procedure DML handler for only one apply process, specify the apply process name in the apply_name parameter.

  • If an apply process applies changes to a remote non-Oracle database, then it can use a different procedure DML handler for the same table. You can run the SET_DML_HANDLER procedure in the DBMS_APPLY_ADM package to specify a procedure DML handler for changes that will be applied to a remote non-Oracle database by setting the apply_database_link parameter to a non-NULL value.

  • You can specify DEFAULT for the operation_name parameter to set the procedure as the default procedure DML handler for the database object. In this case, the procedure DML handler is used for any INSERT, UPDATE, DELETE, and LOB_WRITE on the database object, if another procedure DML handler is not specifically set for the operation on the database object.
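
For example, the following call, a sketch based on the earlier SET_DML_HANDLER example, sets strmadmin.sql_gen_dep as the default procedure DML handler for the hr.departments table:

```sql
BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'hr.departments',
    object_type    => 'TABLE',
    operation_name => 'DEFAULT',   -- default handler for all operations
    error_handler  => FALSE,
    user_procedure => 'strmadmin.sql_gen_dep',
    apply_name     => NULL);
END;
/
```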


Unsetting a Procedure DML Handler

You unset a procedure DML handler using the SET_DML_HANDLER procedure in the DBMS_APPLY_ADM package. When you run that procedure, set the user_procedure parameter to NULL for a specific operation on a specific table. After the procedure DML handler is unset, any apply process that applies changes locally will apply a row LCR containing such a change directly.

For example, the following procedure unsets the procedure DML handler for UPDATE operations on the hr.departments table:

BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'hr.departments',
    object_type    => 'TABLE',
    operation_name => 'UPDATE',
    error_handler  => FALSE,
    user_procedure => NULL,
    apply_name     => NULL);
END;
/

Managing a DDL Handler

This section contains instructions for creating, specifying, and removing the DDL handler for an apply process.


Note:

All applied DDL LCRs commit automatically. Therefore, if a DDL handler calls the EXECUTE member procedure of a DDL LCR, then a commit is performed automatically.


Creating a DDL Handler for an Apply Process

A DDL handler must have the following signature:

PROCEDURE handler_procedure (
   parameter_name   IN  ANYDATA);

Here, handler_procedure stands for the name of the procedure and parameter_name stands for the name of the parameter passed to the procedure. The parameter passed to the procedure is an ANYDATA encapsulation of a DDL LCR.

A DDL handler can be used for any customized processing of DDL LCRs. For example, the handler can modify the LCR and then execute it using the EXECUTE member procedure for the LCR. When you execute a DDL LCR in a DDL handler, the apply process applies the LCR without calling the DDL handler again.

You can also use a DDL handler to record the history of DDL changes. For example, a DDL handler can insert information about an LCR it processes into a table and then apply the LCR using the EXECUTE member procedure.

To create such a DDL handler, first create a table to hold the history information:

CREATE TABLE strmadmin.history_ddl_lcrs(
  timestamp             DATE,
  source_database_name  VARCHAR2(128),
  command_type          VARCHAR2(30),
  object_owner          VARCHAR2(32),
  object_name           VARCHAR2(32),
  object_type           VARCHAR2(18),
  ddl_text              CLOB,
  logon_user            VARCHAR2(32),
  current_schema        VARCHAR2(32),
  base_table_owner      VARCHAR2(32),
  base_table_name       VARCHAR2(32),
  tag                   RAW(10),
  transaction_id        VARCHAR2(10),
  scn                   NUMBER);

Then create the procedure for the DDL handler:

CREATE OR REPLACE PROCEDURE history_ddl(in_any IN ANYDATA)  
 IS
   lcr       SYS.LCR$_DDL_RECORD;
   rc        PLS_INTEGER;
   ddl_text  CLOB;
 BEGIN
   -- Access the LCR
   rc := in_any.GETOBJECT(lcr);
   DBMS_LOB.CREATETEMPORARY(ddl_text, TRUE);
   lcr.GET_DDL_TEXT(ddl_text);
   --  Insert DDL LCR information into history_ddl_lcrs table
   INSERT INTO strmadmin.history_ddl_lcrs VALUES( 
     SYSDATE, lcr.GET_SOURCE_DATABASE_NAME(), lcr.GET_COMMAND_TYPE(), 
     lcr.GET_OBJECT_OWNER(), lcr.GET_OBJECT_NAME(), lcr.GET_OBJECT_TYPE(), 
     ddl_text, lcr.GET_LOGON_USER(), lcr.GET_CURRENT_SCHEMA(), 
     lcr.GET_BASE_TABLE_OWNER(), lcr.GET_BASE_TABLE_NAME(), lcr.GET_TAG(), 
     lcr.GET_TRANSACTION_ID(), lcr.GET_SCN());
   --  Apply DDL LCR
   lcr.EXECUTE();
   -- Free temporary LOB space
   DBMS_LOB.FREETEMPORARY(ddl_text);
END;
/

Setting the DDL Handler for an Apply Process

A DDL handler processes all DDL LCRs dequeued by an apply process. Set the DDL handler for an apply process using the ddl_handler parameter in the ALTER_APPLY procedure in the DBMS_APPLY_ADM package. For example, the following procedure sets the DDL handler for an apply process named strep01_apply to the history_ddl procedure in the strmadmin schema.

BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name  => 'strep01_apply',
    ddl_handler => 'strmadmin.history_ddl');
END;
/

Removing the DDL Handler for an Apply Process

A DDL handler processes all DDL LCRs dequeued by an apply process. You remove the DDL handler for an apply process by setting the remove_ddl_handler parameter to TRUE in the ALTER_APPLY procedure in the DBMS_APPLY_ADM package. For example, the following procedure removes the DDL handler from an apply process named strep01_apply.

BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name         => 'strep01_apply',
    remove_ddl_handler => TRUE);
END;
/

Managing the Message Handler for an Apply Process

A message handler is an apply handler that processes persistent user messages. The following sections contain instructions for setting and unsetting the message handler for an apply process:

Setting the Message Handler for an Apply Process

Set the message handler for an apply process using the message_handler parameter in the ALTER_APPLY procedure in the DBMS_APPLY_ADM package. For example, the following procedure sets the message handler for an apply process named strm03_apply to the mes_handler procedure in the oe schema.

BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name      => 'strm03_apply',
    message_handler => 'oe.mes_handler');
END;
/

The user who runs the ALTER_APPLY procedure must have EXECUTE privilege on the specified message handler. If the message handler is already set for an apply process, then you can run the ALTER_APPLY procedure to change the message handler for the apply process.
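
For example, if the Oracle Streams administrator strmadmin will run the ALTER_APPLY procedure shown above, then a user with the necessary privileges can grant the required privilege as follows:

```sql
-- Allow strmadmin to set oe.mes_handler as a message handler
GRANT EXECUTE ON oe.mes_handler TO strmadmin;
```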

Unsetting the Message Handler for an Apply Process

You unset the message handler for an apply process by setting the remove_message_handler parameter to TRUE in the ALTER_APPLY procedure in the DBMS_APPLY_ADM package. For example, the following procedure unsets the message handler for an apply process named strm03_apply.

BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name             => 'strm03_apply',
    remove_message_handler => TRUE);
END;
/

Managing the Precommit Handler for an Apply Process

A precommit handler is an apply handler that can receive the commit information for a transaction and process the commit information in any customized way.

The following sections contain instructions for creating, setting, and unsetting the precommit handler for an apply process:

Creating a Precommit Handler for an Apply Process

A precommit handler must have the following signature:

PROCEDURE handler_procedure (
   parameter_name   IN  NUMBER);

Here, handler_procedure stands for the name of the procedure and parameter_name stands for the name of the parameter passed to the procedure. The parameter passed to the procedure is a commit SCN from an internal commit directive in the queue used by the apply process.

You can use a precommit handler to record information about commits processed by an apply process. The apply process can apply captured LCRs, persistent LCRs, or persistent user messages. For a captured row LCR, a commit directive contains the commit SCN of the transaction from the source database. For persistent LCRs and persistent user messages, the commit SCN is generated by the apply process.

The precommit handler procedure must conform to the following restrictions:

  • Any work that commits must be an autonomous transaction.

  • Any rollback must be to a named save point created in the procedure.

If a precommit handler raises an exception, then the entire apply transaction is rolled back, and all of the messages in the transaction are moved to the error queue.
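
To illustrate the first restriction, the following sketch shows a precommit handler that performs its own commit safely inside an autonomous transaction. The strmadmin.commit_audit table is hypothetical:

```sql
CREATE OR REPLACE PROCEDURE strmadmin.audit_commit(commit_number IN NUMBER)
 IS
  PRAGMA AUTONOMOUS_TRANSACTION;
 BEGIN
  -- Record the commit SCN in a hypothetical audit table
  INSERT INTO strmadmin.commit_audit (logged_at, commit_scn)
    VALUES (SYSDATE, commit_number);
  -- Allowed: this commit ends only the autonomous transaction
  COMMIT;
END;
/
```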

For example, a precommit handler can be used for auditing the row LCRs applied by an apply process. Such a precommit handler is used with one or more separate procedure DML handlers to record the source database commit SCN for a transaction, and possibly the time when the apply process applies the transaction, in an audit table.

Specifically, this example creates a precommit handler that is used with a procedure DML handler that records information about row LCRs in the following table:

CREATE TABLE strmadmin.history_row_lcrs(
  timestamp             DATE,
  source_database_name  VARCHAR2(128),
  command_type          VARCHAR2(30),
  object_owner          VARCHAR2(32),
  object_name           VARCHAR2(32),
  tag                   RAW(10),
  transaction_id        VARCHAR2(10),
  scn                   NUMBER,
  commit_scn            NUMBER,
  old_values            SYS.LCR$_ROW_LIST,
  new_values            SYS.LCR$_ROW_LIST)
    NESTED TABLE old_values STORE AS old_values_ntab
    NESTED TABLE new_values STORE AS new_values_ntab;

The procedure DML handler inserts a row in the strmadmin.history_row_lcrs table for each row LCR processed by an apply process. The precommit handler created in this example inserts a row into the strmadmin.history_row_lcrs table when a transaction commits.

Create the procedure that inserts the commit information into the history_row_lcrs table:

CREATE OR REPLACE PROCEDURE strmadmin.history_commit(commit_number IN NUMBER)  
 IS
 BEGIN
  -- Insert commit information into the history_row_lcrs table
  INSERT INTO strmadmin.history_row_lcrs (timestamp, commit_scn) 
    VALUES (SYSDATE, commit_number);
END;
/

Setting the Precommit Handler for an Apply Process

A precommit handler processes all commit directives dequeued by an apply process. An apply process can have only one precommit handler, and when one is set, the apply process uses it to process every commit directive that it dequeues.

Set the precommit handler for an apply process using the precommit_handler parameter in the ALTER_APPLY procedure in the DBMS_APPLY_ADM package. For example, the following procedure sets the precommit handler for an apply process named strm01_apply to the history_commit procedure in the strmadmin schema.

BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name        => 'strm01_apply',
    precommit_handler => 'strmadmin.history_commit');
END;
/

You can also specify a precommit handler when you create an apply process using the CREATE_APPLY procedure in the DBMS_APPLY_ADM package. If a precommit handler is already set for an apply process, then you can run the ALTER_APPLY procedure to change it.
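
As a sketch of the CREATE_APPLY alternative, a precommit handler can be supplied at creation time. The queue name strmadmin.streams_queue and the apply_captured setting here are assumptions for illustration only:

BEGIN
  DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name        => 'strmadmin.streams_queue',  -- assumed queue
    apply_name        => 'strm02_apply',
    apply_captured    => TRUE,
    precommit_handler => 'strmadmin.history_commit');
END;
/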

Unsetting the Precommit Handler for an Apply Process

You unset the precommit handler for an apply process by setting the remove_precommit_handler parameter to TRUE in the ALTER_APPLY procedure in the DBMS_APPLY_ADM package. For example, the following procedure unsets the precommit handler for an apply process named strm01_apply.

BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name               => 'strm01_apply',
    remove_precommit_handler => TRUE);
END;
/

Specifying That Apply Processes Enqueue Messages

This section contains instructions for setting a destination queue into which apply processes that use a specified rule in a positive rule set will enqueue messages that satisfy the rule. This section also contains instructions for removing destination queue settings.

Setting the Destination Queue for Messages that Satisfy a Rule

You use the SET_ENQUEUE_DESTINATION procedure in the DBMS_APPLY_ADM package to set a destination queue for messages that satisfy a specific rule. For example, to set the destination queue for a rule named employees5 to the queue hr.change_queue, run the following procedure:

BEGIN
  DBMS_APPLY_ADM.SET_ENQUEUE_DESTINATION(
    rule_name               =>  'employees5',
    destination_queue_name  =>  'hr.change_queue');
END;
/

This procedure modifies the action context of the rule to specify the queue. Any apply process in the local database with the employees5 rule in its positive rule set will enqueue a message into hr.change_queue if the message satisfies the employees5 rule. To change the destination queue for the employees5 rule, run the SET_ENQUEUE_DESTINATION procedure again and specify a different queue.

The apply user of each apply process using the specified rule must have the necessary privileges to enqueue messages into the specified queue. If the queue is a secure queue, then the apply user must be a secure queue user of the queue.

A message that has been enqueued using the SET_ENQUEUE_DESTINATION procedure is the same as any other message that is enqueued manually. Such messages can be manually dequeued, applied by an apply process created with the apply_captured parameter set to FALSE, or propagated to another queue.


Note:

  • The specified rule must be in the positive rule set for an apply process. If the rule is in the negative rule set for an apply process, then the apply process does not enqueue the message into the destination queue.

  • The apply process always enqueues messages into a persistent queue. It cannot enqueue messages into a buffered queue.



Removing the Destination Queue Setting for a Rule

You use the SET_ENQUEUE_DESTINATION procedure in the DBMS_APPLY_ADM package to remove a destination queue for messages that satisfy a specified rule. Specifically, you set the destination_queue_name parameter in this procedure to NULL for the rule. When a destination queue specification is removed for a rule, messages that satisfy the rule are no longer enqueued into the queue by an apply process.

For example, to remove the destination queue for a rule named employees5, run the following procedure:

BEGIN
  DBMS_APPLY_ADM.SET_ENQUEUE_DESTINATION(
    rule_name               =>  'employees5',
    destination_queue_name  =>  NULL);
END;
/

Any apply process in the local database with the employees5 rule in its positive rule set no longer enqueues a message into hr.change_queue if the message satisfies the employees5 rule.

Specifying Execute Directives for Apply Processes

This section contains instructions for setting an apply process execute directive for messages that satisfy a specified rule in the positive rule set for the apply process.

Specifying that Messages that Satisfy a Rule Are Not Executed

You use the SET_EXECUTE procedure in the DBMS_APPLY_ADM package to specify that apply processes do not execute messages that satisfy a specified rule. Specifically, you set the execute parameter in this procedure to FALSE for the rule. After setting the execution directive to FALSE for a rule, an apply process with the rule in its positive rule set does not execute a message that satisfies the rule.

For example, to specify that apply processes do not execute messages that satisfy a rule named departments8, run the following procedure:

BEGIN
  DBMS_APPLY_ADM.SET_EXECUTE(
    rule_name   =>  'departments8',
    execute     =>  FALSE);
END;
/

This procedure modifies the action context of the rule to specify the execution directive. Any apply process in the local database with the departments8 rule in its positive rule set will not execute a message if the message satisfies the departments8 rule. That is, if the message is an LCR, then an apply process does not apply the change in the LCR to the relevant database object. Also, an apply process does not send a message that satisfies this rule to any apply handler.


Note:

  • The specified rule must be in the positive rule set for an apply process for the apply process to follow the execution directive. If the rule is in the negative rule set for an apply process, then the apply process ignores the execution directive for the rule.

  • The SET_EXECUTE procedure can be used with the SET_ENQUEUE_DESTINATION procedure to enqueue messages that satisfy a particular rule into a destination queue without executing these messages. After a message is enqueued using the SET_ENQUEUE_DESTINATION procedure, it is the same as any message that is enqueued manually. Therefore, it can be manually dequeued, applied by an apply process, or propagated to another queue.




Specifying that Messages that Satisfy a Rule Are Executed

You use the SET_EXECUTE procedure in the DBMS_APPLY_ADM package to specify that apply processes execute messages that satisfy a specified rule. Specifically, you set the execute parameter in this procedure to TRUE for the rule. By default, each apply process executes messages that satisfy a rule in the positive rule set for the apply process, assuming that the message does not satisfy a rule in the negative rule set for the apply process. Therefore, you must set the execute parameter to TRUE for a rule only if this parameter was set to FALSE for the rule earlier.

For example, to specify that apply processes execute messages that satisfy a rule named departments8, run the following procedure:

BEGIN
  DBMS_APPLY_ADM.SET_EXECUTE(
    rule_name   =>  'departments8',
    execute     =>  TRUE);
END;
/

Any apply process in the local database with the departments8 rule in its positive rule set will execute a message if the message satisfies the departments8 rule. That is, if the message is an LCR, then an apply process applies the change in the LCR to the relevant database object. Also, an apply process sends a message that satisfies this rule to an apply handler if it is configured to do so.

Managing an Error Handler

An error handler handles errors resulting from a row LCR that contains a specific operation on a specific table and is dequeued by an apply process.

The following sections contain instructions for creating, setting, and unsetting an error handler:

Creating an Error Handler

You create an error handler by running the SET_DML_HANDLER procedure in the DBMS_APPLY_ADM package and setting the error_handler parameter to TRUE.

An error handler must have the following signature:

PROCEDURE user_procedure (
     message             IN ANYDATA,
     error_stack_depth   IN NUMBER,
     error_numbers       IN DBMS_UTILITY.NUMBER_ARRAY,
     error_messages      IN emsg_array);

Here, user_procedure stands for the name of the procedure. Each parameter is required and must have the specified data type. However, you can change the names of the parameters. The emsg_array parameter must be a user-defined array that is a PL/SQL table of type VARCHAR2 with at least 76 characters.


Note:

Some conditions on the user procedure specified in SET_DML_HANDLER must be met for error handlers. See "Managing a DML Handler" for information about these conditions.

Running an error handler results in one of the following outcomes:

  • The error handler successfully resolves the error, applies the row LCR if appropriate, and returns control back to the apply process.

  • The error handler fails to resolve the error, and the error is raised. The raised error causes the transaction to be rolled back and placed in the error queue.

If you want to retry the DML operation, then have the error handler procedure run the EXECUTE member procedure for the LCR.

The following example creates an error handler named regions_pk_error that resolves primary key violations for the hr.regions table. At a destination database, assume users insert rows into the hr.regions table and an apply process applies changes to the hr.regions table that originated from a capture process at a remote source database. In this environment, there is a possibility of errors resulting from users at the destination database inserting a row with the same primary key value as an insert row LCR applied from the source database.

This example creates a table in the strmadmin schema called errorlog to record the following information about each primary key violation error on the hr.regions table:

  • The time stamp when the error occurred

  • The name of the apply process that raised the error

  • The user who caused the error (sender), which is the capture process name for captured LCRs, the synchronous capture name for persistent LCRs captured by the synchronous capture, or the name of the Oracle Streams Advanced Queuing (AQ) agent for persistent LCRs and persistent user messages enqueued by an application

  • The name of the object on which the DML operation was run, because errors for other objects might be logged in the future

  • The type of command used in the DML operation

  • The name of the constraint violated

  • The error message

  • The LCR that caused the error

This error handler resolves only errors that are caused by a primary key violation on the hr.regions table. To resolve this type of error, the error handler modifies the region_id value in the row LCR using a sequence and then executes the row LCR to apply it. If other types of errors occur, then you can use the row LCR you stored in the errorlog table to resolve the error manually.

For example, the following error is resolved by the error handler:

  1. At the destination database, a user inserts a row into the hr.regions table with a region_id value of 6 and a region_name value of 'LILLIPUT'.

  2. At the source database, a user inserts a row into the hr.regions table with a region_id value of 6 and a region_name value of 'BROBDINGNAG'.

  3. A capture process at the source database captures the change described in Step 2.

  4. A propagation propagates the LCR containing the change from a queue at the source database to the queue used by the apply process at the destination database.

  5. When the apply process tries to apply the LCR, an error results because of a primary key violation.

  6. The apply process invokes the error handler to handle the error.

  7. The error handler logs the error in the strmadmin.errorlog table.

  8. The error handler modifies the region_id value in the LCR using a sequence and executes the LCR to apply it.

Complete the following steps to create the regions_pk_error error handler:

  1. In SQL*Plus, connect to the database as the hr user.

    See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.

  2. Create the sequence used by the error handler to assign new primary key values:

    CREATE SEQUENCE hr.reg_exception_s START WITH 9000;
    

    This example assumes that users at the destination database will never insert a row into the hr.regions table with a region_id greater than 8999.

  3. Grant the Oracle Streams administrator ALL privilege on the sequence:

    GRANT ALL ON reg_exception_s TO strmadmin;
    
  4. Connect to the database as the Oracle Streams administrator.

  5. Create the errorlog table:

    CREATE TABLE strmadmin.errorlog(
      logdate       DATE,
      apply_name    VARCHAR2(30),
      sender        VARCHAR2(100),
      object_name   VARCHAR2(32),
      command_type  VARCHAR2(30),
      errnum        NUMBER,
      errmsg        VARCHAR2(2000),
      text          VARCHAR2(2000),
      lcr           SYS.LCR$_ROW_RECORD);
    
  6. Create a package that includes the regions_pk_error procedure:

    CREATE OR REPLACE PACKAGE errors_pkg 
    AS
     TYPE emsg_array IS TABLE OF VARCHAR2(2000) INDEX BY BINARY_INTEGER;
     PROCEDURE regions_pk_error( 
       message            IN ANYDATA,
       error_stack_depth  IN NUMBER,
       error_numbers      IN DBMS_UTILITY.NUMBER_ARRAY,
       error_messages     IN EMSG_ARRAY);
    END errors_pkg ;
    /
    
  7. Create the package body:

    CREATE OR REPLACE PACKAGE BODY errors_pkg AS
     PROCEDURE regions_pk_error ( 
       message            IN ANYDATA,
       error_stack_depth  IN NUMBER,
       error_numbers      IN DBMS_UTILITY.NUMBER_ARRAY,
       error_messages     IN EMSG_ARRAY )
     IS
      reg_id     NUMBER;
      ad         ANYDATA;
      lcr        SYS.LCR$_ROW_RECORD;
      ret        PLS_INTEGER;
      vc         VARCHAR2(30);
      apply_name VARCHAR2(30);
      errlog_rec errorlog%ROWTYPE ;
      ov2        SYS.LCR$_ROW_LIST;
     BEGIN
      -- Access the error number from the top of the stack.
      -- In case of check constraint violation,
      -- get the name of the constraint violated.
      IF error_numbers(1) IN ( 1 , 2290 ) THEN
       ad  := DBMS_STREAMS.GET_INFORMATION('CONSTRAINT_NAME');
       ret := ad.GetVarchar2(errlog_rec.text);
      ELSE 
       errlog_rec.text := NULL ;
      END IF ;
      -- Get the name of the sender and the name of the apply process.
      ad  := DBMS_STREAMS.GET_INFORMATION('SENDER');
      ret := ad.GETVARCHAR2(errlog_rec.sender);
      apply_name := DBMS_STREAMS.GET_STREAMS_NAME();
      -- Try to access the LCR.
      ret := message.GETOBJECT(lcr);
      errlog_rec.object_name  := lcr.GET_OBJECT_NAME() ;
      errlog_rec.command_type := lcr.GET_COMMAND_TYPE() ;
      errlog_rec.errnum := error_numbers(1) ;
      errlog_rec.errmsg := error_messages(1) ;
      INSERT INTO strmadmin.errorlog VALUES (SYSDATE, apply_name, 
           errlog_rec.sender, errlog_rec.object_name, errlog_rec.command_type, 
           errlog_rec.errnum, errlog_rec.errmsg, errlog_rec.text, lcr);
      -- Add the logic to change the contents of LCR with correct values.
      -- In this example, get a new region_id number 
      -- from the hr.reg_exception_s sequence.
      ov2 := lcr.GET_VALUES('new', 'n');
      FOR i IN 1 .. ov2.count
      LOOP
        IF ov2(i).column_name = 'REGION_ID' THEN
         SELECT hr.reg_exception_s.NEXTVAL INTO reg_id FROM DUAL; 
         ov2(i).data := ANYDATA.ConvertNumber(reg_id) ;
        END IF ;
      END LOOP ;
      -- Set the NEW values in the LCR.
      lcr.SET_VALUES(value_type => 'NEW', value_list => ov2);
      -- Execute the modified LCR to apply it.
      lcr.EXECUTE(TRUE);
     END regions_pk_error;
    END errors_pkg;
    /
    

Note:

  • For subsequent changes to the modified row to be applied successfully, you should converge the rows at the two databases as quickly as possible. That is, you should make the region_id for the row match at the source and destination database. If you do not want these manual changes to be recaptured at a database, then use the SET_TAG procedure in the DBMS_STREAMS package to set the tag for the session in which you make the change to a value that is not captured.

  • This example error handler illustrates the use of the GET_VALUES member function and SET_VALUES member procedure for the LCR. If you are modifying only one value in the LCR, then the GET_VALUE member function and SET_VALUE member procedure might be more convenient and more efficient.



Setting an Error Handler

An error handler handles errors resulting from a row LCR that contains a specific operation on a specific table and is dequeued by an apply process. You can specify multiple error handlers on the same table to handle errors resulting from different operations on the table. You can either set an error handler for a specific apply process, or you can set an error handler as a general error handler that is used by all apply processes that apply the specified operation to the specified table.

Set an error handler using the SET_DML_HANDLER procedure in the DBMS_APPLY_ADM package. When you run this procedure to set an error handler, set the error_handler parameter to TRUE.

For example, the following procedure sets the error handler for INSERT operations on the hr.regions table. Therefore, when any apply process dequeues a row LCR containing an INSERT operation on the local hr.regions table, and the row LCR results in an error, the apply process sends the row LCR to the strmadmin.errors_pkg.regions_pk_error PL/SQL procedure for processing. If the error handler cannot resolve the error, then the row LCR and all of the other row LCRs in the same transaction are moved to the error queue.

In this example, the apply_name parameter is set to NULL. Therefore, the error handler is a general error handler that is used by all of the apply processes in the database.

Run the following procedure to set the error handler:

BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name         => 'hr.regions',
    object_type         => 'TABLE',
    operation_name      => 'INSERT',
    error_handler       => TRUE,
    user_procedure      => 'strmadmin.errors_pkg.regions_pk_error',
    apply_database_link => NULL,
    apply_name          => NULL);
END;
/

If the error handler is already set, then you can run the SET_DML_HANDLER procedure to change the error handler.

Unsetting an Error Handler

You unset an error handler using the SET_DML_HANDLER procedure in the DBMS_APPLY_ADM package. When you run that procedure, set the user_procedure parameter to NULL for a specific operation on a specific table.

For example, the following procedure unsets the error handler for INSERT operations on the hr.regions table:

BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'hr.regions',
    object_type    => 'TABLE',
    operation_name => 'INSERT',
    user_procedure => NULL,
    apply_name     => NULL);
END;
/

Note:

The error_handler parameter does not need to be specified.

Managing Apply Errors

The following sections contain instructions for retrying and deleting apply errors:



Retrying Apply Error Transactions

You can retry a specific error transaction, or you can retry all error transactions for an apply process. Before you retry error transactions, you might need to make DML or DDL changes to database objects to correct the conditions that caused one or more apply errors. If one or more capture processes or synchronous captures are configured to capture changes to the same database objects, then you might not want these corrective changes to be captured. In this case, set the session tag to a value that will not be captured for the session that makes the changes.
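
For example, before making corrective changes, you can set the tag for the current session with the SET_TAG procedure in the DBMS_STREAMS package. The tag value '17' here is arbitrary; use a value that your capture rules do not capture:

BEGIN
  DBMS_STREAMS.SET_TAG(tag => HEXTORAW('17'));  -- arbitrary illustrative tag
END;
/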


See Also:

Oracle Streams Replication Administrator's Guide for more information about setting tag values generated by the current session

Retrying a Specific Apply Error Transaction

When you retry an error transaction, you can execute it immediately or send the error transaction to a user procedure for modifications before executing it. The following sections provide instructions for each method:


See Also:

Oracle Database PL/SQL Packages and Types Reference for more information about the EXECUTE_ERROR procedure

Retrying a Specific Apply Error Transaction Without a User Procedure

After you correct the conditions that caused an apply error, you can retry the transaction by running the EXECUTE_ERROR procedure in the DBMS_APPLY_ADM package without specifying a user procedure. In this case, the transaction is executed without any custom processing.

For example, to retry a transaction with the transaction identifier 5.4.312, run the following procedure:

BEGIN
  DBMS_APPLY_ADM.EXECUTE_ERROR(
    local_transaction_id => '5.4.312',
    execute_as_user      => FALSE,
    user_procedure       => NULL);
END;
/

If execute_as_user is TRUE, then the apply process executes the transaction in the security context of the current user. If execute_as_user is FALSE, then the apply process executes the transaction in the security context of the original receiver of the transaction. The original receiver is the user who was processing the transaction when the error was raised.

In either case, the user who executes the transaction must have privileges to perform DML and DDL changes on the apply objects and to run any apply handlers. This user must also have dequeue privileges on the queue used by the apply process.

Retrying a Specific Apply Error Transaction with a User Procedure

You can retry an error transaction by running the EXECUTE_ERROR procedure in the DBMS_APPLY_ADM package and specifying a user procedure that modifies one or more messages in the transaction before the transaction is executed. The modifications should enable successful execution of the transaction. The messages in the transaction can be LCRs or user messages.

For example, consider a case in which an apply error resulted because of a conflict. Examination of the error transaction reveals that the old value for the salary column in a row LCR contained the wrong value. Specifically, the current value of the salary of the employee with employee_id of 197 in the hr.employees table did not match the old value of the salary for this employee in the row LCR. Assume that the current value for this employee is 3250 in the hr.employees table.

Given this scenario, the following user procedure modifies the salary in the row LCR that caused the error:

CREATE OR REPLACE PROCEDURE strmadmin.modify_emp_salary(
  in_any                        IN      ANYDATA,
  error_record                  IN      DBA_APPLY_ERROR%ROWTYPE,
  error_message_number          IN      NUMBER,
  messaging_default_processing  IN OUT  BOOLEAN,
  out_any                       OUT     ANYDATA)
AS
  row_lcr          SYS.LCR$_ROW_RECORD;
  row_lcr_changed  BOOLEAN := FALSE;
  res              NUMBER;
  ob_owner         VARCHAR2(32);
  ob_name          VARCHAR2(32);
  cmd_type         VARCHAR2(30);
  employee_id      NUMBER;
BEGIN
  IF in_any.GETTYPENAME() = 'SYS.LCR$_ROW_RECORD' THEN
    -- Access the LCR
    res := in_any.GETOBJECT(row_lcr);
    -- Determine the owner of the database object for the LCR
    ob_owner := row_lcr.GET_OBJECT_OWNER;
    -- Determine the name of the database object for the LCR
    ob_name := row_lcr.GET_OBJECT_NAME;
    -- Determine the type of DML change
    cmd_type := row_lcr.GET_COMMAND_TYPE;
    IF (ob_owner = 'HR' AND ob_name = 'EMPLOYEES' AND cmd_type = 'UPDATE') THEN
      -- Determine the employee_id of the row change
      IF row_lcr.GET_VALUE('old', 'employee_id') IS NOT NULL THEN
        employee_id := row_lcr.GET_VALUE('old', 'employee_id').ACCESSNUMBER();
        IF (employee_id = 197) THEN
          -- error_record.message_number should equal error_message_number
          row_lcr.SET_VALUE(
            value_type   => 'OLD',
            column_name  => 'salary',
            column_value => ANYDATA.ConvertNumber(3250));
          row_lcr_changed := TRUE;
        END IF;
      END IF;
    END IF;
  END IF;
  -- Specify that the apply process continues to process the current message
  messaging_default_processing := TRUE;
  -- assign out_any appropriately
  IF row_lcr_changed THEN
    out_any := ANYDATA.ConvertObject(row_lcr);
  ELSE
    out_any := in_any;
  END IF;
END;
/

To retry a transaction with the transaction identifier 5.6.924 and process the transaction with the modify_emp_salary procedure in the strmadmin schema before execution, run the following procedure:

BEGIN
  DBMS_APPLY_ADM.EXECUTE_ERROR(
    local_transaction_id => '5.6.924',
    execute_as_user      => FALSE,
    user_procedure       => 'strmadmin.modify_emp_salary');
END;
/

Note:

The user who runs the procedure must have SELECT privilege on the DBA_APPLY_ERROR data dictionary view.

Retrying All Error Transactions for an Apply Process

After you correct the conditions that caused all of the apply errors for an apply process, you can retry all of the error transactions by running the EXECUTE_ALL_ERRORS procedure in the DBMS_APPLY_ADM package. For example, to retry all of the error transactions for an apply process named strm01_apply, you can run the following procedure:

BEGIN
  DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS(
    apply_name       => 'strm01_apply',
    execute_as_user  => FALSE);
END;
/

Note:

If you specify NULL for the apply_name parameter, and you have multiple apply processes, then all of the apply errors are retried for all of the apply processes.

Deleting Apply Error Transactions

You can delete a specific error transaction or you can delete all error transactions for an apply process.

Deleting a Specific Apply Error Transaction

If an error transaction should not be applied, then you can delete the transaction from the error queue using the DELETE_ERROR procedure in the DBMS_APPLY_ADM package. For example, to delete a transaction with the transaction identifier 5.4.312, run the following procedure:

EXEC DBMS_APPLY_ADM.DELETE_ERROR(local_transaction_id => '5.4.312');

Deleting All Error Transactions for an Apply Process

If none of the error transactions should be applied, then you can delete all of the error transactions by running the DELETE_ALL_ERRORS procedure in the DBMS_APPLY_ADM package. For example, to delete all of the error transactions for an apply process named strm01_apply, you can run the following procedure:

EXEC DBMS_APPLY_ADM.DELETE_ALL_ERRORS(apply_name => 'strm01_apply');

Note:

If you specify NULL for the apply_name parameter, and you have multiple apply processes, then all of the apply errors are deleted for all of the apply processes.

Managing the Substitute Key Columns for a Table

This section contains instructions for setting and removing the substitute key columns for a table.

Setting Substitute Key Columns for a Table

When an apply process applies changes to a table, substitute key columns can either replace the primary key columns for a table that has a primary key or act as the primary key columns for a table that does not have a primary key. Set the substitute key columns for a table using the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package. This setting applies to all of the apply processes that apply local changes to the database.

For example, to set the substitute key columns for the hr.employees table to the first_name, last_name, and hire_date columns, replacing the employee_id column, run the following procedure:

BEGIN
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name         => 'hr.employees',
    column_list         => 'first_name,last_name,hire_date');
END;
/

Note:

  • You must specify an unconditional supplemental log group at the source database for all of the columns specified as substitute key columns in the column_list or column_table parameter at the destination database. In this example, you would specify an unconditional supplemental log group including the first_name, last_name, and hire_date columns in the hr.employees table.

  • If an apply process applies changes to a remote non-Oracle database, then it can use different substitute key columns for the same table. You can run the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package to specify substitute key columns for changes that will be applied to a remote non-Oracle database by setting the apply_database_link parameter to a non-NULL value.


Removing the Substitute Key Columns for a Table

You remove the substitute key columns for a table by specifying NULL for the column_list or column_table parameter in the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package. If the table has a primary key, then the table's primary key is used by any apply process for local changes to the database after you remove the substitute primary key.

For example, to remove the substitute key columns for the hr.employees table, run the following procedure:

BEGIN
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name  => 'hr.employees',
    column_list  => NULL);
END;
/

Using Virtual Dependency Definitions

A virtual dependency definition is a description of a dependency that is used by an apply process to detect dependencies between transactions being applied at a destination database. Virtual dependency definitions are useful when apply process parallelism is greater than 1 and dependencies are not described by constraints in the data dictionary at the destination database. There are two types of virtual dependency definitions: value dependencies and object dependencies.

A value dependency defines a table constraint, such as a unique key, or a relationship between the columns of two or more tables. An object dependency defines a parent-child relationship between two objects at a destination database.
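
As a hedged illustration of an object dependency, a parent-child relationship between two tables can be declared with the CREATE_OBJECT_DEPENDENCY procedure in the DBMS_APPLY_ADM package. The oe.orders and oe.order_items tables are assumptions used only for this sketch:

BEGIN
  DBMS_APPLY_ADM.CREATE_OBJECT_DEPENDENCY(
    object_name        => 'oe.order_items',  -- assumed child object
    parent_object_name => 'oe.orders');      -- assumed parent object
END;
/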

The following sections describe using virtual dependency definitions:


See Also:

"Apply Processes and Dependencies" for more information about virtual dependency definitions

Setting and Unsetting Value Dependencies

Use the SET_VALUE_DEPENDENCY procedure in the DBMS_APPLY_ADM package to set or unset a value dependency. The following sections describe scenarios for using value dependencies:

Schema Differences and Value Dependencies

This scenario involves an environment that shares many tables between a source database and destination database, but the schema that owns the tables is different at these two databases. Also, in this replication environment, the source database is in the United States and the destination database is in England. A design firm uses dozens of tables to describe product designs, but the tables use United States measurements (inches, feet, and so on) in the source database and metric measurements in the destination database. The name of the schema that owns the database objects at the source database is us_designs, while the name of the schema at the destination database is uk_designs. Therefore, the schema name of the shared database objects must be changed before apply, and all of the measurements must be converted from United States measurements to metric measurements. Both databases use the same constraints to enforce dependencies between database objects.

Rule-based transformations could make the required changes, but the goal is to apply multiple LCRs in parallel. Rule-based transformations must apply LCRs serially. So, a procedure DML handler is configured at the destination database to make the required changes to the LCRs, and apply process parallelism is set to 5. In this environment, the destination database has no information about the schema us_designs in the LCRs being sent from the source database. Because an apply process calculates dependencies before passing LCRs to apply handlers, the apply process must be informed about the dependencies between LCRs. Value dependencies can describe these dependencies.
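
For reference, apply process parallelism is set with the SET_PARAMETER procedure in the DBMS_APPLY_ADM package. The following sketch shows how parallelism could be set to 5 in this scenario; the apply process name strm01_apply is an assumption for illustration:

BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'strm01_apply',    -- assumed apply process name
    parameter  => 'parallelism',
    value      => '5');
END;
/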

In this scenario, suppose several tables describe different designs, and each of these tables has a primary key. One of these tables is design_53, and the primary key column is key_53. Also, a table named all_designs_summary includes a summary of all of the individual designs, and this table has a foreign key column for each design table. The all_designs_summary table includes a key_53 column, which is a foreign key that references the primary key in the design_53 table. To inform an apply process about the relationship between these tables, run the following procedures to create a value dependency at the destination database:

BEGIN
  DBMS_APPLY_ADM.SET_VALUE_DEPENDENCY(
    dependency_name   => 'key_53_foreign_key',
    object_name       => 'us_designs.design_53',
    attribute_list    => 'key_53');
END;
/
BEGIN
  DBMS_APPLY_ADM.SET_VALUE_DEPENDENCY(
    dependency_name   => 'key_53_foreign_key',
    object_name       => 'us_designs.all_designs_summary',
    attribute_list    => 'key_53');
END;
/

Notice that the value dependencies use the schema at the source database (us_designs) because LCRs contain the source database schema. The schema will be changed to uk_designs by the procedure DML handler after the apply process passes the row LCRs to the handler.
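
To verify that the value dependency was recorded, you can query the DBA_APPLY_VALUE_DEPENDENCIES data dictionary view at the destination database with a query similar to the following:

SELECT DEPENDENCY_NAME, OBJECT_OWNER, OBJECT_NAME, ATTRIBUTE_NAME
  FROM DBA_APPLY_VALUE_DEPENDENCIES
  ORDER BY DEPENDENCY_NAME;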

To unset a value dependency, run the SET_VALUE_DEPENDENCY procedure, and specify the name of the value dependency in the dependency_name parameter and NULL in the object_name parameter. For example, to unset the key_53_foreign_key value dependency that was set previously, run the following procedure:

BEGIN
  DBMS_APPLY_ADM.SET_VALUE_DEPENDENCY(
    dependency_name   => 'key_53_foreign_key',
    object_name       => NULL,
    attribute_list    => NULL);
END;
/

Undefined Constraints at the Destination Database and Value Dependencies

This scenario involves an environment in which foreign key constraints are used for shared tables at the source database, but no constraints are used for these tables at the destination database. In the replication environment, the destination database is used as a data warehouse where data is written to the database far more often than it is queried. To optimize write operations, no constraints are defined at the destination database.

In such an environment, apply processes running on the destination database must be informed about the constraints to apply transactions consistently. Value dependencies can inform the apply processes about these constraints.

For example, assume that the orders and order_items tables in the oe schema are shared between the source database and the destination database in this environment. On the source database, the order_id column is a primary key in the orders table, and the order_id column in the order_items table is a foreign key that matches the primary key column in the orders table. At the destination database, these constraints have been removed. Run the following procedures to create a value dependency at the destination database that informs apply processes about the relationship between the columns in these tables:

BEGIN
  DBMS_APPLY_ADM.SET_VALUE_DEPENDENCY(
    dependency_name   => 'order_id_foreign_key',
    object_name       => 'oe.orders',
    attribute_list    => 'order_id');
END;
/
BEGIN
  DBMS_APPLY_ADM.SET_VALUE_DEPENDENCY(
    dependency_name   => 'order_id_foreign_key',
    object_name       => 'oe.order_items',
    attribute_list    => 'order_id');
END;
/

Also, in this environment, the following actions should be performed so that apply processes can apply transactions consistently:

  • Value dependencies should be set for each column that has a unique key or bitmap index at the source database.

  • The DBMS_APPLY_ADM.SET_KEY_COLUMNS procedure should set substitute key columns for the columns that are primary key columns at the source database.
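
The second action can be sketched as follows. Because the order_id column is a primary key at the source database but the constraint is not defined at the destination database, a substitute key is set for the oe.orders table; the column list shown is an assumption based on this scenario:

BEGIN
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name => 'oe.orders',
    column_list => 'order_id');    -- assumed substitute key for this scenario
END;
/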

To unset the value dependency that was set previously, run the following procedure:

BEGIN
  DBMS_APPLY_ADM.SET_VALUE_DEPENDENCY(
    dependency_name   => 'order_id_foreign_key',
    object_name       => NULL,
    attribute_list    => NULL);
END;
/

Creating and Dropping Object Dependencies

Use the CREATE_OBJECT_DEPENDENCY and DROP_OBJECT_DEPENDENCY procedures in the DBMS_APPLY_ADM package to create or drop an object dependency. The following sections provide detailed instructions for creating and dropping object dependencies.

Creating an Object Dependency

An object dependency can be used when row LCRs for a particular table always should be applied before the row LCRs for another table, and the data dictionary of the destination database does not contain a constraint to enforce this relationship. When you define an object dependency, the table whose row LCRs should be applied first is the parent table and the table whose row LCRs should be applied second is the child table.

For example, consider an Oracle Streams replication environment with the following characteristics:

  • The following tables in the ord schema are shared between a source and destination database:

    • The customers table contains information about customers, including each customer's shipping address.

    • The orders table contains information about each order.

    • The order_items table contains information about the items ordered in each order.

    • The ship_orders table contains information about orders that are ready to ship, but it does not contain detailed information about the customer or information about individual items to ship with each order.

  • The ship_orders table has no relationships, defined by constraints, with the other tables.

  • Information about orders is entered into the source database and propagated to the destination database, where it is applied.

  • The destination database site is a warehouse where orders are shipped to customers. At this site, a procedure DML handler uses the information in the ship_orders, customers, orders, and order_items tables to generate a report that includes the customer's shipping address and the items to ship.

The information in the report generated by the procedure DML handler must be consistent with the time when the ship order record was created. An object dependency at the destination database can accomplish this goal. In this case, the ship_orders table is the parent table of the following child tables: customers, orders, and order_items. Because ship_orders is the parent of these tables, any changes to these tables made after a record in the ship_orders table was entered will not be applied until the procedure DML handler has generated the report for the ship order.

To create these object dependencies, run the following procedures at the destination database:

BEGIN
  DBMS_APPLY_ADM.CREATE_OBJECT_DEPENDENCY(
    object_name         =>  'ord.customers',
    parent_object_name  =>  'ord.ship_orders');
END;
/
BEGIN
  DBMS_APPLY_ADM.CREATE_OBJECT_DEPENDENCY(
    object_name         =>  'ord.orders',
    parent_object_name  =>  'ord.ship_orders');
END;
/
BEGIN
  DBMS_APPLY_ADM.CREATE_OBJECT_DEPENDENCY(
    object_name         =>  'ord.order_items',
    parent_object_name  =>  'ord.ship_orders');
END;
/
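
To confirm that the object dependencies exist, you can query the DBA_APPLY_OBJECT_DEPENDENCIES data dictionary view at the destination database with a query similar to the following:

SELECT OBJECT_OWNER, OBJECT_NAME, PARENT_OBJECT_OWNER, PARENT_OBJECT_NAME
  FROM DBA_APPLY_OBJECT_DEPENDENCIES;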

Dropping an Object Dependency

To drop the object dependencies created in "Creating an Object Dependency", run the following procedures:

BEGIN
  DBMS_APPLY_ADM.DROP_OBJECT_DEPENDENCY(
    object_name         =>  'ord.customers',
    parent_object_name  =>  'ord.ship_orders');
END;
/
BEGIN
  DBMS_APPLY_ADM.DROP_OBJECT_DEPENDENCY(
    object_name         =>  'ord.orders',
    parent_object_name  =>  'ord.ship_orders');
END;
/
BEGIN
  DBMS_APPLY_ADM.DROP_OBJECT_DEPENDENCY(
    object_name         =>  'ord.order_items',
    parent_object_name  =>  'ord.ship_orders');
END;
/

Dropping an Apply Process

You run the DROP_APPLY procedure in the DBMS_APPLY_ADM package to drop an existing apply process. For example, the following procedure drops an apply process named strm02_apply:

BEGIN
  DBMS_APPLY_ADM.DROP_APPLY(
    apply_name            => 'strm02_apply',
    drop_unused_rule_sets => TRUE);
END;
/

Because the drop_unused_rule_sets parameter is set to TRUE, this procedure also drops any rule sets used by the strm02_apply apply process, unless a rule set is used by another Oracle Streams client. If the drop_unused_rule_sets parameter is set to TRUE, then both the positive and negative rule sets for the apply process might be dropped. If this procedure drops a rule set, then it also drops any rules in the rule set that are not in another rule set.

An error is raised if you try to drop an apply process and there are errors in the error queue for the specified apply process. Therefore, if there are errors in the error queue for an apply process, delete the errors before dropping the apply process.
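
For example, if the goal is to discard the error transactions rather than reexecute them, the DELETE_ALL_ERRORS procedure in the DBMS_APPLY_ADM package removes every error transaction for the apply process before it is dropped:

BEGIN
  DBMS_APPLY_ADM.DELETE_ALL_ERRORS(
    apply_name => 'strm02_apply');
END;
/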

4xaB 6tbD)VxcF!| O@ DPBkؐ_|ϟ| '0|.`> H*\ȰÇ#JHŋ3jx>'p~ H*\P| 6lP>~O@ DPB >QD-^ĘQƃq<08`A&T@6lذB} <0… :|1ĉ+Z1ƍh?~˷?~8r/G;G9rȑ#G9rOGǑF8`A&T0a>~ /?$XA .dC%NXE5n9rȑ|8rȑ#G9rȑcE~8*/?9r_>9rȑ#G9rȱ"?}˗G9r/?9rȑ#G9rX> #G9rG9rȑ#G9rOGǑ#G9#G9rȑ#G9V䧏B~ȑ#8O@ DPB >QD-^ĘQƃqT/_>9rȑ|8rȑ#G9rȑcE~8*/?9r_>9rȑ#G9rȱ"?}˗_A  <0‚ ˗@ $H A #H A $/?$XA .dC%NXE5n~ `>棘a>qȑ#G9rȑ#NJqT/_>Է/?'0|O`>˗_A}ۗ/?}? 7| ԗO߿|/_~G0|˷O| /_/_>˗|8rȑ#G9rȑcE~8*/ /߿}'0|O`>'o߾/߾ 7P`؏/?o| >~?} _>߾/?$XA .dC%NXE5n_߿|߿_? 70}o|8|߿|/߿_> '0|o?@~7p <0… :|1ĉ+Z1ƍ_| 3O`O` _(߿/߿ǯ>~(0@'0@~'0|/|O` >~ `>/?$X_> H*\ȰÇ#JHŋ3jx> `>o߿~/?}/߿~'?_㧯_|o|#O`>_> /|ۧo?_>'?~A#G9rȑ#G9V䧏B~+0@/?߾|O ӧ?}'P`>?}80_| /߾7_>~ _|'0|ӗOӧ߿|˗?7p <0… :|1ĉ+Z1ƍ㨐_|*ȑ#|%Ǒ#G9rȑ#G+Q!|UǑ#G8J#G9rȑ#G9V䧏B~Ca>qt`>(߿| H |4/?$XA .dC%NXE5n 珠|㗯@~+_~w_|ۗ|ӗo|߾~/?ϟ| ԗO?~o_}/?㗯@~#/_> ܗ_qȑ#G9rȑ#NJqT/_> _ W0_w_>/~?}o??~_}?}//߿8`>_>>_> ? 4xaB 6tbD)VxcFQ!|'0|ȯ`>C@~ '0|/|/?'?}G_>/?#O` 70|70_|8rȑ#G9rȑcE~8*/|߿|/ O@_/߿|/߿7P/>~߿ ?#O` ߿|_? ? 4xaB 6tbD)VxcFQ!|'0|| ̇P?O`>_O |'?}o`>_>_>}0|O ?/|/_|8rȑ#G9rȑcE~8*/'0?W0| GП|'0|˷O`o`>˗?_|Ǐ_|/_}0_>~O?}O_|O_>}$/?$XA .dC%NXE5nqa>9rȑ#G9rȱ"?}˗׏#Gȑ#G9rȑ#G㨐_|8rȑ#qȑ#G9rȑ#NJqT/_>Ab8rG0Gqȑ#G9rȑ#NJqT/_>A0|]w1_|Ǒ#G9rȑ#G+Q!|O`>'0߿| /?/_>'P|㗯߿@'_|ӗo@~_|ۗ|˗ϟ@}//| HO@ DPB >QD-^ĘQƃqT/_>o?_>'_|o|߿|ȏ}/@~?} _>_ '0Co߾qDG9rȑ#G9rOGW0߿}'0߿|O O`_>~70߿|o` '0| /?O`> a>9rȑ#G9r䈣? 4x@~+_>O`0@/߿|_> '0|(߿/߿߿|߿|/߿o@߿| (0_$H A  <0… :|1ĉ+Z1ƍ㨐_| O`>_O ?ӷ|'0|@}O?}O`>O` _>CO߾qDG9rȑ#G9rOGg0?ӧ߿|ӧ?}+?}`>~ ϟ|/>}˗|/_>}'0?}O`>ϟ|'p "? 4xaB 6tbD)VxcFQ!|qx1G Ǒ#G9rȑ#G+Q!|qx1Dž8,h B'p "Lp!ÆB(q"Ŋ/b̨qA~8*/?9r_>9rȑ#G9rȱ"?}˗_|%̗/?C`> <0| P!|8`A&TaC!F8bE1fԸ ?}˗_|#O߿~#|e#G9rȑ#G9V䧏B~+/_>O`o_~o_>~ӗ/?'0|O |Է/?Oo>/_?O | ԗO߿}?qȑ#G9rȑ#NJqT/_>/߿}'0߿| _~ _o@ /߿'p ||_ G'0| _>o?G 'p "Lp!ÆB(q"Ŋ/b̨qA~8*/?o`>O`>O`g?Oa>@~ /?o_ @O` G0߿| 1?9rȑ#G9rX> `_>O`>+_}/| '0? ߿|7PǏ_?}O`>'P߿ ? 
4xaB 6tbD)VxcFQ!|ԷO`O`'0| '0_|؏?>>~O` |W>~/|>~}b>9rȑ#G9rȱ"?}˗Aϟ| o_>}O`>+_|ܗ/_?'0| ϟ|'0?'|_ӧ?}߿|˗߿8`'p "Lp!ÆB(q"Ŋ/b̨qA~8*/?9r_>9rȑ#G9rȱ"?}˗G9r/?9rȑ#G9rX> a*˘O!|˗#Gȑ#G9rȑ#G㨐_| oc>o߿~Ǒ#Gqȑ#G9rȑ#NJqT/_>/|O_|˗߿}_>ۗ|ۗ_A~˗|O`>9 P> H*\ȰÇ#JHŋ3jx> `>'0߿|7߾>O`>O`>~/ 70߿|qQ`>9rȑ#G9rȱ"?}˗_| '0?}7_O`>O`> o_8r(0?9rȑ#G9rX> `>O`} G0|'0| '0| /?/@8`A&T!|8`A&TaC!F8bE1fԸ ?}˗_| '0?~ ԷO`>}'0߿|O`>`؏?~>9 #G9rȑ#G9V䧏B~+߿| _| /_~O`'0| '0_|˗A}/_~q1?9rȑ#G9rX> #G9rG9rȑ#G9rOGǑ#G9#G9#8#8 ~8`A / &L0a„ &L0a„  <0… :|1ĉ+Z1ƍ㨐_|8rȑ#qȑ#G9rȑ#NJqT/_>9rȑ|8rȑ#G9rȑcE~8*/?9r_>9rȑ#G9rȱ"?}8`A ˗oB O@ DPB >|D!B"D!B"D!B?~P <0…'p "Lp!Æ>"D!B"D!B"D!B?}"ĂA"D!"D!B"D!B"D!B?}"ĂA"D!"D!B"D!B"D!B?}"ĂA"D!"D!B"D!B"D!B?}"ĂA"D!"D!B"D!B"D!B?}/|!:"D!BD,h „ 2l!Ĉ'Rh"ƌ7o|Q|9rx1?9rȑ#G9rXEG0?_|o| ǏD8rb>9rȑ#G9rȱ"?}勘`㷏_>7p߾ o|9rx1?9rȑ#G9rXEG0?} O ?O`>qȑ|8rȑ#G9rȑcE~1|`>߿߿| H`A8`A&TaC"D!B"D!B"D!BA@~O?GP߾ |!B"D"D!B"D!B"D!BO@ /_}˗/_?'0A'0DA"D!"D!B"D!B"D!B?}"ĂA"D!"D!B"D!B"D!B?}"ĂA"D!:|8`A&TaC!F8bE1fԸ ?}ȱ|9rx1?9rȑ#G9rX)a>/G9^#G9rȑ#G9Vo| 0|ȑ#Njqȑ#G9rȑ#NJ /a>ӗo?}O _|˧?'P߾|Ǒ#Gȑ#G9rȑ#G/B70_O| '0|o|O@ DPB >|D!B"D!B"D!B?~ˇp 70_/@O`O`>A"D!"D!B"D!B"D!B?}C0@?/߿>~߿|/߿O@ DPB >|D!B"D!B"D!B?~w_ }_>/ '0?}'0| "D!B<D!B"D!B"D!B?~wp ̧/߾'0|׏_|߿|˗|)"D!B`>!B"D!B"D!B"D/|A_>!B"ăA"D!B" " " " G0A 4h| H*\ȰÇ"D!B"D!B"D!BO@ B|!B"D"D!B"D!B"D!BO@ B|!B"D"D!B"D!B"D!BO@ B|!B"D"D!B"D!B"D!BO@ B|!B"D"D!B"D!B"D!BO@8`A&Tp| H*\ȰÇ"D!B"D!B"D!B_> H*\h? 
4xaB 6ta| B"D!B"D!B"D"D!B"D"D!B"D!B"D!B(_>!B"D!B>}!B"D!B"D!B"D"D!B"D"D!B"D!B"D!B/D!B"D!B"D!B"D!B"D!B`>~!B"D!B0>!B"D!B"D!B" h_>$XA .dC%N/?)RH"E)RH"E)&OE)RH"EG"E)RH"E)RH"ŅH"E)RHq`>})RH"E)RH"E)Rl/_?)RH"E"E)RH"E)RH"EG"E)RHq|(RH"E)RH"E)RQ_~)RH"E#׏"E)RH"E)RH"EG"E)RHѡ|QH"E)RH"E("(}'p "Lp!ÆB(q@~H"E)RH"E)RH"EG"E)RH1|(RH"E)RH"E)RHѠ|(RH"E)/?)RH"E)RH"E)RD/_>~)RH"EӗOE)RH"E)RH"E)./_~)RH"EǏ"E)RH"E)RH"E˗O?)RH"ŇG"E)RH"E)RH"E˧E)RH} <0… :|1ĉ+Z1ƍ!˗/_;vX߾|uرcǎ;vرcǎ˗_ǎ;vO_|:vرcǎ;vرcG˧_ǎ;v\o_|uرcǎ;vرcǎ˗o_;vh_|رcǎ;vرcǎ;/_|uر#G˷_ǎ;vرcǎ;vQa}ׯcǎ-/_>}:vرcǎ;vرcǎ˗/>~:va}ׯcǎ;vQGuQGud@˗/>~8`A&TaCۗ/_|A"D!B"D!B"D!B"~˗o?!B"D˗/_} B"D!B"D!B"D!B"~˗/>~ B"Dӗ/_|A"D!B"D!B"D!B"Dӗ/_|"DO_|"D!B"D!B"D!B"D!BL>} _? H*\P~O@}'p "Lp!ÆB(q"Ŋ/b̨q#ǎ?o>Է_? H*L?~'P}O@ DPB >QD-^ĘQF=~|}80>}ϟ? H?~O|,h „ 2l!Ĉ'Rh"ƌ7rcGۧO|P8`A0ӷ~'p "Lp!ÆB(q"Ŋ/b̨q#ǎ?|ϟ? >'p | ( ?]? 4xaB 6tbD)VxcF9vdH#I4yeJ+YtfL3iִygN;yhPC5ziRK6u*ʀ;;PKcdX{~eyePK&AOEBPS/img/strms017.gifGIF89a???999@@@rrr///򯯯___oooOOO``` 000PPPppp>>>yyy|||UUUGGGvvv!,pH,Ȥrl:ШtJZجvzxL.zn|N~} [22KBƇ¸}ɷ2($rж2 1i۵1_WNG\H# #sBV,QGr(u cR2['#, @ 3P@ȃ*$@Z:\z쥓 H@d<@РAf*Œ 2ݢ[_NAWFH {G+@t* TC@HS+ᕋCjd3h1:uf\ڊr0O"4H 6UƵ̭k!hb,.p?Ov5&ޏyB~Pt?7 `ѠFh(Eޅ1av((6Q'"+4DR,8㎯X#LRDiH&&ԵPFi - SΖ\v[*PdifaZ&btEIqʰeډ$Pc衈&ԠPF*i ju])\ni-騎`uj% j+ Ȫ+]#Js k~о8qi`+Dz䚡ơ;@L++kp+/-@l' ֆ򫫿rwI? $l(,"1 ٫<\,#12ڼL7PG-R-DM0C!^_ @۱ jBHVɠ~|6\k Xkiv@=IY8#yFJPzeJK6\f @լ4 7(Kxg((kBUL 1-W mB+2fB eCL(oPԂHom DsƇZ`)0?xu`oI)jbJYWм*`Ueփ[I"͉J^Y5 rSEԹu ccsB &ENLP`WHX3'u9@*K a9VAv(39L`(g6nWiS,0.NVd5Y,w#`E%zx3{^Њ t[of/]D":9JGKG*X(2n֥lõ+%B! B,qM*ZQƥ#p=H]+ E69 l)`.R_,+ jTnlA"$F@@ !]fAr, p0zb xfA ,%s)Kahv ]RYk7gp$;5߾b.. ii+887L]J{O*= $*[3sB_8Y[ 24v2G9g L(w^`f!7cJfj dMؐK-SVrƉuuqAxpUa* D""XzCDb, No"Zw *.a.VdnO?.C>,[x993FfnJKB8U6MmOoD?\! 
C(H, ')zkN`eaC!7yD vKf%O {w(8M}$sj X^޷0"%_c9o6b3,PмU Z\& +WQZuAuOc$EXj?j&.\x2Ђ]J)FI4.fu4WY܃0r6>@z|؇~HsNx~2SXVeSzWG63Uj|fe6m!8dhC8 HK3S7zӈsjF1y6X| TDhoQYA=CT 8(8Jh2f倆Y(E@Dy1䉝XjhW6SaeȌX2`ER@ 'l>itqh v/ӌqk]]U 2Hj8_sfSZH#u(>ŏ f2TRXd291zPD0Er#:=Yv "ip0# B`!+2\i/~Bp(:Io#FEp}uF'%0&ug"(30ucye9g9QL R H0swpM@zw)'Y^—Bx9ɍ0$ NْW si0~`L\W } @`y/A]9Ҕiy  /9Yy虞I`XٰY #i ASLڠB W+ Kqڡ ":$JzZ G(/Z1Zi Ptaqf8jhhw޲+x3tCEGj+*+fACLOJ:Zp-cc$ȣ+Ro#Q!!8)M0 0&a{E+ HhT;p;3CNw ~`sk{'xiuozkvRkqK;]{4wik[VghD0A0Hac ˇ#л[VxP3{?b[H9bgxwmV?aPe[k1iWA8Kc@`gj%B ]ɖ`e >3ǥ@&, PzB 4y!<#0.ȕ `6;S2a; >[E0}YP'*i}{9h +a!+*9쓻2Sd1e߀¤RyH(F$7u1p8Qķ'D`TrO DK:a6x1ƣ"Ie &uAU̪=5xO968vxa1KdžȟȮȍ|J +]iPvzA 8bvX sb.O3(ܷZ  /vY[7-۷ iqo,DTeߴi8Q8ٳ1IEqH'c(%@HFǝq7Emh^ÌޜCjpFCׅ&ݪ!,}Bެڟvbdb`:j\w]Αy_^+X A J?s1!᧕-{toPސ4SŌeFh,]C1p'8^Ėar\ ܗ̈́tiFd]A4QbIn'i!.T2QsPH9Ն5J\x\Aq_=zR:|v 6>ɢ<>nA+#.؜~W]δҤ~t# P .Ǝnֳ^Ϯ廰ˇvLJ~ ^>[[׎[:ÒuOAP`YO׻rCNtb",A]f1Ձ dBKTdK$A0lO:Ar%+m]\ qh>|3喖Zhɹ㵕O$9sT7%L$ݴs)ȈTH(@ȈRs4ҝ[uĺo}YM QXr! c^]5%-q1ٻQ\EUA6C) ɨWv5la>Wv61TFԳb (KrիXtYNVA%RQ *U9@S׺~ɫ ȓ5GدK+̈́4D; *//kpl"D Y5 9BJ/"ʼ:-EF4\Eo>|;} cBpŞ^<Nmhf- Nm $*t$FG%>2FP&!\SNtL,jjC8dJ`Õ ظ b1T,i7뼲IْŬ"UAHQp SٌB mqWaUYi[km@NJyE~qqԩS/kTC8@ kVmoU^ɝDD!  TayGPEuG_X .,se94#AGZV.}EIJ YIxNdRx-L`b)q$0 SxKZ>,Uv_h駡ZꩩꫭhJ^qSZѦN=ζn䦓9 4o8/\q0! r'9;s7\t#QO]Youcyߎq]y߹<ډw[.O~䟋o㕏^w^^{髿{:^|O ;PKPK&AOEBPS/img/strms054.gif[sGIF89a???ߟ___///ooo@@@OOO999rrr000쐐 ```pppPPPYYY|||,,,>>>yyyUUUuuu***!,C% (B# !  ()ޱ ' $! ,(9 @Fx0^Ǐ CbPN14 VT$"c}-sIrkׁxmfѧPJ$M(80uL@T׳hji \z3 @Kɚ˷_CBpwK\4 B*@L:L#:kJEۺXFP6,ʦqB.oxHEGvNu 3M#8rCp&SM"  C5@' H`'q A8)(?z%@B` ^}b ,r^-X)h* 4@7N3D㐄lpGY<`K.&4eWЃŠWXt)"'9`RI %)1`g%@xҷ&ɗ (y rwݷ(j'5 DEx6:$pijbb) vZށe뗦k*+ :$J{֨aZ$hەP4LZKl ]/a33h]v`b*4u-JӴhPGWK%CNfvB+h*IHOI%ۡJA@ $IUE΋Ao00hٰ|;8L'u!~')dd9-6q;Vcbe!YWJADYpEqc攓]odף* B!2LFW +Ns3 +`Ђ`D@Bڣ8@`]gݰFi ّ v6iM#:_ /v!  jCLP hv,R)HZȠա5. 
Y@zJaj :H"BЩ- y}xZ@8"Z~ jFlGnGSGًy?+xT=z|4K<ꥇpwہŦc?ضH8X\&5zqRl&fVk?SnuiVڡ|&NF[ G+۽k#%L')]ꨇpX89Cz D(Bl $K,Rm y 7Mb^7%9ʷ|!+kM8:d\;ymOlr<DžnO*P,5|raǃȀƂ\6 y*:C`\$\Xz2泝\6Fʋ錸PuJhf YœrAaĴwQBGV`EI0MXhqBL,DLiMHl>у+ҜXUVe }ôLXvqWҦ@4D5C5c"RM$0g".qyh=qm]^!dM*S#W藑lP:#H)~؛D"blRpǶF@BFq]9G$!`Q X`Q,KıbeKG0x`]]Ǜ"s+tSNɜ.J:֡U6;nHp6'.-%39%B&k(1C܎({Wohp пdpthǿ}b"5OLkU 픐s'IF&cJvLإIrDc7I!oS8 M }llsk0ᒐšX&2tK C =84ꌿ9ί\H⒪a 07BȀR~[EĈN\ ϚF'Z$V)I+~}*h(u(o~E' $cIBd!Wvpu!W¡'vLzhZSjXcxL5Bj\@pb9uOm Le0{QTQP  xxiǸ%fW%Mp~fؙYQI)8_,yȢ9,hyw ̚qUJEUƻm0tY&Z:_:L2*ť$`x!w&ג:* *P5ҩ~).B# oK6}j#21 d"#FsʊٶKhK[Q%B<6Dю4|Ul8* `@l5d @,xh0.FB y\5[TrV=!&y1ţX[, a6E_/Ƹy,tSV*X:p9ŕ90kb yk rwSUsnzmRRUA:.D;B1rA#ڱv;vx=6f,(}kMrfa,_1cJMY kÞpFvhgVz(f9~п00ݔ %8ֱڇA.E!g>y%IzY:`YXn;E\T},qKҊ)ӯIj'mu+u)J"t[Vj\djWg2TD=t v/eǤxvԹTTތHIJ p[ Y n-/NlᰯC;1=gC) J$8$jJyO rТI!!`&Q771G_XCx$lOFpZ:"R4VEwd!ApAp]9uxu/Y{T+؁7e4zSwjw 8%ht284& #HfGz&y 2yW 5xHx7|ÃWDCpD D@Xu'4D834 ^`b8dXfxhgȅ%NG#G(Y*]/qytCa|~|Rmsnu Y6 58T1mqVxC H =IX(WbAUzLq)G/@K00rx (4~7hpȋ2G'Ax 聢0wa<Č8XH]]Fx{%88|[|xt|84ӗx5y؏tTǐATh }U醰!Y7TVr98294Y6ib'rHVxژHɒ !:F84gsLٔN3ը+)cpl$XBRIW y(lXh[bdw)Sp 2Fzd.pv蕉 cDtyv) wubP H`'XG0ؕȇ: X9t E"vn!8pa.)+@U1GVy(gCFfmi_ɚ;ɎxG )7 @.eqZ}~Uc<[r4Z w6h!K Y.t2`4JpY"v;ɝ';c8uW@}uәs5acR*s?Z,,f;rKȠ@<w 5a w@4G "y 7ީ/rH 0_6P /g2J_`yxfC3I @31*OKG g3(V,~1gd(A  1pJClםǑ7*oWFq.t@ې1u^ ().!IrUIF 2RV`"D }`NsC/3P]G&:!o'3uwQ A,.gcAB&.fZxieAz"yPQ Sgn}r s]nsfn4 %P,P$e8<'g_fz$o"M~!MtWB/IM*S b `6E& z.B1W`~*XB&B0"Yo Z 1 TU)S+^? 6p`1UnQW%[V:aeplb_C.0W%K ෢qK7Cq;鐫$۲ op_{"r: Tٓy hhK ~T`3;dzja_ .솲sC3p-{'m Qq&8Tak9ww'215 n{ "u6c.|ҼS+e̛-k(V+ 2(KEKÓXLv !q)Œ/]F_>K&v`7,`9s:bmNR'?{;1"{.+oC7As ꠸̖,4n0WMX$ p¥-3Dk|(֠bs?_ qL,!P\IŪi XT I>VEp;VT\ z[Hi전̄xȧp2Ì(JFNlVi 8f Svq 2M'p'{ҡ`WWQ(rWn)lBE؇ *}$np.2ؕPM;b,,\5-ג-۲8L_Eҥ..h复ZΧ]鴸ږ> v^֞>a@h1FM>wB2s a9JΌI[>=!d>FQWXqfdF6#ZU0#6Qw'`ێMSV I3ifi~Wd<֙cO+4vQS.;3mƒ5C[(<=P3ON?WϺ-9 3. q4*b@e@t.t{.N .ܮ/IuNL{o=:HbF>.uhojٛFNtv< u3^c@pMW X.Pp. 
a:_e'lf/-}GUrEWwxB wPˑD"b$yȒH*e-N1eR2eZVy]f$cF"#D UIpJxh!x/k)sA6BL]h >ѥ*U.珢Y*묥8'V(ͭjtxB4&RZOzHvA HZpJA+ /;-S!+kB*Չ"ڈ31⁥6|#Wlgw7PhtAp"|zA"̌"#΃@.DmH'L7PCRZl0@ B CL B$<(tAC@97 lw0K|߀.n8RI43O/"#L %M lC2 .p")ݬx{.n2"4-9#?rB/ #~rvWSw#wr,$$2<6{zHPYo?M8P)2 H4-'N DA ;3!4`"_N?h| y ?f|@'8A4~(B\DWB Dpؚ">CHn Т>$) (J*I*Z\G:1#BH?DqV!6lb B˗&7Nz^@aYf.7EP언̥.wɱ%j tjO 8lMR>IjZ (M6FKHټ aj a|6Har+!s dH0l 9SDu@\נԳ F<CXNO M0`2TK擘TZ &Nu̍/?"'RXdШ"Ij EN])T7%UFl.zOTRj,P PO>:@)ųR1x1ʅΌ `T9@9* ,0G)Gm]0">d80$X X^V*+x" `QʥnTWmSXZqGDf(5FiKk@ 0HBp[H`(Mۈ  i0XiTֳ{s!Cx ȢG ET! ! @@,#C0qL _%|JPЧ '\&"Wb?'(/B91g`Y fR8(6v[ aPr p=9, !@GCx.^WZ'_4䖕}=]S 0Č[ `9fptV!{Jx"Bwe@" ɄKՀ2f^Q F> .EN7%kXA FWBӭV]#7Œq'7xqџ@!ԗ%pZ$L51s3JBb>De)ŶF1a Z1uC@Mcc7̆H@JQ5o*28}s^ކ|3%!d9,GpFwR;kYۭ~>{YJCBvuj!A n5d%A@6D.v<\|<)@dq ĭ1G޺aLq.f%/B,9p_ J47.J9@7Sp)DH9!L0cQ MD{E1A'(npE"{{r9t6S2PR( `TR'(F514QBmcvvd\)!VBF^G0灒6S`W8}Vvy41os%zbepXxf:KGZD,.s_Cp@!H&(.F"X1pb"4yX381wrOW 98n)rK.N p]n'yْ$LQM 7Ikey~ @aOGi pZS&kH 51<->EX S0)Z0 O$N'@fqʖNQI1Sy Pu,vIC)Z0v6 C#q.-e1 9.HGxI iq \~ ICFbS~,W ɕX&Rqgy`!YTnٜCI2VPEj0 @9) N  €1I3OBm35U#*1'669=]YhCRy w?V)Kif𑫈)_Ng AV(AIX  Z!6: <9O80ُC8fH '1 q >F؍s(e58:+BVQzSڜU_C#iQ. 
*y \(50.gx  $ WkT|B`8Y2Hz8_s]ZB!x3\j,Ppg9Z HGOb,hc -B Ac+yI KT}fC&9*Y*:ySHI# pbr 68.ת{&Y6V#?6FvNPE#M Fto94gY1ڠJytT}/G؅{0ɥ*`F"d Ckgķ ]јͰ+j P7Fs ²) DCdTg{y7+AQO]#aiJ\k6Q@đ X5H+8'lx`j @`Ȑ0t^zK;Qd$q}m@aHdKZ}poq !";YHyFpåzDFsM)}ҙ܋ =s3" v)$f 4Ƴ <yb@=kn$pc0r(thxɐg}4t[PPl#_(q Vڊ&L,QLt-'`rUjƒ9tz7RBl=m X %f#ʊ ,0h\zZB@'71 , LLjT)5 Ѕ#` g #P $:BB;Wt X0`,LY9}uٷ Xd15 - SpSXuLF_ |m !0 aASlLj'v\ac{Q )k -m![| e$22` |sH!ca,"sqʣҵ ѷp vxs^`9etFt6#gs2aaC?+!$I SHȠ`ޠř/BE8f}֏C$5CAy' 0j'_BDA# DmoFͲ A|k )ݙc*AFA68G:JCU)RF03sL=SJ`Z&HЙ= NNRA`!WP]-dahA٨o8Чp35PU;2I7pN4 AA} 5őqʱpڽݨlТ EdJz9%9卉I2ދG+ M |brTD=70%PWrST ~Q.C ]׉ 'N3+6äE5j6jÁ$PwnW6 06]JvC~um>d2c>sJqEXBoFra,O 既|>!~_2t^`J5}n 8= ]e/!,z:U<@Ll><'^Q{Jc< 0yFtؐ?곒 E )Li䅦qhjsµϕuȦ Nq#3w)^& 901_7ЈINm.z X oE஦P\0uHF~3Zf \N*h%\9 IC7 \)yF\!4{)qm pX CC CC   BBB)(()BCB¥CĆ őޤCCB BJAz6Lr 1mCdž-6nhӰ1!&4<.& BW/[*V_02dź6򋈡TfЩIڞQ @т 3e‹h r8,EEs5#[wZ) L UP\.ysJoHi! 0@<'$7[iv)*WKx=;f&M,M-i "=҆4-ҎU%q<:/ 2CPDV LAdH PYH0;S<h;];1sOfx6?(_w,ߤJ a<;,~&yPk בN JP`$jK>Hs6Edt ֒Z؈uG+:O8!MM(!s-AȠLE3NsP#(*:~7aNl*XXOͨPDL$@TOͪ3+rƣĪXDvU#Wc@XZ 0hZ?3)t%OERWs^G '$ [C" ؼb@bZ'X6X\ [Z-J`{*ѭı-iWz6 A&UwUƷK|Emg݉"ȹ`uHd,<;"5u/ #.%o,aOd|W؃tP Rq&L ǸC>cFpM>ē)94I) x@5`cIqdbln,d9+yu1g'YG֮n^ jK 0\[f;x&f!lZϗqjKi=>2Tn V̾;|2i_Eh>rٚ<kʛJi˼`G]HcU뺘kC%&+^ؖ1Apۓ A,7[7DuM&ʭG>XgmݒPwM񒗜rCK{tdߛ. c|9gSc6$kgOӇ#QOW"Ѻ׷/TKqRA#Tѻ?o1"[roGپ[ϼ7߱|a%/ xO9/o==[arw;WwWw޴SsvMO[.->ub@ hߪ wR|TO~_}`fQ'G4W7 R! 1+0G'tG~;S/ hסUm3סY~$87IW&8Eg">Q AU!ӄߔ7x @C+AH+B084u3SC 0%  l 8GgYI铀r0%胙p A`dHYNWphG `>  0P R! (rhn_`ᇡ Mvhx2Tv'0Xb/U( ÊȉĈ~v } lĎ3 fs RRq y! $Wx* x;K y+^ Apwb 9 IHfrLis4Y6y8J' )80 3yfI'YJٔNP3(VqnY'h{&? aVdhWLq#8lٖ 'KbQgWyE(тʃ wmS#bQo@(hB>B<1Ot9yVГiO @^{"1(fP<Ƞ H\ JI1#n n1HewKʢ i5M臬 (>x2ӃNBh{((޷$-T@9П9¤<[џ/H-Jgt$X*3 57&O x 0"v 9lRvCť) 1a}ڞ[W"j-^)fREZ2l10Z7*rv~!)Zh#IQI0eaZɪfxլBZ ]IۉQWtڙ/JR* 8J C9` & j,J ~QZ3Rq_Kwޣڰ` Lmۯ 'GwggGj@?!*X /:w D2G ft[v{xzn@fIl-~4Q۔5v:jJW Tחۿ}7yr)i i@\$>!@$$VLpN4(*, 0[ x;p?  
x 0  (5'XRLp3<6l'E9` ,R<|P(54M;Xƪ+W\ {[i ą8煡HƀJ>B'ɒ<ɔ\ɖ"$6٘ᇆ*5Қ56/\äB7量xy.>#b?+hZ/( 'E7 i%|Knh`J6 UWMnt:mvs){(g^7,#[$؆c'Mf|uԟpݟODou?DzCt qI׏ItzBCBB4 "ɦ$ֽۣ (( B%% *\IĀJ"JHŋ3j؂P H$L\ 0c̛8+ r"ϟ&ahL H 4 ! JAȄXev" W{|[rCȪFAڷ =@ n'޿ L8Y! +d`@Őm@!dȄ,c n'WK5W=pKٸMM5T;NjiTٶ64p Hq# cv!$Ė_ O4JD }s ߀ F1 <"Hw G!* 8&l($)P&7"(0 'ܶ⌏lVB8@4Nr B='BtFV8. NUB2Zif$x9a|Hx$Pg)$nzA詨&0K*jPAvz '.f@ ޡyjq!xD% ) PB{%4'f8 A x@ m1$¬! H ->J'#lA J.`+ `>>:::UUU<<0{ܧ*SZ,.Rv=O*띙}vxz~|EB; UWUFp_tdicf[ouùŽjˢӤԇءզ׍ݨILOrͼ|m aI;E<5qUT]GwCb8bI$5g]{`B͛8sɳ@ JѣH*]ʴi=NJJիXj4]ÊKٳhӦH`EڷpʝK.UW˷߿wILv"^̸MCLe(J` ӨSܣ+tH @G:ͻi7+@ A 4;T-ڷ< :D;wРmv x=u"4] 5^M=Qp9<~d7z<`[v65 M%EgaDn:n(=-NYǩ=RpL6dkle4{lGb~ xdn4 A P7d\un\X柀vfjM&裐YB -i~>F*tM:ps^Q`MGꬴZs;X~;ĉ9 |h'ƜsDwVkPD8Au^6@lWNp.Q`w>5MJB0q"`0H:@<@J"mfqI+Đ૵qB`WgJ~+-4k#XZ'Bo9 tg@ZF'=HC`K)&lZ){ @\"P@jF}S0sKgیhGWnO_OLy_or.:j*b*q |ܤfE@J{]"+⏢YMMᙝNx#V̈%IhFB+_ʭ3:qԥl@'FX|D9u`0n8*ӏ0.>ijS&6C ޒ4lE~Pw0GYBJ~0R7dk Q0XD]c(` ,7+W'!X(>;h^pfilSt%; @tIv=`D3# c¨y@)B־l* !W_3~F+:Αǥ~u:bRŎm+Vl iN'Y i-:(u(V-iN@"I;nB*qRLi;xvPt.sI V|\ %Qne>I"~)4JNBx_mDPo eHIOYg(pF!1[|O)$nfK8)`靥f$R:oa#YZR @2ഝ)FbDEj4A(KNckWv`ֶVKFEaBPJ".)Cϲ>jJGShR9)r&+ 7y>|<@";0&Y I@\Ѫ բ8j!iG eځ>EXJ4bsbH=Iy5@{}JgvL@n :LiRW sB4uX HHԹL.O89 !me sn*4F\  8EqRl›5#@Ф@b'7G8Nlٟ,NF* 3?DPOZx(ޑNJI&]!nէ]Ц?f)V@`ҍ痄ɘdSENXVY^MfSWIJWadZj9l&4Jw:Q r399:ծiE-Vj}3" 7 cӡM(+V9+ݕakϜLӔ&(_'NMѝgPِ^nzHO̷T`iR k̚WR6|F,r@BfiP47bȱl&-@pԫ]!!Yh~[4y "w)@?9ϙן򤀵~=,!G $˪dJM׋e1eh~#~+9 y>UWJ#gZ,t!7G&mqbeCN! ggRr &Kyiv#}'PAY &-G)vj/O4ا ]-T]́Rsgg3}'&qrugFC]z d]\:ۿ||DY_NLw(ESR0 ^usNmJJT]UXg9kL3Jn$'}';&8t$ n!?,zc57|oVU]<cDс#@ǡ}qe<I7MN%`S.!Mʤrr@+6H:فZ$U4MT<1)n'ڔmFtSpuurSQG1S'g extq&A9YyّdX04fcTs6CƢc;#b&&d|gF, mPtɒYD"s4y'`JLٔNPR9TYOy Mhha~1hr])B')T.2f27y+lYpgѱ`kVSq PUj8P\9!iq39 9!Iij2!@/:0;Q#К8qy#bQ I{pufpH9Q?/H@aET:p=!8Jy p:03I;p`p'p0"@`)';@i4'0 :J*韪YɞyBp */뉁 P-?T8A' ;p%@ 'C"OpIP9E%P$pyXJ+::QI扂$?&; К:R @zJ#> Z#x 0RjzJ@p;0:?:y Йd*#Pq 4a _r0|R**pZs5zb!N£7)@wjyBP CJpjr &AZ;h:Pqsz( 1@BۯJ i J& 9[%`٤ +˫yj˫dS V;عkY827>@B;D[F{HA2H:Й+zI72.9;`V*:왮#J {ʦ$?"p#Й)? 
:&4+믄0 @)?{VzGʰ4/}{sI@:uzIFS yT'O{:К+H0   keI 4˫"`\jث4q|+0ԟjڸ*J&`8+K+˧ ۨKھ$EdKߙI5K`Ky`˻\|78!Ky09'PPjJ) * lIئʞ { "p:[KJi홰&̱ [FHܰF<8+u$`Xٞgۮ80˫|/bxz|~ǀȂ<Ȅ%KK6O;Û`9VƒyJKJ;pՙ )虠z*jER*띚I{PZ ʩ̩ [KL ,H6AK#$P˺ ٲ2ÊĽW`O@' ë`[:P7˧ P0w x[D c|PTܽ v;"}+ j&S\.;K4sʍqaɣ9:0Ӵ.)EmtӌLr ԳBHyJ'M1TmYщ5Ƚqօl154Gmmz|qpz kY؆}؈؊،؎ّؐ} 5|ٚ9y a75}ڨڪڬڮڰ۲=۴.lּ؃am!@=]}ȝʽ̽ܕ4:M؝ڽmUi19q޽M3Em=?l>^~ n1 sLF0!+s)U~B}t Dll1N& p.sNz{B]S~^ciLh||C1H2'&Ba!'b];A [Fl{K{~k|@k1>^a8afhܵ; GF&8AR,bk۱ħ>AvDVƸ$XfGbAVce{sZyIВ FDaCяc8 J({2~ aS(VasXNL5TiH2yS=XDCTyB12pŽΎ41(JA]i چn"MTdxs|HԒt6u@+m3/d|"p ?o_KD`C+4O'?,$$NUgJ>B?%t<>u"9۾uQGd#2 dSKkL˔I`Ew4F?mrTM*(:ࣜ]m膑LJ? p]F|] Q,w]CP`a\vq^  \- &˸.57+)6ACEGUWY[]_84`kmoq[Iy#{:uB3($ ƐLPSs[euhaw%CЫӰCu˻$w/ú ޻Q=8QN}c.'-P ;Ks f`D/AZDG7y)"AOA A4(#C   lRORTJx.qE's&jN]6NA ]2(@!<@!C (LVR^C4)mk*ʢ/E3`@} 6qnݠe(H A;Erb o&6a8IK塙|yѧW}{Wͷf8 W;Aa)Ů,:֖ޮ <"9)a=h $Pn.X̚BPLQYl?ϭ72 x * XQ ^S A$#C$q%Lܡ;-zAk$ (8-3b BFbX 4B:) TA ="`2`Mj`!秂 Z,Y2%C,|߉jQOhU#v`"n<+P ˣ`ԭ.`1nxTnep3My`7` (,# n`6%xِ㗚WqP  ,C?u^^_hq[@,vi&pІ)9Fu/܁8iIsVoA>R|BJc) hm*^BQ*_2b<R?\,*.H~khHZJX $V6 ]1 0c^amQ1ST ur㌢@~];! oj^}"x1wC8 g*3&2!8?LhZ4&` V`@SO4@8B.5j(!R@guMyRN1H. 
:9 ѡ!BZҘ!Q[\O'+Nz>L"N"W|}uv @Jщ&E"o% K/JM#FkX{-&ѕ9U ;(U yU1+m;M \DG}^)Y {,LkCp'j0GR#dJpk7,$CMKa&G/Iu4- #d.<9!8e,*oZ%`=$bfŐ3F|cXm|eCdM P0)ݐ5PMRIErZʺlж6QKmbH6SeLʹs9"1C6rhIF.ʉ PLWPY)Z{x/$Jld6ׂ+Uwk|ke m̫)όE)RQɆlJ|Yޥ1WuNte2Ap{ڷ&4 qj;B`ik5p(@zo@1T(y82Q ,WzwgW[1Y| ?T!'39P۝ёΈ=mvr/YE>P6Gf`va1DTؓ<Y:_Sr(0~=9ClC"Yϙ~y'X)Id YIٮçGAz;H^p5{pytB͞2:iH:4?75<ſC%H@/w}OշIwmRRe<8.ψJ d01O~OK/)AOdB}Gm6@91p=k@P]50Pp$V06Dupui0fҎm0 o jq SfHj ި M df ;p ٰ 0p0 g # P {` pư i 70f/U .`>0`0P!%(Q-ko6Q J<` Dq P IZTa^Q cf&p@1X1 L`r JJ>@v@=@q01Kwv1 1 RQ @qD `"@A} H N @`u O.Qzq=LO@:Aq":@N1 Q.&&Ӱ r"L OA8rU#@ ,rRD =Hm )w`~nJ.`r{<J<."**'@ R:sH DL4;NR`.q<414!PN#E!s## $Q .NAF䲍:`t 7D``G@ <ѓ ,3F3+v>`:J<3!>S A;ks& %R3Ցt("Tf, ?r`F .M:.&w&<T!>G>1w7w@y.MY`*AYqq4KRrM6.W%18ْ= J4p@ Ks@Eu91w JC;1IQG%54K<CE2TwQ$*k>Qq?4Rt]h`b2O.g"U$S :eCgB%A@>5 D etQrE7 qHw@T N7 &T=qV^ A%E{2TG@?FVD1'}8X-NT94Uf7ozSN.Yaww8Ow`3'N4 J[1rx_UϸoQ7vB0@ 0`y1 ۷ xnr6cp`%w }9M 0 ԰or`}?K]V_5?`07} .@D FQ!EQ&Etl뒌SDt?A@#&+ X FV 5'c#րf! ؕא"' eI3%-3'S&MPN I0߯xHPMyA" ӕ;C|Ӓ' G1ԞX@`{.:Ϲ{:qRE Tf1raX %BoL 8ZA@S/vjƓEX12;i 5kAQi?hxgxñ-Fdoڠ.!v ڪ t Iv5p2 D[i4 ק`+gve@=/z#5-M 3{`)-~)G:-0% Td!4WEG1 4;=TAI`${d`,;'<L3/}sos-!J54]qr=Q8"AooAAcqrX}0c7ѡ/"&8D|MEQcmƧ.}{y0 s|Pn@y0 -`:bʩ\ wI<Ý~ Р>~#[%~)ޜ#km-ѠbF+ 1w0k`xG\4+!xY]MO¦C%c$ p.%. 9~jA brBt?~F8˳ wPU` ZGnic ؾtZ`ZkWT@a:{l/4!F0$ &Q(F8#,̦/ڀBF+e1* ,_Mb 6*g`\DCb4NmF"ϯaelB}$ LB B=d@J6! t6e50$F"SR)5W!YbW`f PND.Q5D$xEfV6L@S `hnjDjr6EzP2ҕJViBUU&Ub"dn!K P2j,qE**2Ҩs۱Ar˾̹F^1,b;<jf\)ٳnDn0pԀ;qez(e/")/쇳e<@`P`20kJH (" HĬL,,EBYd6\kĆVj\%LF8e>u$D)En1əJCl!VIX-DHm;Djyj7!ч#n)SYubK9y o0Ip {pPLceF+A^xf^TaXEXq !{6(O] $Ł)/t^|awNIżSi {@N>!uMSg&[= #@@G 2|7}r5~(S?'"CRL/cQP'32C&p]]xrL k~r,,0h;1*1"*أ9d4R^rնB5@%#ETGܱ 9Ckɝ# C"2n1T2p)$;y!94#!k$ $+'>2lPH:drYAKK+{WaƢ^V}̆lk=I ́rj3 h(?|OY&;8Y6hqT)vkкcMiZuvkDp \ö ݭ01V:hkt@#m@po]j@:JR5%> &m:#D0;;`2\ @ `'x5~1] F0n7, + :HzK>"P(.n ׻Erp}S @ (WOqìb+Y+_߼9!ŏ#ԍÅc}j=_˞Y'Ǽo_gk|փ-~?c_u}}٪kZWñ_"'.o??_7^VJ16, bCi p`x`7_2J rm 6 F V ^C ڠ[ 5`NIJ! 
!3R_Zba~^CJ V z5a>`Vaa!5D.ࡈ4`H 2B| 2("C##*C$J;Pb|X1`b&&G'>*(`NCbmmb0+NbE.b/b` -m!c2*22c3:3Bc4BZ08"1C,,5bc:h#mp/x7C88 ,9afb5|"inAj&kBk&l6l&jij*Jlfo&pf'q&%)r o't"gqNqBsFrRung`guJ'wZwvfiyJz`gzf{g|gn֧j'z%}}'~~B{('mmfk&*6nΦjx6gy~x'yZhr^hb舊h艒h(*(ƨ(֨((')菮'};PK!: :PK&AOEBPS/img/strms505.gifpfGIF89aH???@@@쀀999000 ```ϟrrr///___PPPooopppOOO:::<<>>GGGUUU!,HBB B¯ĵƻë ъ܅׈ꘜuZ,V[{% XcE$`EfdhbG)z8RcI]S O^J\7S˕0I͚PFvTqMUtTQ)hiѮ\eu(دbђгf]-[An}+/ݹv+$^ 4]xq㣌)xhdD-WzYPfΛtYtѥQ;ĺװc˞M۸sͻ NxoƓ+_μУKNQ XνËO^rӫ_Ͼ g.?(ן* 6`obƒVh v ($[`, ČATo284NBp/&D0A P[PdG2i"g5"A 4@tp8@ eP`%% ɧTP#~b:p (0(#z橨&w]v6c(P>"4 4V)o&#FΨ32k*&BHJ#7;æ*l2JɌB* '@bip)vܻ:9}#U8ffna&)M>1槫@.h+*w `<$E 9  > |ot[i NԪ޳ԦэlRѧ1d,"4ƣ@B/ɊJ W(YkBI`SX4& LD;ri lCV`B@@$6}$`b15]aHDhL6B*:HM1V9;qDNʾC *M~ r2_  gh.HX#_ fKs9nF@$ Ic017(ޗZ(@wfﺔ7I|4d EY=hnCDD+As6=THeY7zsLisFp\J׺uFcgQhTk .E2bHEqFЖU0ؑeu+\CFvnheV9mz`lgKͭnekD'`],Wż"VRhDC{MfVQJ @; J`hv=WZktO0MSetBa*@NҠ 'F JdՁz]vbL2@({Yg`8}LZ>VԵdt4Ѩ5VنX_uQ 9ĕiH ?Zv :ipYNh.3 RE^$  zm"ZjI O#ƔnB)U'iqgRb[рT`P XRK"iQn];Pt#͗OSDzRٞa![፯xɚy B2 Nw}hipiO{4X$|ԓh'x ڣ)f/%qU _kHrZc]` QJ&ppeig qWRq&I`kHry%R5y5PAW!)ms3yG+ @\GyDѝ G:#.P5((A)p:py ;@Wg&`TջE!<":vAwb87^1dAr\3i sJ6m'>J5*8C2_A)3=w)cm:>4&!@#ȗ4RB|A [ЅA}> rBn8"x0 %|pA0cHY#ЅeBPPo|qhsHb R]j#`|X0g##{ʼnxX[؇(7HYA%`W1_E7*8"~ x& A("5xneR`m_8#heH#8h2hERi3%[.p( )p/Y'` -")cC40؄s:s(\qh$Y)P=:2-uD6_,'+GT'T5S!W"o7Hr9zCSS'c)zcwBnǒ[0A`ng~`9i"m%@drC+U#|RyWYynUpqi1ID.GP$|-+mR-œϙA,b岉֖I➹ыE:ٟ:Zz 􉕧ۘ:Zٖ yi":$:mZ Z RV02J#'z"Xx=y9j#p%˨(*D*(J裂٢Nڥ^`^ʇ`;x!B`gZn Ԉ #59(jBA*`Ȫ.)*:9tN!*TR:Ya%Iɺ!tSv,)9jz?:J 4iP|zj8)ʤ(%Аˑ' hʤ [0! 0˩P ڰe*k;CvX2;Uz[Ȯ["8zp"hih<'[)ÞaOKQ+X9]{2 !`T )vXHp V2]"T-4N11]ߜh-tc)jJ)s2,e'? qVkxeޑ֥gD+G:"p'^%7U6- ,q7D5$N%oEث%-zqmuIN#//3$jccR>SK$+"Cl"=\nUi!%=:x1$#4ԩ+&~os(sBC=vo?tm.'9ѕFN#['{dw#cwe(羁 T3ܐ:w(czJw)НxWs=cb>/A#b=EW:>Q8'9WF~xSCݘQ5O(^ڲ^Ձ܈"bZa*)*!_(bd3z8/^9Y>@B?G,DH?<ojxPR?T_VXZUπ;Ob?d_fT>&ӱ!j fA"1jKb؜J|ya͸_'qbԇlt!9ÉH[;6Y,9{QxgWɈ4Lhqx.L#M !HhhLx"ki hjo+!`&BBB#ABB&AAB!!!B#BA'%AA&&Aɐ%͕%&Ճ%܇ ! *\ȰÇ! ! 
,pD ; !Ãa)U%5B QVݍ8K#F,`QT@MLj뮉*z1fdC)QdBciJ80H$ʍ.)p_^P`E˘3k VHA.1YD>hihBzzgkp1\.fLSAf,,tm~yփ LrZ/Dv錉w#Cǔ!$Iu]9 d5F֝g^ \TBb{E k`5Z$C) cs@∃x-EЄ%esy[}[|=g$-@ErmrOtMo C #l&!(Uk&m+ Rf-$p@7p8nќVh/DBH].V(qu qIb9\lQ=$U63LCi2JDԝɞ!* L&zSNq1B8zʐbJJ#VdV] v\;KYMqa!Uk1 \y$~A\03!& 0‹4ςlɡ2c"hJ',2U"'µ1!@ " x #\ NpR.!yJjE`ha4385;QɌ7ؾh: 9fvxCG%L\%3`3,#9D 7PFA,d@o~9/-|>n&red|k}먗c挟 "R/| >A !qSȄ/8'"4̠7z GAl*H2t0 gHjW(@ 0"$H"у#8SSؿ*9PH*Zq0aWxz;b(]DhLר}/2ґMd^-# ǒ r #"Y=2A! 7R@t4Hk('HL2![0.`Uu`_ܑ#_"S'd@Z~ʗ *eWwJ1R O "%@,`9*dH=ȑE:!d4~dE2dtR  ! A͑+ODp/AJWbHJD3'o@2ċ;%!0EX 'dd!7|9IƲL P,ZTe(ROS'|>j,@S1QT{AzHyi5֖GFB .t):{JR !!Ts T p)J1j"a7H,jeCJʈFe5E i[sVY]i 6GkƘSQۍ b#T_`cl`vcί,bfCP+AN/2;*vCi-YXpBw}b2~ ԤrO-E권 8J fA}R!*V%`8֮F4bhYQJ9xW ˿n¨HQ$4р_ 4ʢBаN|݋8\ Ɣ!*wP0P7 rm#E&0L20cf*<1_|+@BT}нHG]}4-Dy Am'\z>r]+d{Qkl0ŌxO]LAؗed&Nc6ZkȎ2lb~%˘ o]d# [.5NTVmHp:fs7Gdo.<vDApW\!^‹y>2.ȫן#09閍8ϹwBD|NE}/`җ;Pzm HXϺ֯^`Nv( zy~pN3@*rOxO;񐏼'!3{wB?O7Wzo|zև~ϽQo2_eO;_. O[W|{'! 28O28AU(b @x& |tp xJP0pt($ 0A @pt?4qBtA`~@ ` x؀R@2(0oY%#֘U$xAM0>2%((hؙ#@Y%&9rXG))B(q: jFYx*♪ȅ)4cyAN_8څ*SBɄؔy){ʡ,ٮQf"+ꔱWtRǺ)Ȯ[9 0W{ɕɖ_@itIjxcَ o )q )9xy*P *)+=HJxXFș e˔@ɩb˄iAKl8ڛ*ʔ=CǐE  Ƹ!#襀{qtHǧ}>ʍٹۉ+{ uC [fVv7Cۻ[#KAps<׼0йEYЛڛskLfV0iG+kQZkrľguk+"F's KL>I(Ws> Vm$|,a4;0>q =4[2J"4H*h%ei51i9| ;l MƠ ]/%5q & rf,Qqk@0i ; փKE9Ѱ+"Bه_Dف< q0(tM]4acN1n . I Wa%!aM,dQ3'(ݴ0ޚAX~> BSS.* q$"*v (bf`'~ y "EB_S5s!,a]%v@ Y!L鯞pUc845W#' ?Adcٴ,[Ҷ&/ULSaNbDP[wbv:M`[4 -}@g@OC[m lP`v1&f]ƈN2B X(5qp11@Yuaߕ_Ζ>˽ XN YO>?װ WNj L d!<->ʚ]߻Q& "و~jF]]x?b}-Ѐ[̍?هlnƕ֙`k}jDj?__BFt[?Jwm/zʿ =}?Ϗ?w_z~@{_t ?y?@yX??{ B-- A3ȅA߈˦$=r*4oÄx+E,(jx!ǏLdB(SLqgHb͛ZJ̒'y JQ'ѣb "ӧB=`i5N!UԮ<+XUa(  V׶5ŅrpR\aVo$0һ#J,%cG!Wޜ2獗3C(+ >V5^z6լ }޼ -7p℆>xƝ#mj9~Ko9ݧ =[޴VRvjf ޸[.(g6 ˠ,o֫%oqM3@D@]!/Bf 2ʀ!sM^ 3_4@BD@r!,,KԴ!C/2̓MЇ<ԅ0MZ﬈օ8p؄͈B̓$3gQ ~RcLp@PH-AS]$tσ 1z=~t^^W8'E8aLCυ.<Ѝw_]H{5}e-G&-}Mc<L  @7MAIPE{$G  ~_)'< o 4@Z؟5@vB@`z'u7¸9ySGwng(u|E@6a !D`6%ek V\Di3"$LZ41 M@P%z#MLa<ҀJ |P<(730rJ<@1 bFa;$ŝ1&Y J$G.22at 3"}l88Q/I(c#Q=5@,fIC 5u9CZn(^\˒!EܠACD1[ t̋Gw4c!@K p. 
bHCPHlM(fc8 o-CE.YY x"mрA (9G @e ܖE ud_fH&ZJ^0^#WԬ 5 գeuifOH@`K*^ĆT> "Tq+Mdl@d EDD ;"`P>DWlIӠmxnUklMlSMXnkPS|Unm1k*\zuz`&򶱄tq@!aNҀ 5Gr==)/*p΃vB GbPUm MJ^!$hJ"m.k@0@)O7zs"Dxf !:p_Un-+؁UeʈX6-[IGcgاsgg gxlkUu8^- '٢ İ~|~cxzM+vp-7 CoLv_b)ݜ4uBI8t+ˠHzd$^k6C|+VbS8ה~BPc\pUFkB @Aq0fowY'f3Y"G힛rIk,!e!ӻHyhȳlOGf\W!Ӑp@IЄ8.5=}hŴ'1IUjň}rְ,Ui7|bΉcT2Z3v##4!_CDDpHzs%I]C/}@5K*}?5,΃ `/d,djq )[& %Sk_j@9S c: ( MSoe0u8="([ sX%*He,(.xa+p88!p8X9h=;8:8@xG(IHH8{"10=XPTXVZx\aЅ^iXbd8fsg8`y(oȇq(}8P08XH(x=8X(9(?P s!~!'"'x))*J"*2)"(؋*k*Ɉ*Q+ԸH*8hvBȊ|(Hrh(0$Xx)H،Hr xȌ yb۸(0 8)Ip9 p)Y(i,В.092 49/Y ;=ɓ6BBYDyF7QRB@ [ Z) *Xbi]yah_Ƀ,Ќ/ 'p3hvyӐ޶5ї~ imYsΖ1yl6B!Y i`=^ D taF^G51!eA1uِ@;9]?p7МzٛgA^"7A@DTuT6$PV A?ETv%k2nGCIDUDuU 4;pGNA! DNA|V ?is`E#EW  Q1%Q@$Ր?0Zy*k:D>4qMgg#a2#g)+Ơl{Cq#;W*gYvG#4nSh>U1L5noj3bMkO*QzSNŴZcnVA!A5d L#6A4Dc?fAt%{FG?vsbRIv_4;u4Sy^4ӯ H7A r1􂒛 V}Fj% s CK ++J 'ZEpY4uQ'1'!sC I6E<o Zn{n'E;v[u>ﻺ 2!1^4WzL9%%jC`1bQ9@qm3>1"QT!| ! DY/a'8$$?QfA ]zZ@D5>'!1wL ^3^^kԥc<SGHK7#A%7A5k @ףXm$aǕsaS?9^1U ,ZE9EzˎAo1;qU8͚=wP1,ŖDགྷ*A1uD>q4M\̘ɲ̙V6sy<` Gg$e!<>Ô@W{w+-?c| Rl.Fɣܚ%ʱ9XM!gv1j  <ﺳ1 a*2zXZ\=`b}չ1.:vH5Wj$U0Kz@W?W?U#4nU5"u!؄~56bUT*Ep΍sqٜ٘ٚ,IY=ڤ]ڦ|ڪګ  1Pۼ۾=]{} `A ]1001ۭ0- 600ޟ*ޕ߂ p' a6  q>0ApA. yA> PA`*,~߆P p#-. 
ͬn !S>6n!`TNPpNjH>RrA c>}P3NpPN&@~.7N~陞on< iݪ^N}}n0.7믾nAP91NA.^...Jvp}.7؎nO}.>pYK|n..Na~2n셮?7?_P` 4A!# BPQ^'?*.AP.n>>/7A?BPc#~LNn&YNA@nj/ o3?>\Q0B``0O#/xz|O?ݾ m}.ޣɍ0?z@?_Տ|NߏP%?_O B BĆż̲Ƕ) ,˥ϷޥABxHÅD ˔R  ?L9!@p񕾎 CvB %f`ა8<À7=DžB]iPB6 )adE&XS &L@|>hk&fnMYCu`@VTC@F7hE&[7ԇR`g§6@0!.X(tV ""N:#R0R-{{H\w#sJhz3l~F3P=/!PQM)՛{At!<fRdwdzWE ݊g&yŃWg1_!ӏ!;4Y 盂XLlBY@1Y`z'S,yK, Ha`-4 *bB,[y}Z`ȆEZt$ax0xHP!#CgiCm)uBdۑʙ¨!AYmdA~  rwAr¯KL"ݓD|@JnIqdB8]Zjh{ii`PYR \!j# I or*p ^5Ux$XQWUBV% :,hPfrzeJ·O#` ӯzqR\Э+UOXS"v6IfxH2"2H tA ې]=@o65 |uքIn\"#d tvG[S(wL4 ^xᕏ.H`;o"XN&c*ú{^Sb|Or(,O=_o=}/kb=;k"'X~l&o[`qAH:'H >;7z(X*iH W gHҰDvxl@ H"HLDdjS YBZ)!␊] HFjRwX1jpb:Q_FU IBr !F2s$'y<)|&+l2'X(GIRL*WV'HG?IDtYK5޲b^ Ic^dA%RL2qPf6A6s@59vJe!IL!Dl H!)R5D(pssL49ET2 Є" m@*hU GxBA@0@p MsJj 0``FQTE0lf&pSVU*Sg: 4)N{QХi\= z 0j9T5_> § 4j"ʹ@OvKVt,,ZϺ⮼ +>Փ 0V$OQCֺvY!p 4+URzKkE[vekIZL DU+ D %xCvxu ժe q xW+{+^UCa~ϴ0m@: fT@ TuV} CԷsB+pX 0})S{ YzV*PO H)sԓFFN!#̭'K0J` <@_C>oPVkN[~ӗˢǚ87F&Zǒ\&&Gᤞթ~j̹voT# v\يV6鰲8t]nnU΍pl+e??x1 ot/Ưx8L]R<@UlYIPQk=A;N՜;o_oyy/Bq{R>joF N_|Oz(@ cqBԎO`%$PP>żbz2gȕ[\ 4~@'Nz[$N}xe&sK\V[~ed EfufQ~Xetc,XXXTRgPQͷ{g ek7Hi#Q"PPFU? !U97 r㶁PgdM8 Oh30u6gT5ԅnmAtxz|؇~ CXXM]=Hk 4X a8e 7qh 7ItH K+I 0T=픋|hbMA?8NK`ǨQXKDu6xAmQ7!"D@wh0 :$ 4-! 4ހ# 5/;_8 $" 7X<>A"S)kS77 ٍ,1Q3wA+y(!X:7BQ&1(R3A\0PXBÐ`a` 0!c W',, ِvxQ tВ.S6.24yUB^% +M@, -9(@p7aə )]9D VyĂr93$P9(>#Q%vaD N3 B qaf, |)b!Fc(əPr 1ɞىE #) yɚ*ќ!q( /50ũaJy7 r+!2~y,Aɠ)1(h4#Fʕ剢S\P'Q. )`JXA`(! 
ؑ#:ZtZLґ%qp+ >:=sBFV:)H#VZ)KٚVʚC@zy+PFÐ&YQby3gkJUmzI%$qJ^$QuPdYiK06uJ{YK7ѪBⱤ$6YcR+Mz|N,UFQ+AY,vABk:$\¢r5hY%M4Ir'Y Pɨ꒎J67âC "LɤxJ,귲cy*wL#ERK8ۥ Kz i1/.z'2/06`#8bq(b#v!.٣/ۗ1)"U1,_;q{g%c%tK'Y+)at}{jᓳiF]3+ȪtiҨèNޱΠȋ; ;L"ψ+K&kS4T{kBLc{ ډ;Dx1)[A9 T VYs ?昿Ӻ<\ vWk@ I-F,!G#lI'-kLR:ـÿ-5L_- es E <612%,9?&AXѧqa2x2q[7fqn6V1Bk:9_!BY7Z25y ZkQ±!\)a.|:!"sKȋ0#/L-Z)ƈ+!`(Q%83ّȋɯ!|P ,L)9"l/Ҳ"r:[rQ/׌(H3 ZxaS`46aHa\\͜%/#h <KBXr1A!»4pܼ:2#rۚ,,A*M6LwL*+U% }J.!eA!R Ҳ9.R-l ``)d\ 19}чˣ)Aо,-<.̺ 6.Js `M 7Bӡ7\St61}(BAMȦ6޳m1?R?B(,iߌm 3b፲"n /3< +n IL9.k 103,q Sd⽄?>}$CdQ&\(Kk,ZL^K5 ;(\tܿr> DxBڡg?@ @N)&C>>Id~tEobInE%03~E륎,>C>INJ>좤@4@Ȟ0~J+>x jNoh:^ޠnqX8ؐijX&'<ɋŐiHsxl+NÐiބ}QnTUKOm(lEzp&gM&Y,TWwG_wH;>l?}`jNP0 Q=v7 mO'P6fܮ"6 ?{ KGKB뙚Kkw]I[Ͼ˟Oчy Ah& 6"A:vßpU`jZst݅(ɇ*^3s%\2:b71N=8Nd)/~4.iH NjdX!,R\PsPJ YwZ@i˖\SɕA'Ov*eApe `)Dh` a6蚇>BhkjSXzHd@i\+d@e2*j& 8BĮ>`A 1 DQ@%&&vͶ{r\Nx^i Z0ʮ,[Z Co"mxbo }꫘ṳ#3 fK0j`'nI1oœJ#Xo'*+w4LȘ"i!`0!%Hj"]4<-G~ 8GLxXc /Xs2ċ~w砇. GvDgJgmn"'L'+{28Fԣb}'Ywo2eo>)f+ c7g5!3HǿMc"U?C aui @`W&#4x%8̡q 50Aw @OB0L .F !V d B5 ?3_a,W p#=AC"8P*> J׹6U`@`!! (DC<^iL’+]QZ90{4W,H!\x%t)(e? g8;kV+R&Y|d W47R @ML"fmqQd2A arfH$a F*؃\I@P vL/Z!"" KSA q1*U&MnB TUpmPa6wJL[@V&$. Id5l=S8$P҄:0 XaW>*`!ЌzC.H0 : V.dAXY5xU^\5'5$QtT|:`sVH(kx F]fqfiE ԘmҀet5o)>SU Kr "CP suu`W] ]8FzmⰆ ; <kN}VM)TSHƂ#gFp0 xxv.XV1!C|`A0&0NǷ9G4WMleł0Yr>QaY#Jpf8V¼.q"D z2JR_T@.ЋPcVl3$/ 0k!HMejZKAWW ۭ1 QnXA\^{. Z%ckr* z*ir܋_G9r R_ U* џ+4"M )&P#,Up[4 -`;X&pjP۵uF *{Gu`6gZDQHpH@F٩$݌00|%-@Lab<(IF u&w\q;B5NPR99P>yh.H<ڬsZ⯚J e[j-eNaY9";]nc{9*b /p(UspٻSzA/AV49 ρ hӝP;gaVI`+ɗɓ6y:BYǒF$Jٔ ĔNԓRYVyXZq\H`9"dymh&l=Dp9@tyx&(@$΄ cI$)rq0DRI`)ٙ)yIɚ iiIiD#ϑi9BP7(ɜ Ʃ IՉIӉ0s9Ig虞Pyɟ韟IʠjJ J j *z! 
%*'Ji+-ɢ13 12٣Y@B*>ZGڞʡʤ MJWjO[ͥQʥ_S*ZjR*2z)oq*<42ddf8fd$&$4fƬtftf4tĜԮ4ԽƬtĜtƬ4t|tttttrԜԚrttrttttttԚtּLbrtrt֜trrĜļf44df4ftf4trrttԜflrL<>LV,лgo|t|\̹ϠCOLM;=ﴙ]'@9sم;wӷT@GKװ_FN}{jӎ$Pw lx /st?3e ?;yԶpG Gs K"8!q9alHuϩMDwGy詷b{/B|2E`@ @وMSiL$gۄ *I6|X@M"*tTZMYrJ]L/q]\p#ryp@zbRwI0K8 hihpz|Jx**^D.q)$jkrGf&fLoԦo!LjO~e5=* /%;1iFmpiTv1'z(ph&0[0yy ގ  LuKpwl({5"q}m1&.<&tV3/,q%V{llDx=͡u6hf6xnZ$*{-wLe"H 7cn}S)74Mzk̖~:$7N~fe5.fj.6-&b`@Kɂxe4s;g}>0Ese[A4igy$C`3=Մ5p=mv5*iU">uUt^7 ְXY jg͚ͳtZ.PIiU*&_bFōwy2e~0a;8ͩNwӞ@ P*THlɠ2|kiRJժZ)2kqR` XzUhMZ̵pk*׺xE ]׾i+`K)=bƲKڕfլ $ ܰ٠.uhGKڱ`erY̆v'OdٖTNK6B8Vkێ qY:M oې[[(dV踳hDЬE -H˹󊉻 Zo9;_Fv6NdKvuTz77ʪn$aɞ“;˜Ml˿>8iM@3OfX ;]7DkA~tN,e6qqx^Eւ!R&ZK '8-u<7vF[ 825\[Y@卉ĹĹ"]g_L@Hr络7\f -[q[d>'s`fH hژt6Xlf Z+c-2}hXwS.wk5 ՜wv-Cu/;Y@V$֛Ycs<^G1~ioؑ[$y9-sp8LfVM~jYI6䝵­wF'}Mf0m/ o`M2H\9ϹΣ [wmr}o_^ |yVjD̛;QX)g.ײCE%*NOakvS;OtO;񐏼'O[ϼ7{GOқOWֻk@B8DXFxH89(;XIPR8TXVxXKM&8ADS(퐅npr8t[]dhEqE_(@;dmȀX8Ux/7V,{'~H\F󇈍lH8h5Aq!gP`~b0; 8Xxiظ؊8hPH@؎GhH,$EB&YQŁ XxY&ͲrFĎhP8fxiIxfx*i,;` يa8ؓ>@H H8i-G_ rH&oHg2qŐ(ph"<9H6kɆli؈o9psY;(YyI h70JJA4r3s@y 9oX1ʸن=9e xf~)Yy Ș61 2t YT ѩ`i { Qv@KV&I)9Wt9 (|xyInyYzFȜHdr@Nyhp0R| w )Imنi牓2ښ40yɒlxu)hdHJ+Ƞ57!WUyr_Bp1]jrW)e$Z`fIxi*Cux٧Ȍ{ʤz :Z$w_X ibR+ I h'ZZ:yDD:CuzJ* jez#zȚ?ث~:Zպڭz蚮h7:Zzگg\7{~U W ۰b[nu S$[)3&*+$(.Ѳ0;~E6{8:<۳>@B;D[F{45!L;R PV  0YLTl` g% @ \ |{ hl;^Ad P{۷0`{6Kk2(ⰸn% y˷[a BaA?byk+J@aaҩbQg2񻐦og_d"WA8qW(hހ@ 2a @9A@HY k"[0K` ̿  ˾ ;\P13A˷_qG ҩ7#9W&5-0LZɝDۆ+uPSƇ2 EfvY<ā]0 [Y { |P ;0;aJqb'p @K-o\ c 0 ˷&f! .W<+l8>CJIʍV[ ̡xm -_:ee=<7q{[[<ʬY LҀw%?"8aqF t̾/a 26q{ /!ab:QJ/#u:`qT g1Ë2SPe Q+nQ2mj+LHȊZBofĜ`Puu\ 1M! 