Oracle® Streams Concepts and Administration 11g Release 2 (11.2) Part Number E17069-07 |
The following topics describe consuming information with Oracle Streams.
You can create an apply process using the DBMS_STREAMS_ADM package or the DBMS_APPLY_ADM package. Using the DBMS_STREAMS_ADM package to create an apply process is simpler because defaults are used automatically for some configuration options. Alternatively, using the DBMS_APPLY_ADM package to create an apply process is more flexible.

When you create an apply process by running the CREATE_APPLY procedure in the DBMS_APPLY_ADM package, you can specify nondefault values for the apply_captured, apply_database_link, and apply_tag parameters. You can use the procedures in the DBMS_STREAMS_ADM package or the DBMS_RULE_ADM package to add rules to a rule set for the apply process.
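For illustration, a minimal sketch of creating an apply process with the DBMS_APPLY_ADM.CREATE_APPLY procedure, specifying nondefault values for these parameters; the queue name, apply process name, and tag value are hypothetical:

```sql
BEGIN
  DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name          => 'strmadmin.streams_queue',  -- hypothetical queue
    apply_name          => 'strm01_apply',             -- hypothetical apply process name
    apply_captured      => TRUE,            -- apply captured LCRs, not persistent messages
    apply_database_link => NULL,            -- apply changes at the local database
    apply_tag           => HEXTORAW('17')); -- nondefault redo tag for applied changes
END;
/
```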
If you create more than one apply process in a database, then the apply processes are completely independent of each other. These apply processes do not synchronize with each other, even if they apply LCRs from the same source database.
Table 10-1 describes the differences between using the DBMS_STREAMS_ADM package and the DBMS_APPLY_ADM package for apply process creation.
Table 10-1 DBMS_STREAMS_ADM and DBMS_APPLY_ADM Apply Process Creation
| DBMS_STREAMS_ADM Package | DBMS_APPLY_ADM Package |
|---|---|
| A rule set is created automatically for the apply process, and rules can be added to the rule set automatically. The rule set is a positive rule set if the | You create one or more rule sets and rules for the apply process either before or after it is created. You can use the procedures in the |
| The apply process can apply messages only at the local database. | You specify whether the apply process applies messages at the local database or at a remote database during apply process creation. |
| Changes applied by the apply process generate tags in the redo log at the destination database with a value of | You specify the tag value for changes applied by the apply process during apply process creation. The default value for the tag is |
See Also:
Oracle Streams Replication Administrator's Guide for information about creating an apply process
Oracle Streams Replication Administrator's Guide for more information about Oracle Streams tags
The following sections describe how apply processes handle dependencies:
The parallelism apply process parameter controls the parallelism of an apply process. When apply process parallelism is set to 1, a single apply server applies transactions in the same order in which they were committed on the source database. In this case, dependencies are not an issue. For example, if transaction A is committed before transaction B on the source database, then, on the destination database, all of the LCRs in transaction A are applied before any LCRs in transaction B.
However, when apply process parallelism is set to a value greater than 1, multiple apply servers apply transactions simultaneously. When an apply process is applying transactions in parallel, it applies the row LCRs in these transactions until it detects a row LCR that depends on a row LCR in another transaction. When a dependent row LCR is detected, an apply process finishes applying the LCRs in the transaction with the lower commit system change number (CSCN) and commits this transaction before it finishes applying the remaining row LCRs in the transaction with the higher CSCN.
For example, consider two transactions: transaction A and transaction B. The transactions are dependent transactions, and each transaction contains 100 row LCRs. Transaction A committed on the source database before transaction B. Therefore, transaction A has the lower CSCN of the two transactions. An apply process can apply these transactions in parallel in the following way:
1. The apply process begins to apply row LCRs from both transactions in parallel.
2. Using a constraint in the destination database's data dictionary or a virtual dependency definition at the destination database, the apply process detects a dependency between a row LCR in transaction A and a row LCR in transaction B.
3. Because transaction B has the higher CSCN of the two transactions, the apply process waits to apply transaction B and does not apply the dependent row LCR in transaction B. The row LCRs before the dependent row LCR in transaction B have been applied. For example, if the dependent row LCR in transaction B is the 81st row LCR, then the apply process could have applied 80 of the 100 row LCRs in transaction B.
4. Because transaction A has the lower CSCN of the two transactions, the apply process applies all the row LCRs in transaction A and commits.
5. The apply process applies the dependent row LCR in transaction B and the remaining row LCRs in transaction B. When all of the row LCRs in transaction B are applied, the apply process commits transaction B.
Note:
You can set the parallelism apply process parameter using the SET_PARAMETER procedure in the DBMS_APPLY_ADM package.

An apply process orders and applies row LCRs in the following way:
Row LCRs within a single transaction are always applied in the same order as the corresponding changes on the source database.
Row LCRs that depend on each other in different transactions are always applied in the same order as the corresponding changes on the source database. When apply process parallelism is greater than 1, and the apply process detects a dependency between row LCRs in different transactions, the apply process always executes the transaction with the lower CSCN before executing the dependent row LCR. This behavior is described in more detail in "How Dependent Transactions Are Applied".
If the commit_serialization apply process parameter is set to FULL, then the apply process commits all transactions, regardless of whether they contain dependent row LCRs, in the same order as the corresponding transactions on the source database.

If the commit_serialization apply process parameter is set to DEPENDENT_TRANSACTIONS, then the apply process might apply transactions that do not depend on each other in a different order than the commit order of the corresponding transactions on the source database.
Note:
You can set the commit_serialization apply process parameter using the SET_PARAMETER procedure in the DBMS_APPLY_ADM package.

If the names of shared database objects are the same at the source and destination databases, and if the objects are in the same schemas at these databases, then an apply process automatically detects dependencies between row LCRs, assuming constraints are defined for the database objects at the destination database. Information about these constraints is stored in the data dictionary at the destination database.
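As a concrete illustration of the parallelism and commit_serialization parameters described above, a sketch using the SET_PARAMETER procedure; the apply process name is hypothetical:

```sql
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'strm01_apply',   -- hypothetical apply process
    parameter  => 'parallelism',
    value      => '4');             -- use four apply servers
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'strm01_apply',
    parameter  => 'commit_serialization',
    value      => 'FULL');          -- preserve the source commit order for all transactions
END;
/
```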
Regardless of the setting for the commit_serialization parameter and apply process parallelism, an apply process always respects dependencies between transactions that are enforced by database constraints. When an apply process is applying a transaction that contains row LCRs that depend on row LCRs in another transaction, the apply process ensures that the row LCRs are applied in the correct order and that the transactions are committed in the correct order to maintain the dependencies. Apply processes detect dependencies for captured row LCRs and persistent row LCRs.
However, some environments have dependencies that are not enforced by database constraints, such as environments that enforce dependencies using applications. If your environment has dependencies for shared database objects that are not enforced by database constraints, then set the commit_serialization parameter to FULL for apply processes that apply changes to these database objects.
When rule-based transformations are specified for rules used by an apply process, and apply handlers are configured for the apply process, LCRs are processed in the following order:
1. The apply process dequeues LCRs from its queue.
2. The apply process runs rule-based transformations on LCRs, when appropriate.
3. The apply process detects dependencies between LCRs.
4. The apply process passes LCRs to apply handlers, when appropriate.
See Also:
"Apply Process Subcomponents"

In some cases, an apply process requires additional information to detect dependencies in row LCRs that are being applied in parallel. The following are examples of such cases:
The data dictionary at the destination database does not contain the required information. The following are examples of this case:
The apply process cannot find information about a database object in the data dictionary of the destination database. This can happen when there are data dictionary differences for shared database objects between the source and destination databases. For example, a shared database object can have a different name or can be in a different schema at the source database and destination database.
A relationship exists between two or more tables, and the relationship is not recorded in the data dictionary of the destination database. This can happen when database constraints are not defined to improve performance or when an application enforces dependencies during database operations instead of database constraints.
Data is denormalized by an apply handler after dependency computation. For example, the information in a single row LCR can be used to create multiple row LCRs that are applied to multiple tables.
Apply errors or incorrect processing can result when an apply process cannot determine dependencies properly. In some of the cases described in the previous list, you can use rule-based transformations to avoid apply problems. For example, if a shared database object is in different schemas at the source and destination databases, then a rule-based transformation can change the schema in the appropriate LCRs. However, the disadvantage with using rule-based transformations is that they cannot be executed in parallel.
A virtual dependency definition is a description of a dependency that is used by an apply process to detect dependencies between transactions at a destination database. A virtual dependency definition is not described as a constraint in the data dictionary of the destination database. Instead, it is specified using procedures in the DBMS_APPLY_ADM package. Virtual dependency definitions enable an apply process to detect dependencies that it would not be able to detect by using only the constraint information in the data dictionary. After dependencies are detected, an apply process schedules LCRs and transactions in the correct order for apply.
Virtual dependency definitions provide required information so that apply processes can detect dependencies correctly before applying LCRs directly or passing LCRs to apply handlers. Virtual dependency definitions enable apply handlers to process these LCRs correctly, and the apply handlers can process them in parallel to improve performance.
A virtual dependency definition can define one of the following types of dependencies:
Note:
A destination database must be running Oracle Database 10g Release 2 or later to specify virtual dependency definitions.

A value dependency defines a table constraint, such as a unique key, or a relationship between the columns of two or more tables. A value dependency is set for one or more columns, and an apply process uses a value dependency to detect dependencies between row LCRs that contain values for these columns. Value dependencies can define virtual foreign key relationships between tables, but, unlike foreign key relationships, value dependencies can involve more than two tables.
Value dependencies are useful when relationships between columns in tables are not described by constraints in the data dictionary of the destination database. Value dependencies describe these relationships, and an apply process uses the value dependencies to determine when two or more row LCRs in different transactions involve the same row in a table at the destination database. For transactions that are being applied in parallel, when two or more row LCRs involve the same row, the transactions that include these row LCRs are dependent transactions.
Use the SET_VALUE_DEPENDENCY procedure in the DBMS_APPLY_ADM package to define or remove a value dependency at a destination database. In this procedure, table columns are specified as attributes.
The following restrictions pertain to value dependencies:
The row LCRs that involve the database objects specified in a value dependency must originate from a single source database.
Each value dependency must contain only one set of attributes for a particular database object.
Also, any columns specified in a value dependency at a destination database must be supplementally logged at the source database. These columns must be unconditionally logged.
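For example, a sketch of defining a value dependency that treats an order_id column in two tables as a virtual foreign key; the dependency name and table names are hypothetical:

```sql
BEGIN
  -- Both calls use the same dependency name, so the order_id columns
  -- in the two tables are treated as one virtual relationship.
  DBMS_APPLY_ADM.SET_VALUE_DEPENDENCY(
    dependency_name => 'order_id_dep',     -- hypothetical dependency name
    object_name     => 'oe.orders',        -- hypothetical parent table
    attribute_list  => 'order_id');
  DBMS_APPLY_ADM.SET_VALUE_DEPENDENCY(
    dependency_name => 'order_id_dep',
    object_name     => 'oe.order_items',   -- hypothetical child table
    attribute_list  => 'order_id');
END;
/
```

Remember that the order_id columns in this sketch would need to be unconditionally supplementally logged at the source database.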
An object dependency defines a parent-child relationship between two objects at a destination database. An apply process schedules execution of transactions that involve the child object after all transactions with lower commit system change number (CSCN) values that involve the parent object have been committed. An apply process uses the object identifier in each row LCR to detect dependencies. The apply process does not use column values in the row LCRs to detect object dependencies.
Object dependencies are useful when relationships between tables are not described by constraints in the data dictionary of the destination database. Object dependencies describe these relationships, and an apply process uses the object dependencies to determine when two or more row LCRs in different transactions involve these tables. For transactions that are being applied in parallel, when a row LCR in one transaction involves the child table, and a row LCR in a different transaction involves the parent table, the transactions that include these row LCRs are dependent transactions.
Use the CREATE_OBJECT_DEPENDENCY procedure to create an object dependency at a destination database. Use the DROP_OBJECT_DEPENDENCY procedure to drop an object dependency at a destination database. Both of these procedures are in the DBMS_APPLY_ADM package.
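A sketch of creating a parent-child object dependency between two hypothetical tables:

```sql
BEGIN
  -- Transactions touching the child table are applied only after
  -- lower-CSCN transactions touching the parent table have committed.
  DBMS_APPLY_ADM.CREATE_OBJECT_DEPENDENCY(
    object_name        => 'oe.order_items',  -- hypothetical child table
    parent_object_name => 'oe.orders');      -- hypothetical parent table
END;
/
```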
Note:
Tables with circular dependencies can result in apply process deadlocks when apply process parallelism is greater than 1. The following is an example of a circular dependency: table A has a foreign key constraint on table B, and table B has a foreign key constraint on table A. Apply process deadlocks are possible when two or more transactions that involve the tables with circular dependencies commit at the same SCN.

When an apply process cannot identify the table row or the database object specified in a row LCR by using the destination database's data dictionary and virtual dependency definitions, the transaction that contains the row LCR is applied after all of the other transactions with lower CSCN values. Such a transaction is called a barrier transaction. Transactions with higher CSCN values than the barrier transaction are not applied until after the barrier transaction has committed. In addition, all DDL transactions are barrier transactions.
The following sections discuss considerations for applying DML changes to tables:
You must ensure that the primary key columns at the destination database are logged in the redo log at the source database for every update. A unique key or foreign key constraint at a destination database that contains data from more than one column at the source database requires additional logging at the source database.
There are various ways to ensure that a column is logged at the source database. For example, whenever the value of a column is updated, the column is logged. Also, Oracle has a feature called supplemental logging that automates the logging of specified columns.
For a unique key and foreign key constraint at a destination database that contains data from only one column at a source database, no supplemental logging is required. However, for a constraint that contains data from multiple columns at the source database, you must create a conditional supplemental log group containing all the columns at the source database that are used by the constraint at the destination database.
Typically, unique key and foreign key constraints include the same columns at the source database and destination database. However, in some cases, an apply handler or custom rule-based transformation can combine a multi-column constraint from the source database into a single key column at the destination database. Also, an apply handler or custom rule-based transformation can separate a single key column from the source database into a multi-column constraint at the destination database. In such cases, the number of columns in the constraint at the source database determines whether a conditional supplemental log group is required. If there is more than one column in the constraint at the source database, then a conditional supplemental log group containing all the constraint columns is required at the source database. If there is only one column in the constraint at the source database, then no supplemental logging is required for the key column.
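For instance, a sketch of creating a conditional supplemental log group at the source database for a hypothetical two-column unique constraint; the table and column names are illustrative:

```sql
-- Conditional log group (no ALWAYS keyword): the columns are logged
-- only when at least one column in the group is changed.
ALTER TABLE hr.employees
  ADD SUPPLEMENTAL LOG GROUP log_group_emp_uk (first_name, last_name);
```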
See Also:
Oracle Streams Replication Administrator's Guide for more information about supplemental logging

If possible, each table for which changes are applied by an apply process should have a primary key. When a primary key is not possible, Oracle recommends that each table have a set of columns that can be used as a unique identifier for each row of the table. If the tables that you plan to use in your Oracle Streams environment do not have a primary key or a set of unique columns, then consider altering these tables accordingly.
To detect conflicts and handle errors accurately, Oracle must be able to identify uniquely and match corresponding rows at different databases. By default, Oracle Streams uses the primary key of a table to identify rows in the table, and if a primary key does not exist, Oracle Streams uses the smallest unique key that has at least one NOT NULL column to identify rows in the table. When a table at a destination database does not have a primary key or a unique key with at least one NOT NULL column, or when you want to use columns other than the primary key or unique key for the key, you can designate a substitute key at the destination database. A substitute key is a column or set of columns that Oracle can use to identify rows in the table during apply.
You can specify the substitute primary key for a table using the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package. Unlike true primary keys, the substitute key columns can contain nulls. Also, the substitute key columns take precedence over any existing primary key or unique keys for the specified table for all apply processes at the destination database.
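A sketch of designating a substitute key at the destination database with SET_KEY_COLUMNS; the table and column names are hypothetical:

```sql
BEGIN
  -- These columns, rather than any existing primary or unique key,
  -- identify rows in hr.employees during apply at this database.
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name => 'hr.employees',                  -- hypothetical table
    column_list => 'first_name,last_name,email');   -- hypothetical key columns
END;
/
```

If these columns are not a primary key at the source database, the corresponding unconditional supplemental log group described below is still required at the source.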
If you specify a substitute key for a table in a destination database, and these columns are not a primary key for the same table at the source database, then you must create an unconditional supplemental log group containing the substitute key columns at the source database.
In the absence of substitute key columns, primary key constraints, and unique key constraints, an apply process uses all of the columns in the table as the key columns, excluding columns of the following data types: LOB, LONG, LONG RAW, user-defined types (including object types, REFs, varrays, and nested tables), and Oracle-supplied types (including Any types, XML types, spatial types, and media types). In this case, you must create an unconditional supplemental log group containing these columns at the source database. Using substitute key columns is preferable when there is no primary key constraint for a table because fewer columns are needed in the row LCR.
Note:
Oracle recommends that each column you specify as a substitute key column be a NOT NULL column. You should also create a single index that includes all of the columns in a substitute key. Following these guidelines improves performance for changes because the database can locate the relevant row more efficiently.
LOB, LONG, LONG RAW, user-defined type, and Oracle-supplied type columns cannot be specified as substitute key columns.
See Also:
The DBMS_APPLY_ADM.SET_KEY_COLUMNS procedure in the Oracle Database PL/SQL Packages and Types Reference
Oracle Streams Replication Administrator's Guide for more information about supplemental logging
Oracle Database SQL Language Reference for information about data types
A column discrepancy is any difference in the columns in a table at a source database and the columns in the same table at a destination database. If there are column discrepancies in your Oracle Streams environment, then use rule-based transformations, statement DML handlers, or procedure DML handlers to make the columns in row LCRs being applied by an apply process match the columns in the relevant tables at a destination database.
The following sections describe apply process behavior for common column discrepancies.
See Also:
Oracle Database PL/SQL Packages and Types Reference for more information about LCRs
If the table at the destination database is missing one or more columns that are in the table at the source database, then an apply process raises an error and moves the transaction that caused the error into the error queue. You can avoid such an error by creating a rule-based transformation or procedure DML handler that deletes the missing columns from the LCRs before they are applied. Specifically, the transformation or handler can remove the extra columns using the DELETE_COLUMN member procedure on the row LCR. You can also create a statement DML handler with a SQL statement that excludes the missing columns.
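A sketch of a procedure DML handler that strips a column missing at the destination table before executing the row LCR; the procedure, schema, and column names are hypothetical:

```sql
CREATE OR REPLACE PROCEDURE strmadmin.drop_bonus_col(in_any IN ANYDATA) IS
  lcr SYS.LCR$_ROW_RECORD;
  rc  PLS_INTEGER;
BEGIN
  -- Extract the row LCR from the ANYDATA payload
  rc := in_any.GETOBJECT(lcr);
  -- Remove the column that does not exist at the destination table
  lcr.DELETE_COLUMN('bonus');   -- hypothetical missing column
  -- Execute the modified row LCR, invoking conflict resolution if needed
  lcr.EXECUTE(TRUE);
END;
/
```

Such a handler would then be registered for the relevant table and operation with the DBMS_APPLY_ADM.SET_DML_HANDLER procedure.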
If the table at the destination database has more columns than the table at the source database, then apply process behavior depends on whether the extra columns are required for dependency computations. If the extra columns are not used for dependency computations, then an apply process applies changes to the destination table. In this case, if column defaults exist for the extra columns at the destination database, then these defaults are used for these columns for all inserts. Otherwise, these inserted columns are NULL.
If, however, the extra columns are used for dependency computations, then an apply process places the transactions that include these changes in the error queue. The following types of columns are required for dependency computations:
For all changes, all key columns
For INSERT and DELETE statements, all columns involved with constraints
For UPDATE statements, if a constraint column is changed, such as a unique key constraint column or a foreign key constraint column, then all columns involved in the constraint
When the extra columns are used for dependency computations, one way to avoid apply errors is to use statement DML handlers to add the extra columns.
See Also:
"Statement DML Handlers"

A column data type mismatch results when the data type for a column in a table at the destination database does not match the data type for the same column at the source database. An apply process can automatically convert certain data types when it encounters a column data type mismatch. If an apply process cannot automatically convert the data type, then the apply process places transactions containing the changes to the mismatched column into the error queue. To avoid such an error, you can create a custom rule-based transformation or DML handler that converts the data type.
Conflicts are possible in an Oracle Streams configuration where data is shared between multiple databases. A conflict is a mismatch between the old values in an LCR and the expected data in a table. A conflict can occur if DML changes are allowed to a table for which changes are captured and to a table where these changes are applied.
For example, a transaction at the source database can update a row at nearly the same time as a different transaction that updates the same row at a destination database. In this case, if data consistency between the two databases is important, then when the change is propagated to the destination database, an apply process must be instructed either to keep the change at the destination database or replace it with the change from the source database. When data conflicts occur, you need a mechanism to ensure that the conflict is resolved in accordance with your business rules.
Oracle Streams automatically detects conflicts and, for update conflicts, tries to use an update conflict handler to resolve them if one is configured. Oracle Streams offers a variety of prebuilt handlers that enable you to define a conflict resolution system for your database that resolves conflicts in accordance with your business rules. If you have a unique situation that a prebuilt conflict resolution handler cannot resolve, then you can build and use your own custom conflict resolution handlers in an error handler or procedure DML handler. Conflict detection can be disabled for nonkey columns.
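For example, a sketch of configuring the prebuilt OVERWRITE handler so that an update conflict on a hypothetical salary column is resolved in favor of the source database's value; the table and column names are illustrative:

```sql
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'salary';                      -- hypothetical column in the conflict group
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.employees',    -- hypothetical table
    method_name       => 'OVERWRITE',       -- prebuilt handler: keep the source value
    resolution_column => 'salary',
    column_list       => cols);
END;
/
```

Other prebuilt methods include DISCARD, MAXIMUM, and MINIMUM; the appropriate choice depends on your business rules.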
Any of the following handlers can process a row LCR:
DML handler (either statement DML handler or procedure DML handler)
Error handler
Update conflict handler
The following sections describe the possible scenarios involving these handlers:
See Also:
Oracle Database PL/SQL Packages and Types Reference for more information about the EXECUTE member procedure for row LCRs
If there are no relevant handlers for a row LCR, then an apply process tries to apply the change specified in the row LCR directly. If the apply process can apply the row LCR, then the change is made to the row in the table. If there is a conflict or an error during apply, then the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.
Consider a case where there is a relevant update conflict handler configured, but no other relevant handlers are configured. An apply process tries to apply the change specified in a row LCR directly. If the apply process can apply the row LCR, then the change is made to the row in the table.
If there is an error during apply that is caused by a condition other than an update conflict, including a uniqueness conflict or a delete conflict, then the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.
If there is an update conflict during apply, then the relevant update conflict handler is invoked. If the update conflict handler resolves the conflict successfully, then the apply process either applies the LCR or discards the LCR, depending on the resolution of the update conflict, and the apply process continues applying the other LCRs in the transaction that should be applied according to the apply process rule sets. If the update conflict handler cannot resolve the conflict, then the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.
Consider a case where an apply process passes a row LCR to a DML handler, and there is no relevant update conflict handler configured. The DML handler can be a statement DML handler or a procedure DML handler.
The DML handler processes the row LCR. The designer of the DML handler has complete control over this processing. Some DML handlers can perform SQL operations or run the EXECUTE member procedure of the row LCR. If the DML handler runs the EXECUTE member procedure of the row LCR, then the apply process tries to apply the row LCR. This row LCR might have been modified by the DML handler.
An apply process can have multiple statement DML handlers for the same operation on the same table. These statement DML handlers can run in any order, and each statement DML handler receives the original row LCR. If any SQL operation performed by any statement DML handler fails, or if an attempt to run the EXECUTE member procedure fails, then the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.
If any SQL operation performed by a procedure DML handler fails, or if an attempt to run the EXECUTE member procedure fails, then the procedure DML handler can try to handle the exception. If the procedure DML handler does not raise an exception, then the apply process assumes the procedure DML handler has performed the appropriate action with the row LCR, and the apply process continues applying the other LCRs in the transaction that should be applied according to the apply process rule sets.
If the procedure DML handler cannot handle the exception, then the procedure DML handler should raise an exception. In this case, the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.
Consider a case where an apply process passes a row LCR to a DML handler and there is a relevant update conflict handler configured. The DML handler can be a statement DML handler or a procedure DML handler. An apply process can have multiple statement DML handlers for the same operation on the same table. These statement DML handlers can run in any order, and each statement DML handler receives the original row LCR.
The DML handler processes the row LCR. The designer of the DML handler has complete control over this processing. Some DML handlers might perform SQL operations or run the EXECUTE member procedure of the row LCR. If the DML handler runs the EXECUTE member procedure of the row LCR, then the apply process tries to apply the row LCR. If the DML handler is a procedure DML handler, then this row LCR could have been modified by the procedure DML handler.
If any SQL operation performed by a DML handler fails, or if an attempt to run the EXECUTE member procedure fails for any reason other than an update conflict, then the behavior is the same as that described in "DML Handler But No Relevant Update Conflict Handler". Note that uniqueness conflicts and delete conflicts are not update conflicts.
If an attempt to run the EXECUTE member procedure fails because of an update conflict, then the behavior depends on the setting of the conflict_resolution parameter in the EXECUTE member procedure:
If the conflict_resolution parameter is set to TRUE, then the relevant update conflict handler is invoked. If the update conflict handler resolves the conflict successfully, and all other operations performed by the DML handler succeed, then the DML handler finishes without raising an exception, and the apply process continues applying the other LCRs in the transaction that should be applied according to the apply process rule sets.
If the update conflict handler cannot resolve the conflict, and the DML handler is a statement DML handler, then the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.
If the update conflict handler cannot resolve the conflict, and the DML handler is a procedure DML handler, then a procedure DML handler can try to handle the exception. If the procedure DML handler does not raise an exception, then the apply process assumes the procedure DML handler has performed the appropriate action with the row LCR, and the apply process continues applying the other LCRs in the transaction that should be applied according to the apply process rule sets. If the procedure DML handler cannot handle the exception, then the procedure DML handler should raise an exception. In this case, the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.
If the conflict_resolution
parameter is set to FALSE
, then the relevant update conflict handler is not invoked. In this case, the behavior is the same as that described in "DML Handler But No Relevant Update Conflict Handler".
Consider a case where an apply process passes a row LCR to both a statement DML handler and a procedure DML handler for the same operation on the same table. In this case, the DML handlers can be run in any order, and each DML handler receives each original row LCR. Also, an apply process can have multiple statement DML handlers for the same operation on the same table. These statement DML handlers can run in any order, and each statement DML handler receives the original row LCR. Each DML handler processes the row LCR independently, and the behavior is the same as any other scenario that involves a DML handler.
If any statement DML handler or procedure DML handler fails, then the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.
Consider a case where an apply process encounters an error when it tries to apply a row LCR. This error can be caused by a conflict or by some other condition. There is an error handler for the table operation but no relevant update conflict handler configured.
The row LCR is passed to the error handler. The error handler processes the row LCR. The designer of the error handler has complete control over this processing. Some error handlers might perform SQL operations or run the EXECUTE
member procedure of the row LCR. If the error handler runs the EXECUTE
member procedure of the row LCR, then the apply process tries to apply the row LCR. This row LCR could have been modified by the error handler.
If any SQL operation performed by the error handler fails, or if an attempt to run the EXECUTE
member procedure fails, then the error handler can try to handle the exception. If the error handler does not raise an exception, then the apply process assumes the error handler has performed the appropriate action with the row LCR, and the apply process continues applying the other LCRs in the transaction that should be applied according to the apply process rule sets.
If the error handler cannot handle the exception, then the error handler should raise an exception. In this case, the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.
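As a sketch, an error handler that simply retries the row LCR and lets any failure propagate might look like the following. The owner, type, and handler names are illustrative; the four-parameter signature shown is the one required for error handlers:

```sql
-- Collection type for the error_messages parameter (name is illustrative)
CREATE OR REPLACE TYPE strmadmin.emsg_array AS TABLE OF VARCHAR2(128);
/

CREATE OR REPLACE PROCEDURE strmadmin.emp_error_handler(
  message           IN ANYDATA,
  error_stack_depth IN NUMBER,
  error_numbers     IN DBMS_UTILITY.NUMBER_ARRAY,
  error_messages    IN strmadmin.emsg_array) IS
  lcr SYS.LCR$_ROW_RECORD;
  rc  PLS_INTEGER;
BEGIN
  rc := message.GETOBJECT(lcr);  -- extract the row LCR
  -- Attempt to reapply the row LCR. If this fails, the exception propagates,
  -- the transaction is rolled back, and its LCRs move to the error queue.
  lcr.EXECUTE(TRUE);
END;
/
```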
Consider a case where an apply process encounters an error when it tries to apply a row LCR. There is an error handler for the table operation, and there is a relevant update conflict handler configured.
The handler that is invoked depends on the type of error encountered:
If the error is caused by a condition other than an update conflict, including a uniqueness conflict or a delete conflict, then the error handler is invoked, and the behavior is the same as that described in "Error Handler But No Relevant Update Conflict Handler".
If the error is caused by an update conflict, then the update conflict handler is invoked. If the update conflict handler resolves the conflict successfully, then the apply process continues applying the other LCRs in the transaction that should be applied according to the apply process rule sets. In this case, the error handler is not invoked.
If the update conflict handler cannot resolve the conflict, then the error handler is invoked. If the error handler does not raise an exception, then the apply process assumes the error handler has performed the appropriate action with the row LCR, and the apply process continues applying the other LCRs in the transaction that should be applied according to the apply process rule sets. If the error handler cannot process the LCR, then the error handler should raise an exception. In this case, the transaction containing the row LCR is rolled back, and all of the LCRs in the transaction that should be applied according to the apply process rule sets are moved to the error queue.
Consider a case where an apply process passes a row LCR to a statement DML handler and there is a relevant error handler configured.
The statement DML handler processes the row LCR. The designer of the statement DML handler has complete control over this processing. Some statement DML handlers might perform SQL operations or run the EXECUTE
member procedure of the row LCR. If the statement DML handler runs the EXECUTE
member procedure of the row LCR, then the apply process tries to apply the row LCR.
Also, an apply process can have multiple statement DML handlers for the same operation on the same table. These statement DML handlers can run in any order, and each statement DML handler receives the original row LCR.
If any SQL operation performed by any statement DML handler fails, or if an attempt to run the EXECUTE
member procedure fails for any reason, then the behavior is the same as that described in "Error Handler But No Relevant Update Conflict Handler". The error handler gets the original row LCR, not the row LCR processed by the statement DML handler.
Note:
You cannot have a procedure DML handler and an error handler simultaneously for the same operation on the same table. Therefore, there is no scenario in which they could both be invoked.

Consider a case where an apply process passes a row LCR to a statement DML handler and there is a relevant error handler and a relevant update conflict handler configured.
The statement DML handler processes the row LCR. The designer of the statement DML handler has complete control over this processing. Some statement DML handlers might perform SQL operations or run the EXECUTE
member procedure of the row LCR. If the statement DML handler runs the EXECUTE
member procedure of the row LCR, then the apply process tries to apply the row LCR.
Also, an apply process can have multiple statement DML handlers for the same operation on the same table. These statement DML handlers can run in any order, and each statement DML handler receives the original row LCR.
If any SQL operation performed by any statement DML handler fails, or if an attempt to run the EXECUTE
member procedure fails for any reason, then the behavior is the same as that described in "Error Handler And a Relevant Update Conflict Handler".
Note:
You cannot have a procedure DML handler and an error handler simultaneously for the same operation on the same table. Therefore, there is no scenario in which they could both be invoked.

The following sections discuss considerations for applying DDL changes to tables:
If you plan to capture DDL changes at a source database and apply these DDL changes at a destination database, then avoid using system-generated names. If a DDL statement results in a system-generated name for an object, then the name of the object typically will be different at the source database and each destination database applying the DDL change from this source database. Different names for objects can result in apply errors for future DDL changes.
For example, suppose the following DDL statement is run at a source database:
CREATE TABLE sys_gen_name (n1 NUMBER NOT NULL);
This statement results in a NOT NULL constraint with a system-generated name. For example, the NOT NULL constraint might be named sys_c001500 at the source database. When this change is applied at a destination database, the system-generated name for this constraint might be sys_c1000.

Suppose the following DDL statement is run at the source database:

ALTER TABLE sys_gen_name DROP CONSTRAINT sys_c001500;

This DDL statement succeeds at the source database, but it fails at the destination database, where no constraint named sys_c001500 exists, and results in an apply error.
To avoid such an error, explicitly name all objects resulting from DDL statements. For example, to name a NOT
NULL
constraint explicitly, run the following DDL statement:
CREATE TABLE sys_gen_name (n1 NUMBER CONSTRAINT sys_gen_name_nn NOT NULL);
When applying a change resulting from a CREATE
TABLE
AS
SELECT
statement, an apply process performs two steps:
The CREATE
TABLE
AS
SELECT
statement is executed at the destination database, but it creates only the structure of the table. It does not insert any rows into the table. If the CREATE
TABLE
AS
SELECT
statement fails, then an apply process error results. Otherwise, the statement automatically commits, and the apply process performs Step 2.
The apply process inserts the rows that were inserted at the source database because of the CREATE
TABLE
AS
SELECT
statement into the corresponding table at the destination database. It is possible that a capture process, a propagation, or an apply process will discard all of the row LCRs with these inserts based on their rule sets. In this case, the table remains empty at the destination database.
When an apply process applies a data definition language (DDL) change, Oracle Streams ensures that the data manipulation language (DML) changes on the DDL target within the same transaction are not replicated at the destination database. Therefore, the source database and destination database can diverge in some cases. Divergence can result in apply process errors when the old values in row logical change records (LCRs) do not match the current values in a destination table.
The following cases cause the source database and destination database to diverge:
When a DDL statement contains a non-literal value that is derived, the value that is derived might not match at the source database and destination database. For example, the following DDL statement adds a column to the hr.employees
table and inserts a date value derived from the computer system running the source database:
ALTER TABLE hr.employees ADD(start_date DATE DEFAULT SYSDATE);
Assume that a replication environment maintains DML and DDL changes made to the hr.employees
table between a source database and a destination database. In this case, the SYSDATE
function is executed independently at the source database and at the destination database. Therefore, the DATE
value inserted at the source database will not match the DATE
value inserted at the destination database.
When a DDL statement fires a DML trigger defined on the destination table, the DML changes made by the trigger are not replicated at the destination database. Because the DML changes made by the triggers occur in the same transaction as the DDL statement, and operate on the table that is the target of the DDL statement, the triggered DML changes are not replicated at the destination database.
For example, assume you create the following table:
CREATE TABLE hr.temp_employees(
  emp_id       NUMBER PRIMARY KEY,
  first_name   VARCHAR2(64),
  last_name    VARCHAR2(64),
  modify_date  TIMESTAMP);
Assume you create a trigger on the table so that whenever the table is updated the modify_date
column is updated to reflect the time of change:
CREATE OR REPLACE TRIGGER hr.trg_mod_dt
  BEFORE UPDATE ON hr.temp_employees
  REFERENCING NEW AS NEW_ROW
  FOR EACH ROW
BEGIN
  :NEW_ROW.modify_date := SYSTIMESTAMP;
END;
/
Assume that a replication environment maintains DML and DDL changes made to the hr.temp_employees
table between a source database and a destination database. In this case, the hr.temp_employees
table is maintained correctly at the destination database for direct DML changes made to this table at the source database. However, if an ADD
COLUMN
statement at the source database adds a column to this table, then the hr.trg_mod_dt
update trigger changes the modify_date
column of all of the rows in the table to a new timestamp. These changes to the modify_date
column are not replicated at the destination database.
In an Oracle Streams environment that shares information within a single database or between multiple databases, a source database is the database where changes are generated in the redo log. Suppose an environment has the following characteristics:
A capture process or a synchronous capture captures changes to tables at the source database and stages the changes as LCRs in a queue.
An apply process applies these LCRs, either at the same database or at a destination database to which the LCRs have been propagated.
In such an environment, for each table, only changes that committed after a specific system change number (SCN) at the source database are applied. An instantiation SCN specifies this value for each table.
An instantiation SCN can be set during instantiation, or an instantiation SCN can be set using a procedure in the DBMS_APPLY_ADM
package. If the tables do not exist at the destination database before the Oracle Streams replication environment is configured, then these tables are physically created (instantiated) using copies from the source database, and the instantiation SCN is set for each table during instantiation. If the tables already exist at the destination database before the Oracle Streams replication environment is configured, then these tables are not instantiated using copies from the source database. Instead, the instantiation SCN must be set manually for each table using one of the following procedures in the DBMS_APPLY_ADM
package: SET_TABLE_INSTANTIATION_SCN
, SET_SCHEMA_INSTANTIATION_SCN
, or SET_GLOBAL_INSTANTIATION_SCN
.
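For example, assuming the hr.employees table already exists at the destination database, its instantiation SCN might be set as follows. The source database name and the SCN value are illustrative; in practice, the SCN is obtained at the source database, for example with DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER over a database link:

```sql
BEGIN
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'hr.employees',
    source_database_name => 'dbs1.example.com',
    instantiation_scn    => 640080);  -- SCN obtained at the source database
END;
/
```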
The instantiation SCN for a database object controls which LCRs that contain changes to the database object are ignored by an apply process and which LCRs are applied by an apply process. If the commit SCN of an LCR for a database object from a source database is less than or equal to the instantiation SCN for that database object at a destination database, then the apply process at the destination database discards the LCR. Otherwise, the apply process applies the LCR.
Also, if there are multiple source databases for a shared database object at a destination database, then an instantiation SCN must be set for each source database, and the instantiation SCN can be different for each source database. You can set instantiation SCNs by using export/import or transportable tablespaces. You can also set an instantiation SCN by using a procedure in the DBMS_APPLY_ADM
package.
Oracle Streams also records the ignore SCN for each database object. The ignore SCN is the SCN below which changes to the database object cannot be applied. The instantiation SCN for an object cannot be set lower than the ignore SCN for the object. This value corresponds to the SCN value at the source database at the time when the object was prepared for instantiation. An ignore SCN is set for a database object only when the database object is instantiated using Oracle Data Pump.
You can view the instantiation SCN and ignore SCN for database objects by querying the DBA_APPLY_INSTANTIATED_OBJECTS
data dictionary view.
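For example, the following query shows both values for each instantiated object:

```sql
SELECT source_object_owner, source_object_name,
       instantiation_scn, ignore_scn
  FROM dba_apply_instantiated_objects;
```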
If an apply process is running, then the oldest SCN is the earliest SCN of the transactions currently being dequeued and applied. For a stopped apply process, the oldest SCN is the earliest SCN of the transactions that were being applied when the apply process was stopped.
The following are two common scenarios in which the oldest SCN is important:
You must recover the database in which the apply process is running to a certain point in time.
You stop using an existing capture process that captures changes for the apply process and use a different capture process to capture changes for the apply process.
In both cases, you should determine the oldest SCN for the apply process by querying the DBA_APPLY_PROGRESS
data dictionary view. The OLDEST_MESSAGE_NUMBER
column in this view contains the oldest SCN. Next, set the start SCN for the capture process that is capturing changes for the apply process to the same value as the oldest SCN value. If the capture process is capturing changes for other apply processes, then these other apply processes might receive duplicate LCRs when you reset the start SCN for the capture process. In this case, the other apply processes automatically discard the duplicate LCRs.
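For example, the steps above might be performed as follows. The apply process name, capture process name, and SCN value are illustrative:

```sql
-- Determine the oldest SCN for the apply process
SELECT oldest_message_number
  FROM dba_apply_progress
 WHERE apply_name = 'STRM_APPLY';

-- Set the start SCN of the capture process to the value returned
BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name => 'STRM_CAPTURE',
    start_scn    => 640080);
END;
/
```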
Note:
The oldest SCN is only valid for apply processes that apply LCRs that were captured by a capture process. The oldest SCN does not pertain to apply processes that apply LCRs captured by synchronous capture or LCRs enqueued explicitly.

The low-watermark for an apply process is the system change number (SCN) up to which all LCRs have been applied. That is, LCRs that were committed at an SCN less than or equal to the low-watermark number have definitely been applied, but some LCRs that were committed with a higher SCN also might have been applied. The low-watermark SCN for an apply process is equivalent to the applied SCN for a capture process.
The high-watermark for an apply process is the SCN beyond which no LCRs have been applied. That is, no LCRs that were committed with an SCN greater than the high-watermark have been applied.
You can view the low-watermark and high-watermark for one or more apply processes by querying the V$STREAMS_APPLY_COORDINATOR
and ALL_APPLY_PROGRESS
data dictionary views.
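For example, the following query shows both watermarks for each apply process in the database:

```sql
SELECT apply_name,
       lwm_message_number AS low_watermark,
       hwm_message_number AS high_watermark
  FROM v$streams_apply_coordinator;
```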
This section describes how Oracle Streams apply processes interact with triggers.
This section contains these topics:
You can control a DML or DDL trigger's firing property using the SET_TRIGGER_FIRING_PROPERTY
procedure in the DBMS_DDL
package. This procedure lets you specify whether a trigger always fires, fires once, or fires for apply process changes only.
The SET_TRIGGER_FIRING_PROPERTY
procedure is overloaded. Set a trigger's firing property in one of the following ways:
To specify that a trigger always fires, set the fire_once
procedure parameter to FALSE
.
To specify that a trigger fires once, set the fire_once
parameter to TRUE
.
To specify that a trigger fires for apply process changes only, set the property
parameter to DBMS_DDL.APPLY_SERVER_ONLY
.
If DBMS_DDL.APPLY_SERVER_ONLY
property is set for a trigger, then the trigger only fires for apply process changes, regardless of the setting of the fire_once
parameter. That is, setting DBMS_DDL.APPLY_SERVER_ONLY
for the property
parameter overrides the fire_once
parameter setting.
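For example, the following calls show both overloads of the procedure (the trigger owner and name are illustrative):

```sql
BEGIN
  -- Specify that the trigger always fires
  DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY(
    trig_owner => 'hr',
    trig_name  => 'update_job_history',
    fire_once  => FALSE);

  -- Specify that the trigger fires for apply process changes only
  DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY(
    trig_owner => 'hr',
    trig_name  => 'update_job_history',
    property   => DBMS_DDL.APPLY_SERVER_ONLY,
    setting    => TRUE);
END;
/
```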
A trigger's firing property determines whether the trigger fires in each of the following cases:
When a triggering event is executed by a user process
When a triggering event is executed by an apply process
When a triggering event results from the execution of one or more apply errors using the EXECUTE_ERROR
or EXECUTE_ALL_ERRORS
procedure in the DBMS_APPLY_ADM
package
Table 10-2 shows when a trigger fires based on its trigger firing property.
Table 10-2 Trigger Firing Property
Trigger Firing Property | User Process Causes Triggering Event | Apply Process Causes Triggering Event | Apply Error Execution Causes Triggering Event
---|---|---|---
Always fire | Trigger Fires | Trigger Fires | Trigger Fires
Fire once | Trigger Fires | Trigger Does Not Fire | Trigger Does Not Fire
Fire for apply process changes only | Trigger Does Not Fire | Trigger Fires | Trigger Fires
For example, in the hr
schema, the update_job_history
trigger adds a row to the job_history
table when data is updated in the job_id
or department_id
column in the employees
table. Suppose, in an Oracle Streams environment, the following configuration exists:
A capture process or synchronous capture captures changes to both of these tables at the dbs1.example.com
database.
A propagation propagates these changes to the dbs2.example.com
database.
An apply process applies these changes at the dbs2.example.com
database.
The update_job_history
trigger exists in the hr
schema in both databases.
If the update_job_history
trigger is set to always fire at dbs2.example.com
in this scenario, then these actions result:
The job_id
column is updated for an employee in the employees
table at dbs1.example.com
.
The update_job_history
trigger fires at dbs1.example.com
and adds a row to the job_history
table that records the change.
The capture process or synchronous capture at dbs1.example.com
captures the changes to both the employees
table and the job_history
table.
A propagation propagates these changes to the dbs2.example.com
database.
An apply process at the dbs2.example.com
database applies both changes.
The update_job_history
trigger fires at dbs2.example.com
when the apply process updates the employees
table.
In this case, the change to the employees
table is recorded twice at the dbs2.example.com
database: when the apply process applies the change to the job_history
table and when the update_job_history
trigger fires to record the change made to the employees
table by the apply process.
A database administrator might not want the update_job_history
trigger to fire at the dbs2.example.com
database when a change is made by the apply process. Similarly, a database administrator might not want a trigger to fire because of the execution of an apply error transaction. If the update_job_history
trigger's firing property is set to fire once, then it does not fire at dbs2.example.com
when the apply process applies a change to the employees
table, and it does not fire when an executed error transaction updates the employees
table.
Note:
Only DML and DDL triggers can be set to fire once. All other types of triggers always fire.

See Also:
Oracle Database PL/SQL Packages and Types Reference for more information about setting a trigger's firing property with the SET_TRIGGER_FIRING_PROPERTY
procedure

If you use the ON
SCHEMA
clause to create a schema trigger, then the schema trigger fires only if the schema performs a relevant change. Therefore, when an apply process is applying changes, a schema trigger that is set to fire always fires only if the apply user is the same as the schema specified in the schema trigger. If the schema trigger is set to fire once, then it never fires when an apply process applies changes, regardless of whether the apply user is the same as the schema specified in the schema trigger.
For example, if you specify a schema trigger that always fires on the hr
schema at a source database and destination database, but the apply user at a destination database is strmadmin
, then the trigger fires when the hr
user performs a relevant change on the source database, but the trigger does not fire when this change is applied at the destination database. However, if you specify a schema trigger that always fires on the strmadmin
schema at the destination database, then this trigger fires whenever a relevant change is made by the apply process, regardless of any trigger specifications at the source database.
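As an illustration, a schema trigger on the hr schema might be created as follows (the trigger name and body are illustrative sketches):

```sql
CREATE OR REPLACE TRIGGER hr.log_hr_ddl
  AFTER DDL ON hr.SCHEMA
BEGIN
  NULL;  -- record the DDL operation, for example in a logging table
END;
/
```

With this trigger set to always fire at the destination database, it still fires only for changes applied when the apply user is hr, as described above.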
When a database object is prepared for instantiation at a source database, an Oracle Streams data dictionary is populated automatically at the database where changes to the object are captured by a capture process. The Oracle Streams data dictionary is a multiversioned copy of some of the information in the primary data dictionary at a source database. The Oracle Streams data dictionary maps object numbers, object version information, and internal column numbers from the source database into table names, column names, and column data types. This mapping keeps each captured LCR as small as possible because a captured LCR can often use numbers rather than names internally.
Unless a captured LCR is passed as a parameter to a custom rule-based transformation during capture or propagation, the mapping information in the Oracle Streams data dictionary at the source database is needed to interpret the contents of the LCR at any database that applies the captured LCR. To make this mapping information available to an apply process, Oracle automatically populates a multiversioned Oracle Streams data dictionary at each destination database that has an Oracle Streams apply process. Oracle automatically propagates relevant information from the Oracle Streams data dictionary at the source database to all other databases that apply captured LCRs from the source database.
If you run multiple apply processes in a single database, consider increasing the size of the System Global Area (SGA). Use the SGA_MAX_SIZE
initialization parameter to increase the SGA size. Also, if the size of the Oracle Streams pool is not managed automatically in the database, then you should increase the size of the Oracle Streams pool by 1 MB for each unit of apply process parallelism. For example, if you have two apply processes running in a database, and the parallelism parameter is set to 4
for one of them and 1
for the other, then increase the Oracle Streams pool by 5 MB, because the combined parallelism is 5 (4 + 1).
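If the Oracle Streams pool is sized manually, the increase for this example might be applied as follows. The 205M value is illustrative and assumes a current pool size of 200M:

```sql
ALTER SYSTEM SET STREAMS_POOL_SIZE = 205M SCOPE = BOTH;
```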
Note:
The size of the Oracle Streams pool is managed automatically if theMEMORY_TARGET
, MEMORY_MAX_TARGET
, or SGA_TARGET
initialization parameter is set to a nonzero value.See Also:
Oracle Streams Replication Administrator's Guide for information about configuring the Oracle Streams pool
Oracle Streams Replication Administrator's Guide for more information about the STREAMS_POOL_SIZE
initialization parameter