Database administrators who have been using Advanced Replication to maintain replicated database objects at different sites can migrate their Advanced Replication environment to an Oracle Streams environment. This chapter provides a conceptual overview of the steps in this process and documents each step with procedures and examples.
This chapter contains these topics:
Performing the Migration for Advanced Replication to Oracle Streams
Example Advanced Replication to Oracle Streams Migration Script
See Also: Oracle Database Advanced Replication and Oracle Database Advanced Replication Management API Reference for more information about Advanced Replication
The following sections provide a conceptual overview of the migration process:
You can use the DBMS_REPCAT.STREAMS_MIGRATION procedure to generate a SQL*Plus script that migrates an existing Advanced Replication environment to an Oracle Streams environment. When you run the DBMS_REPCAT.STREAMS_MIGRATION procedure at a master definition site in a multimaster replication environment, it generates a SQL*Plus script in a file at a location that you specify. Once the script is generated, you run it at each master site in your Advanced Replication environment to set up an Oracle Streams environment for each master site. To successfully generate the Oracle Streams environment for your replication groups, the replication groups for which you run the script must have the same master sites. If replication groups have different master sites, then you can generate multiple scripts to migrate each replication group to Oracle Streams.
At times, you must stop, or quiesce, all replication activity for a replication group so that you can perform certain administrative tasks. You do not need to quiesce the replication groups when you run the DBMS_REPCAT.STREAMS_MIGRATION procedure. However, you must quiesce the replication groups being migrated to Oracle Streams when you run the generated script at the master sites. Because you have quiesced the replication groups to run the script at the master sites, you do not have to stop any existing capture processes, propagation jobs, or apply processes at these sites.
The generated migration script uses comments to indicate Advanced Replication elements that cannot be converted to Oracle Streams. It also provides suggestions for modifying the script to convert these elements to Oracle Streams. You can use these suggestions to edit the script before you run it. You can also customize the migration script in other ways to meet your needs.
The script sets all parameters when it runs PL/SQL procedures and functions. When you generate the script, it sets default values for parameters that typically do not need to be changed. However, you can change these default parameters by editing the script if necessary. The parameters with default settings include the following:
include_dml
include_ddl
include_tagged_lcr
The beginning of the script has a list of variables for names that are used by the procedures and functions in the script. When you generate the script, it sets these variables to default values that you should not need to change. However, you can change the default settings for these variables if necessary. The variables specify names of queues, capture processes, propagations, and apply processes.
The migration script performs the following actions:
Prints warnings in comments if the replication groups contain features that cannot be converted to Oracle Streams.
Creates ANYDATA queues, if needed, using the DBMS_STREAMS_ADM.SET_UP_QUEUE procedure.
Configures propagation between all master sites using the DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES procedure for each table.
Configures capture at each master site using the DBMS_STREAMS_ADM.ADD_TABLE_RULES procedure for each table.
Configures apply for changes from all the other master sites using the DBMS_STREAMS_ADM.ADD_TABLE_RULES procedure for each table.
Sets the instantiation SCN for each replicated object at each site where changes to the object are applied.
Creates the necessary supplemental log groups at source databases.
Sets key columns, if any.
Configures conflict resolution if it was configured for the Advanced Replication environment being migrated.
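For reference, the instantiation SCN step can be sketched as follows. This is a hypothetical example, not an excerpt from the generated script: it borrows the site and table names of the sample environment in this appendix, obtains the current SCN from the source database over a database link, and records it at the destination so that the apply process applies only changes committed after that SCN.

CONNECT strmadmin@orc2.example.com
Enter password: password

DECLARE
  iscn NUMBER;  -- instantiation SCN obtained from the source database
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@ORC1.EXAMPLE.COM();
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'hr.jobs',
    source_database_name => 'orc1.example.com',
    instantiation_scn    => iscn);
END;
/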
If Oracle encounters an error while running the migration script, then the migration script exits immediately. If this happens, then you must modify the script to run any commands that have not already been executed successfully.
You cannot migrate updatable materialized views using the migration script. You must migrate updatable materialized views from an Advanced Replication environment to an Oracle Streams environment manually.
Oracle Streams does not support the following:
Replication of changes to tables with columns of the following data types: BFILE, ROWID, and user-defined types (including object types, REFs, varrays, and nested tables)
Synchronous replication
If your current Advanced Replication environment uses these features, then these elements of the environment cannot be migrated to Oracle Streams. In this case, you might decide not to migrate the environment to Oracle Streams now, or you might decide to modify the environment so that it can be migrated to Oracle Streams.
Before generating the migration script, ensure that all the following conditions are met:
All the replication groups must have the same master site(s).
The master site that generates the migration script must be running Oracle Database 10g or later.
The other master sites that run the script, but do not generate the script, must be running Oracle9i Database Release 2 (9.2) or later.
To generate the migration script, use the DBMS_REPCAT.STREAMS_MIGRATION procedure in the DBMS_REPCAT package. The syntax for this procedure is as follows:
DBMS_REPCAT.STREAMS_MIGRATION (
   gnames          IN   DBMS_UTILITY.NAME_ARRAY,
   file_location   IN   VARCHAR2,
   filename        IN   VARCHAR2);
Parameters for the DBMS_REPCAT.STREAMS_MIGRATION procedure include the following:
gnames: List of replication groups to migrate to Oracle Streams. The replication groups listed must all contain the same master sites. An error is raised if the replication groups have different masters.
file_location: Directory location of the migration script.
filename: Name of the migration script.
This procedure generates a script for setting up an Oracle Streams environment for the given replication groups. The script can be customized and run at each master site.
Figure A-1 shows the Advanced Replication environment that will be migrated to Oracle Streams in this example.
Figure A-1 Advanced Replication Environment to be Migrated to Oracle Streams
This Advanced Replication environment has the following characteristics:
The orc1.example.com database is the master definition site for a three-way master configuration that also includes orc2.example.com and orc3.example.com.
The orc1.example.com database is the master site for the mv1.example.com materialized view site.
The environment replicates changes to the database objects in the hr schema between the three master sites and between the master site and the materialized view site. A single replication group named hr_repg contains the replicated objects.
Conflict resolution is configured for the hr.countries table in the multimaster environment. The latest time stamp conflict resolution method resolves conflicts on this table.
The materialized views at the mv1.example.com site are updatable.
You can configure this Advanced Replication environment by completing the tasks described in the following sections of the Oracle Database Advanced Replication Management API Reference:
Set up the materialized view sites (to set up mv1.example.com only).
Configure time stamp conflict resolution for the hr.countries table.
To generate the migration script for this Advanced Replication environment, complete the following steps:
Complete the following steps to create the Oracle Streams administrator at each master site for the replication groups being migrated to Oracle Streams. For the sample environment described in "Example Advanced Replication Environment to be Migrated to Oracle Streams", complete these steps at orc1.example.com, orc2.example.com, and orc3.example.com:
Connect as an administrative user who can create users, grant privileges, and create tablespaces.
Either create a tablespace for the Oracle Streams administrator or use an existing tablespace. For example, the following statement creates a new tablespace for the Oracle Streams administrator:
CREATE TABLESPACE streams_tbs DATAFILE '/usr/oracle/dbs/streams_tbs.dbf'
  SIZE 25 M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
Create a new user to act as the Oracle Streams administrator or use an existing user. For example, to create a user named strmadmin and specify that this user uses the streams_tbs tablespace, run the following statement:
CREATE USER strmadmin IDENTIFIED BY password
DEFAULT TABLESPACE streams_tbs
QUOTA UNLIMITED ON streams_tbs;
GRANT DBA TO strmadmin;
Grant any additional privileges required by the Oracle Streams administrator at each master site. The necessary privileges depend on your specific Oracle Streams environment.
See Also: "Configuring an Oracle Streams Administrator on All Databases" for information about additional privileges that might be required for an Oracle Streams administrator
The directory specified by the file_location parameter in the DBMS_REPCAT.STREAMS_MIGRATION procedure must be accessible to PL/SQL. If you do not currently have a directory object that is accessible to the Oracle Streams administrator at the master definition site, then connect as the Oracle Streams administrator and create one using the SQL statement CREATE DIRECTORY.
A directory object is similar to an alias for the directory. For example, to create a directory object called MIG2STR_DIR for the /usr/scripts directory on your computer system, run the following statement:
CONNECT strmadmin@orc1.example.com
Enter password: password
CREATE DIRECTORY MIG2STR_DIR AS '/usr/scripts';
See Also: Oracle Database SQL Language Reference for more information about the CREATE DIRECTORY statement
To generate the migration script, run the DBMS_REPCAT.STREAMS_MIGRATION procedure at the master definition site and specify the appropriate parameters. For example, the following procedure generates a script that migrates an Advanced Replication environment with one replication group named hr_repg. The script name is rep2streams.sql, and it is generated into the /usr/scripts directory on the local computer system. This directory is represented by the directory object MIG2STR_DIR.
CONNECT strmadmin@orc1.example.com
Enter password: password
DECLARE
rep_groups DBMS_UTILITY.NAME_ARRAY;
BEGIN
rep_groups(1) := 'HR_REPG';
DBMS_REPCAT.STREAMS_MIGRATION(
gnames => rep_groups,
file_location => 'MIG2STR_DIR',
filename => 'rep2streams.sql');
END;
/
See Also: "Example Advanced Replication to Oracle Streams Migration Script" to view the script generated in this example
After generating the migration script, verify that the script was created by viewing it in the specified directory. If necessary, you can modify it to support the following:
If your environment used the additive, average, priority group, or site priority Advanced Replication conflict resolution methods, then configure user-defined conflict resolution methods in Oracle Streams to resolve conflicts. Oracle Streams does not provide prebuilt conflict resolution methods that are equivalent to these methods.
However, the migration script supports the following conflict resolution methods automatically: overwrite, discard, maximum, and minimum. The script converts an earliest time stamp method to a minimum method automatically, and it converts a latest time stamp method to a maximum method automatically. If you use a time stamp conflict resolution method, then the script assumes that any triggers necessary to populate the time stamp column in a table already exist.
Unique conflict resolution.
Delete conflict resolution.
Multiple conflict resolution methods to be executed in a specified order when a conflict occurs. Oracle Streams allows only one conflict resolution method to be specified for each column list.
Procedural replication.
Replication of data definition language (DDL) changes for nontable objects, including the following:
Functions
Indexes
Indextypes
Operators
Packages
Package bodies
Procedures
Synonyms
Triggers
Types
Type bodies
Views
Because changes to these objects were being replicated by Advanced Replication at all sites, the migration script does not need to take any action to migrate these objects. You can add DDL rules to the Oracle Streams environment to support the future modification and creation of these types of objects.
For example, to specify that a capture process named streams_capture at the orc1.example.com database captures DDL changes to all of the database objects in the hr schema, add the following to the script:
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name        => 'hr',
    streams_type       => 'capture',
    streams_name       => 'streams_capture',
    queue_name         => 'strmadmin.streams_queue',
    include_dml        => FALSE,
    include_ddl        => TRUE,
    include_tagged_lcr => FALSE,
    source_database    => 'orc1.example.com');
END;
/
Notice that the include_ddl parameter is set to TRUE. By setting this parameter to TRUE, this procedure adds a schema rule for DDL changes to the hr schema to the rule set for the capture process. This rule instructs the capture process to capture DDL changes to the hr schema and its objects. For the DDL changes to be replicated, you must add similar rules to the appropriate propagations and apply processes.
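For example, to propagate those DDL changes from orc1.example.com to orc2.example.com, you could add a schema propagation rule such as the following. This is a sketch that assumes the queue and propagation names used elsewhere in this appendix (strmadmin.streams_queue and prop_to_orc2); adjust the names to match your environment.

BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name            => 'hr',
    streams_name           => 'prop_to_orc2',
    source_queue_name      => 'strmadmin.streams_queue',
    destination_queue_name => 'strmadmin.streams_queue@orc2.example.com',
    include_dml            => FALSE,
    include_ddl            => TRUE,
    include_tagged_lcr     => FALSE,
    source_database        => 'orc1.example.com');
END;
/

A similar rule added with DBMS_STREAMS_ADM.ADD_SCHEMA_RULES for streams_type 'apply' at each destination completes the path for DDL changes.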
This section explains how to perform the migration from an Advanced Replication environment to an Oracle Streams environment.
This section contains the following topics:
Complete the following steps before executing the migration script:
Set Initialization Parameters That Are Relevant to Oracle Streams
Quiesce Each Replication Group That You Are Migrating to Oracle Streams
At each replication database, set initialization parameters that are relevant to Oracle Streams and restart the database if necessary.
See Also: "Setting Initialization Parameters Relevant to Oracle Streams" for information about initialization parameters that are important to Oracle Streams
Ensure that each master site is running in ARCHIVELOG mode, because a capture process requires ARCHIVELOG mode. In the sample environment, orc1.example.com, orc2.example.com, and orc3.example.com must be running in ARCHIVELOG mode. You can check the log mode for a database by querying the LOG_MODE column in the V$DATABASE dynamic performance view.
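For example, run the following query at each master site; the result should be ARCHIVELOG:

SELECT LOG_MODE FROM V$DATABASE;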
See Also: Oracle Database Administrator's Guide for information about running a database in ARCHIVELOG mode
Create a database link from the Oracle Streams administrator at each master site to the Oracle Streams administrator at the other master sites. For the sample environment described in "Example Advanced Replication Environment to be Migrated to Oracle Streams", create the following database links:
CONNECT strmadmin@orc1.example.com
Enter password: password

CREATE DATABASE LINK orc2.example.com CONNECT TO strmadmin
   IDENTIFIED BY password USING 'orc2.example.com';

CREATE DATABASE LINK orc3.example.com CONNECT TO strmadmin
   IDENTIFIED BY password USING 'orc3.example.com';

CONNECT strmadmin@orc2.example.com
Enter password: password

CREATE DATABASE LINK orc1.example.com CONNECT TO strmadmin
   IDENTIFIED BY password USING 'orc1.example.com';

CREATE DATABASE LINK orc3.example.com CONNECT TO strmadmin
   IDENTIFIED BY password USING 'orc3.example.com';

CONNECT strmadmin@orc3.example.com
Enter password: password

CREATE DATABASE LINK orc1.example.com CONNECT TO strmadmin
   IDENTIFIED BY password USING 'orc1.example.com';

CREATE DATABASE LINK orc2.example.com CONNECT TO strmadmin
   IDENTIFIED BY password USING 'orc2.example.com';
Run the DBMS_REPCAT.SUSPEND_MASTER_ACTIVITY procedure at the master definition site for each replication group that you are migrating to Oracle Streams.
In the sample environment, orc1.example.com is the master definition site, and hr_repg is the replication group being migrated to Oracle Streams. So, connect to orc1.example.com as the replication administrator and run the SUSPEND_MASTER_ACTIVITY procedure:
CONNECT repadmin@orc1.example.com
Enter password: password
BEGIN
DBMS_REPCAT.SUSPEND_MASTER_ACTIVITY (
gname => 'hr_repg');
END;
/
Do not proceed until the master group is quiesced. You can check the status of a master group by querying the STATUS column in the DBA_REPGROUP data dictionary view.
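For example, run the following query at the master definition site, and wait until the HR_REPG group reports QUIESCED before continuing:

SELECT GNAME, STATUS FROM DBA_REPGROUP;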
Perform the following steps to migrate:
Connect as the Oracle Streams Administrator and Run the Script at Each Site
Verify That Oracle Streams Configuration Completed Successfully at All Sites
In the sample environment, connect in SQL*Plus as the Oracle Streams administrator strmadmin at orc1.example.com, orc2.example.com, and orc3.example.com, and execute the migration script rep2streams.sql:
CONNECT strmadmin@orc1.example.com
Enter password: password

SET ECHO ON
SPOOL rep2streams.out
@rep2streams.sql

CONNECT strmadmin@orc2.example.com
Enter password: password

SET ECHO ON
SPOOL rep2streams.out
@rep2streams.sql

CONNECT strmadmin@orc3.example.com
Enter password: password

SET ECHO ON
SPOOL rep2streams.out
@rep2streams.sql
Check the spool file at each site to ensure that there are no errors. If there are errors, then you should modify the script to execute the steps that were not completed successfully, and then rerun the script. In the sample environment, the spool file is rep2streams.out at each master site.
Perform the following steps to complete the migration process:
To drop a replication group that you successfully migrated to Oracle Streams, connect as the replication administrator to the master definition site, and run the DBMS_REPCAT.DROP_MASTER_REPGROUP procedure.
Caution: Ensure that the drop_contents parameter is set to FALSE in the DROP_MASTER_REPGROUP procedure. If it is set to TRUE, then the replicated database objects are dropped.
CONNECT repadmin@orc1.example.com
Enter password: password
BEGIN
DBMS_REPCAT.DROP_MASTER_REPGROUP (
gname => 'hr_repg',
drop_contents => FALSE,
all_sites => TRUE);
END;
/
To ensure that the migrated replication groups are dropped at each database, query the GNAME column in the DBA_REPGROUP data dictionary view. The migrated replication groups should not appear in the query output at any database.
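For example, run the following query at each database:

SELECT GNAME FROM DBA_REPGROUP;

Once the migrated replication groups are dropped at all sites, this query should not return any of them.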
If you no longer need the replication administrator, then you can drop this user also.
Caution: Do not resume any Advanced Replication activity once Oracle Streams is set up.
You can view the names of the apply processes at each site by running the following query while connected as the Oracle Streams administrator:
SELECT APPLY_NAME FROM DBA_APPLY;
When you know the names of the apply processes, you can start each one by running the START_APPLY procedure in the DBMS_APPLY_ADM package while connected as the Oracle Streams administrator. For example, the following procedure starts an apply process named apply_from_orc2 at orc1.example.com:
CONNECT strmadmin@orc1.example.com
Enter password: password
BEGIN
DBMS_APPLY_ADM.START_APPLY(
apply_name => 'apply_from_orc2');
END;
/
Ensure that you start each apply process at every database in the new Oracle Streams environment.
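If a site has several apply processes, you can start any that are still disabled in a single PL/SQL block. This is a convenience sketch, not part of the generated script:

BEGIN
  FOR r IN (SELECT apply_name FROM dba_apply WHERE status = 'DISABLED') LOOP
    DBMS_APPLY_ADM.START_APPLY(apply_name => r.apply_name);
  END LOOP;
END;
/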
You can view the name of the capture process at each site by running the following query while connected as the Oracle Streams administrator:
SELECT CAPTURE_NAME FROM DBA_CAPTURE;
When you know the name of the capture process, you can start each one by running the START_CAPTURE procedure in the DBMS_CAPTURE_ADM package while connected as the Oracle Streams administrator. For example, the following procedure starts a capture process named streams_capture at orc1.example.com:
CONNECT strmadmin@orc1.example.com
Enter password: password
BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(
capture_name => 'streams_capture');
END;
/
Ensure that you start each capture process at every database in the new Oracle Streams environment.
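You can confirm that the capture process at each site is running with a query such as the following; the STATUS column should show ENABLED for each started capture process:

SELECT CAPTURE_NAME, STATUS FROM DBA_CAPTURE;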
If one or more materialized view groups used a master group that you migrated to Oracle Streams, then you must re-create the master group to retain these materialized view groups. Therefore, each database acting as the master site for a materialized view group must become the master definition site for a one-master configuration of a replication group that contains the tables used by the materialized views in the materialized view group.
Use the replication management APIs to create a replication group similar to the original replication group that was migrated to Oracle Streams. That is, the new replication group should have the same replication group name, objects, conflict resolution methods, and key columns. To retain the existing materialized view groups, you must re-create each master group at each master site that contained a master group for a materialized view group, re-create the master replication objects in the master group, regenerate replication support for the master group, and resume replication activity for the master group.
For example, consider the following Advanced Replication environment:
Two master sites, mdb1.example.com and mdb2.example.com, have the replication group rg1. The mdb1.example.com database is the master definition site, and the objects in the rg1 replication group are replicated between mdb1.example.com and mdb2.example.com.
The rg1 replication group at mdb1.example.com is the master group to the mvg1 materialized view group at mv1.example.com.
The rg1 replication group at mdb2.example.com is the master group to the mvg2 materialized view group at mv2.example.com.
If the rg1 replication group is migrated to Oracle Streams at both mdb1.example.com and mdb2.example.com, and you want to retain the materialized view groups mvg1 at mv1.example.com and mvg2 at mv2.example.com, then you must re-create the rg1 replication group at mdb1.example.com and mdb2.example.com after the migration to Oracle Streams. You configure both mdb1.example.com and mdb2.example.com to be the master definition site for the rg1 replication group in a one-master environment.
It is not necessary to drop or re-create materialized view groups at the materialized view sites. If a new master replication group resembles the original replication group, then the materialized view groups are not affected. Do not refresh these materialized view groups until generation of replication support for each master object is complete (Step 3 in the task in this section). Similarly, do not push the deferred transaction queue at any materialized view site with updatable materialized views until generation of replication support for each master object is complete.
For the sample environment described in "Example Advanced Replication Environment to be Migrated to Oracle Streams", only the hr_repg replication group at orc1.example.com was the master group to a materialized view group at mv1.example.com. To retain this materialized view group at mv1.example.com, complete the following steps while connected as the replication administrator:
Create the master group hr_repg at orc1.example.com.
CONNECT repadmin@orc1.example.com
Enter password: password
BEGIN
DBMS_REPCAT.CREATE_MASTER_REPGROUP (
gname => 'hr_repg');
END;
/
Add the tables in the hr schema to the hr_repg master group. These tables are master tables to the materialized views at mv1.example.com.
BEGIN
  DBMS_REPCAT.CREATE_MASTER_REPOBJECT (
    gname               => 'hr_repg',
    type                => 'TABLE',
    oname               => 'countries',
    sname               => 'hr',
    use_existing_object => TRUE,
    copy_rows           => FALSE);
END;
/

BEGIN
  DBMS_REPCAT.CREATE_MASTER_REPOBJECT (
    gname               => 'hr_repg',
    type                => 'TABLE',
    oname               => 'departments',
    sname               => 'hr',
    use_existing_object => TRUE,
    copy_rows           => FALSE);
END;
/

BEGIN
  DBMS_REPCAT.CREATE_MASTER_REPOBJECT (
    gname               => 'hr_repg',
    type                => 'TABLE',
    oname               => 'employees',
    sname               => 'hr',
    use_existing_object => TRUE,
    copy_rows           => FALSE);
END;
/

BEGIN
  DBMS_REPCAT.CREATE_MASTER_REPOBJECT (
    gname               => 'hr_repg',
    type                => 'TABLE',
    oname               => 'jobs',
    sname               => 'hr',
    use_existing_object => TRUE,
    copy_rows           => FALSE);
END;
/

BEGIN
  DBMS_REPCAT.CREATE_MASTER_REPOBJECT (
    gname               => 'hr_repg',
    type                => 'TABLE',
    oname               => 'job_history',
    sname               => 'hr',
    use_existing_object => TRUE,
    copy_rows           => FALSE);
END;
/

BEGIN
  DBMS_REPCAT.CREATE_MASTER_REPOBJECT (
    gname               => 'hr_repg',
    type                => 'TABLE',
    oname               => 'locations',
    sname               => 'hr',
    use_existing_object => TRUE,
    copy_rows           => FALSE);
END;
/

BEGIN
  DBMS_REPCAT.CREATE_MASTER_REPOBJECT (
    gname               => 'hr_repg',
    type                => 'TABLE',
    oname               => 'regions',
    sname               => 'hr',
    use_existing_object => TRUE,
    copy_rows           => FALSE);
END;
/
Generate replication support for each object in the hr_repg master group.
BEGIN
  DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT (
    sname => 'hr',
    oname => 'countries',
    type  => 'TABLE');
END;
/

BEGIN
  DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT (
    sname => 'hr',
    oname => 'departments',
    type  => 'TABLE');
END;
/

BEGIN
  DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT (
    sname => 'hr',
    oname => 'employees',
    type  => 'TABLE');
END;
/

BEGIN
  DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT (
    sname => 'hr',
    oname => 'jobs',
    type  => 'TABLE');
END;
/

BEGIN
  DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT (
    sname => 'hr',
    oname => 'job_history',
    type  => 'TABLE');
END;
/

BEGIN
  DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT (
    sname => 'hr',
    oname => 'locations',
    type  => 'TABLE');
END;
/

BEGIN
  DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT (
    sname => 'hr',
    oname => 'regions',
    type  => 'TABLE');
END;
/
Resume master activity for the hr_repg master group.
BEGIN
  DBMS_REPCAT.RESUME_MASTER_ACTIVITY (
    gname => 'hr_repg');
END;
/
Note: A materialized view log should exist for each table you added to the hr_repg master group, unless you deleted these logs manually after you migrated the replication group to Oracle Streams. If these materialized view logs do not exist, then you must create them.
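To check which materialized view logs exist for the hr tables at the master site, you can query the DBA_MVIEW_LOGS data dictionary view, for example:

SELECT MASTER, LOG_TABLE FROM DBA_MVIEW_LOGS WHERE LOG_OWNER = 'HR';

Each table added to the hr_repg master group should appear in the MASTER column.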
The following is an example script generated for the environment:
----------------------------------------------------------
-- Migration Script Generated on 12-JUN-05 by user STRMADMIN. --
----------------------------------------------------------

----------------------------------------------------------
-- ************** Notes and Assumptions ************** --
--
-- 1. The Oracle Streams Administrator is "strmadmin".
--    The user "strmadmin" must be created and granted the
--    required privileges before running the script.
--
-- 2. Names of queue tables, queues, capture processes
--    propagation jobs, and apply processes will be the
--    same at all sites. If the DBA wants different names,
--    he must edit the script manually before running it
--    at each master site.
--
-- 3. Archive logging must be enabled at all sites before
--    running the script.
--
-- 4. Users must set up database links for queue to queue
--    propagation, if needed.
--
-- 5. Repgroups must be quiesced before running the script.
----------------------------------------------------------

set pagesize 1000
set echo on
set serveroutput on

whenever sqlerror exit sql.sqlcode;

--
-- Raise error if Repgroups are not Quiesced.
--
declare
  repgroup_status VARCHAR2(10);
begin
  select status into repgroup_status from dba_repcat
    where gname = 'HR_REPG';
  if (repgroup_status != 'QUIESCED') THEN
    raise_application_error(-20000,
      'ORA-23310: object group "HR_REPG" is not quiesced.');
  end if;
exception
  when no_data_found then null;
end;
/

-------------------------------
-- Queue Owner
-------------------------------
-- streams queue owner at ORC1.EXAMPLE.COM
define QUEUE_OWNER_ORC1 = strmadmin

-- streams queue owner at ORC2.EXAMPLE.COM
define QUEUE_OWNER_ORC2 = strmadmin

-- streams queue owner at ORC3.EXAMPLE.COM
define QUEUE_OWNER_ORC3 = strmadmin

-------------------------------
-- Queue Table
-------------------------------
-- streams queue table at ORC1.EXAMPLE.COM
define QUEUE_TABLE_ORC1 = streams_queue_table

-- streams queue table at ORC2.EXAMPLE.COM
define QUEUE_TABLE_ORC2 = streams_queue_table

-- streams queue table at ORC3.EXAMPLE.COM
define QUEUE_TABLE_ORC3 = streams_queue_table

-------------------------------
-- Queue
-------------------------------
-- streams queue at ORC1.EXAMPLE.COM
define QUEUE_ORC1 = streams_queue

-- streams queue at ORC2.EXAMPLE.COM
define QUEUE_ORC2 = streams_queue

-- streams queue at ORC3.EXAMPLE.COM
define QUEUE_ORC3 = streams_queue

-------------------------------
-- Propagation names
-------------------------------
-- propagation process to ORC1.EXAMPLE.COM
define PROP_ORC1 = prop_to_ORC1

-- propagation process to ORC2.EXAMPLE.COM
define PROP_ORC2 = prop_to_ORC2

-- propagation process to ORC3.EXAMPLE.COM
define PROP_ORC3 = prop_to_ORC3

-------------------------------
-- Capture Process
-------------------------------
-- capture process to be used or created at the local site
define CAPTURE_NAME = streams_capture

-------------------------------
-- Apply processes
-------------------------------
-- apply process for applying LCRs from ORC1.EXAMPLE.COM
define APPLY_ORC1 = apply_from_ORC1

-- apply process for applying LCRs from ORC2.EXAMPLE.COM
define APPLY_ORC2 = apply_from_ORC2

-- apply process for applying LCRs from ORC3.EXAMPLE.COM
define APPLY_ORC3 = apply_from_ORC3

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- DEPT_LOCATION_IX of type INDEX belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- EMP_DEPARTMENT_IX of type INDEX belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- EMP_JOB_IX of type INDEX belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- EMP_MANAGER_IX of type INDEX belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- INSERT_TIME of type TRIGGER belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- JHIST_DEPARTMENT_IX of type INDEX belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- JHIST_EMPLOYEE_IX of type INDEX belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- JHIST_JOB_IX of type INDEX belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- LOC_COUNTRY_IX of type INDEX belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- DEPT_LOCATION_IX of type INDEX belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- EMP_DEPARTMENT_IX of type INDEX belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- EMP_JOB_IX of type INDEX belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- EMP_MANAGER_IX of type INDEX belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- INSERT_TIME of type TRIGGER belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- JHIST_DEPARTMENT_IX of type INDEX belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- JHIST_EMPLOYEE_IX of type INDEX belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- JHIST_JOB_IX of type INDEX belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- LOC_COUNTRY_IX of type INDEX belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- DEPT_LOCATION_IX of type INDEX belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- EMP_DEPARTMENT_IX of type INDEX belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- EMP_JOB_IX of type INDEX belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- EMP_MANAGER_IX of type INDEX belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- INSERT_TIME of type TRIGGER belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- JHIST_DEPARTMENT_IX of type INDEX belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- JHIST_EMPLOYEE_IX of type INDEX belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- JHIST_JOB_IX of type INDEX belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- LOC_COUNTRY_IX of type INDEX belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

--
-- ** WARNING **
--
-- Oracle Streams does not support the repobject
-- DEPT_LOCATION_IX of type INDEX belonging to repgroup HR_REPG.
-- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- EMP_DEPARTMENT_IX of type INDEX belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- EMP_JOB_IX of type INDEX belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- EMP_MANAGER_IX of type INDEX belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- INSERT_TIME of type TRIGGER belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- JHIST_DEPARTMENT_IX of type INDEX belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- JHIST_EMPLOYEE_IX of type INDEX belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- JHIST_JOB_IX of type INDEX belonging to repgroup HR_REPG. 
-- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- LOC_COUNTRY_IX of type INDEX belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- DEPT_LOCATION_IX of type INDEX belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- EMP_DEPARTMENT_IX of type INDEX belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- EMP_JOB_IX of type INDEX belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- EMP_MANAGER_IX of type INDEX belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- INSERT_TIME of type TRIGGER belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- JHIST_DEPARTMENT_IX of type INDEX belonging to repgroup HR_REPG. 
-- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- JHIST_EMPLOYEE_IX of type INDEX belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- JHIST_JOB_IX of type INDEX belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- LOC_COUNTRY_IX of type INDEX belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- DEPT_LOCATION_IX of type INDEX belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- EMP_DEPARTMENT_IX of type INDEX belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- EMP_JOB_IX of type INDEX belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- EMP_MANAGER_IX of type INDEX belonging to repgroup HR_REPG. 
-- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- INSERT_TIME of type TRIGGER belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- JHIST_DEPARTMENT_IX of type INDEX belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- JHIST_EMPLOYEE_IX of type INDEX belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- JHIST_JOB_IX of type INDEX belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- LOC_COUNTRY_IX of type INDEX belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- DEPT_LOCATION_IX of type INDEX belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- EMP_DEPARTMENT_IX of type INDEX belonging to repgroup HR_REPG. 
-- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- EMP_JOB_IX of type INDEX belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- EMP_MANAGER_IX of type INDEX belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- INSERT_TIME of type TRIGGER belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- JHIST_DEPARTMENT_IX of type INDEX belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- JHIST_EMPLOYEE_IX of type INDEX belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- JHIST_JOB_IX of type INDEX belonging to repgroup HR_REPG. -- The user can add DDL rules to the Oracle Streams environment -- to support creation or any future modifications -- of this type of object. -- -- -- ** WARNING ** -- -- Oracle Streams does not support the repobject -- LOC_COUNTRY_IX of type INDEX belonging to repgroup HR_REPG. 
-- The user can add DDL rules to the Oracle Streams environment
-- to support creation or any future modifications
-- of this type of object.
--

-------------------------------
-- Setup Queue
-------------------------------
variable local_db varchar2(128);
variable local_queue_table varchar2(30);
variable local_queue varchar2(30);
variable local_queue_owner varchar2(30);

-- get the local database name
declare
  global_name varchar2(128);
begin
  select global_name into :local_db from global_name;
  dbms_output.put_line('The local database name is: ' || :local_db);
end;
/

-- get the local queue table and queue name
begin
  if :local_db = 'ORC1.EXAMPLE.COM' then
    :local_queue_table := '&QUEUE_TABLE_ORC1';
    :local_queue := '&QUEUE_ORC1';
    :local_queue_owner := '&QUEUE_OWNER_ORC1';
  elsif :local_db = 'ORC2.EXAMPLE.COM' then
    :local_queue_table := '&QUEUE_TABLE_ORC2';
    :local_queue := '&QUEUE_ORC2';
    :local_queue_owner := '&QUEUE_OWNER_ORC2';
  elsif :local_db = 'ORC3.EXAMPLE.COM' then
    :local_queue_table := '&QUEUE_TABLE_ORC3';
    :local_queue := '&QUEUE_ORC3';
    :local_queue_owner := '&QUEUE_OWNER_ORC3';
  end if;
  dbms_output.put_line('The local queue owner is: ' || :local_queue_owner);
  dbms_output.put_line('The local queue table is: ' || :local_queue_table);
  dbms_output.put_line('The local queue name is: ' || :local_queue);
end;
/

begin
  dbms_streams_adm.set_up_queue(
    queue_table    => :local_queue_table,
    storage_clause => NULL,
    queue_name     => :local_queue,
    queue_user     => :local_queue_owner,
    comment        => 'streams_comment');
end;
/

-------------------------------
-- Set Instantiation SCN
-------------------------------
variable flashback_scn number;

begin
  select dbms_flashback.get_system_change_number into :flashback_scn from dual;
  dbms_output.put_line('local flashback SCN is: ' || :flashback_scn);
end;
/

--
-- Setup instantiation SCN for ORC1.EXAMPLE.COM
--
begin
  --
  -- HR_REPG : Set instantiation SCN for "HR"."COUNTRIES" at
  -- ORC1.EXAMPLE.COM
  --
  if (:local_db != 'ORC1.EXAMPLE.COM') then
    dbms_apply_adm.set_table_instantiation_scn@ORC1.EXAMPLE.COM(
      source_object_name   => '"HR"."COUNTRIES"',
      source_database_name => :local_db,
      instantiation_scn    => :flashback_scn,
      apply_database_link  => NULL);
  end if;
end;
/
begin
  --
  -- HR_REPG : Set instantiation SCN for "HR"."DEPARTMENTS" at
  -- ORC1.EXAMPLE.COM
  --
  if (:local_db != 'ORC1.EXAMPLE.COM') then
    dbms_apply_adm.set_table_instantiation_scn@ORC1.EXAMPLE.COM(
      source_object_name   => '"HR"."DEPARTMENTS"',
      source_database_name => :local_db,
      instantiation_scn    => :flashback_scn,
      apply_database_link  => NULL);
  end if;
end;
/
begin
  --
  -- HR_REPG : Set instantiation SCN for "HR"."EMPLOYEES" at
  -- ORC1.EXAMPLE.COM
  --
  if (:local_db != 'ORC1.EXAMPLE.COM') then
    dbms_apply_adm.set_table_instantiation_scn@ORC1.EXAMPLE.COM(
      source_object_name   => '"HR"."EMPLOYEES"',
      source_database_name => :local_db,
      instantiation_scn    => :flashback_scn,
      apply_database_link  => NULL);
  end if;
end;
/
begin
  --
  -- HR_REPG : Set instantiation SCN for "HR"."JOBS" at
  -- ORC1.EXAMPLE.COM
  --
  if (:local_db != 'ORC1.EXAMPLE.COM') then
    dbms_apply_adm.set_table_instantiation_scn@ORC1.EXAMPLE.COM(
      source_object_name   => '"HR"."JOBS"',
      source_database_name => :local_db,
      instantiation_scn    => :flashback_scn,
      apply_database_link  => NULL);
  end if;
end;
/
begin
  --
  -- HR_REPG : Set instantiation SCN for "HR"."JOB_HISTORY" at
  -- ORC1.EXAMPLE.COM
  --
  if (:local_db != 'ORC1.EXAMPLE.COM') then
    dbms_apply_adm.set_table_instantiation_scn@ORC1.EXAMPLE.COM(
      source_object_name   => '"HR"."JOB_HISTORY"',
      source_database_name => :local_db,
      instantiation_scn    => :flashback_scn,
      apply_database_link  => NULL);
  end if;
end;
/
begin
  --
  -- HR_REPG : Set instantiation SCN for "HR"."LOCATIONS" at
  -- ORC1.EXAMPLE.COM
  --
  if (:local_db != 'ORC1.EXAMPLE.COM') then
    dbms_apply_adm.set_table_instantiation_scn@ORC1.EXAMPLE.COM(
      source_object_name   => '"HR"."LOCATIONS"',
      source_database_name => :local_db,
      instantiation_scn    => :flashback_scn,
      apply_database_link  => NULL);
  end if;
end;
/
begin
  --
  -- HR_REPG : Set instantiation SCN for "HR"."REGIONS" at
  -- ORC1.EXAMPLE.COM
  --
  if (:local_db != 'ORC1.EXAMPLE.COM') then
    dbms_apply_adm.set_table_instantiation_scn@ORC1.EXAMPLE.COM(
      source_object_name   => '"HR"."REGIONS"',
      source_database_name => :local_db,
      instantiation_scn    => :flashback_scn,
      apply_database_link  => NULL);
  end if;
end;
/

--
-- Setup instantiation SCN for ORC2.EXAMPLE.COM
--
begin
  --
  -- HR_REPG : Set instantiation SCN for "HR"."COUNTRIES" at
  -- ORC2.EXAMPLE.COM
  --
  if (:local_db != 'ORC2.EXAMPLE.COM') then
    dbms_apply_adm.set_table_instantiation_scn@ORC2.EXAMPLE.COM(
      source_object_name   => '"HR"."COUNTRIES"',
      source_database_name => :local_db,
      instantiation_scn    => :flashback_scn,
      apply_database_link  => NULL);
  end if;
end;
/
begin
  --
  -- HR_REPG : Set instantiation SCN for "HR"."DEPARTMENTS" at
  -- ORC2.EXAMPLE.COM
  --
  if (:local_db != 'ORC2.EXAMPLE.COM') then
    dbms_apply_adm.set_table_instantiation_scn@ORC2.EXAMPLE.COM(
      source_object_name   => '"HR"."DEPARTMENTS"',
      source_database_name => :local_db,
      instantiation_scn    => :flashback_scn,
      apply_database_link  => NULL);
  end if;
end;
/
begin
  --
  -- HR_REPG : Set instantiation SCN for "HR"."EMPLOYEES" at
  -- ORC2.EXAMPLE.COM
  --
  if (:local_db != 'ORC2.EXAMPLE.COM') then
    dbms_apply_adm.set_table_instantiation_scn@ORC2.EXAMPLE.COM(
      source_object_name   => '"HR"."EMPLOYEES"',
      source_database_name => :local_db,
      instantiation_scn    => :flashback_scn,
      apply_database_link  => NULL);
  end if;
end;
/
begin
  --
  -- HR_REPG : Set instantiation SCN for "HR"."JOBS" at
  -- ORC2.EXAMPLE.COM
  --
  if (:local_db != 'ORC2.EXAMPLE.COM') then
    dbms_apply_adm.set_table_instantiation_scn@ORC2.EXAMPLE.COM(
      source_object_name   => '"HR"."JOBS"',
      source_database_name => :local_db,
      instantiation_scn    => :flashback_scn,
      apply_database_link  => NULL);
  end if;
end;
/
begin
  --
  -- HR_REPG : Set instantiation SCN for "HR"."JOB_HISTORY" at
  -- ORC2.EXAMPLE.COM
  --
  if (:local_db != 'ORC2.EXAMPLE.COM') then
    dbms_apply_adm.set_table_instantiation_scn@ORC2.EXAMPLE.COM(
      source_object_name   => '"HR"."JOB_HISTORY"',
      source_database_name => :local_db,
      instantiation_scn    => :flashback_scn,
      apply_database_link  => NULL);
  end if;
end;
/
begin
  --
  -- HR_REPG : Set instantiation SCN for "HR"."LOCATIONS" at
  -- ORC2.EXAMPLE.COM
  --
  if (:local_db != 'ORC2.EXAMPLE.COM') then
    dbms_apply_adm.set_table_instantiation_scn@ORC2.EXAMPLE.COM(
      source_object_name   => '"HR"."LOCATIONS"',
      source_database_name => :local_db,
      instantiation_scn    => :flashback_scn,
      apply_database_link  => NULL);
  end if;
end;
/
begin
  --
  -- HR_REPG : Set instantiation SCN for "HR"."REGIONS" at
  -- ORC2.EXAMPLE.COM
  --
  if (:local_db != 'ORC2.EXAMPLE.COM') then
    dbms_apply_adm.set_table_instantiation_scn@ORC2.EXAMPLE.COM(
      source_object_name   => '"HR"."REGIONS"',
      source_database_name => :local_db,
      instantiation_scn    => :flashback_scn,
      apply_database_link  => NULL);
  end if;
end;
/

--
-- Setup instantiation SCN for ORC3.EXAMPLE.COM
--
begin
  --
  -- HR_REPG : Set instantiation SCN for "HR"."COUNTRIES" at
  -- ORC3.EXAMPLE.COM
  --
  if (:local_db != 'ORC3.EXAMPLE.COM') then
    dbms_apply_adm.set_table_instantiation_scn@ORC3.EXAMPLE.COM(
      source_object_name   => '"HR"."COUNTRIES"',
      source_database_name => :local_db,
      instantiation_scn    => :flashback_scn,
      apply_database_link  => NULL);
  end if;
end;
/
begin
  --
  -- HR_REPG : Set instantiation SCN for "HR"."DEPARTMENTS" at
  -- ORC3.EXAMPLE.COM
  --
  if (:local_db != 'ORC3.EXAMPLE.COM') then
    dbms_apply_adm.set_table_instantiation_scn@ORC3.EXAMPLE.COM(
      source_object_name   => '"HR"."DEPARTMENTS"',
      source_database_name => :local_db,
      instantiation_scn    => :flashback_scn,
      apply_database_link  => NULL);
  end if;
end;
/
begin
  --
  -- HR_REPG : Set instantiation SCN for "HR"."EMPLOYEES" at
  -- ORC3.EXAMPLE.COM
  --
  if (:local_db != 'ORC3.EXAMPLE.COM') then
    dbms_apply_adm.set_table_instantiation_scn@ORC3.EXAMPLE.COM(
      source_object_name   => '"HR"."EMPLOYEES"',
      source_database_name => :local_db,
      instantiation_scn    => :flashback_scn,
      apply_database_link  => NULL);
  end if;
end;
/
begin
  --
  -- HR_REPG : Set instantiation SCN for "HR"."JOBS" at
  -- ORC3.EXAMPLE.COM
  --
  if (:local_db != 'ORC3.EXAMPLE.COM') then
    dbms_apply_adm.set_table_instantiation_scn@ORC3.EXAMPLE.COM(
      source_object_name   => '"HR"."JOBS"',
      source_database_name => :local_db,
      instantiation_scn    => :flashback_scn,
      apply_database_link  => NULL);
  end if;
end;
/
begin
  --
  -- HR_REPG : Set instantiation SCN for "HR"."JOB_HISTORY" at
  -- ORC3.EXAMPLE.COM
  --
  if (:local_db != 'ORC3.EXAMPLE.COM') then
    dbms_apply_adm.set_table_instantiation_scn@ORC3.EXAMPLE.COM(
      source_object_name   => '"HR"."JOB_HISTORY"',
      source_database_name => :local_db,
      instantiation_scn    => :flashback_scn,
      apply_database_link  => NULL);
  end if;
end;
/
begin
  --
  -- HR_REPG : Set instantiation SCN for "HR"."LOCATIONS" at
  -- ORC3.EXAMPLE.COM
  --
  if (:local_db != 'ORC3.EXAMPLE.COM') then
    dbms_apply_adm.set_table_instantiation_scn@ORC3.EXAMPLE.COM(
      source_object_name   => '"HR"."LOCATIONS"',
      source_database_name => :local_db,
      instantiation_scn    => :flashback_scn,
      apply_database_link  => NULL);
  end if;
end;
/
begin
  --
  -- HR_REPG : Set instantiation SCN for "HR"."REGIONS" at
  -- ORC3.EXAMPLE.COM
  --
  if (:local_db != 'ORC3.EXAMPLE.COM') then
    dbms_apply_adm.set_table_instantiation_scn@ORC3.EXAMPLE.COM(
      source_object_name   => '"HR"."REGIONS"',
      source_database_name => :local_db,
      instantiation_scn    => :flashback_scn,
      apply_database_link  => NULL);
  end if;
end;
/

-------------------------------
-- Setup Propagation
-------------------------------
--
-- Propagation from local queue to ORC1.EXAMPLE.COM
--
begin
  if :local_db != 'ORC1.EXAMPLE.COM' then
    --
    -- HR_REPG: Propagate "COUNTRIES" from local queue to ORC1
    --
    dbms_streams_adm.add_table_propagation_rules(
      table_name => '"HR"."COUNTRIES"',
      streams_name => '&PROP_ORC1',
      source_queue_name => :local_queue_owner || '.' || :local_queue,
      destination_queue_name => '&QUEUE_OWNER_ORC1' || '.'
        || '&QUEUE_ORC1' || '@ORC1.EXAMPLE.COM',
      include_dml => TRUE,
      include_ddl => FALSE,
      include_tagged_lcr => FALSE,
      source_database => :local_db);
  end if;
end;
/
begin
  if :local_db != 'ORC1.EXAMPLE.COM' then
    --
    -- HR_REPG: Propagate "DEPARTMENTS" from local queue to ORC1
    --
    dbms_streams_adm.add_table_propagation_rules(
      table_name => '"HR"."DEPARTMENTS"',
      streams_name => '&PROP_ORC1',
      source_queue_name => :local_queue_owner || '.' || :local_queue,
      destination_queue_name => '&QUEUE_OWNER_ORC1' || '.'
        || '&QUEUE_ORC1' || '@ORC1.EXAMPLE.COM',
      include_dml => TRUE,
      include_ddl => FALSE,
      include_tagged_lcr => FALSE,
      source_database => :local_db);
  end if;
end;
/
begin
  if :local_db != 'ORC1.EXAMPLE.COM' then
    --
    -- HR_REPG: Propagate "EMPLOYEES" from local queue to ORC1
    --
    dbms_streams_adm.add_table_propagation_rules(
      table_name => '"HR"."EMPLOYEES"',
      streams_name => '&PROP_ORC1',
      source_queue_name => :local_queue_owner || '.' || :local_queue,
      destination_queue_name => '&QUEUE_OWNER_ORC1' || '.'
        || '&QUEUE_ORC1' || '@ORC1.EXAMPLE.COM',
      include_dml => TRUE,
      include_ddl => FALSE,
      include_tagged_lcr => FALSE,
      source_database => :local_db);
  end if;
end;
/
begin
  if :local_db != 'ORC1.EXAMPLE.COM' then
    --
    -- HR_REPG: Propagate "JOBS" from local queue to ORC1
    --
    dbms_streams_adm.add_table_propagation_rules(
      table_name => '"HR"."JOBS"',
      streams_name => '&PROP_ORC1',
      source_queue_name => :local_queue_owner || '.' || :local_queue,
      destination_queue_name => '&QUEUE_OWNER_ORC1' || '.'
        || '&QUEUE_ORC1' || '@ORC1.EXAMPLE.COM',
      include_dml => TRUE,
      include_ddl => FALSE,
      include_tagged_lcr => FALSE,
      source_database => :local_db);
  end if;
end;
/
begin
  if :local_db != 'ORC1.EXAMPLE.COM' then
    --
    -- HR_REPG: Propagate "JOB_HISTORY" from local queue to ORC1
    --
    dbms_streams_adm.add_table_propagation_rules(
      table_name => '"HR"."JOB_HISTORY"',
      streams_name => '&PROP_ORC1',
      source_queue_name => :local_queue_owner || '.' || :local_queue,
      destination_queue_name => '&QUEUE_OWNER_ORC1' || '.'
        || '&QUEUE_ORC1' || '@ORC1.EXAMPLE.COM',
      include_dml => TRUE,
      include_ddl => FALSE,
      include_tagged_lcr => FALSE,
      source_database => :local_db);
  end if;
end;
/
begin
  if :local_db != 'ORC1.EXAMPLE.COM' then
    --
    -- HR_REPG: Propagate "LOCATIONS" from local queue to ORC1
    --
    dbms_streams_adm.add_table_propagation_rules(
      table_name => '"HR"."LOCATIONS"',
      streams_name => '&PROP_ORC1',
      source_queue_name => :local_queue_owner || '.' || :local_queue,
      destination_queue_name => '&QUEUE_OWNER_ORC1' || '.'
        || '&QUEUE_ORC1' || '@ORC1.EXAMPLE.COM',
      include_dml => TRUE,
      include_ddl => FALSE,
      include_tagged_lcr => FALSE,
      source_database => :local_db);
  end if;
end;
/
begin
  if :local_db != 'ORC1.EXAMPLE.COM' then
    --
    -- HR_REPG: Propagate "REGIONS" from local queue to ORC1
    --
    dbms_streams_adm.add_table_propagation_rules(
      table_name => '"HR"."REGIONS"',
      streams_name => '&PROP_ORC1',
      source_queue_name => :local_queue_owner || '.' || :local_queue,
      destination_queue_name => '&QUEUE_OWNER_ORC1' || '.'
        || '&QUEUE_ORC1' || '@ORC1.EXAMPLE.COM',
      include_dml => TRUE,
      include_ddl => FALSE,
      include_tagged_lcr => FALSE,
      source_database => :local_db);
  end if;
end;
/

--
-- Propagation from local queue to ORC2.EXAMPLE.COM
--
begin
  if :local_db != 'ORC2.EXAMPLE.COM' then
    --
    -- HR_REPG: Propagate "COUNTRIES" from local queue to ORC2
    --
    dbms_streams_adm.add_table_propagation_rules(
      table_name => '"HR"."COUNTRIES"',
      streams_name => '&PROP_ORC2',
      source_queue_name => :local_queue_owner || '.' || :local_queue,
      destination_queue_name => '&QUEUE_OWNER_ORC2' || '.'
        || '&QUEUE_ORC2' || '@ORC2.EXAMPLE.COM',
      include_dml => TRUE,
      include_ddl => FALSE,
      include_tagged_lcr => FALSE,
      source_database => :local_db);
  end if;
end;
/
begin
  if :local_db != 'ORC2.EXAMPLE.COM' then
    --
    -- HR_REPG: Propagate "DEPARTMENTS" from local queue to ORC2
    --
    dbms_streams_adm.add_table_propagation_rules(
      table_name => '"HR"."DEPARTMENTS"',
      streams_name => '&PROP_ORC2',
      source_queue_name => :local_queue_owner || '.' || :local_queue,
      destination_queue_name => '&QUEUE_OWNER_ORC2' || '.'
        || '&QUEUE_ORC2' || '@ORC2.EXAMPLE.COM',
      include_dml => TRUE,
      include_ddl => FALSE,
      include_tagged_lcr => FALSE,
      source_database => :local_db);
  end if;
end;
/
begin
  if :local_db != 'ORC2.EXAMPLE.COM' then
    --
    -- HR_REPG: Propagate "EMPLOYEES" from local queue to ORC2
    --
    dbms_streams_adm.add_table_propagation_rules(
      table_name => '"HR"."EMPLOYEES"',
      streams_name => '&PROP_ORC2',
      source_queue_name => :local_queue_owner || '.' || :local_queue,
      destination_queue_name => '&QUEUE_OWNER_ORC2' || '.'
        || '&QUEUE_ORC2' || '@ORC2.EXAMPLE.COM',
      include_dml => TRUE,
      include_ddl => FALSE,
      include_tagged_lcr => FALSE,
      source_database => :local_db);
  end if;
end;
/
begin
  if :local_db != 'ORC2.EXAMPLE.COM' then
    --
    -- HR_REPG: Propagate "JOBS" from local queue to ORC2
    --
    dbms_streams_adm.add_table_propagation_rules(
      table_name => '"HR"."JOBS"',
      streams_name => '&PROP_ORC2',
      source_queue_name => :local_queue_owner || '.' || :local_queue,
      destination_queue_name => '&QUEUE_OWNER_ORC2' || '.'
        || '&QUEUE_ORC2' || '@ORC2.EXAMPLE.COM',
      include_dml => TRUE,
      include_ddl => FALSE,
      include_tagged_lcr => FALSE,
      source_database => :local_db);
  end if;
end;
/
begin
  if :local_db != 'ORC2.EXAMPLE.COM' then
    --
    -- HR_REPG: Propagate "JOB_HISTORY" from local queue to ORC2
    --
    dbms_streams_adm.add_table_propagation_rules(
      table_name => '"HR"."JOB_HISTORY"',
      streams_name => '&PROP_ORC2',
      source_queue_name => :local_queue_owner || '.' || :local_queue,
      destination_queue_name => '&QUEUE_OWNER_ORC2' || '.'
        || '&QUEUE_ORC2' || '@ORC2.EXAMPLE.COM',
      include_dml => TRUE,
      include_ddl => FALSE,
      include_tagged_lcr => FALSE,
      source_database => :local_db);
  end if;
end;
/
begin
  if :local_db != 'ORC2.EXAMPLE.COM' then
    --
    -- HR_REPG: Propagate "LOCATIONS" from local queue to ORC2
    --
    dbms_streams_adm.add_table_propagation_rules(
      table_name => '"HR"."LOCATIONS"',
      streams_name => '&PROP_ORC2',
      source_queue_name => :local_queue_owner || '.'
        || :local_queue,
      destination_queue_name => '&QUEUE_OWNER_ORC2' || '.'
        || '&QUEUE_ORC2' || '@ORC2.EXAMPLE.COM',
      include_dml => TRUE,
      include_ddl => FALSE,
      include_tagged_lcr => FALSE,
      source_database => :local_db);
  end if;
end;
/
begin
  if :local_db != 'ORC2.EXAMPLE.COM' then
    --
    -- HR_REPG: Propagate "REGIONS" from local queue to ORC2
    --
    dbms_streams_adm.add_table_propagation_rules(
      table_name => '"HR"."REGIONS"',
      streams_name => '&PROP_ORC2',
      source_queue_name => :local_queue_owner || '.' || :local_queue,
      destination_queue_name => '&QUEUE_OWNER_ORC2' || '.'
        || '&QUEUE_ORC2' || '@ORC2.EXAMPLE.COM',
      include_dml => TRUE,
      include_ddl => FALSE,
      include_tagged_lcr => FALSE,
      source_database => :local_db);
  end if;
end;
/

--
-- Propagation from local queue to ORC3.EXAMPLE.COM
--
begin
  if :local_db != 'ORC3.EXAMPLE.COM' then
    --
    -- HR_REPG: Propagate "COUNTRIES" from local queue to ORC3
    --
    dbms_streams_adm.add_table_propagation_rules(
      table_name => '"HR"."COUNTRIES"',
      streams_name => '&PROP_ORC3',
      source_queue_name => :local_queue_owner || '.' || :local_queue,
      destination_queue_name => '&QUEUE_OWNER_ORC3' || '.'
        || '&QUEUE_ORC3' || '@ORC3.EXAMPLE.COM',
      include_dml => TRUE,
      include_ddl => FALSE,
      include_tagged_lcr => FALSE,
      source_database => :local_db);
  end if;
end;
/
begin
  if :local_db != 'ORC3.EXAMPLE.COM' then
    --
    -- HR_REPG: Propagate "DEPARTMENTS" from local queue to ORC3
    --
    dbms_streams_adm.add_table_propagation_rules(
      table_name => '"HR"."DEPARTMENTS"',
      streams_name => '&PROP_ORC3',
      source_queue_name => :local_queue_owner || '.' || :local_queue,
      destination_queue_name => '&QUEUE_OWNER_ORC3' || '.'
        || '&QUEUE_ORC3' || '@ORC3.EXAMPLE.COM',
      include_dml => TRUE,
      include_ddl => FALSE,
      include_tagged_lcr => FALSE,
      source_database => :local_db);
  end if;
end;
/
begin
  if :local_db != 'ORC3.EXAMPLE.COM' then
    --
    -- HR_REPG: Propagate "EMPLOYEES" from local queue to ORC3
    --
    dbms_streams_adm.add_table_propagation_rules(
      table_name => '"HR"."EMPLOYEES"',
      streams_name => '&PROP_ORC3',
      source_queue_name => :local_queue_owner || '.' || :local_queue,
      destination_queue_name => '&QUEUE_OWNER_ORC3' || '.'
        || '&QUEUE_ORC3' || '@ORC3.EXAMPLE.COM',
      include_dml => TRUE,
      include_ddl => FALSE,
      include_tagged_lcr => FALSE,
      source_database => :local_db);
  end if;
end;
/
begin
  if :local_db != 'ORC3.EXAMPLE.COM' then
    --
    -- HR_REPG: Propagate "JOBS" from local queue to ORC3
    --
    dbms_streams_adm.add_table_propagation_rules(
      table_name => '"HR"."JOBS"',
      streams_name => '&PROP_ORC3',
      source_queue_name => :local_queue_owner || '.' || :local_queue,
      destination_queue_name => '&QUEUE_OWNER_ORC3' || '.'
        || '&QUEUE_ORC3' || '@ORC3.EXAMPLE.COM',
      include_dml => TRUE,
      include_ddl => FALSE,
      include_tagged_lcr => FALSE,
      source_database => :local_db);
  end if;
end;
/
begin
  if :local_db != 'ORC3.EXAMPLE.COM' then
    --
    -- HR_REPG: Propagate "JOB_HISTORY" from local queue to ORC3
    --
    dbms_streams_adm.add_table_propagation_rules(
      table_name => '"HR"."JOB_HISTORY"',
      streams_name => '&PROP_ORC3',
      source_queue_name => :local_queue_owner || '.' || :local_queue,
      destination_queue_name => '&QUEUE_OWNER_ORC3' || '.'
        || '&QUEUE_ORC3' || '@ORC3.EXAMPLE.COM',
      include_dml => TRUE,
      include_ddl => FALSE,
      include_tagged_lcr => FALSE,
      source_database => :local_db);
  end if;
end;
/
begin
  if :local_db != 'ORC3.EXAMPLE.COM' then
    --
    -- HR_REPG: Propagate "LOCATIONS" from local queue to ORC3
    --
    dbms_streams_adm.add_table_propagation_rules(
      table_name => '"HR"."LOCATIONS"',
      streams_name => '&PROP_ORC3',
      source_queue_name => :local_queue_owner || '.' || :local_queue,
      destination_queue_name => '&QUEUE_OWNER_ORC3' || '.'
        || '&QUEUE_ORC3' || '@ORC3.EXAMPLE.COM',
      include_dml => TRUE,
      include_ddl => FALSE,
      include_tagged_lcr => FALSE,
      source_database => :local_db);
  end if;
end;
/
begin
  if :local_db != 'ORC3.EXAMPLE.COM' then
    --
    -- HR_REPG: Propagate "REGIONS" from local queue to ORC3
    --
    dbms_streams_adm.add_table_propagation_rules(
      table_name => '"HR"."REGIONS"',
      streams_name => '&PROP_ORC3',
      source_queue_name => :local_queue_owner || '.' || :local_queue,
      destination_queue_name => '&QUEUE_OWNER_ORC3' || '.'
        || '&QUEUE_ORC3' || '@ORC3.EXAMPLE.COM',
      include_dml => TRUE,
      include_ddl => FALSE,
      include_tagged_lcr => FALSE,
      source_database => :local_db);
  end if;
end;
/

-------------------------------
-- Setup Capture
-------------------------------
begin
  --
  -- HR_REPG : Add "COUNTRIES"
  --
  dbms_streams_adm.add_table_rules(
    table_name => '"HR"."COUNTRIES"',
    streams_type => 'CAPTURE',
    streams_name => '&CAPTURE_NAME',
    queue_name => :local_queue_owner || '.' || :local_queue,
    include_dml => TRUE,
    include_ddl => FALSE,
    include_tagged_lcr => FALSE,
    source_database => :local_db);
end;
/
begin
  --
  -- HR_REPG : Add "DEPARTMENTS"
  --
  dbms_streams_adm.add_table_rules(
    table_name => '"HR"."DEPARTMENTS"',
    streams_type => 'CAPTURE',
    streams_name => '&CAPTURE_NAME',
    queue_name => :local_queue_owner || '.' || :local_queue,
    include_dml => TRUE,
    include_ddl => FALSE,
    include_tagged_lcr => FALSE,
    source_database => :local_db);
end;
/
begin
  --
  -- HR_REPG : Add "EMPLOYEES"
  --
  dbms_streams_adm.add_table_rules(
    table_name => '"HR"."EMPLOYEES"',
    streams_type => 'CAPTURE',
    streams_name => '&CAPTURE_NAME',
    queue_name => :local_queue_owner || '.' || :local_queue,
    include_dml => TRUE,
    include_ddl => FALSE,
    include_tagged_lcr => FALSE,
    source_database => :local_db);
end;
/
begin
  --
  -- HR_REPG : Add "JOBS"
  --
  dbms_streams_adm.add_table_rules(
    table_name => '"HR"."JOBS"',
    streams_type => 'CAPTURE',
    streams_name => '&CAPTURE_NAME',
    queue_name => :local_queue_owner || '.' || :local_queue,
    include_dml => TRUE,
    include_ddl => FALSE,
    include_tagged_lcr => FALSE,
    source_database => :local_db);
end;
/
begin
  --
  -- HR_REPG : Add "JOB_HISTORY"
  --
  dbms_streams_adm.add_table_rules(
    table_name => '"HR"."JOB_HISTORY"',
    streams_type => 'CAPTURE',
    streams_name => '&CAPTURE_NAME',
    queue_name => :local_queue_owner || '.' || :local_queue,
    include_dml => TRUE,
    include_ddl => FALSE,
    include_tagged_lcr => FALSE,
    source_database => :local_db);
end;
/
begin
  --
  -- HR_REPG : Add "LOCATIONS"
  --
  dbms_streams_adm.add_table_rules(
    table_name => '"HR"."LOCATIONS"',
    streams_type => 'CAPTURE',
    streams_name => '&CAPTURE_NAME',
    queue_name => :local_queue_owner || '.' || :local_queue,
    include_dml => TRUE,
    include_ddl => FALSE,
    include_tagged_lcr => FALSE,
    source_database => :local_db);
end;
/
begin
  --
  -- HR_REPG : Add "REGIONS"
  --
  dbms_streams_adm.add_table_rules(
    table_name => '"HR"."REGIONS"',
    streams_type => 'CAPTURE',
    streams_name => '&CAPTURE_NAME',
    queue_name => :local_queue_owner || '.' || :local_queue,
    include_dml => TRUE,
    include_ddl => FALSE,
    include_tagged_lcr => FALSE,
    source_database => :local_db);
end;
/

-------------------------------
-- Setup Apply
-------------------------------
--
-- Setup Apply from ORC1.EXAMPLE.COM
--
begin
  --
  -- HR_REPG : Add "COUNTRIES" to apply rules for apply from
  -- ORC1.EXAMPLE.COM
  --
  if (:local_db != 'ORC1.EXAMPLE.COM') then
    dbms_streams_adm.add_table_rules(
      table_name => '"HR"."COUNTRIES"',
      streams_type => 'APPLY',
      streams_name => '&APPLY_ORC1',
      queue_name => :local_queue_owner || '.' || :local_queue,
      include_dml => TRUE,
      include_ddl => FALSE,
      include_tagged_lcr => FALSE,
      source_database => 'ORC1.EXAMPLE.COM');
  end if;
end;
/
begin
  --
  -- HR_REPG : Add "DEPARTMENTS" to apply rules for apply from
  -- ORC1.EXAMPLE.COM
  --
  if (:local_db != 'ORC1.EXAMPLE.COM') then
    dbms_streams_adm.add_table_rules(
      table_name => '"HR"."DEPARTMENTS"',
      streams_type => 'APPLY',
      streams_name => '&APPLY_ORC1',
      queue_name => :local_queue_owner || '.'
|| :local_queue, include_dml => TRUE, include_ddl => FALSE, include_tagged_lcr => FALSE, source_database => 'ORC1.EXAMPLE.COM'); end if; end; / begin -- -- HR_REPG : Add "EMPLOYEES" to apply rules for apply from -- ORC1.EXAMPLE.COM -- if(:local_db != 'ORC1.EXAMPLE.COM') then dbms_streams_adm.add_table_rules( table_name => '"HR"."EMPLOYEES"', streams_type => 'APPLY', streams_name => '&APPLY_ORC1', queue_name => :local_queue_owner || '.' || :local_queue, include_dml => TRUE, include_ddl => FALSE, include_tagged_lcr => FALSE, source_database => 'ORC1.EXAMPLE.COM'); end if; end; / begin -- -- HR_REPG : Add "JOBS" to apply rules for apply from -- ORC1.EXAMPLE.COM -- if(:local_db != 'ORC1.EXAMPLE.COM') then dbms_streams_adm.add_table_rules( table_name => '"HR"."JOBS"', streams_type => 'APPLY', streams_name => '&APPLY_ORC1', queue_name => :local_queue_owner || '.' || :local_queue, include_dml => TRUE, include_ddl => FALSE, include_tagged_lcr => FALSE, source_database => 'ORC1.EXAMPLE.COM'); end if; end; / begin -- -- HR_REPG : Add "JOB_HISTORY" to apply rules for apply from -- ORC1.EXAMPLE.COM -- if(:local_db != 'ORC1.EXAMPLE.COM') then dbms_streams_adm.add_table_rules( table_name => '"HR"."JOB_HISTORY"', streams_type => 'APPLY', streams_name => '&APPLY_ORC1', queue_name => :local_queue_owner || '.' || :local_queue, include_dml => TRUE, include_ddl => FALSE, include_tagged_lcr => FALSE, source_database => 'ORC1.EXAMPLE.COM'); end if; end; / begin -- -- HR_REPG : Add "LOCATIONS" to apply rules for apply from -- ORC1.EXAMPLE.COM -- if(:local_db != 'ORC1.EXAMPLE.COM') then dbms_streams_adm.add_table_rules( table_name => '"HR"."LOCATIONS"', streams_type => 'APPLY', streams_name => '&APPLY_ORC1', queue_name => :local_queue_owner || '.' 
|| :local_queue, include_dml => TRUE, include_ddl => FALSE, include_tagged_lcr => FALSE, source_database => 'ORC1.EXAMPLE.COM'); end if; end; / begin -- -- HR_REPG : Add "REGIONS" to apply rules for apply from -- >+ORC1.EXAMPLE.COM -- if(:local_db != 'ORC1.EXAMPLE.COM') then dbms_streams_adm.add_table_rules( table_name => '"HR"."REGIONS"', streams_type => 'APPLY', streams_name => '&APPLY_ORC1', queue_name => :local_queue_owner || '.' || :local_queue, include_dml => TRUE, include_ddl => FALSE, include_tagged_lcr => FALSE, source_database => 'ORC1.EXAMPLE.COM'); end if; end; / -- -- Setup Apply from ORC2.EXAMPLE.COM -- begin -- -- HR_REPG : Add "COUNTRIES" to apply rules for apply from -- ORC2.EXAMPLE.COM -- if(:local_db != 'ORC2.EXAMPLE.COM') then dbms_streams_adm.add_table_rules( table_name => '"HR"."COUNTRIES"', streams_type => 'APPLY', streams_name => '&APPLY_ORC2', queue_name => :local_queue_owner || '.' || :local_queue, include_dml => TRUE, include_ddl => FALSE, include_tagged_lcr => FALSE, source_database => 'ORC2.EXAMPLE.COM'); end if; end; / begin -- -- HR_REPG : Add "DEPARTMENTS" to apply rules for apply from -- ORC2.EXAMPLE.COM -- if(:local_db != 'ORC2.EXAMPLE.COM') then dbms_streams_adm.add_table_rules( table_name => '"HR"."DEPARTMENTS"', streams_type => 'APPLY', streams_name => '&APPLY_ORC2', queue_name => :local_queue_owner || '.' || :local_queue, include_dml => TRUE, include_ddl => FALSE, include_tagged_lcr => FALSE, source_database => 'ORC2.EXAMPLE.COM'); end if; end; / begin -- -- HR_REPG : Add "EMPLOYEES" to apply rules for apply from -- ORC2.EXAMPLE.COM -- if(:local_db != 'ORC2.EXAMPLE.COM') then dbms_streams_adm.add_table_rules( table_name => '"HR"."EMPLOYEES"', streams_type => 'APPLY', streams_name => '&APPLY_ORC2', queue_name => :local_queue_owner || '.' 
|| :local_queue, include_dml => TRUE, include_ddl => FALSE, include_tagged_lcr => FALSE, source_database => 'ORC2.EXAMPLE.COM'); end if; end; / begin -- -- HR_REPG : Add "JOBS" to apply rules for apply from -- ORC2.EXAMPLE.COM -- if(:local_db != 'ORC2.EXAMPLE.COM') then dbms_streams_adm.add_table_rules( table_name => '"HR"."JOBS"', streams_type => 'APPLY', streams_name => '&APPLY_ORC2', queue_name => :local_queue_owner || '.' || :local_queue, include_dml => TRUE, include_ddl => FALSE, include_tagged_lcr => FALSE, source_database => 'ORC2.EXAMPLE.COM'); end if; end; / begin -- -- HR_REPG : Add "JOB_HISTORY" to apply rules for apply from -- ORC2.EXAMPLE.COM -- if(:local_db != 'ORC2.EXAMPLE.COM') then dbms_streams_adm.add_table_rules( table_name => '"HR"."JOB_HISTORY"', streams_type => 'APPLY', streams_name => '&APPLY_ORC2', queue_name => :local_queue_owner || '.' || :local_queue, include_dml => TRUE, include_ddl => FALSE, include_tagged_lcr => FALSE, source_database => 'ORC2.EXAMPLE.COM'); end if; end; / begin -- -- HR_REPG : Add "LOCATIONS" to apply rules for apply from -- ORC2.EXAMPLE.COM -- if(:local_db != 'ORC2.EXAMPLE.COM') then dbms_streams_adm.add_table_rules( table_name => '"HR"."LOCATIONS"', streams_type => 'APPLY', streams_name => '&APPLY_ORC2', queue_name => :local_queue_owner || '.' || :local_queue, include_dml => TRUE, include_ddl => FALSE, include_tagged_lcr => FALSE, source_database => 'ORC2.EXAMPLE.COM'); end if; end; / begin -- -- HR_REPG : Add "REGIONS" to apply rules for apply from -- ORC2.EXAMPLE.COM -- if(:local_db != 'ORC2.EXAMPLE.COM') then dbms_streams_adm.add_table_rules( table_name => '"HR"."REGIONS"', streams_type => 'APPLY', streams_name => '&APPLY_ORC2', queue_name => :local_queue_owner || '.' 
|| :local_queue, include_dml => TRUE, include_ddl => FALSE, include_tagged_lcr => FALSE, source_database => 'ORC2.EXAMPLE.COM'); end if; end; / -- -- Setup Apply from ORC3.EXAMPLE.COM -- begin -- -- HR_REPG : Add "COUNTRIES" to apply rules for apply from -- ORC3.EXAMPLE.COM -- if(:local_db != 'ORC3.EXAMPLE.COM') then dbms_streams_adm.add_table_rules( table_name => '"HR"."COUNTRIES"', streams_type => 'APPLY', streams_name => '&APPLY_ORC3', queue_name => :local_queue_owner || '.' || :local_queue, include_dml => TRUE, include_ddl => FALSE, include_tagged_lcr => FALSE, source_database => 'ORC3.EXAMPLE.COM'); end if; end; / begin -- -- HR_REPG : Add "DEPARTMENTS" to apply rules for apply from -- ORC3.EXAMPLE.COM -- if(:local_db != 'ORC3.EXAMPLE.COM') then dbms_streams_adm.add_table_rules( table_name => '"HR"."DEPARTMENTS"', streams_type => 'APPLY', streams_name => '&APPLY_ORC3', queue_name => :local_queue_owner || '.' || :local_queue, include_dml => TRUE, include_ddl => FALSE, include_tagged_lcr => FALSE, source_database => 'ORC3.EXAMPLE.COM'); end if; end; / begin -- -- HR_REPG : Add "EMPLOYEES" to apply rules for apply from -- ORC3.EXAMPLE.COM -- if(:local_db != 'ORC3.EXAMPLE.COM') then dbms_streams_adm.add_table_rules( table_name => '"HR"."EMPLOYEES"', streams_type => 'APPLY', streams_name => '&APPLY_ORC3', queue_name => :local_queue_owner || '.' || :local_queue, include_dml => TRUE, include_ddl => FALSE, include_tagged_lcr => FALSE, source_database => 'ORC3.EXAMPLE.COM'); end if; end; / begin -- -- HR_REPG : Add "JOBS" to apply rules for apply from -- ORC3.EXAMPLE.COM -- if(:local_db != 'ORC3.EXAMPLE.COM') then dbms_streams_adm.add_table_rules( table_name => '"HR"."JOBS"', streams_type => 'APPLY', streams_name => '&APPLY_ORC3', queue_name => :local_queue_owner || '.' 
|| :local_queue, include_dml => TRUE, include_ddl => FALSE, include_tagged_lcr => FALSE, source_database => 'ORC3.EXAMPLE.COM'); end if; end; / begin -- -- HR_REPG : Add "JOB_HISTORY" to apply rules for apply from -- ORC3.EXAMPLE.COM -- if(:local_db != 'ORC3.EXAMPLE.COM') then dbms_streams_adm.add_table_rules( table_name => '"HR"."JOB_HISTORY"', streams_type => 'APPLY', streams_name => '&APPLY_ORC3', queue_name => :local_queue_owner || '.' || :local_queue, include_dml => TRUE, include_ddl => FALSE, include_tagged_lcr => FALSE, source_database => 'ORC3.EXAMPLE.COM'); end if; end; / begin -- -- HR_REPG : Add "LOCATIONS" to apply rules for apply from -- ORC3.EXAMPLE.COM -- if(:local_db != 'ORC3.EXAMPLE.COM') then dbms_streams_adm.add_table_rules( table_name => '"HR"."LOCATIONS"', streams_type => 'APPLY', streams_name => '&APPLY_ORC3', queue_name => :local_queue_owner || '.' || :local_queue, include_dml => TRUE, include_ddl => FALSE, include_tagged_lcr => FALSE, source_database => 'ORC3.EXAMPLE.COM'); end if; end; / begin -- -- HR_REPG : Add "REGIONS" to apply rules for apply from -- ORC3.EXAMPLE.COM -- if(:local_db != 'ORC3.EXAMPLE.COM') then dbms_streams_adm.add_table_rules( table_name => '"HR"."REGIONS"', streams_type => 'APPLY', streams_name => '&APPLY_ORC3', queue_name => :local_queue_owner || '.' || :local_queue, include_dml => TRUE, include_ddl => FALSE, include_tagged_lcr => FALSE, source_database => 'ORC3.EXAMPLE.COM'); end if; end; / ------------------------------- -- Add Supplemental Log Groups ------------------------------- -- -- ** NOTE ** -- -- The primary key columns must be supplementally logged. -- alter database add supplemental log data (primary key) columns; -- -- ** NOTE ** -- -- The unique key columns must be supplementally logged. 
-- alter database add supplemental log data (unique index) columns; -- -- ** NOTE ** -- -- All the columns in a column group that is assigned an Oracle Streams -- supported update conflict handler must be supplementally logged. -- -- Supplementally log columns in column group 'COUNTRIES_TIMESTAMP_CG' -- that is assigned the LATEST TIMESTAMP update conflict resolution method. alter table "HR"."COUNTRIES" add supplemental log group COUNTRIES_LogGrp1 ( "COUNTRY_NAME" ,"REGION_ID" ,"TIMESTAMP" ); ------------------------------- -- Setup Conflict Resolution ------------------------------- -- -- ** WARNING ** -- -- Oracle Streams does not support LATEST TIMESTAMP -- conflict resolution method. -- Changing LATEST TIMESTAMP to MAXIMUM as -- they handle the conflicts in a similar manner. -- declare cols dbms_utility.name_array; begin cols(1) := 'COUNTRY_NAME'; cols(2) := 'REGION_ID'; cols(3) := 'TIMESTAMP'; dbms_apply_adm.set_update_conflict_handler( object_name => 'HR.COUNTRIES', method_name => 'MAXIMUM', resolution_column => 'TIMESTAMP', column_list => cols); end; / ------------------------------- -- Verify Oracle Streams Setup ------------------------------- -- Verify creation of queues select * from dba_queues where name = upper(:local_queue) and owner = upper(:local_queue_owner) and queue_table = upper(:local_queue_table) order by name; -- Verify creation of capture_process select * from dba_capture where capture_name = upper('&CAPTURE_NAME'); -- Verify creation of apply processes select * from dba_apply where apply_name IN ( upper('&APPLY_ORC1'), upper('&APPLY_ORC2'), upper('&APPLY_ORC3') ) order by apply_name; -- Verify propagation processes select * from dba_propagation where propagation_name IN ( upper('&PROP_ORC1'), upper('&PROP_ORC2'), upper('&PROP_ORC3') ) order by propagation_name; -- Verify Oracle Streams rules select * from dba_streams_table_rules where streams_name = upper('&CAPTURE_NAME'); select * from dba_streams_table_rules where streams_name IN ( 
upper('&APPLY_ORC1'), upper('&APPLY_ORC2'), upper('&APPLY_ORC3') ) order by source_database; select * from dba_streams_table_rules where streams_name IN ( upper('&PROP_ORC1'), upper('&PROP_ORC2'), upper('&PROP_ORC3') ) order by source_database; -- Do not resume Repcat activity once Oracle Streams is set up. -- Drop all the repgroups that have been migrated to Oracle Streams. -- Start apply and capture processes at all sites.
This chapter explains the concepts related to Oracle Streams tags.
This chapter contains these topics:
Every redo entry in the redo log has a tag associated with it. The data type of the tag is RAW
. By default, when a user or application generates redo entries, the value of the tag is NULL
for each redo entry, and a NULL
tag consumes no space. The size limit for a tag value is 2000 bytes.
You can configure how tag values are interpreted. For example, you can use a tag to determine whether an LCR contains a change that originated in the local database or at a different database, so that you can avoid change cycling (sending an LCR back to the database where it originated). Tags can be used for other LCR tracking purposes as well. You can also use tags to specify the set of destination databases for each LCR.
You can control the value of the tags generated in the redo log in the following ways:
Use the DBMS_STREAMS.SET_TAG
procedure to specify the value of the redo tags generated in the current session. When a database change is made in the session, the tag becomes part of the redo entry that records the change. Different sessions can have the same tag setting or different tag settings.
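As an illustration, a session can set, inspect, and clear its own tag. This is only a sketch; the tag value '1D' is arbitrary, and DBMS_STREAMS.GET_TAG simply returns the current session tag as a RAW value:

```sql
-- Set a non-NULL tag for the current session; redo entries generated by
-- subsequent changes in this session carry this tag.
BEGIN
  DBMS_STREAMS.SET_TAG(tag => HEXTORAW('1D'));
END;
/

-- Inspect the current session tag.
SELECT DBMS_STREAMS.GET_TAG() FROM DUAL;

-- Reset the session tag to NULL.
BEGIN
  DBMS_STREAMS.SET_TAG(tag => NULL);
END;
/
```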
Use the CREATE_APPLY
or ALTER_APPLY
procedure in the DBMS_APPLY_ADM
package to control the value of the redo tags generated when an apply process runs. All sessions coordinated by the apply process coordinator use this tag setting. By default, redo entries generated by an apply process have a tag value that is the hexadecimal equivalent of '00'
(double zero).
Based on the rules in the rule sets for a capture process, the tag value in the redo entry for a change can determine whether the change is captured. Based on the rules in the rule sets for a synchronous capture, the session tag value for a change can determine whether the change is captured. The tags become part of the LCRs captured by a capture process or synchronous capture.
Similarly, once a tag is part of an LCR, the value of the tag can determine whether a propagation propagates the LCR and whether an apply process applies the LCR. The behavior of a custom rule-based transformation or apply handler can also depend on the value of the tag. In addition, you can set the tag value for an existing LCR using the SET_TAG
member procedure for the LCR in a custom rule-based transformation or an apply handler that uses a PL/SQL procedure. You cannot set a tag value for an existing LCR in a statement DML handler or change handler.
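For instance, a custom rule-based transformation function can stamp a tag on each row LCR it receives. The following is a minimal sketch; the function name strmadmin.tag_lcr and the tag value are illustrative, not part of any Oracle-supplied API:

```sql
-- Hypothetical transformation function: extracts the row LCR from the
-- ANYDATA wrapper, sets its tag, and returns it rewrapped.
CREATE OR REPLACE FUNCTION strmadmin.tag_lcr(in_any IN SYS.ANYDATA)
RETURN SYS.ANYDATA
IS
  lcr SYS.LCR$_ROW_RECORD;
  rc  PLS_INTEGER;
BEGIN
  rc := in_any.GETOBJECT(lcr);   -- extract the row LCR
  lcr.SET_TAG(HEXTORAW('7'));    -- set the tag on the LCR (value is arbitrary)
  RETURN SYS.ANYDATA.CONVERTOBJECT(lcr);
END tag_lcr;
/
```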
When you use a procedure in the DBMS_STREAMS_ADM
package to create rules and set the include_tagged_lcr
parameter to FALSE
, each rule contains a condition that evaluates to TRUE
only if the tag is NULL
. In DML rules, the condition is the following:
:dml.is_null_tag()='Y'
In DDL rules, the condition is the following:
:ddl.is_null_tag()='Y'
Consider a positive rule set with a single rule and assume the rule contains such a condition. In this case, Oracle Streams capture processes, synchronous captures, propagations, and apply processes behave in the following way:
A capture process captures a change only if the tag in the redo log entry for the change is NULL
and the rest of the rule conditions evaluate to TRUE
for the change.
A synchronous capture captures a change only if the tag for the session that makes the change is NULL
and the rest of the rule conditions evaluate to TRUE
for the change.
A propagation propagates an LCR only if the tag in the LCR is NULL
and the rest of the rule conditions evaluate to TRUE
for the LCR.
An apply process applies an LCR only if the tag in the LCR is NULL
and the rest of the rule conditions evaluate to TRUE
for the LCR.
Alternatively, consider a negative rule set with a single rule and assume the rule contains such a condition. In this case, Oracle Streams capture processes, propagations, and apply processes behave in the following way:
A capture process discards a change only if the tag in the redo log entry for the change is NULL
and the rest of the rule conditions evaluate to TRUE
for the change.
A propagation or apply process discards an LCR only if the tag in the LCR is NULL
and the rest of the rule conditions evaluate to TRUE
for the LCR.
In most cases, specify TRUE
for the include_tagged_lcr
parameter if rules are being added to a negative rule set so that changes are discarded regardless of their tag values.
The following procedures in the DBMS_STREAMS_ADM
package create rules that contain one of these conditions by default:
ADD_GLOBAL_PROPAGATION_RULES
ADD_GLOBAL_RULES
ADD_SCHEMA_PROPAGATION_RULES
ADD_SCHEMA_RULES
ADD_SUBSET_PROPAGATION_RULES
ADD_SUBSET_RULES
ADD_TABLE_PROPAGATION_RULES
ADD_TABLE_RULES
If you do not want the rules to contain such a condition, then set the include_tagged_lcr
parameter to TRUE
when you run these procedures. This setting results in no conditions relating to tags in the rules. Therefore, rule evaluation of the database change does not depend on the value of the tag.
For example, consider a table rule that evaluates to TRUE
for all DML changes to the hr.locations
table that originated at the dbs1.example.com
source database.
Assume the ADD_TABLE_RULES
procedure is run to generate this rule:
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'hr.locations',
streams_type => 'capture',
streams_name => 'capture',
queue_name => 'streams_queue',
include_tagged_lcr => FALSE, -- Note parameter setting
source_database => 'dbs1.example.com',
include_dml => TRUE,
include_ddl => FALSE);
END;
/
Notice that the include_tagged_lcr
parameter is set to FALSE
, which is the default. The ADD_TABLE_RULES
procedure generates a rule with a rule condition similar to the following:
(((:dml.get_object_owner() = 'HR' and :dml.get_object_name() = 'LOCATIONS')) and :dml.is_null_tag() = 'Y' and :dml.get_source_database_name() = 'DBS1.EXAMPLE.COM' )
If a capture process uses a positive rule set that contains this rule, then the rule evaluates to FALSE
if the tag for a change in a redo entry is a non-NULL
value, such as '0'
or '1'
. So, if a redo entry contains a row change to the hr.locations
table, then the change is captured only if the tag for the redo entry is NULL
.
However, suppose the include_tagged_lcr
parameter is set to TRUE
when ADD_TABLE_RULES
is run:
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'hr.locations',
streams_type => 'capture',
streams_name => 'capture',
queue_name => 'streams_queue',
include_tagged_lcr => TRUE, -- Note parameter setting
source_database => 'dbs1.example.com',
include_dml => TRUE,
include_ddl => FALSE);
END;
/
In this case, the ADD_TABLE_RULES
procedure generates a rule with a rule condition similar to the following:
(((:dml.get_object_owner() = 'HR' and :dml.get_object_name() = 'LOCATIONS')) and :dml.get_source_database_name() = 'DBS1.EXAMPLE.COM' )
Notice that there is no condition relating to the tag. If a capture process uses a positive rule set that contains this rule, then the rule evaluates to TRUE
if the tag in a redo entry for a DML change to the hr.locations
table is a non-NULL
value, such as '0'
or '1'
. The rule also evaluates to TRUE
if the tag is NULL
. So, if a redo entry contains a DML change to the hr.locations
table, then the change is captured regardless of the value for the tag.
To modify the is_null_tag
condition in an existing system-created rule, use an appropriate procedure in the DBMS_STREAMS_ADM
package to create a rule that is the same as the rule you want to modify, except for the is_null_tag
condition. Next, use the REMOVE_RULE
procedure in the DBMS_STREAMS_ADM
package to remove the old rule from the appropriate rule set. In addition, you can use the and_condition
parameter for the procedures that create rules in the DBMS_STREAMS_ADM
package to add conditions relating to tags to system-created rules.
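For example, a tag-related condition can be appended with the and_condition parameter. In this sketch the tag value '11' is arbitrary; in an and_condition, :lcr stands for the LCR and is converted to :dml or :ddl in the generated rule:

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name         => 'hr.locations',
    streams_type       => 'capture',
    streams_name       => 'capture',
    queue_name         => 'streams_queue',
    include_dml        => TRUE,
    include_ddl        => FALSE,
    include_tagged_lcr => TRUE,
    source_database    => 'dbs1.example.com',
    -- Capture only changes whose redo tag equals the raw value 11.
    and_condition      => ':lcr.get_tag() = HEXTORAW(''11'')');
END;
/
```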
If you created a rule with the DBMS_RULE_ADM
package, then you can add, remove, or modify the is_null_tag
condition in the rule by using the ALTER_RULE
procedure in this package.
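A sketch of such a change follows. The rule name strmadmin.hr_locations_dml is hypothetical; the condition shown replaces the rule's existing condition, so include every clause you want to keep:

```sql
-- Replace the condition of a user-created rule so that it also requires
-- a NULL tag in the redo entry.
BEGIN
  DBMS_RULE_ADM.ALTER_RULE(
    rule_name => 'strmadmin.hr_locations_dml',
    condition => ':dml.get_object_owner() = ''HR'' AND ' ||
                 ':dml.get_object_name() = ''LOCATIONS'' AND ' ||
                 ':dml.is_null_tag() = ''Y''');
END;
/
```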
If you are using global rules to capture and apply DDL changes for an entire database, then online backup statements will be captured, propagated, and applied by default. Typically, database administrators do not want to replicate online backup statements. Instead, they only want them to run at the database where they are executed originally. An online backup statement uses the BEGIN
BACKUP
and END
BACKUP
clauses in an ALTER
TABLESPACE
or ALTER
DATABASE
statement.
To avoid replicating online backup statements, you can use one of the following strategies:
Include one or more calls to the DBMS_STREAMS.SET_TAG
procedure in your online backup procedures, and set the session tag to a value that will cause the online backup statements to be ignored by a capture process.
Use a DDL handler for an apply process to avoid applying the online backup statements.
Note: If you use Recovery Manager (RMAN) to perform an online backup, then the online backup statements are not used, and there is no need to set Oracle Streams tags for backups.
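The first strategy can be sketched as follows. The tablespace name and tag value are illustrative; the point is that the session tag is non-NULL while the backup statements run, so rules created with include_tagged_lcr set to FALSE do not capture them:

```sql
-- Set a non-NULL session tag before starting the online backup.
BEGIN
  DBMS_STREAMS.SET_TAG(tag => HEXTORAW('11'));
END;
/

ALTER TABLESPACE users BEGIN BACKUP;
-- ... copy the data files with an operating system utility ...
ALTER TABLESPACE users END BACKUP;

-- Restore the NULL tag for the session.
BEGIN
  DBMS_STREAMS.SET_TAG(tag => NULL);
END;
/
```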
An apply process generates entries in the redo log of a destination database when it applies DML or DDL changes. For example, if the apply process applies a change that updates a row in a table, then that change is recorded in the redo log at the destination database. You can control the tags in these redo entries by setting the apply_tag
parameter in the CREATE_APPLY
or ALTER_APPLY
procedure in the DBMS_APPLY_ADM
package. For example, an apply process can generate redo tags that are equivalent to the hexadecimal value of '0'
(zero) or '1'
.
The default tag value generated in the redo log by an apply process is '00'
(double zero). This value is the default tag value for an apply process if you use a procedure in the DBMS_STREAMS_ADM
package or the CREATE_APPLY
procedure in the DBMS_APPLY_ADM
package to create the apply process. There is nothing special about this value beyond the fact that it is a non-NULL
value. The fact that it is a non-NULL
value is important because rules created by the DBMS_STREAMS_ADM
package by default contain a condition that evaluates to TRUE
only if the tag is NULL
in a redo entry or an LCR. You can alter the tag value for an existing apply process using the ALTER_APPLY
procedure in the DBMS_APPLY_ADM
package.
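For example, the following sketch changes the redo tag generated by an existing apply process. The apply process name strm01_apply and the tag value are illustrative:

```sql
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'strm01_apply',
    apply_tag  => HEXTORAW('1'));
END;
/
```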
Redo entries generated by an apply handler for an apply process have the tag value of the apply process, unless the handler sets the tag to a different value using the SET_TAG
procedure. If a procedure DML handler, DDL handler, or message handler calls the SET_TAG
procedure in the DBMS_STREAMS
package, then any subsequent redo entries generated by the handler will include the tag specified in the SET_TAG
call, even if the tag for the apply process is different. When the handler exits, any subsequent redo entries generated by the apply process have the tag specified for the apply process.
In an Oracle Streams environment that includes multiple databases sharing data bidirectionally, you can use tags to avoid change cycling. Change cycling means sending a change back to the database where it originated. Typically, change cycling should be avoided because it can result in each change going through endless loops back to the database where it originated. Such loops can result in unintended data in the database and tax the networking and computer resources of an environment. By default, Oracle Streams is designed to avoid change cycling.
Using tags and appropriate rules for Oracle Streams capture processes, synchronous captures, propagations, and apply processes, you can avoid such change cycles. This section describes common Oracle Streams environments and how you can use tags and rules to avoid change cycling in these environments.
This section contains these topics:
An n-way replication environment is one in which each database is a source database for every other database, and each database is a destination database of every other database. Each database communicates directly with every other database.
For example, consider an environment that replicates the database objects and data in the hrmult
schema between three Oracle databases: mult1.example.com
, mult2.example.com
, and mult3.example.com
. DML and DDL changes made to tables in the hrmult
schema are captured at all three databases in the environment and propagated to each of the other databases in the environment, where changes are applied. Figure 10-1 illustrates a sample n-way replication environment.
Figure 10-1 Each Database Is a Source and Destination Database
You can avoid change cycles by configuring such an environment in the following way:
Configure one apply process at each database to generate non-NULL
redo tags for changes from each source database. If you use a procedure in the DBMS_STREAMS_ADM
package to create an apply process, then the apply process generates non-NULL
tags with a value of '00'
in the redo log by default. In this case, no further action is required for the apply process to generate non-NULL
tags.
If you use the CREATE_APPLY
procedure in the DBMS_APPLY_ADM
package to create an apply process, then do not set the apply_tag
parameter. Again, the apply process generates non-NULL
tags with a value of '00'
in the redo log by default, and no further action is required.
Configure the capture process at each database to capture changes only if the tag in the redo entry for the change is NULL
. You do this by ensuring that each DML rule in the positive rule set used by the capture process has the following condition:
:dml.is_null_tag()='Y'
Each DDL rule should have the following condition:
:ddl.is_null_tag()='Y'
These rule conditions indicate that the capture process captures a change only if the tag for the change is NULL
. If you use the DBMS_STREAMS_ADM
package to generate rules, then each rule has such a condition by default.
This configuration prevents change cycling because all of the changes applied by the apply processes are never recaptured (they were captured originally at the source databases). Each database sends all of its changes to the hrmult
schema to every other database. So, in this environment, no changes are lost, and all databases are synchronized. Figure 10-2 illustrates how tags can be used in a database in an n-way replication environment.
Figure 10-2 Tag Use When Each Database Is a Source and Destination Database
A hub-and-spoke replication environment is one in which a primary database, or hub, communicates with secondary databases, or spokes. The spokes do not communicate directly with each other. In a hub-and-spoke replication environment, the spokes might or might not allow changes to the replicated database objects.
If the spokes do not allow changes to the replicated database objects, then the primary database captures local changes to the shared data and propagates these changes to all secondary databases, where these changes are applied at each secondary database locally. Change cycling is not possible when none of the secondary databases allow changes to the replicated database objects because changes to the replicated database objects are captured in only one location.
If the spokes allow changes to the replicated database objects, then changes are captured, propagated, and applied in the following way:
The primary database captures local changes to the shared data and propagates these changes to all secondary databases, where these changes are applied at each secondary database locally.
Each secondary database captures local changes to the shared data and propagates these changes to the primary database only, where these changes are applied at the primary database locally.
The primary database applies changes from each secondary database locally. Next, these changes are captured at the primary database and propagated to all secondary databases, except for the one at which the change originated. Each secondary database applies the changes from the other secondary databases locally, after they have gone through the primary database. This configuration is an example of apply forwarding.
An alternate scenario might use queue forwarding. If this environment used queue forwarding, then changes from secondary databases that are applied at the primary database are not captured at the primary database. Instead, these changes are forwarded from the queue at the primary database to all secondary databases, except for the one at which the change originated.
See Also: Oracle Streams Concepts and Administration for more information about apply forwarding and queue forwarding
For example, consider an environment that replicates the database objects and data in the hr
schema between one primary database named ps1.example.com
and three secondary databases named ps2.example.com
, ps3.example.com
, and ps4.example.com
. DML and DDL changes made to tables in the hr
schema are captured at the primary database and at the three secondary databases in the environment. Next, these changes are propagated and applied as described previously. The environment uses apply forwarding, not queue forwarding, to share data between the secondary databases through the primary database. Figure 10-3 illustrates a sample environment that has one primary database and multiple secondary databases.
Figure 10-3 Primary Database Sharing Data with Several Secondary Databases
You can avoid change cycles by configuring the environment in the following way:
Configure each apply process at the primary database ps1.example.com
to generate non-NULL
redo tags that indicate the site from which it is receiving changes. In this environment, the primary database has at least one apply process for each secondary database from which it receives changes. For example, if an apply process at the primary database receives changes from the ps2.example.com
secondary database, then this apply process can generate a raw value that is equivalent to the hexadecimal value '2'
for all changes it applies. You do this by setting the apply_tag
parameter in the CREATE_APPLY
or ALTER_APPLY
procedure in the DBMS_APPLY_ADM
package to the non-NULL
value.
For example, run the following procedure to create an apply process that generates redo entries with tags that are equivalent to the hexadecimal value '2'
:
BEGIN
  DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name     => 'strmadmin.streams_queue',
    apply_name     => 'apply_ps2',
    rule_set_name  => 'strmadmin.apply_rules_ps2',
    apply_tag      => HEXTORAW('2'),
    apply_captured => TRUE);
END;
/
Configure the apply process at each secondary database to generate non-NULL
redo tags. The exact value of the tags is irrelevant if it is non-NULL
. In this environment, each secondary database has one apply process that applies changes from the primary database.
If you use a procedure in the DBMS_STREAMS_ADM
package to create an apply process, then the apply process generates non-NULL
tags with a value of '00'
in the redo log by default. In this case, no further action is required for the apply process to generate non-NULL
tags.
For example, assuming no apply processes exist at the secondary databases, run the ADD_SCHEMA_RULES
procedure in the DBMS_STREAMS_ADM
package at each secondary database to create an apply process that generates non-NULL
redo entries with tags that are equivalent to the hexadecimal value '00'
:
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name     => 'hr',
    streams_type    => 'apply',
    streams_name    => 'apply',
    queue_name      => 'strmadmin.streams_queue',
    include_dml     => TRUE,
    include_ddl     => TRUE,
    source_database => 'ps1.example.com',
    inclusion_rule  => TRUE);
END;
/
Configure the capture process at the primary database to capture changes to the shared data regardless of the tags. You do this by setting the include_tagged_lcr
parameter to TRUE
when you run one of the procedures that generate capture process rules in the DBMS_STREAMS_ADM
package. If you use the DBMS_RULE_ADM
package to create rules for the capture process at the primary database, then ensure that the rules do not contain is_null_tag
conditions, because these conditions involve tags in the redo log.
For example, run the following procedure at the primary database to produce one DML capture process rule and one DDL capture process rule that each have a condition that evaluates to TRUE
for changes in the hr
schema, regardless of the tag for the change:
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'hr',
streams_type => 'capture',
streams_name => 'capture',
queue_name => 'strmadmin.streams_queue',
include_tagged_lcr => TRUE, -- Note parameter setting
include_dml => TRUE,
include_ddl => TRUE,
inclusion_rule => TRUE);
END;
/
Configure the capture process at each secondary database to capture changes only if the tag in the redo entry for the change is NULL
. You do this by ensuring that each DML rule in the positive rule set used by the capture process at the secondary database has the following condition:
:dml.is_null_tag()='Y'
DDL rules should have the following condition:
:ddl.is_null_tag()='Y'
These rules indicate that the capture process captures a change only if the tag for the change is NULL
. If you use the DBMS_STREAMS_ADM
package to generate rules, then each rule has one of these conditions by default. If you use the DBMS_RULE_ADM
package to create rules for the capture process at a secondary database, then ensure that each rule contains one of these conditions.
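For example, the following is a sketch of creating such a DML rule manually with the CREATE_RULE procedure in the DBMS_RULE_ADM package; the rule name and the object owner and name are hypothetical:

```sql
BEGIN
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name => 'strmadmin.hr_jobs_dml',   -- hypothetical rule name
    condition => ':dml.get_object_owner() = ''HR''  AND ' ||
                 ':dml.get_object_name()  = ''JOBS'' AND ' ||
                 ':dml.is_null_tag()      = ''Y''');  -- capture only untagged changes
END;
/
```

You would then add the rule to the positive rule set of the capture process, for example with the ADD_RULE procedure in the DBMS_RULE_ADM package.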
Configure one propagation from the queue at the primary database to the queue at each secondary database. Each propagation should use a positive rule set with rules that instruct the propagation to propagate all LCRs in the queue at the primary database to the queue at the secondary database, except for changes that originated at the secondary database.
For example, if a propagation propagates changes to the secondary database ps2.example.com
, whose tags are equivalent to the hexadecimal value '2'
, then the rules for the propagation should propagate all LCRs relating to the hr
schema to the secondary database, except for LCRs with a tag of '2'
. For row LCRs, such rules should include the following condition:
:dml.get_tag() IS NULL OR :dml.get_tag()!=HEXTORAW('2')
For DDL LCRs, such rules should include the following condition:
:ddl.get_tag() IS NULL OR :ddl.get_tag()!=HEXTORAW('2')
Alternatively, you can add rules to the negative rule set for the propagation so that the propagation discards LCRs with the tag value. For row LCRs, such rules should include the following condition:
:dml.get_tag()=HEXTORAW('2')
For DDL LCRs, such rules should include the following condition:
:ddl.get_tag()=HEXTORAW('2')
You can use the and_condition
parameter in a procedure in the DBMS_STREAMS_ADM
package to add these conditions to system-created rules, or you can use the CREATE_RULE
procedure in the DBMS_RULE_ADM
package to create rules with these conditions. When you specify the condition in the and_condition parameter, specify :lcr
instead of :dml
or :ddl
. See Oracle Streams Concepts and Administration for more information about the and_condition
parameter.
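For example, the following sketch uses the and_condition parameter of the ADD_SCHEMA_PROPAGATION_RULES procedure in the DBMS_STREAMS_ADM package to create such a propagation from the primary database to the ps2.example.com secondary database; the propagation name and queue names are illustrative:

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name            => 'hr',
    streams_name           => 'prop_ps1_to_ps2',  -- hypothetical propagation name
    source_queue_name      => 'strmadmin.streams_queue',
    destination_queue_name => 'strmadmin.streams_queue@ps2.example.com',
    include_dml            => TRUE,
    include_ddl            => TRUE,
    source_database        => 'ps1.example.com',
    inclusion_rule         => TRUE,
    and_condition          => ':lcr.get_tag() IS NULL OR ' ||
                              ':lcr.get_tag() != HEXTORAW(''2'')');
END;
/
```

The and_condition is combined with the system-created rule conditions, so LCRs tagged '2' (changes that originated at ps2.example.com) are never propagated back to that database.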
Configure one propagation from the queue at each secondary database to the queue at the primary database. A queue at one of the secondary databases contains only local changes made by user sessions and applications at the secondary database, not changes made by an apply process. Therefore, no further configuration is necessary for these propagations.
This configuration prevents change cycling in the following way:
Changes that originated at a secondary database are never propagated back to that secondary database.
Changes that originated at the primary database are never propagated back to the primary database.
All changes made to the shared data at any database in the environment are propagated to every other database in the environment.
So, in this environment, no changes are lost, and all databases are synchronized.
Figure 10-4 illustrates how tags are used at the primary database ps1.example.com
.
Figure 10-4 Tags Used at the Primary Database
Figure 10-5 illustrates how tags are used at one of the secondary databases (ps2.example.com
).
Figure 10-5 Tags Used at a Secondary Database
See Also: Oracle Database 2 Day + Data Replication and Integration Guide for more information about hub-and-spoke replication environments and for examples that configure such environments
In this environment, one primary database shares data with several secondary databases, but the secondary databases have other secondary databases connected to them, which will be called remote secondary databases. This environment is an extension of the environment described in "Hub-and-Spoke Replication Environments".
If a remote secondary database allows changes to the replicated database objects, then the remote secondary database does not share data directly with the primary database. Instead, it shares data indirectly with the primary database through a secondary database. So, the shared data exists at the primary database, at each secondary database, and at each remote secondary database. Changes made at any of these databases can be captured and propagated to all of the other databases. Figure 10-6 illustrates an environment with one primary database and multiple extended secondary databases.
Figure 10-6 Primary Database and Several Extended Secondary Databases
In such an environment, you can avoid change cycling in the following way:
Configure the primary database in the same way that it is configured in the example described in "Hub-and-Spoke Replication Environments".
Configure each remote secondary database similar to the way that each secondary database is configured in the example described in "Hub-and-Spoke Replication Environments". The only difference is that the remote secondary databases share data directly with secondary databases, not the primary database.
At each secondary database, configure one apply process to apply changes from the primary database with a redo tag value that is equivalent to the hexadecimal value '00'
. This value is the default tag value for an apply process.
At each secondary database, configure one apply process to apply changes from each of its remote secondary databases with a redo tag value that is unique for the remote secondary database.
Configure the capture process at each secondary database to capture all changes to the shared data in the redo log, regardless of the tag value for the changes.
Configure one propagation from the queue at each secondary database to the queue at the primary database. The propagation should use a positive rule set with rules that instruct the propagation to propagate all LCRs in the queue at the secondary database to the queue at the primary database, except for changes that originated at the primary database. You do this by adding a condition to the rules that evaluates to TRUE
only if the tag in the LCR does not equal '00'
. For example, enter a condition similar to the following for row LCRs:
:dml.get_tag() IS NULL OR :dml.get_tag()!=HEXTORAW('00')
You can use the and_condition
parameter in a procedure in the DBMS_STREAMS_ADM
package to add this condition to system-created rules, or you can use the CREATE_RULE
procedure in the DBMS_RULE_ADM
package to create rules with this condition. When you specify the condition in the and_condition parameter, specify :lcr
instead of :dml
or :ddl
. See Oracle Streams Concepts and Administration for more information about the and_condition
parameter.
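For example, the following sketch, run at a secondary database, uses the and_condition parameter to create such a propagation to the primary database; the propagation name and queue names are illustrative:

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name            => 'hr',
    streams_name           => 'prop_ps2_to_ps1',  -- hypothetical propagation name
    source_queue_name      => 'strmadmin.streams_queue',
    destination_queue_name => 'strmadmin.streams_queue@ps1.example.com',
    include_dml            => TRUE,
    include_ddl            => TRUE,
    inclusion_rule         => TRUE,
    and_condition          => ':lcr.get_tag() IS NULL OR ' ||
                              ':lcr.get_tag() != HEXTORAW(''00'')');
END;
/
```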
Configure one propagation from the queue at each secondary database to the queue at each remote secondary database. Each propagation should use a positive rule set with rules that instruct the propagation to propagate all LCRs in the queue at the secondary database to the queue at the remote secondary database, except for changes that originated at the remote secondary database. You do this by adding a condition to the rules that evaluates to TRUE
only if the tag in the LCR does not equal the tag value for the remote secondary database.
For example, if the tag value of a remote secondary database is equivalent to the hexadecimal value '19'
, then enter a condition similar to the following for row LCRs:
:dml.get_tag() IS NULL OR :dml.get_tag()!=HEXTORAW('19')
You can use the and_condition
parameter in a procedure in the DBMS_STREAMS_ADM
package to add this condition to system-created rules, or you can use the CREATE_RULE
procedure in the DBMS_RULE_ADM
package to create rules with this condition. When you specify the condition in the and_condition parameter, specify :lcr
instead of :dml
or :ddl
. See Oracle Streams Concepts and Administration for more information about the and_condition
parameter.
By configuring the environment in this way, you prevent change cycling, and no changes originating at any database are lost.
See Also: Oracle Database 2 Day + Data Replication and Integration Guide for more information about hub-and-spoke replication environments and for examples that configure such environments
You can set or get the value of the tags generated by the current session or by an apply process. The following sections describe how to set and get tag values.
This section contains instructions for setting and getting the tag for the current session.
You can set the tag for all redo entries generated by the current session using the SET_TAG
procedure in the DBMS_STREAMS
package. For example, to set the tag to the hexadecimal value of '1D'
in the current session, run the following procedure:
BEGIN
  DBMS_STREAMS.SET_TAG(
    tag => HEXTORAW('1D'));
END;
/
After running this procedure, each redo entry generated by DML or DDL statements in the current session will have a tag value of 1D
. Running this procedure affects only the current session.
The following are considerations for the SET_TAG
procedure:
This procedure is not transactional. That is, the effects of SET_TAG
cannot be rolled back.
If the SET_TAG
procedure is run to set a non-NULL
session tag before a data dictionary build has been performed on the database, then the redo entries for a transaction that started before the dictionary build might not include the specified tag value for the session. Therefore, perform a data dictionary build before using the SET_TAG
procedure in a session. A data dictionary build happens when the DBMS_CAPTURE_ADM.BUILD
procedure is run. The BUILD
procedure can be run automatically when a capture process is created.
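For example, the following sketch runs the BUILD procedure explicitly and displays the resulting first SCN; the value returned depends on the database:

```sql
SET SERVEROUTPUT ON

DECLARE
  scn NUMBER;
BEGIN
  DBMS_CAPTURE_ADM.BUILD(first_scn => scn);  -- extracts the data dictionary to the redo log
  DBMS_OUTPUT.PUT_LINE('First SCN: ' || scn);
END;
/
```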
You can get the tag for all redo entries generated by the current session using the GET_TAG
procedure in the DBMS_STREAMS
package. For example, to get the hexadecimal value of the tags generated in the redo entries for the current session, run the following procedure:
SET SERVEROUTPUT ON

DECLARE
  raw_tag RAW(2048);
BEGIN
  raw_tag := DBMS_STREAMS.GET_TAG();
  DBMS_OUTPUT.PUT_LINE('Tag Value = ' || RAWTOHEX(raw_tag));
END;
/
You can also display the tag value for the current session by querying the DUAL
table:
SELECT DBMS_STREAMS.GET_TAG FROM DUAL;
This section contains instructions for setting and removing the tag for an apply process.
An apply process generates redo entries when it applies changes to a database or invokes handlers. You can set the default tag for all redo entries generated by an apply process when you create the apply process using the CREATE_APPLY
procedure in the DBMS_APPLY_ADM
package, or when you alter an existing apply process using the ALTER_APPLY
procedure in the DBMS_APPLY_ADM
package. In both of these procedures, set the apply_tag
parameter to the value you want to specify for the tags generated by the apply process.
For example, to set the value of the tags generated in the redo log by an existing apply process named strep01_apply
to the hexadecimal value of '7'
, run the following procedure:
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'strep01_apply',
    apply_tag  => HEXTORAW('7'));
END;
/
After running this procedure, each redo entry generated by the apply process will have a tag value of 7
.
You remove the apply tag for an apply process by setting the remove_apply_tag
parameter to TRUE
in the ALTER_APPLY
procedure in the DBMS_APPLY_ADM
package. Removing the apply tag means that each redo entry generated by the apply process has a NULL
tag. For example, the following procedure removes the apply tag from an apply process named strep01_apply
.
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name       => 'strep01_apply',
    remove_apply_tag => TRUE);
END;
/
The following sections contain queries that you can run to display the Oracle Streams tag for the current session and the default tag for each apply process:
You can display the tag value generated in all redo entries for the current session by querying the DUAL
table:
SELECT DBMS_STREAMS.GET_TAG FROM DUAL;
Your output looks similar to the following:
GET_TAG
--------------------------------------------------------------------------------
1D
You can also determine the tag for a session by calling the DBMS_STREAMS.GET_TAG
function.
You can get the default tag for all redo entries generated by each apply process by querying for the APPLY_TAG
value in the DBA_APPLY
data dictionary view. For example, to get the hexadecimal value of the default tag generated in the redo entries by each apply process, run the following query:
COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A30
COLUMN APPLY_TAG  HEADING 'Tag Value' FORMAT A30

SELECT APPLY_NAME, APPLY_TAG FROM DBA_APPLY;
Your output looks similar to the following:
Apply Process Name             Tag Value
------------------------------ ------------------------------
APPLY_FROM_MULT2               00
APPLY_FROM_MULT3               00
A handler or custom rule-based transformation function associated with an apply process can get the tag by calling the DBMS_STREAMS.GET_TAG
function.
This chapter describes flexible methods for configuring Oracle Streams replication between two or more databases. This chapter includes step-by-step instructions for configuring each Oracle Streams component to build a single-source or multiple-source replication environment.
One common type of single-source replication environment is a hub-and-spoke replication environment that does not allow changes to the replicated database objects in the spoke databases. The following are common types of multiple-source replication environments:
A hub-and-spoke replication environment that allows changes to the replicated database objects in the spoke databases
An n-way replication environment
Oracle Database 2 Day + Data Replication and Integration Guide describes these common types of replication environments in detail.
If possible, consider using one of the simple methods for configuring Oracle Streams replication described in Chapter 2, "Simple Oracle Streams Replication Configuration". You can use either the Oracle Streams tool in Enterprise Manager or a single procedure in the DBMS_STREAMS_ADM
package to configure all of the Oracle Streams components in a replication environment with two databases. Also, you can use a simple method and still meet custom requirements for your replication environment in one of the following ways:
You can use a simple method to generate a configuration script and modify the script to meet your requirements.
You can use a simple method to configure Oracle Streams replication between two databases and add new database objects or databases to the environment by following the instructions in Chapter 4, "Adding to an Oracle Streams Replication Environment".
However, if you require more flexibility in your Oracle Streams replication configuration than what is available with the simple methods, then you can follow the instructions in this chapter to configure the environment.
This chapter contains these topics:
See Also: Oracle Streams Concepts and Administration for instructions on determining which database objects are not supported by Oracle Streams
This section lists the general steps to perform when creating a new single-source Oracle Streams environment. A single-source environment is one in which there is only one source database for replicated data. There can be multiple source databases in a single-source environment, but no two source databases capture any of the same data. A one-way replication environment with two databases is an example of a single-source environment.
Before starting capture processes, creating synchronous captures, and configuring propagations in a new Oracle Streams environment, ensure that any propagations or apply processes that will receive LCRs are configured to handle these LCRs. That is, the propagations or apply processes should exist, and each one should be associated with rule sets that handle the LCRs appropriately. If these propagations and apply processes are not configured properly to handle these LCRs, then LCRs can be lost.
This example assumes that the replicated database objects are read-only at the destination databases. If the replicated database objects are read/write at the destination databases, then the replication environment will not stay synchronized because Oracle Streams is not configured to replicate the changes made to the replicated objects at the destination databases.
Figure 3-1 shows an example Oracle Streams single-source replication environment.
Figure 3-1 Example Oracle Streams Single-Source Environment
You can create an Oracle Streams replication environment that is more complicated than the one shown in Figure 3-1. For example, a single-source Oracle Streams replication environment can use downstream capture and directed networks.
In general, if you are configuring a new Oracle Streams single-source environment in which changes to replicated database objects are captured at one database and then propagated and applied at remote databases, then you should configure the environment in the following order:
Make the necessary decisions about configuring the replication environment. See "Decisions to Make Before Configuring Oracle Streams Replication".
Complete the necessary tasks to prepare each database in your environment for Oracle Streams. See "Tasks to Complete Before Configuring Oracle Streams Replication".
Some of these tasks might not be required at certain databases.
Create any necessary ANYDATA
queues that do not already exist. When you create a capture process, synchronous capture, or apply process, you associate the process with a specific ANYDATA
queue. When you create a propagation, you associate it with a specific source queue and destination queue. See "Creating an ANYDATA Queue" for instructions.
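For example, assuming an Oracle Streams administrator named strmadmin, the following sketch creates an ANYDATA queue with the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package; the queue table and queue names are illustrative:

```sql
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.streams_queue_table',  -- created if it does not exist
    queue_name  => 'strmadmin.streams_queue',
    queue_user  => 'strmadmin');                     -- user granted enqueue/dequeue privileges
END;
/
```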
Specify supplemental logging at each source database for any replicated database object. See "Specifying Supplemental Logging" for instructions.
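For example, the following statement is a sketch of adding table-level supplemental logging of primary key columns for one replicated table; the table name is illustrative, and the logging your environment requires may differ:

```sql
ALTER TABLE hr.departments ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
```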
At each database, create the required capture processes, synchronous captures, propagations, and apply processes for your environment. You can create capture processes, propagations, and apply processes in any order. If you create synchronous captures, then create them after you create the relevant propagations and apply processes.
Create one or more capture processes at each database that will capture changes with a capture process. Ensure that each capture process uses rule sets that are appropriate for capturing changes. Do not start the capture processes you create. Oracle recommends that you use only one capture process for each source database. See "Configuring a Capture Process" for instructions.
When you use a procedure in the DBMS_STREAMS_ADM
package to add the capture process rules, it automatically runs the PREPARE_TABLE_INSTANTIATION
, PREPARE_SCHEMA_INSTANTIATION
, or PREPARE_GLOBAL_INSTANTIATION
procedure in the DBMS_CAPTURE_ADM
package for the specified table, specified schema, or entire database, respectively, if the capture process is a local capture process or a downstream capture process with a database link to the source database.
You must run the appropriate procedure to prepare for instantiation manually if any of the following conditions is true:
You use the DBMS_RULE_ADM
package to add or modify rules.
You use an existing capture process and do not add capture process rules for any replicated object.
You use a downstream capture process with no database link to the source database.
If you must prepare for instantiation manually, then see "Preparing Database Objects for Instantiation at a Source Database" for instructions.
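For example, the following sketch prepares all of the database objects in the hr schema for instantiation at the source database:

```sql
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(
    schema_name => 'hr');
END;
/
```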
Create all propagations that send the captured LCRs from a source queue to a destination queue. Ensure that each propagation uses rule sets that are appropriate for sending changes. See "Creating Oracle Streams Propagations Between ANYDATA Queues" for instructions.
Create one or more apply processes at each database that will apply changes. Ensure that each apply process uses rule sets that are appropriate for applying changes. Do not start the apply processes you create. See Chapter 7, "Configuring Implicit Apply" for instructions.
Create one or more synchronous captures at each database that will capture changes with a synchronous capture. Ensure that each synchronous capture uses a rule set that is appropriate for capturing changes. Do not create the synchronous capture until you create all of the propagations and apply processes that will process its LCRs. See "Configuring Synchronous Capture" for instructions.
When you use a procedure in the DBMS_STREAMS_ADM
package to add the synchronous capture rules, it automatically runs the PREPARE_SYNC_INSTANTIATION
function in the DBMS_CAPTURE_ADM
package for the specified table.
Either instantiate, or set the instantiation SCN for, each database object for which changes are applied by an apply process. If the database objects do not exist at a destination database, then instantiate them using export/import, transportable tablespaces, or RMAN. If the database objects already exist at a destination database, then set the instantiation SCNs for them manually.
To instantiate database objects using export/import, first export them at the source database. Next, import them at the destination database. See Chapter 8, "Instantiation and Oracle Streams Replication".
Do not allow any changes to the database objects being exported during export at the source database. Do not allow changes to the database objects being imported during import at the destination database.
You can specify a more stringent degree of consistency by using an export parameter such as FLASHBACK_SCN
or FLASHBACK_TIME
.
To set the instantiation SCN for a table, schema, or database manually, run the appropriate procedure or procedures in the DBMS_APPLY_ADM
package at the destination database:
SET_TABLE_INSTANTIATION_SCN
SET_SCHEMA_INSTANTIATION_SCN
SET_GLOBAL_INSTANTIATION_SCN
When you run one of these procedures, you must ensure that the replicated objects at the destination database are consistent with the source database as of the instantiation SCN.
If you run SET_GLOBAL_INSTANTIATION_SCN
at a destination database, then set the recursive
parameter for this procedure to TRUE
so that the instantiation SCN also is set for each schema at the destination database and for the tables owned by these schemas.
If you run SET_SCHEMA_INSTANTIATION_SCN
at a destination database, then set the recursive
parameter for this procedure to TRUE
so that the instantiation SCN also is set for each table in the schema.
If you set the recursive
parameter to TRUE
in the SET_GLOBAL_INSTANTIATION_SCN
procedure or the SET_SCHEMA_INSTANTIATION_SCN
procedure, then a database link from the destination database to the source database is required. This database link must have the same name as the global name of the source database and must be accessible to the user who executes the procedure. See "Setting Instantiation SCNs Using the DBMS_APPLY_ADM Package" for instructions.
Alternatively, you can perform a metadata export/import to set the instantiation SCNs for existing database objects. If you choose this option, then ensure that no rows are imported. Also, ensure that the replicated database objects at all of the destination databases are consistent with the source database that performed the export at the time of the export. If you are sharing DML changes only, then table level export/import is sufficient. If you are sharing DDL changes also, then additional considerations apply. See "Setting Instantiation SCNs Using Export/Import" for more information about performing a metadata export/import.
Start each apply process you created in Step 5 using the START_APPLY
procedure in the DBMS_APPLY_ADM
package.
Start each capture process you created in Step 5 using the START_CAPTURE
procedure in the DBMS_CAPTURE_ADM
package.
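For example, assuming the apply process and capture process names used in the earlier examples in this chapter, the following sketch starts them in the required order:

```sql
BEGIN
  DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'apply');      -- start apply processes first
END;
/

BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'capture');  -- then start capture processes
END;
/
```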
When you are configuring the environment, remember that capture processes and apply processes are stopped when they are created. However, synchronous captures start to capture changes immediately when they are created, and propagations are scheduled to send LCRs immediately when they are created. A capture process or synchronous capture must be created before the relevant objects are instantiated at a remote destination database. You must create the propagations and apply processes before starting the capture process or creating the synchronous capture, and you must instantiate the objects before running the whole stream.
This section lists the general steps to perform when creating a new multiple-source Oracle Streams environment. A multiple-source environment is one in which there are multiple source databases for any of the replicated data. An n-way replication environment is an example of a multiple-source environment.
This example uses the following terms:
Populated database: A database that already contains the replicated database objects before you create the new multiple-source environment. You must have at least one populated database to create the new Oracle Streams environment.
Export database: A populated database on which you perform an export of the replicated database objects. This export is used to instantiate the replicated database objects at the import databases. You might not have an export database if all of the databases in the environment are populated databases.
Import database: A database that does not contain the replicated database objects before you create the new multiple-source environment. You instantiate the replicated database objects at an import database by performing an import of these database objects. You might not have any import databases if all of the databases in the environment are populated databases.
Figure 3-2 shows an example multiple-source Oracle Streams environment.
Figure 3-2 Example Oracle Streams Multiple-Source Environment
You can create an Oracle Streams replication environment that is more complicated than the one shown in Figure 3-2. For example, a multiple-source Oracle Streams replication environment can use downstream capture and directed networks.
When there are multiple source databases in an Oracle Streams replication environment, change cycling is possible. Change cycling happens when a change is sent back to the database where it originated. Typically, you should avoid change cycling. Before you configure your replication environment, see Chapter 10, "Oracle Streams Tags", and ensure that you configure the replication environment to avoid change cycling.
Complete the following steps to create a multiple-source environment:
Note: Ensure that no changes are made to the objects being shared at a database you are adding to the Oracle Streams environment until the instantiation at the database is complete.
Make the necessary decisions about configuring the replication environment. See "Decisions to Make Before Configuring Oracle Streams Replication".
Complete the necessary tasks to prepare each database in your environment for Oracle Streams. See "Tasks to Complete Before Configuring Oracle Streams Replication".
Some of these tasks might not be required at certain databases.
At each populated database, specify any necessary supplemental logging for the replicated database objects. See "Specifying Supplemental Logging" for instructions.
Create any necessary ANYDATA
queues that do not already exist. When you create a capture process, synchronous capture, or apply process, you associate the process with a specific ANYDATA
queue. When you create a propagation, you associate it with a specific source queue and destination queue. See "Creating an ANYDATA Queue" for instructions.
At each database, create the required capture processes, synchronous captures, propagations, and apply processes for your environment. You can create capture processes, propagations, and apply processes in any order. If you create synchronous captures, then create them after you create the relevant propagations and apply processes.
Create one or more capture processes at each database that will capture changes with a capture process. Ensure that each capture process uses rule sets that are appropriate for capturing changes. Do not start the capture processes you create. Oracle recommends that you use only one capture process for each source database. See "Configuring a Capture Process" for instructions.
When you use a procedure in the DBMS_STREAMS_ADM package to add the capture process rules, it automatically runs the PREPARE_TABLE_INSTANTIATION, PREPARE_SCHEMA_INSTANTIATION, or PREPARE_GLOBAL_INSTANTIATION procedure in the DBMS_CAPTURE_ADM package for the specified table, specified schema, or entire database, respectively, if the capture process is a local capture process or a downstream capture process with a database link to the source database.
You must run the appropriate procedure to prepare for instantiation manually if any of the following conditions is true:
You use the DBMS_RULE_ADM package to add or modify rules.
You use an existing capture process and do not add capture process rules for any replicated database object.
You use a downstream capture process with no database link to the source database.
If you must prepare for instantiation manually, then see "Preparing Database Objects for Instantiation at a Source Database" for instructions.
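For illustration, a capture process with table-level DML rules might be configured as follows. This is a sketch under assumed names (capture_stream, strmadmin.streams_queue, hr.regions, dbs1.example.com), not a definitive configuration:

```sql
-- Create (if necessary) a capture process named capture_stream and add
-- DML capture rules for hr.regions. Because this is a local capture
-- process, ADD_TABLE_RULES also prepares the table for instantiation.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'hr.regions',
    streams_type    => 'capture',
    streams_name    => 'capture_stream',
    queue_name      => 'strmadmin.streams_queue',
    include_dml     => TRUE,
    include_ddl     => FALSE,
    source_database => 'dbs1.example.com');
END;
/

-- If manual preparation is required (for example, because rules were
-- added with DBMS_RULE_ADM), run the prepare procedure explicitly:
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name           => 'hr.regions',
    supplemental_logging => 'keys');
END;
/
```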
Create all propagations that propagate the captured LCRs from a source queue to a destination queue. Ensure that each propagation uses rule sets that are appropriate for propagating changes. See "Creating Oracle Streams Propagations Between ANYDATA Queues" for instructions.
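A propagation between two ANYDATA queues might be created as sketched below; the propagation name, queue names, and the destination global name dbs2.example.com are assumptions for illustration:

```sql
-- Create a queue-to-queue propagation from the local source queue to
-- the destination queue at dbs2.example.com, with DML rules for
-- hr.regions.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => 'hr.regions',
    streams_name           => 'prop_dbs1_to_dbs2',
    source_queue_name      => 'strmadmin.streams_queue',
    destination_queue_name => 'strmadmin.streams_queue@dbs2.example.com',
    include_dml            => TRUE,
    include_ddl            => FALSE,
    source_database        => 'dbs1.example.com',
    queue_to_queue         => TRUE);
END;
/
```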
Create one or more apply processes at each database that will apply changes. Ensure that each apply process uses rule sets that are appropriate for applying changes. Do not start the apply processes you create. See Chapter 7, "Configuring Implicit Apply" for instructions.
Create one or more synchronous captures at each database that will capture changes with a synchronous capture. Ensure that each synchronous capture uses rule sets that are appropriate for capturing changes. Do not create the synchronous capture until you create all of the propagations and apply processes that will process its LCRs. See "Configuring Synchronous Capture" for instructions.
When you use a procedure in the DBMS_STREAMS_ADM package to add the synchronous capture rules, it automatically runs the PREPARE_SYNC_INSTANTIATION function in the DBMS_CAPTURE_ADM package for the specified table.
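A synchronous capture is created by adding table rules with the streams_type parameter set to 'sync_capture'. The following sketch uses assumed names:

```sql
-- Create a synchronous capture named sync_capture_stream for hr.regions.
-- This call also runs PREPARE_SYNC_INSTANTIATION for the table.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.regions',
    streams_type => 'sync_capture',
    streams_name => 'sync_capture_stream',
    queue_name   => 'strmadmin.streams_queue');
END;
/
```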
After completing these steps, complete the steps in each of the following sections that apply to your environment. You might need to complete the steps in only one of these sections or in both of these sections:
For each populated database, complete the steps in "Configuring Populated Databases When Creating a Multiple-Source Environment". These steps are required only if your environment has multiple populated databases.
For each import database, complete the steps in "Adding Replicated Objects to Import Databases When Creating a New Environment".
After completing the steps in "Creating a New Oracle Streams Multiple-Source Environment", complete the following steps for the populated databases if your environment has multiple populated databases:
For each populated database, set the instantiation SCN at each of the other populated databases in the environment that will be a destination database of the populated source database. These instantiation SCNs must be set so that only the changes made at a particular populated database that are committed after the corresponding SCN for that database are applied at another populated database.
For each populated database, you can set these instantiation SCNs in one of the following ways:
Perform a metadata only export of the replicated database objects at the populated database and import the metadata at each of the other populated databases. Such an import sets the required instantiation SCNs for the populated database at the other populated databases. Ensure that no rows are imported. Also, ensure that the replicated database objects at each populated database performing a metadata import are consistent with the populated database that performed the metadata export at the time of the export.
If you are replicating DML changes only, then table level export/import is sufficient. If you are replicating DDL changes also, then additional considerations apply. See "Setting Instantiation SCNs Using Export/Import" for more information about performing a metadata export/import.
Set the instantiation SCNs manually at each of the other populated databases. Do this for each of the replicated database objects. Ensure that the replicated database objects at each populated database are consistent with the instantiation SCNs you set at that database. See "Setting Instantiation SCNs Using the DBMS_APPLY_ADM Package" for instructions.
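The manual approach can be sketched as follows, run at the source populated database; the database link dbs2.example.com to the other populated database is an assumption:

```sql
-- Obtain the current SCN at this populated database and set it as the
-- instantiation SCN for hr.regions at another populated database,
-- reached over an assumed database link.
DECLARE
  iscn NUMBER;
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@dbs2.example.com(
    source_object_name   => 'hr.regions',
    source_database_name => 'dbs1.example.com',
    instantiation_scn    => iscn);
END;
/
```

The replicated table at the other populated database must be consistent with this database as of the SCN obtained in the block.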
After completing the steps in "Creating a New Oracle Streams Multiple-Source Environment", complete the following steps for the import databases:
Pick the populated database that you will use as the export database. Do not perform the instantiations yet.
For each import database, set the instantiation SCNs at all of the other databases in the environment that will be a destination database of the import database. In this case, the import database will be the source database for these destination databases. The databases where you set the instantiation SCNs can include populated databases and other import databases.
If one or more schemas will be created at an import database during instantiation or by a subsequent replicated DDL change, then run the SET_GLOBAL_INSTANTIATION_SCN procedure in the DBMS_APPLY_ADM package for this import database at all of the other databases in the environment.

If a schema exists at an import database, and one or more tables will be created in the schema during instantiation or by a subsequent replicated DDL change, then run the SET_SCHEMA_INSTANTIATION_SCN procedure in the DBMS_APPLY_ADM package for the schema at all of the other databases in the environment for the import database. Do this for each such schema.
See "Setting Instantiation SCNs Using the DBMS_APPLY_ADM Package" for instructions.
Because you run these procedures before any tables are instantiated at the import databases, and because the local capture processes or synchronous captures are configured already for these import databases, you will not need to run the SET_TABLE_INSTANTIATION_SCN procedure for each table created during the instantiation. Instantiation SCNs will be set automatically for these tables at all of the other databases in the environment that will be destination databases of the import database.
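The two cases can be sketched as follows. The import database global name (imp.example.com) and the database link to it are hypothetical; run the block at each of the other databases:

```sql
DECLARE
  iscn NUMBER;
BEGIN
  -- Obtain the current SCN at the import database over an assumed
  -- database link to it.
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@imp.example.com();
  -- Case 1: one or more schemas will be created at the import database.
  DBMS_APPLY_ADM.SET_GLOBAL_INSTANTIATION_SCN(
    source_database_name => 'imp.example.com',
    instantiation_scn    => iscn);
  -- Case 2: tables will be created in an existing schema (here, hr).
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    source_schema_name   => 'hr',
    source_database_name => 'imp.example.com',
    instantiation_scn    => iscn);
END;
/
```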
At the export database you chose in Step 1, perform an export of the replicated database objects. Next, perform an import of the replicated database objects at each import database. See Chapter 8, "Instantiation and Oracle Streams Replication" and Oracle Database Utilities for information about using export/import.
Do not allow any changes to the database objects being exported while exporting these database objects at the source database. Do not allow changes to the database objects being imported while importing these database objects at the destination database.
You can specify a more stringent degree of consistency by using an export parameter such as FLASHBACK_SCN or FLASHBACK_TIME.
For each populated database, except for the export database, set the instantiation SCNs at each import database that will be a destination database of the populated source database. These instantiation SCNs must be set so that only the changes made at a populated database that are committed after the corresponding SCN for that database are applied at an import database.
You can set these instantiation SCNs in one of the following ways:
Perform a metadata only export at each populated database and import the metadata at each import database. Each import sets the required instantiation SCNs for the populated database at the import database. In this case, ensure that the replicated database objects at the import database are consistent with the populated database at the time of the export.
If you are sharing DML changes only, then table level export/import is sufficient. If you are sharing DDL changes also, then additional considerations apply. See "Setting Instantiation SCNs Using Export/Import" for more information about performing a metadata export/import.
For each populated database, set the instantiation SCN manually for each replicated database object at each import database. Ensure that the replicated database objects at each import database are consistent with the populated database as of the corresponding instantiation SCN. See "Setting Instantiation SCNs Using the DBMS_APPLY_ADM Package" for instructions.
Before completing the steps in this section, you should have completed the following tasks:
"Configuring Populated Databases When Creating a Multiple-Source Environment", if your environment has multiple populated databases
"Adding Replicated Objects to Import Databases When Creating a New Environment", if your environment has one or more import databases
When all of the previous configuration steps are finished, complete the following steps:
At each database, configure conflict resolution if conflicts are possible. See Chapter 9, "Oracle Streams Conflict Resolution" for instructions.
Start each apply process in the environment using the START_APPLY procedure in the DBMS_APPLY_ADM package.
Start each capture process in the environment using the START_CAPTURE procedure in the DBMS_CAPTURE_ADM package.
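These final two steps can be sketched as follows; the apply and capture process names are assumptions for illustration:

```sql
-- Start apply processes before capture processes so that captured
-- changes are not staged without a running consumer.
BEGIN
  DBMS_APPLY_ADM.START_APPLY(apply_name => 'apply_stream');
END;
/
BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'capture_stream');
END;
/
```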
See Also: Oracle Streams Extended Examples for a detailed example that creates a multiple-source environment
You can configure Oracle Streams replication in many different ways depending on your business requirements. For example, Oracle Streams can be configured to fulfill the following requirements:
Replicate data from one database to one or more databases, even if those databases have different structures or naming conventions
Replicate data between hardware platforms, database releases, and character sets
Consolidate data from multiple sources with varying structures into a single database
Provide high availability while performing database or application upgrades or while migrating between hardware platforms
The following chapters in this part describe Oracle Streams replication best practices:
Replication Administrator's Guide
11g Release 2 (11.2)
E10705-09
August 2011
Oracle Streams Replication Administrator's Guide, 11g Release 2 (11.2)
E10705-09
Copyright © 2003, 2011, Oracle and/or its affiliates. All rights reserved.
Primary Author: Randy Urbano
Contributors: Katherine Weill, Nimar Arora, Lance Ashdown, Ram Avudaiappan, Neerja Bhatt, Ragamayi Bhyravabhotla, Alan Downing, Curt Elsbernd, Yong Feng, Jairaj Galagali, Lei Gao, Thuvan Hoang, Lewis Kaplan, Tianshu Li, Jing Liu, Edwina Lu, Raghu Mani, Rui Mao, Pat McElroy, Shailendra Mishra, Valarie Moore, Bhagat Nainani, Maria Pratt, Arvind Rajaram, Viv Schupmann, Vipul Shah, Neeraj Shodhan, Wayne Smith, Jim Stamos, Janet Stern, Mahesh Subramaniam, Bob Thome, Byron Wang, Wei Wang, James M. Wilson, Lik Wong, Jingwei Wu, Haobo Xu, Jun Yuan, David Zhang
This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable:
U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, duplication, disclosure, modification, and adaptation shall be subject to the restrictions and license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License (December 2007). Oracle America, Inc., 500 Oracle Parkway, Redwood City, CA 94065.
This software is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications which may create a risk of personal injury. If you use this software in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure the safe use of this software. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software in dangerous applications.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.
This software or hardware and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.
This chapter explains concepts relating to Oracle Streams support for information sharing between Oracle databases and non-Oracle databases.
This chapter contains these topics:
Non-Oracle to Non-Oracle Data Sharing with Oracle Streams
Note: A new feature called XStream is available in Oracle Database 11g Release 2 (11.2) and later. XStream enables Oracle Call Interface and Java applications to access the database changes in a stream. See Oracle Database XStream Guide for information about XStream.
To share DML changes from an Oracle source database to a non-Oracle destination database, the Oracle database functions as a proxy and carries out some steps that would usually be done at the destination database. That is, the LCRs intended for the non-Oracle destination database are dequeued in the Oracle database itself and an apply process at the Oracle database applies the changes to the non-Oracle database across a network connection through an Oracle Database Gateway. Figure 11-1 shows an Oracle database sharing data with a non-Oracle database.
Figure 11-1 Oracle to Non-Oracle Heterogeneous Data Sharing
You should configure the Oracle Database Gateway to use the transaction model COMMIT_CONFIRM.
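For example, with many gateways the transaction model is set in the gateway initialization parameter file. This fragment is a sketch; the file name (for an assumed Oracle Database Gateway for SQL Server SID of dg4msql) is an assumption:

```
# initdg4msql.ora (gateway initialization parameter file; name assumed)
HS_TRANSACTION_MODEL=COMMIT_CONFIRM
```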
See Also: The Oracle documentation for your specific Oracle Database Gateway for information about using the transaction model COMMIT_CONFIRM for your Oracle Database Gateway
In an Oracle to non-Oracle environment, a capture process or a synchronous capture functions the same way as it would in an Oracle-only environment. That is, a capture process finds changes in the redo log, captures them based on its rules, and enqueues the captured changes as logical change records (LCRs) into an ANYDATA queue. A synchronous capture uses an internal mechanism to capture changes based on its rules and enqueue the captured changes as row LCRs into an ANYDATA queue. In addition, a single capture process or synchronous capture can capture changes that will be applied at both Oracle and non-Oracle databases.
Similarly, the ANYDATA queue that stages the LCRs functions the same way as it would in an Oracle-only environment, and you can propagate LCRs to any number of intermediate queues in Oracle databases before they are applied at a non-Oracle database.
An apply process running in an Oracle database uses Heterogeneous Services and an Oracle Database Gateway to apply changes encapsulated in LCRs directly to database objects in a non-Oracle database. The LCRs are not propagated to a queue in the non-Oracle database, as they would be in an Oracle-only Oracle Streams environment. Instead, the apply process applies the changes directly through a database link to the non-Oracle database.
Note: Oracle Streams apply processes do not support Generic Connectivity.
This section describes the configuration of an apply process that will apply changes to a non-Oracle database.
Before you create an apply process that will apply changes to a non-Oracle database, configure Heterogeneous Services, the Oracle Database Gateway, and a database link.
Oracle Streams supports the following Oracle Database Gateways:
Oracle Database Gateway for Sybase
Oracle Database Gateway for Informix
Oracle Database Gateway for SQL Server
Oracle Database Gateway for DRDA
The database link will be used by the apply process to apply the changes to the non-Oracle database. The database link must be created with an explicit CONNECT TO clause.
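Such a database link might be created as sketched below; the link name matches the het.example.com examples later in this chapter, while the gateway user, password, and TNS alias are assumptions:

```sql
-- Create a database link with an explicit CONNECT TO clause.
-- The credentials and the TNS alias for the gateway are assumptions.
CREATE DATABASE LINK het.example.com
  CONNECT TO "gtw_user" IDENTIFIED BY "gtw_password"
  USING 'dg4msql';
```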
After the database link has been created and is working properly, create the apply process using the CREATE_APPLY procedure in the DBMS_APPLY_ADM package and specify the database link for the apply_database_link parameter. After you create an apply process, you can use apply process rules to specify which changes are applied at the non-Oracle database.
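A sketch of both steps follows; the apply process name, queue name, and table are assumptions for illustration:

```sql
-- Create an apply process that applies captured LCRs at the non-Oracle
-- database reached through the het.example.com database link.
BEGIN
  DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name          => 'strmadmin.streams_queue',
    apply_name          => 'apply_het',
    apply_captured      => TRUE,
    apply_database_link => 'het.example.com');
END;
/

-- Add rules that select which changes are applied (assumed table):
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'hr.regions',
    streams_type    => 'apply',
    streams_name    => 'apply_het',
    queue_name      => 'strmadmin.streams_queue',
    include_dml     => TRUE,
    include_ddl     => FALSE,
    source_database => 'dbs1.example.com');
END;
/
```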
If you use substitute key columns for any of the tables at the non-Oracle database, then specify the database link to the non-Oracle database when you run the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package.
You must set the parallelism apply process parameter to 1, the default setting, when an apply process is applying changes to a non-Oracle database. Currently, parallel apply to non-Oracle databases is not supported. However, you can use multiple apply processes to apply changes to a non-Oracle database.
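Although 1 is the default, the parameter can be set explicitly as a precaution; the apply process name here is an assumption:

```sql
-- Explicitly set parallelism to 1 for an apply process that applies
-- changes to a non-Oracle database.
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'apply_het',
    parameter  => 'parallelism',
    value      => '1');
END;
/
```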
If you use a procedure DML handler to process row LCRs for any of the tables at the non-Oracle database, then specify the database link to the non-Oracle database when you run the SET_DML_HANDLER procedure in the DBMS_APPLY_ADM package.
See Also: Oracle Streams Concepts and Administration for information about message processing options for an apply process
If you want to use a message handler to process user messages for a non-Oracle database, then, when you run the CREATE_APPLY procedure in the DBMS_APPLY_ADM package, specify the database link to the non-Oracle database using the apply_database_link parameter, and specify the message handler procedure using the message_handler parameter.
See Also: Oracle Streams Concepts and Administration for information about message processing options and managing message handlers
Currently, error handlers and conflict handlers are not supported when sharing data from an Oracle database to a non-Oracle database. If an apply error occurs, then the transaction containing the LCR that caused the error is moved into the error queue in the Oracle database.
When applying changes to a non-Oracle database, an apply process applies changes made to columns of only the following data types:
CHAR
VARCHAR2
NCHAR
NVARCHAR2
NUMBER
DATE
RAW
TIMESTAMP
TIMESTAMP WITH TIME ZONE
TIMESTAMP WITH LOCAL TIME ZONE
INTERVAL YEAR TO MONTH
INTERVAL DAY TO SECOND
The apply process does not apply changes in columns of the following data types to non-Oracle databases: CLOB, NCLOB, BLOB, BFILE, LONG, LONG RAW, ROWID, UROWID, user-defined types (including object types, REFs, varrays, and nested tables), and Oracle-supplied types (including Any types, XML types, spatial types, and media types). The apply process raises an error when an LCR contains a data type that is not listed, and the transaction containing the LCR that caused the error is moved to the error queue in the Oracle database.
Each Oracle Database Gateway might have further limitations regarding data types. For a data type to be supported in an Oracle to non-Oracle environment, the data type must be supported by both Oracle Streams and the Oracle Database Gateway being used.
When you specify that DML changes made to certain tables should be applied at a non-Oracle database, an apply process can apply only the following types of DML changes:
INSERT
UPDATE
DELETE
Note: The apply process cannot apply DDL changes at non-Oracle databases.
Before you start an apply process that applies changes to a non-Oracle database, complete the following steps to instantiate each table at the non-Oracle database:
Use the DBMS_HS_PASSTHROUGH package or the tools supplied with the non-Oracle database to create the table at the non-Oracle database.

The following is an example that uses the DBMS_HS_PASSTHROUGH package to create the hr.regions table in the het.example.com non-Oracle database:
DECLARE
  ret INTEGER;
BEGIN
  ret := DBMS_HS_PASSTHROUGH.EXECUTE_IMMEDIATE@het.example.com (
    'CREATE TABLE regions (region_id INTEGER, region_name VARCHAR(50))');
END;
/
COMMIT;
See Also: Oracle Database Heterogeneous Connectivity User's Guide and the Oracle documentation for your specific Oracle Database Gateway for more information about Heterogeneous Services and Oracle Database Gateway
If the changes that will be shared between the Oracle and non-Oracle database are captured by a capture process or synchronous capture at the Oracle database, then prepare all tables that will share data for instantiation.
Create a PL/SQL procedure (or a C program) that performs the following actions:
Gets the current SCN using the GET_SYSTEM_CHANGE_NUMBER function in the DBMS_FLASHBACK package.

Invokes the ENABLE_AT_SYSTEM_CHANGE_NUMBER procedure in the DBMS_FLASHBACK package to set the current session to the obtained SCN. This action ensures that all fetches are done using the same SCN.

Populates the table at the non-Oracle site by fetching row by row from the table at the Oracle database and then inserting row by row into the table at the non-Oracle database. All fetches should be done at the SCN obtained using the GET_SYSTEM_CHANGE_NUMBER function.
For example, the following PL/SQL procedure gets the flashback SCN, fetches each row in the hr.regions table in the current Oracle database, and inserts them into the hr.regions table in the het.example.com non-Oracle database. Notice that flashback is disabled before the rows are inserted into the non-Oracle database.
SET SERVEROUTPUT ON

CREATE OR REPLACE PROCEDURE insert_reg IS
  CURSOR c1 IS SELECT region_id, region_name FROM hr.regions;
  c1_rec  c1%ROWTYPE;
  scn     NUMBER;
BEGIN
  scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_FLASHBACK.ENABLE_AT_SYSTEM_CHANGE_NUMBER(
    query_scn => scn);
  /* Open c1 in flashback mode */
  OPEN c1;
  /* Disable Flashback */
  DBMS_FLASHBACK.DISABLE;
  LOOP
    FETCH c1 INTO c1_rec;
    EXIT WHEN c1%NOTFOUND;
    /* Note that all the DML operations inside the loop are performed
       with Flashback disabled */
    INSERT INTO hr.regions@het.example.com VALUES (
      c1_rec.region_id,
      c1_rec.region_name);
  END LOOP;
  COMMIT;
  DBMS_OUTPUT.PUT_LINE('SCN = ' || scn);
EXCEPTION WHEN OTHERS THEN
  DBMS_FLASHBACK.DISABLE;
  RAISE;
END;
/
Make a note of the SCN returned.
If the Oracle Database Gateway you are using supports the Heterogeneous Services callback functionality, then you can replace the loop in the previous example with the following SQL statement:
INSERT INTO hr.regions@het.example.com SELECT * FROM hr.regions@!;
Note: The user who creates and runs the procedure in the previous example must have EXECUTE privilege on the DBMS_FLASHBACK package and all privileges on the tables involved.
See Also: Oracle Database Heterogeneous Connectivity User's Guide and the Oracle documentation for your specific Oracle Database Gateway for information about callback functionality and your Oracle Database Gateway
Set the instantiation SCN for the table at the non-Oracle database. Specify the SCN you obtained in Step 3 in the SET_TABLE_INSTANTIATION_SCN procedure in the DBMS_APPLY_ADM package to instruct the apply process to skip all LCRs with changes that occurred before the SCN you obtained in Step 3. Ensure that you set the apply_database_link parameter to the database link for the remote non-Oracle database.
See Also: "Setting Instantiation SCNs at a Destination Database" and Oracle Database PL/SQL Packages and Types Reference for more information about theSET_TABLE_INSTANTIATION_SCN procedure |
In an Oracle to non-Oracle environment, you can specify rule-based transformations during capture or apply the same way as you would in an Oracle-only environment. In addition, if your environment propagates LCRs to one or more intermediate Oracle databases before they are applied at a non-Oracle database, then you can specify a rule-based transformation during propagation from a queue at an Oracle database to another queue at an Oracle database.
See Also: Oracle Streams Concepts and Administration for more information about rule-based transformations
Messaging Gateway is a feature of the Oracle database that provides propagation between Oracle queues and non-Oracle message queuing systems. Messages enqueued into an Oracle queue are automatically propagated to a non-Oracle queue, and the messages enqueued into a non-Oracle queue are automatically propagated to an Oracle queue. It provides guaranteed message delivery to the non-Oracle messaging system and supports the native message format for the non-Oracle messaging system. It also supports specification of user-defined transformations that are invoked while propagating from an Oracle queue to the non-Oracle messaging system or from the non-Oracle messaging system to an Oracle queue.
See Also: Oracle Streams Advanced Queuing User's Guide for more information about the Messaging Gateway
If the apply process encounters an unhandled error when it tries to apply an LCR at a non-Oracle database, then the transaction containing the LCR is placed in the error queue in the Oracle database that is running the apply process. The apply process detects data conflicts in the same way as it does in an Oracle-only environment, but automatic conflict resolution is not supported currently in an Oracle to non-Oracle environment. Therefore, any data conflicts encountered are treated as apply errors.
Oracle Streams Extended Examples contains a detailed example that includes sharing data in an Oracle to non-Oracle Streams environment.
To capture and propagate changes from a non-Oracle database to an Oracle database, a custom application is required. This application gets the changes made to the non-Oracle database by reading from transaction logs, by using triggers, or by some other method. The application must assemble and order the transactions and must convert each change into a logical change record (LCR). Next, the application must enqueue the LCRs in an Oracle database using the DBMS_STREAMS_MESSAGING package or the DBMS_AQ package. The application must commit after enqueuing all LCRs in each transaction. Figure 11-2 shows a non-Oracle database sharing data with an Oracle database.
Figure 11-2 Non-Oracle to Oracle Heterogeneous Data Sharing
Because the custom user application is responsible for assembling changes at the non-Oracle database into LCRs and enqueuing the LCRs at the Oracle database, the application is completely responsible for change capture. Therefore, the application must construct LCRs that represent changes at the non-Oracle database and then enqueue these LCRs into the queue at the Oracle database. The application can enqueue multiple transactions concurrently, but the transactions must be committed in the same order as the transactions on the non-Oracle source database.
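The enqueue step of such an application can be sketched in PL/SQL as follows. The queue name, column names, and values are assumptions for illustration:

```sql
-- Construct a row LCR representing an INSERT at the non-Oracle source
-- and enqueue it into the Oracle queue.
DECLARE
  lcr SYS.LCR$_ROW_RECORD;
BEGIN
  lcr := SYS.LCR$_ROW_RECORD.CONSTRUCT(
    source_database_name => 'het1.example.com',
    command_type         => 'INSERT',
    object_owner         => 'HR',
    object_name          => 'REGIONS');
  lcr.ADD_COLUMN('new', 'REGION_ID',   ANYDATA.ConvertNumber(5));
  lcr.ADD_COLUMN('new', 'REGION_NAME', ANYDATA.ConvertVarchar2('Oceania'));
  DBMS_STREAMS_MESSAGING.ENQUEUE(
    queue_name => 'strmadmin.streams_queue',
    payload    => ANYDATA.ConvertObject(lcr));
  -- Commit once per source transaction, after all of its LCRs
  -- are enqueued.
  COMMIT;
END;
/
```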
See Also: "Constructing and Enqueuing LCRs" for more information about constructing and enqueuing LCRs |
To ensure the same transactional consistency at both the Oracle database where changes are applied and the non-Oracle database where changes originate, you must use a transactional queue to stage the LCRs at the Oracle database. For example, suppose a single transaction contains three row changes, and the custom application enqueues three row LCRs, one for each change, and then commits. With a transactional queue, a commit is performed by the apply process after the third row LCR, retaining the consistency of the transaction. If you use a nontransactional queue, then a commit is performed for each row LCR by the apply process. The SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package creates a transactional queue automatically.
Also, the queue at the Oracle database should be a commit-time queue. A commit-time queue orders LCRs by approximate commit system change number (approximate CSCN) of the transaction that includes the LCRs. Commit-time queues preserve transactional dependency ordering between LCRs in the queue, if the application that enqueued the LCRs commits the transactions in the correct order. Also, commit-time queues ensure consistent browses of LCRs in a queue.
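One way to obtain a commit-time ANYDATA queue is to create the queue table with DBMS_AQADM and specify commit-time ordering. This is a sketch under assumed names, not the only way to configure such a queue:

```sql
-- Create a commit-time, multi-consumer ANYDATA queue table, then
-- create and start the queue.
BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'strmadmin.streams_queue_table',
    queue_payload_type => 'SYS.ANYDATA',
    sort_list          => 'COMMIT_TIME',
    multiple_consumers => TRUE);
  DBMS_AQADM.CREATE_QUEUE(
    queue_name  => 'strmadmin.streams_queue',
    queue_table => 'strmadmin.streams_queue_table');
  DBMS_AQADM.START_QUEUE(
    queue_name => 'strmadmin.streams_queue');
END;
/
```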
See Also: Oracle Streams Concepts and Administration for more information about transactional queues and commit-time queues
In a non-Oracle to Oracle environment, the apply process functions the same way as it would in an Oracle-only environment. That is, it dequeues each LCR from its associated queue based on apply process rules, performs any rule-based transformation, and either sends the LCR to a handler or applies it directly. Error handling and conflict resolution also function the same as they would in an Oracle-only environment. So, you can specify a prebuilt update conflict handler or create a custom conflict handler to resolve conflicts.
The apply process should be configured to apply persistent LCRs, not captured LCRs. So, the apply process should be created using the CREATE_APPLY procedure in the DBMS_APPLY_ADM package, and the apply_captured parameter should be set to FALSE when you run this procedure. After the apply process is created, you can use procedures in the DBMS_STREAMS_ADM package to add rules for LCRs to the apply process rule sets.
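For illustration, such an apply process might be created as follows; the queue and apply process names are assumptions:

```sql
-- Create an apply process that dequeues persistent LCRs enqueued by
-- the custom application (apply_captured => FALSE).
BEGIN
  DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name     => 'strmadmin.streams_queue',
    apply_name     => 'apply_from_het',
    apply_captured => FALSE);
END;
/
```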
There is no automatic way to instantiate tables that exist at a non-Oracle database at an Oracle database. However, you can perform the following general procedure to instantiate a table manually:
At the non-Oracle database, use a non-Oracle utility to export the table to a flat file.
At the Oracle database, create an empty table that matches the table at the non-Oracle database.
At the Oracle database, use SQL*Loader to load the contents of the flat file into the table.
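Step 3 might use a SQL*Loader control file along these lines; the file names, delimiter, and column list are assumptions for illustration:

```
-- regions.ctl: load the flat file exported from the non-Oracle database
LOAD DATA
INFILE 'regions.dat'
INTO TABLE hr.regions
FIELDS TERMINATED BY ','
(region_id, region_name)
```

It could then be run from the operating system prompt with a command such as sqlldr hr CONTROL=regions.ctl.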
Oracle Streams supports data sharing between two non-Oracle databases through a combination of non-Oracle to Oracle data sharing and Oracle to non-Oracle data sharing. Such an environment would use Oracle Streams in an Oracle database as an intermediate database between two non-Oracle databases.
For example, a non-Oracle to non-Oracle environment can consist of the following databases:
A non-Oracle database named het1.example.com
An Oracle database named dbs1.example.com
A non-Oracle database named het2.example.com
A user application assembles changes at het1.example.com
and enqueues them in dbs1.example.com
. Next, the apply process at dbs1.example.com
applies the changes to het2.example.com
using Heterogeneous Services and an Oracle Database Gateway. Another apply process at dbs1.example.com
could apply some or all of the changes in the queue locally at dbs1.example.com
. One or more propagations at dbs1.example.com
could propagate some or all of the changes in the queue to other Oracle databases.
This chapter describes the best practices for applying changes with an apply process in an Oracle Streams replication environment. This chapter includes these topics:
In an Oracle Streams replication environment, a destination database is a database where an apply process applies changes. This section contains these topics:
The apply user is the user in whose security domain an apply process performs the following actions:
Dequeues messages that satisfy its rule sets
Runs custom rule-based transformations configured for apply process rules
Applies messages directly to database objects
Runs apply handlers configured for the apply process
The apply user for an apply process is configured when you create an apply process, and the apply user can be modified when you alter an apply process. Grant the following privileges to the apply user:
If the apply process applies data manipulation language (DML) changes to a table, then grant INSERT, UPDATE, and DELETE privileges on the table to the apply user.
If the apply process applies data definition language (DDL) changes to a table, then grant CREATE TABLE or CREATE ANY TABLE, CREATE INDEX or CREATE ANY INDEX, and CREATE PROCEDURE or CREATE ANY PROCEDURE to the apply user.
Grant EXECUTE
privilege on the rule sets used by the apply process.
Grant EXECUTE
privilege on all custom rule-based transformation functions specified for rules in the positive rule set of the apply process.
Grant EXECUTE
privilege on any apply handlers used by the apply process.
Grant privileges to dequeue messages from the apply process's queue.
These privileges can be granted directly to the apply user, or they can be granted through roles.
In addition, the apply user must be granted EXECUTE
privilege on all packages, including Oracle-supplied packages, that are invoked in subprograms run by the apply process. These privileges must be granted directly to the apply user. They cannot be granted through roles.
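A grant script covering these privileges might be sketched as follows. The apply user apply_usr, the table hr.employees, the rule set, the handler package, and the queue name are all assumed names to adapt:

```sql
-- Sketch of apply-user grants; all object and user names are
-- hypothetical examples.

-- DML privileges on an applied table:
GRANT INSERT, UPDATE, DELETE ON hr.employees TO apply_usr;

-- EXECUTE on the apply process rule sets, custom rule-based
-- transformation functions, and apply handlers:
GRANT EXECUTE ON strmadmin.apply_rule_set TO apply_usr;
GRANT EXECUTE ON strmadmin.transform_func TO apply_usr;
GRANT EXECUTE ON strmadmin.errors_pkg TO apply_usr;

-- Privilege to dequeue messages from the apply process's queue:
BEGIN
  DBMS_AQADM.GRANT_QUEUE_PRIVILEGE(
    privilege  => 'DEQUEUE',
    queue_name => 'strmadmin.streams_queue',
    grantee    => 'APPLY_USR');
END;
/
```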
See Also:
|
An instantiation SCN value must be set for each database object to which an apply process applies changes. Confirm that an instantiation SCN is set for all such objects, and set any required instantiation SCN values that are not set.
Instantiation SCN values can be set in various ways, including during export/import, by Recovery Manager (RMAN), or manually. To set instantiation SCN values manually, use one of the following procedures in the DBMS_APPLY_ADM
package:
For example, to set the instantiation SCN manually for each table in a schema, run the SET_SCHEMA_INSTANTIATION_SCN
procedure with the recursive
parameter set to TRUE
. If an apply process applies data definition language (DDL) changes, then set the instantiation SCN values at a level higher than table level using either the SET_SCHEMA_INSTANTIATION_SCN
or SET_GLOBAL_INSTANTIATION_SCN
procedure.
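Setting the instantiation SCN manually at schema level might look like the following sketch. The schema hr and the database link dbs1.example.com to the source database are assumptions; with recursive set to TRUE, a database link with the same name as the source database is required:

```sql
-- Sketch: at the destination database, set the instantiation SCN for
-- the hr schema and, recursively, for each table in it, using the
-- current SCN of the source database. Names are hypothetical.
DECLARE
  iscn NUMBER;
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@dbs1.example.com();
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    source_schema_name   => 'hr',
    source_database_name => 'dbs1.example.com',
    instantiation_scn    => iscn,
    recursive            => TRUE);
END;
/
```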
See Also: Chapter 8, "Instantiation and Oracle Streams Replication" for more information about instantiation and setting instantiation SCN values |
If updates will be performed at multiple databases for the same shared database object, then ensure that you configure conflict resolution for that object. To simplify conflict resolution for tables with LOB columns, create an error handler to handle errors for the table. When registering the error handler using the DBMS_APPLY_ADM.SET_DML_HANDLER
procedure, ensure that you set the assemble_lobs
parameter to TRUE
.
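Registering such an error handler might be sketched as follows. The table hr.job_history and the handler procedure strmadmin.errors_pkg.handle_lob_errors are hypothetical names:

```sql
-- Sketch: register an error handler for UPDATE errors on a table with
-- LOB columns, with assemble_lobs set to TRUE so that LOBs are
-- assembled before the handler is invoked. Names are hypothetical.
BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'hr.job_history',
    object_type    => 'TABLE',
    operation_name => 'UPDATE',
    error_handler  => TRUE,
    user_procedure => 'strmadmin.errors_pkg.handle_lob_errors',
    assemble_lobs  => TRUE);
END;
/
```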
If you configure conflict resolution at a destination database, then additional supplemental logging is required at the source database. Specifically, the columns specified in a column list for conflict resolution during apply must be conditionally logged if more than one column at the source database is used in the column list at the destination database.
The following sections describe best practices for configuring apply processes:
Set the parallelism of an apply process by specifying the parallelism
parameter in the DBMS_APPLY_ADM.SET_PARAMETER
procedure. The parallelism
parameter controls the number of processes that concurrently apply changes. The default setting for the parallelism
apply process parameter is 4
.
Typically, apply process parallelism is set to either 1
, 4
, 8
, or 16
. The setting that is best for a particular apply process depends on the load applied and the processing power of the computer system running the database. Follow these guidelines when setting apply process parallelism:
If the load is heavy for the apply process and the computer system running the database has excellent processing power, then set apply process parallelism to 8
or 16
. Multiple high-speed CPUs provide excellent processing power.
If the load is light for the apply process, then set apply process parallelism to 1
or 4
.
If the computer system running the database has less than optimal processing power, then set apply process parallelism to 1
or 4
.
Ensure that the PROCESSES
initialization parameter is set appropriately when you set the parallelism
apply process parameter.
In addition, if parallelism is greater than 1 for an apply process, then ensure that any database objects whose changes are applied by the apply process have the appropriate settings for the INITRANS
and PCTFREE
parameters. The INITRANS
parameter specifies the initial number of concurrent transaction entries allocated within each data block allocated to the database object. Set the INITRANS
parameter to the parallelism of the apply process or higher. The PCTFREE
parameter specifies the percentage of space in each data block of the database object reserved for future updates to rows of the object. The PCTFREE
parameter should be set to 10 or higher. You can modify these parameters for a table using the ALTER
TABLE
statement.
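For example, for an apply process with parallelism 8, the settings might be adjusted as in this sketch; hr.employees is an assumed table name:

```sql
-- Sketch: set INITRANS to at least the apply process parallelism (8
-- here) and PCTFREE to 10 or higher on a table whose changes the
-- apply process applies. The table name is hypothetical.
ALTER TABLE hr.employees INITRANS 8 PCTFREE 10;
```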
See Also:
|
When the disable_on_error
apply process parameter is set to Y
, the apply process is disabled on the first unresolved error, even if the error is not irrecoverable. When the disable_on_error
apply process parameter is set to N
, the apply process continues regardless of unresolved errors. The default setting for this parameter is Y
. If you do not want an apply process to become disabled when it encounters errors, then set the disable_on_error
parameter to N
.
The following section describes best practices for operating existing apply processes.
The error queue contains all of the current apply errors for a database. If there are multiple apply processes in a database, then the error queue contains the apply errors for each apply process. If an apply process encounters an error when it tries to apply a logical change record (LCR) in a transaction, then all of the LCRs in the transaction are moved to the error queue. To view information about apply errors, query the DBA_APPLY_ERROR
data dictionary view or use Enterprise Manager.
The MESSAGE_NUMBER
column in the DBA_APPLY_ERROR
view indicates the LCR within the transaction that resulted in the error. When apply errors are encountered, correct the problems that caused the errors, and either retry or delete the error transaction in the error queue.
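Monitoring and resolving apply errors might look like the following sketch. The local transaction ID shown is a hypothetical value taken from the query output:

```sql
-- Sketch: review the error queue, then retry or delete an error
-- transaction. The transaction ID is a hypothetical example.
SELECT APPLY_NAME, LOCAL_TRANSACTION_ID, ERROR_NUMBER,
       ERROR_MESSAGE, MESSAGE_COUNT
  FROM DBA_APPLY_ERROR;

-- Retry one transaction after correcting the underlying problem:
EXEC DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '1.17.2485');

-- Or discard it:
EXEC DBMS_APPLY_ADM.DELETE_ERROR(local_transaction_id => '1.17.2485');
```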
See Also: Oracle Streams Concepts and Administration for information about managing apply errors and for information about displaying detailed information for the column values of each LCR in an error transaction |
This chapter describes the best practices for propagating messages in an Oracle Streams replication environment. This chapter includes these topics:
The following sections describe best practices for configuring propagations:
A propagation can be queue-to-queue or queue-to-database link (queue-to-dblink). Use queue-to-queue propagations whenever possible. A queue-to-queue propagation always has its own exclusive propagation job to propagate messages from the source queue to the destination queue. Because each propagation job has its own propagation schedule, the propagation schedule of each queue-to-queue propagation can be managed separately. Even when multiple queue-to-queue propagations use the same database link, you can enable, disable, or set the propagation schedule for each queue-to-queue propagation separately.
Propagations configured before Oracle Database 10g Release 2 are queue-to-dblink propagations. Also, any propagation that includes a queue in a database before Oracle Database 10g Release 2 must be a queue-to-dblink propagation. When queue-to-dblink propagations are used, propagation will not succeed if the database link no longer connects to the owning instance of the destination queue.
If you have queue-to-dblink propagations created in a prior release of Oracle Database, then you can re-create these propagations during a maintenance window to use queue-to-queue propagation. To re-create a propagation, complete the following steps:
Stop the propagation.
Ensure that the source queue is empty.
Ensure that the destination queue is empty and has no unapplied, spilled messages before you drop the propagation.
Re-create the propagation with the queue_to_queue
parameter set to TRUE
in the creation procedure.
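The re-creation step might be sketched as follows. The propagation, queue, rule set, and database names are assumptions for illustration:

```sql
-- Sketch: re-create a propagation as queue-to-queue by setting the
-- queue_to_queue parameter to TRUE. All names are hypothetical.
BEGIN
  DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
    propagation_name   => 'prop1',
    source_queue       => 'strmadmin.q1',
    destination_queue  => 'strmadmin.q2',
    destination_dblink => 'dbs2.example.com',
    rule_set_name      => 'strmadmin.prop1_rule_set',
    queue_to_queue     => TRUE);
END;
/
```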
If you automate the configuration, as described in "Automate the Oracle Streams Replication Configuration", then each propagation uses queue-to-queue propagation automatically.
See Also:
|
Propagation latency is the maximum wait, in seconds, in the propagation window for a message to be propagated after it is enqueued. Set the propagation latency to an appropriate value for each propagation in your Oracle Streams replication environment. The default propagation latency value is 60.
The following scenarios describe how a propagation will behave given different propagation latency values:
If the latency is set to 60, and there are no messages enqueued during the propagation window, then the queue will not be checked for 60 seconds for messages to be propagated to the specified destination. That is, messages enqueued into the queue for the propagation destination will not be propagated for at least 60 more seconds.
If the latency is set to 600, and there are no messages enqueued during the propagation window, then the queue will not be checked for 10 minutes for messages to be propagated to the specified destination. That is, messages enqueued into the queue for the propagation destination will not be propagated for at least 10 more minutes.
If the latency is set to 0, then a job slave will be waiting for messages to be enqueued for the destination and, as soon as a message is enqueued, it will be propagated. Setting latency to 0 is not recommended because it might affect the ability of the database to shut down in NORMAL
mode.
You can set propagation parameters using the ALTER_PROPAGATION_SCHEDULE
procedure from the DBMS_AQADM
package. For example, to set the latency of a propagation from the q1
source queue owned by strmadmin
to the destination queue q2
at the database with a global name of dbs2.example.com
, log in as the Oracle Streams administrator, and run the following procedure:
BEGIN DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE( queue_name => 'strmadmin.q1', destination => 'dbs2.example.com', latency => 5, destination_queue => 'strmadmin.q2'); END; /
Note: If the latency parameter is not specified, then propagation latency is set to the default value of 60. |
When using Oracle Streams propagation in a Wide Area Network (WAN), increase the session data unit (SDU) to improve the propagation performance. The maximum value for SDU is 32K (32767). The SDU value for network transmission is negotiated between the sender and receiver sides of the connection, and the minimum SDU value of the two endpoints is used for any individual connection. To take advantage of an increased SDU for Oracle Streams propagation, the receiving side sqlnet.ora
file must include the DEFAULT_SDU_SIZE
parameter. The receiving side listener.ora
file must indicate the SDU change for the system identifier (SID). The sending side tnsnames.ora
file connect string must also include the SDU modification for the particular service.
In addition, the SEND_BUF_SIZE
and RECV_BUF_SIZE
parameters in the listener.ora
and tnsnames.ora
files increase the performance of propagation on your system. These parameters increase the size of the buffer used to send or receive the propagated messages. These parameters should only be increased after careful analysis of their overall impact on system performance.
The following section describes best practices for operating existing propagations.
Sometimes, the propagation job for a propagation might become "broken" or fail to start after it encounters an error or after a database restart. Typically, stopping and restarting the propagation solves the problem. For example, to stop and restart a propagation named prop1
, run the following procedures:
BEGIN DBMS_PROPAGATION_ADM.STOP_PROPAGATION( propagation_name => 'prop1'); END; / BEGIN DBMS_PROPAGATION_ADM.START_PROPAGATION( propagation_name => 'prop1'); END; /
If running these procedures does not correct the problem, then run the STOP_PROPAGATION
procedure with the force
parameter set to TRUE
, and restart propagation. For example:
BEGIN DBMS_PROPAGATION_ADM.STOP_PROPAGATION( propagation_name => 'prop1', force => TRUE); END; / BEGIN DBMS_PROPAGATION_ADM.START_PROPAGATION( propagation_name => 'prop1'); END; /
When you stop a propagation with the force
parameter set to TRUE
, the statistics for the propagation are cleared.
Oracle Streams Replication Administrator's Guide describes the features and functionality of Oracle Streams that can be used for data replication. This document contains conceptual information about Oracle Streams replication, along with information about configuring and managing an Oracle Streams replication environment.
This Preface contains these topics:
Oracle Streams Replication Administrator's Guide is intended for database administrators who create and maintain Oracle Streams replication environments. These administrators perform one or more of the following tasks:
Plan for an Oracle Streams replication environment
Configure an Oracle Streams replication environment
Configure conflict resolution in an Oracle Streams replication environment
Administer an Oracle Streams replication environment
Monitor an Oracle Streams replication environment
Perform necessary troubleshooting activities for an Oracle Streams replication environment
To use this document, you must be familiar with relational database concepts, SQL, distributed database administration, general Oracle Streams concepts, Advanced Queuing concepts, PL/SQL, and the operating systems under which you run an Oracle Streams environment.
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc
.
Access to Oracle Support
Oracle customers have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info
or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs
if you are hearing impaired.
For more information, see these Oracle resources:
Oracle Streams online Help for the Oracle Streams tool in Oracle Enterprise Manager
Many of the examples in this book use the sample schemas of the sample database, which is installed by default when you install Oracle Database. Refer to Oracle Database Sample Schemas for information about how these schemas were created and how you can use them yourself.
The following text conventions are used in this document:
Convention | Meaning |
---|---|
boldface | Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary. |
italic | Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values. |
monospace | Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter. |
Implicit capture means that a capture process or a synchronous capture captures and enqueues database changes automatically. A capture process captures changes in the redo log, while a synchronous capture captures data manipulation language (DML) changes with an internal mechanism. Both capture processes and synchronous captures reformat the captured changes into logical change records (LCRs) and enqueue the LCRs into an ANYDATA
queue.
The following topics describe configuring implicit capture:
Each task described in this chapter should be completed by an Oracle Streams administrator that has been granted the appropriate privileges, unless specified otherwise.
See Also:
|
You can create a capture process that captures changes either locally at the source database or remotely at a downstream database. A downstream capture process runs on a downstream database, and redo data from the source database is copied to the downstream database. A downstream capture process captures changes in the copied redo data at the downstream database.
You can use any of the following procedures to create a local capture process:
Each of the procedures in the DBMS_STREAMS_ADM
package creates a capture process with the specified name if it does not already exist, creates either a positive rule set or negative rule set for the capture process if the capture process does not have such a rule set, and can add table rules, schema rules, or global rules to the rule set.
The CREATE_CAPTURE
procedure creates a capture process, but does not create a rule set or rules for the capture process. However, the CREATE_CAPTURE
procedure enables you to specify an existing rule set to associate with the capture process, either as a positive or a negative rule set, a first SCN, and a start SCN for the capture process. Also, to create a capture process that performs downstream capture, you must use the CREATE_CAPTURE
procedure.
The following sections describe configuring a capture process:
Caution: When a capture process is started or restarted, it might need to scan redo log files with a FIRST_CHANGE# value that is lower than the start SCN. Removing required redo log files before they are scanned by a capture process causes the capture process to abort. You can query the DBA_CAPTURE data dictionary view to determine the first SCN, start SCN, and required checkpoint SCN for a capture process. A capture process needs the redo log file that includes the required checkpoint SCN, and all subsequent redo log files. See Oracle Streams Concepts and Administration for more information about the first SCN and start SCN for a capture process. |
The following tasks must be completed before you configure a capture process:
Complete the following tasks in "Tasks to Complete Before Configuring Oracle Streams Replication".
If you plan to create a real-time or an archived-log downstream capture process that uses redo transport services to transfer archived redo log files to the downstream database automatically, then complete the steps in "Configuring Log File Transfer to a Downstream Capture Database".
If you plan to create a real-time downstream capture process, then complete the steps in "Adding Standby Redo Logs for Real-Time Downstream Capture".
Create an ANYDATA
queue to associate with the capture process, if one does not exist. See "Creating an ANYDATA Queue" for instructions. The examples in this chapter assume that the queue used by the capture process is strmadmin.streams_queue
. Create the queue on the same database that will run the capture process.
The following sections describe using the DBMS_STREAMS_ADM
package and the DBMS_CAPTURE_ADM
package to create a local capture process.
This section contains the following examples:
To configure a local capture process using the DBMS_STREAMS_ADM
package, complete the following steps:
Complete the tasks in "Preparing to Configure a Capture Process".
In SQL*Plus, connect to the source database as the Oracle Streams administrator.
See Oracle Database Administrator's Guide for instructions about connecting to a database in SQL*Plus.
Run the ADD_TABLE_RULES
procedure in the DBMS_STREAMS_ADM
package to create a local capture process:
BEGIN DBMS_STREAMS_ADM.ADD_TABLE_RULES( table_name => 'hr.employees', streams_type => 'capture', streams_name => 'strm01_capture', queue_name => 'strmadmin.streams_queue', include_dml => TRUE, include_ddl => TRUE, include_tagged_lcr => FALSE, source_database => NULL, inclusion_rule => TRUE); END; /
Running this procedure performs the following actions:
Creates a capture process named strm01_capture
. The capture process is created only if it does not already exist. If a new capture process is created, then this procedure also sets the start SCN to the point in time of creation.
Associates the capture process with the existing queue strmadmin.streams_queue
.
Creates a positive rule set and associates it with the capture process, if the capture process does not have a positive rule set. The rule set is a positive rule set because the inclusion_rule
parameter is set to TRUE
. The rule set uses the SYS.STREAMS$_EVALUATION_CONTEXT
evaluation context. The rule set name is system generated.
Creates two rules. One rule evaluates to TRUE
for data manipulation language (DML) changes to the hr.employees
table, and the other rule evaluates to TRUE
for data definition language (DDL) changes to the hr.employees
table. The rule names are system generated.
Adds the two rules to the positive rule set associated with the capture process. The rules are added to the positive rule set because the inclusion_rule
parameter is set to TRUE
.
Specifies that the capture process captures a change in the redo log only if the change has a NULL
tag, because the include_tagged_lcr
parameter is set to FALSE
. This behavior is accomplished through the system-created rules for the capture process.
Creates a capture process that captures local changes to the source database because the source_database
parameter is set to NULL
. For a local capture process, you can also specify the global name of the local database for this parameter.
Prepares the hr.employees
table for instantiation.
Enables supplemental logging for any primary key, unique key, bitmap index, and foreign key columns in the hr.employees
table.
If necessary, complete the steps described in "After Configuring a Capture Process".
To configure a local capture process using the DBMS_CAPTURE_ADM
package, complete the following steps:
Complete the tasks in "Preparing to Configure a Capture Process".
In SQL*Plus, connect to the source database as the Oracle Streams administrator.
See Oracle Database Administrator's Guide for instructions about connecting to a database in SQL*Plus.
Create the rule set that will be used by the capture process if it does not exist. In this example, assume that the rule set is strmadmin.strm01_rule_set
. Optionally, you can also add rules to the rule set. See Oracle Streams Concepts and Administration for instructions.
Run the CREATE_CAPTURE
procedure in the DBMS_CAPTURE_ADM
package to create a local capture process:
BEGIN DBMS_CAPTURE_ADM.CREATE_CAPTURE( queue_name => 'strmadmin.streams_queue', capture_name => 'strm02_capture', rule_set_name => 'strmadmin.strm01_rule_set', start_scn => NULL, source_database => NULL, first_scn => NULL); END; /
Running this procedure performs the following actions:
Creates a capture process named strm02_capture
. A capture process with the same name must not exist.
Associates the capture process with the existing queue strmadmin.streams_queue
.
Associates the capture process with the existing rule set strmadmin.strm01_rule_set
. This rule set is the positive rule set for the capture process.
Creates a capture process that captures local changes to the source database because the source_database
parameter is set to NULL
. For a local capture process, you can also specify the global name of the local database for this parameter.
Specifies that the Oracle database determines the start SCN and first SCN for the capture process because both the start_scn
parameter and the first_scn
parameter are set to NULL
.
If no other capture processes that capture local changes are running on the local database, then the BUILD
procedure in the DBMS_CAPTURE_ADM
package is run automatically. Running this procedure extracts the data dictionary to the redo log, and a LogMiner data dictionary is created when the capture process is started for the first time.
If necessary, complete the steps described in "After Configuring a Capture Process".
This example runs the CREATE_CAPTURE
procedure in the DBMS_CAPTURE_ADM
package to create a local capture process with a start SCN set to 223525
. This example assumes that there is at least one local capture process at the database, and that this capture process has taken at least one checkpoint. You can always specify a start SCN for a new capture process that is equal to or greater than the current SCN of the source database. To specify a start SCN that is lower than the current SCN of the database, the specified start SCN must be higher than the lowest first SCN for an existing local capture process that has been started successfully at least once and has taken at least one checkpoint.
You can determine the first SCN for existing capture processes, and whether these capture processes have taken a checkpoint, by running the following query:
SELECT CAPTURE_NAME, FIRST_SCN, MAX_CHECKPOINT_SCN FROM DBA_CAPTURE;
Your output looks similar to the following:
CAPTURE_NAME FIRST_SCN MAX_CHECKPOINT_SCN ------------------------------ ---------- ------------------ CAPTURE_SIMP 223522 230825
These results show that the capture_simp
capture process has a first SCN of 223522
. Also, this capture process has taken a checkpoint because the MAX_CHECKPOINT_SCN
value is non-NULL
. Therefore, the start SCN for the new capture process can be set to 223522
or higher.
To configure a local capture process with a non-NULL
start SCN, complete the following steps:
Complete the tasks in "Preparing to Configure a Capture Process".
In SQL*Plus, connect to the source database as the Oracle Streams administrator.
See Oracle Database Administrator's Guide for instructions about connecting to a database in SQL*Plus.
Create the rule set that will be used by the capture process if it does not exist. In this example, assume that the rule set is strmadmin.strm01_rule_set
. Optionally, you can also add rules to the rule set. See Oracle Streams Concepts and Administration for instructions.
Run the following procedure to create the capture process:
BEGIN DBMS_CAPTURE_ADM.CREATE_CAPTURE( queue_name => 'strmadmin.streams_queue', capture_name => 'strm03_capture', rule_set_name => 'strmadmin.strm01_rule_set', start_scn => 223525, source_database => NULL, first_scn => NULL); END; /
Running this procedure performs the following actions:
Creates a capture process named strm03_capture
. A capture process with the same name must not exist.
Associates the capture process with the existing queue strmadmin.streams_queue
.
Associates the capture process with the existing rule set strmadmin.strm01_rule_set
. This rule set is the positive rule set for the capture process.
Specifies 223525
as the start SCN for the capture process. The new capture process uses the same LogMiner data dictionary as one of the existing capture processes. Oracle Streams automatically chooses which LogMiner data dictionary to share with the new capture process. Because the first_scn
parameter was set to NULL
, the first SCN for the new capture process is the same as the first SCN of the existing capture process whose LogMiner data dictionary was shared. In this example, the existing capture process is capture_simp
.
Creates a capture process that captures local changes to the source database because the source_database
parameter is set to NULL
. For a local capture process, you can also specify the global name of the local database for this parameter.
Note: If no local capture process exists when the procedure in this example is run, then the DBMS_CAPTURE_ADM.BUILD procedure is run automatically during capture process creation to extract the data dictionary into the redo log. The first time the new capture process is started, it uses this redo data to create a LogMiner data dictionary. In this case, a specified start_scn parameter value must be equal to or higher than the current database SCN. |
If necessary, complete the steps described in "After Configuring a Capture Process".
See Also:
|
This section describes configuring a real-time or archived-log downstream capture process.
This section contains these topics:
To create a capture process that performs downstream capture, you must use the CREATE_CAPTURE
procedure. The example in this section describes creating a real-time downstream capture process that uses a database link to the source database. However, a real-time downstream capture process might not use a database link.
This example assumes the following:
The source database is dbs1.example.com
and the downstream database is dbs2.example.com
.
The capture process that will be created at dbs2.example.com
uses the strmadmin.streams_queue
.
The capture process will capture DML changes to the hr.departments
table.
This section contains an example that runs the CREATE_CAPTURE
procedure in the DBMS_CAPTURE_ADM
package to create a real-time downstream capture process at the dbs2.example.com
downstream database that captures changes made to the dbs1.example.com
source database. The capture process in this example uses a database link to dbs1.example.com
for administrative purposes. The name of the database link must match the global name of the source database.
Note: You can configure multiple real-time downstream capture processes that capture changes from the same source database, but you cannot configure real-time downstream capture for multiple source databases at one downstream database. |
See Also: Oracle Streams Concepts and Administration for conceptual information about real-time downstream capture |
Complete the following steps:
Complete the tasks in "Preparing to Configure a Capture Process". Ensure that you complete the tasks "Configuring Log File Transfer to a Downstream Capture Database" and "Adding Standby Redo Logs for Real-Time Downstream Capture".
In SQL*Plus, connect to the downstream database dbs2.example.com
as the Oracle Streams administrator.
See Oracle Database Administrator's Guide for information about connecting to a database in SQL*Plus.
Create the database link from dbs2.example.com
to dbs1.example.com
. For example, if the user strmadmin
is the Oracle Streams administrator on both databases, then create the following database link:
CREATE DATABASE LINK dbs1.example.com CONNECT TO strmadmin IDENTIFIED BY password USING 'dbs1.example.com';
This example assumes that an Oracle Streams administrator exists at the source database dbs1.example.com
. If no Oracle Streams administrator exists at the source database, then the Oracle Streams administrator at the downstream database should connect to a user who allows remote access by an Oracle Streams administrator. You can enable remote access for a user by specifying the user as the grantee when you run the GRANT_REMOTE_ADMIN_ACCESS
procedure in the DBMS_STREAMS_AUTH
package at the source database.
Run the CREATE_CAPTURE
procedure to create the capture process:
BEGIN DBMS_CAPTURE_ADM.CREATE_CAPTURE( queue_name => 'strmadmin.streams_queue', capture_name => 'real_time_capture', rule_set_name => NULL, start_scn => NULL, source_database => 'dbs1.example.com', use_database_link => TRUE, first_scn => NULL, logfile_assignment => 'implicit'); END; /
Running this procedure performs the following actions:
Creates a capture process named real_time_capture
at the downstream database dbs2.example.com
. A capture process with the same name must not exist.
Associates the capture process with an existing queue on dbs2.example.com
named streams_queue
and owned by strmadmin
.
Specifies that the source database of the changes that the capture process will capture is dbs1.example.com
.
Specifies that the capture process uses a database link with the same name as the source database global name to perform administrative actions at the source database.
Specifies that the capture process accepts redo data implicitly from dbs1.example.com
. Therefore, the capture process scans the standby redo log at dbs2.example.com
for changes that it must capture. If the capture process falls behind, then it scans the archived redo log files written from the standby redo log.
This step does not associate the capture process real_time_capture
with any rule set. A rule set will be created and associated with the capture process in a later step.
If no other capture process at dbs2.example.com
is capturing changes from the dbs1.example.com
source database, then the DBMS_CAPTURE_ADM.BUILD
procedure is run automatically at dbs1.example.com
using the database link. Running this procedure extracts the data dictionary at dbs1.example.com
to the redo log, and a LogMiner data dictionary for dbs1.example.com
is created at dbs2.example.com
when the capture process real_time_capture
is started for the first time at dbs2.example.com
.
If multiple capture processes at dbs2.example.com
are capturing changes from the dbs1.example.com
source database, then the new capture process real_time_capture
uses the same LogMiner data dictionary for dbs1.example.com
as one of the existing archived-log capture processes. Oracle Streams automatically chooses which LogMiner data dictionary to share with the new capture process.