This appendix describes how Oracle Streams works with other Oracle Database components.
This appendix includes these topics:
Oracle Streams and Oracle Real Application Clusters
Oracle Streams and Transparent Data Encryption
Oracle Streams and Flashback Data Archive
Oracle Streams and Recovery Manager (RMAN)
Oracle Streams and Distributed Transactions
Oracle Streams and Oracle Database Vault
The following topics describe how Oracle Streams works with Oracle Real Application Clusters (Oracle RAC):
See Also:
Oracle Streams Replication Administrator's Guide for information about best practices for Oracle Streams in an Oracle RAC environment

A capture process can capture changes in an Oracle Real Application Clusters (Oracle RAC) environment. If you use one or more capture processes and Oracle RAC in the same environment, then all archived logs that contain changes to be captured by a capture process must be available for all instances in the Oracle RAC environment. In an Oracle RAC environment, a capture process reads changes made by all instances. Any processes used by a single capture process run on a single instance in an Oracle RAC environment.
Each capture process is started and stopped on the owner instance for its ANYDATA queue, even if the start or stop procedure is run on a different instance. Also, a capture process follows its queue to a different instance if the current owner instance becomes unavailable. The queue itself follows the rules for primary instance and secondary instance ownership.
If the owner instance for a queue table containing a queue used by a capture process becomes unavailable, then queue ownership is transferred automatically to another instance in the cluster. In addition, if the capture process was enabled when the owner instance became unavailable, then the capture process is restarted automatically on the new owner instance. If the capture process was disabled when the owner instance became unavailable, then the capture process remains disabled on the new owner instance.
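To determine which instance currently owns a queue table, query the OWNER_INSTANCE column of the DBA_QUEUE_TABLES data dictionary view. A minimal sketch, assuming the queue table is owned by a hypothetical Streams administrator schema named strmadmin:

SELECT queue_table, owner_instance, primary_instance, secondary_instance
  FROM dba_queue_tables
 WHERE owner = 'STRMADMIN';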
LogMiner supports the LOG_ARCHIVE_DEST_n initialization parameter, and Oracle Streams capture processes use LogMiner to capture changes from the redo log. If an archived log file is inaccessible from one destination, then a local capture process can read it from another accessible destination. On an Oracle RAC database, this ability also enables you to use cross instance archival (CIA) such that each instance archives its files to all other instances. Cross instance archival cannot detect or resolve gaps caused by missing archived log files; hence, it can only complement, not replace, an existing solution for sharing archived log files between all instances.
In a downstream capture process environment, the source database can be a single instance database or a multi-instance Oracle RAC database. The downstream database can be a single instance database or a multi-instance Oracle RAC database, regardless of whether the source database is single instance or multi-instance.
See Also:
Oracle Database Reference for more information about the DBA_QUEUE_TABLES data dictionary view
Oracle Real Application Clusters Administration and Deployment Guide for more information about configuring archived logs to be shared between instances
A synchronous capture can capture changes in an Oracle Real Application Clusters (Oracle RAC) environment. In an Oracle RAC environment, synchronous capture reads changes made by all instances.
For the best performance with synchronous capture in an Oracle RAC environment, changes to independent sets of tables should be captured by separate synchronous captures. For example, if different applications use different sets of database objects in the database, then configure a separate synchronous capture to capture changes to the database objects for each application. In this case, each synchronous capture should use a different queue and queue table.
Combined capture and apply can be used in an Oracle Real Application Clusters (Oracle RAC) environment. In an Oracle RAC environment, the capture process and apply process can be on the same instance, on different instances in a single Oracle RAC database, or on different databases. When the capture process and apply process are on different instances in the same database or on different databases, you must configure a propagation between the capture process's queue and the apply process's queue for combined capture and apply to be used.
You can configure a queue to stage LCRs and user messages in an Oracle Real Application Clusters (Oracle RAC) environment. In an Oracle RAC environment, only the owner instance can have a buffer for a queue, but different instances can have buffers for different queues. A buffered queue is System Global Area (SGA) memory associated with a queue.
Oracle Streams processes and jobs support primary instance and secondary instance specifications for queue tables. If you use these specifications, then the secondary instance assumes ownership of a queue table when the primary instance becomes unavailable, and ownership is transferred back to the primary instance when it becomes available again.
You can set primary and secondary instance specifications using the ALTER_QUEUE_TABLE procedure in the DBMS_AQADM package. The DBA_QUEUE_TABLES data dictionary view contains information about the owner instance for a queue table. A queue table can contain multiple queues. In this case, each queue in a queue table has the same owner instance as the queue table.
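For example, the following sketch sets instance 1 as the primary instance and instance 2 as the secondary instance for a hypothetical queue table named strmadmin.streams_queue_table:

BEGIN
  DBMS_AQADM.ALTER_QUEUE_TABLE(
    queue_table        => 'strmadmin.streams_queue_table',
    primary_instance   => 1,
    secondary_instance => 2);
END;
/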
See Also:
Oracle Database Reference for more information about the DBA_QUEUE_TABLES data dictionary view
Oracle Streams Advanced Queuing User's Guide for more information about queues and Oracle RAC
Oracle Database PL/SQL Packages and Types Reference for more information about the ALTER_QUEUE_TABLE procedure
A propagation can propagate messages from one queue to another in an Oracle Real Application Clusters (Oracle RAC) environment. A propagation job running on an instance propagates logical change records (LCRs) from any queue owned by that instance to destination queues.
Any propagation to an Oracle RAC database is made over database links. The database links must be configured to connect to the destination instance that owns the queue that will receive the messages.
If the owner instance for a queue table containing a destination queue for a propagation becomes unavailable, then queue ownership is transferred automatically to another instance in the cluster. If both the primary and secondary instance for a queue table containing a destination queue become unavailable, then queue ownership is transferred automatically to another instance in the cluster. In this case, if the primary or secondary instance becomes available again, then ownership is transferred back to one of them accordingly.
A queue-to-queue propagation to a buffered destination queue uses a service to provide transparent failover in an Oracle RAC environment. That is, a propagation job for a queue-to-queue propagation automatically connects to the instance that owns the destination queue. The service used by a queue-to-queue propagation always runs on the owner instance of the destination queue. This service is created only for buffered queues in an Oracle RAC database. If you plan to use buffered messaging with an Oracle RAC database, then messages can be enqueued into a buffered queue on any instance. If messages are enqueued on an instance that does not own the queue, then the messages are sent to the correct instance, but it is more efficient to enqueue messages on the instance that owns the queue. You can use the service to connect to the owner instance of the queue before enqueuing messages into a buffered queue.
Because the queue service always runs on the owner instance of the queue, transparent failover can occur when Oracle RAC instances fail. When multiple queue-to-queue propagations use a single database link, the connect description for each queue-to-queue propagation changes automatically to propagate messages to the correct destination queue.
In contrast, queue-to-dblink propagations do not use services. Queue-to-dblink propagations require you to repoint your database links if the owner instance in an Oracle RAC database that contains the destination queue for the propagation fails. To make the propagation job connect to the correct instance on the destination database, manually reconfigure the database link from the source database to connect to the instance that owns the destination queue. You do not need to modify a propagation that uses a re-created database link.
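Whether a propagation is queue-to-queue or queue-to-dblink is determined when the propagation is created. The following sketch creates a queue-to-queue propagation by setting the queue_to_queue parameter of the DBMS_PROPAGATION_ADM.CREATE_PROPAGATION procedure to TRUE; the propagation name, queue names, and database link are hypothetical:

BEGIN
  DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
    propagation_name   => 'strm01_propagation',
    source_queue       => 'strmadmin.strm01_queue',
    destination_queue  => 'strmadmin.strm02_queue',
    destination_dblink => 'dest.example.com',
    queue_to_queue     => TRUE);
END;
/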
The NAME column in the DBA_SERVICES data dictionary view contains the service name for a queue. The NETWORK_NAME column in the DBA_QUEUES data dictionary view contains the network name for a queue. Do not manage the services for queue-to-queue propagations in any way. Oracle manages them automatically. For queue-to-dblink propagations, use the network name as the service name in the connect string of the database link to connect to the correct instance.
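For example, the following query returns the network name for a hypothetical queue named STRM02_QUEUE; for a queue-to-dblink propagation, use the returned value as the service name in the connect string of the database link:

SELECT name, network_name
  FROM dba_queues
 WHERE name = 'STRM02_QUEUE';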
Note:
If a queue contains or will contain captured LCRs in an Oracle RAC environment, then use queue-to-queue propagations to propagate messages to an Oracle RAC destination database. If a queue-to-dblink propagation propagates captured LCRs to an Oracle RAC destination database, then this propagation must use an instance-specific database link that refers to the owner instance of the destination queue. If such a propagation connects to any other instance, then the propagation raises an error.
See Also:
"Message Propagation Between Queues"You can configure an Oracle Streams apply process to apply changes in an Oracle Real Application Clusters (Oracle RAC) environment. Each apply process is started and stopped on the owner instance for its ANYDATA
queue, even if the start or stop procedure is run on a different instance. An apply coordinator process, its corresponding apply reader server, and all of its apply servers run on a single instance.
If the owner instance for a queue table containing a queue used by an apply process becomes unavailable, then queue ownership is transferred automatically to another instance in the cluster. Also, an apply process will follow its queue to a different instance if the current owner instance becomes unavailable. The queue itself follows the rules for primary instance and secondary instance ownership. In addition, if the apply process was enabled when the owner instance became unavailable, then the apply process is restarted automatically on the new owner instance. If the apply process was disabled when the owner instance became unavailable, then the apply process remains disabled on the new owner instance.
See Also:
Oracle Database Reference for more information about the DBA_QUEUE_TABLES data dictionary view
The following topics describe how Oracle Streams works with Transparent Data Encryption:
See Also:
Oracle Database Advanced Security Administrator's Guide for information about transparent data encryption

A local capture process can capture changes to columns that have been encrypted using transparent data encryption. A downstream capture process can capture changes to columns that have been encrypted only if the downstream database shares a wallet with the source database. A wallet can be shared through a network file system (NFS), or it can be copied from one computer system to another manually. When a wallet is shared with a downstream database, ensure that the ENCRYPTION_WALLET_LOCATION parameter in the sqlnet.ora file at the downstream database specifies the wallet location.
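For example, an ENCRYPTION_WALLET_LOCATION entry in the sqlnet.ora file might look like the following sketch; the wallet directory path is hypothetical:

ENCRYPTION_WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /u01/app/oracle/admin/destdb/wallet)))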
If you copy a wallet to a downstream database, then ensure that you copy the wallet from the source database to the downstream database whenever the wallet at the source database changes. Do not perform any operations on the wallet at the downstream database, such as changing the encryption key for a replicated table.
Encrypted columns in row logical change records (row LCRs) captured by a local or downstream capture process are decrypted when the row LCRs are staged in a buffered queue. If row LCRs spill to disk in a database with transparent data encryption enabled, then Oracle Streams transparently encrypts any encrypted columns while the row LCRs are stored on disk.
Note:
A capture process supports encrypted columns only if the redo logs used by the capture process were generated by a database with a compatibility level of 11.0.0 or higher. The compatibility level is controlled by the COMPATIBLE initialization parameter.

A synchronous capture can capture changes to columns that have been encrypted using transparent data encryption. Encrypted columns in row logical change records (row LCRs) captured by a synchronous capture remain encrypted when the row LCRs are staged in a persistent queue.
You can use explicit capture to construct and enqueue row logical change records (row LCRs) for columns that are encrypted in database tables. However, you cannot specify that columns are encrypted when you construct the row LCRs. Therefore, when explicitly captured row LCRs are staged in a queue, all of the columns in the row LCRs are decrypted.
See Also:
"Explicit Capture by Applications"A persistent queue can store row logical change records (row LCRs) captured by a synchronous capture, and these row LCRs can contain changes to columns that were encrypted using transparent data encryption. The row LCRs remain encrypted while they are stored in the persistent queue. Explicitly captured row LCRs cannot contain encrypted columns.
A buffered queue can store row LCRs that contain changes captured by a capture process, and these row LCRs can contain changes to columns that were encrypted using transparent data encryption. When row LCRs with encrypted columns are stored in buffered queues, the columns are decrypted. When row LCRs spill to disk, Oracle Streams transparently encrypts any encrypted columns while the row LCRs are stored on disk.
Note:
For Oracle Streams to encrypt columns transparently, the encryption master key must be stored in the wallet on the local database, and the wallet must be open. The following statements set the master key and open the wallet:

ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY key-password;

ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY key-password;
A propagation can propagate row logical change records (row LCRs) that contain changes to columns that were encrypted using transparent data encryption. When a propagation propagates row LCRs with encrypted columns, the encrypted columns are decrypted while the row LCRs are transferred over the network. You can use the features of Oracle Advanced Security to encrypt data transfers over the network if necessary.
See Also:
Oracle Database Advanced Security Administrator's Guide for information about configuring network data encryption
An apply process can dequeue and process implicitly captured row logical change records (row LCRs) that contain columns encrypted using transparent data encryption. When row LCRs with encrypted columns are dequeued by an apply process, the encrypted columns are decrypted. These row LCRs with decrypted columns can be sent to an apply handler for custom processing, or they can be applied directly. When row LCRs are applied, and the modified table contains encrypted columns, any changes to encrypted columns are encrypted when they are applied.
When row LCRs contain encrypted columns, but the corresponding columns at the destination database are not encrypted, the preserve_encryption apply process parameter controls apply process behavior:
If the preserve_encryption parameter is set to Y, then the apply process raises an error when row LCRs contain encrypted columns, but the corresponding columns at the destination database are not encrypted. When an error is raised, the row LCR is not applied, and all of the row LCRs in the transaction are moved to the error queue.
If the preserve_encryption parameter is set to N, then the apply process applies the row changes when row LCRs contain encrypted columns, but the corresponding columns at the destination database are not encrypted.
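For example, the following sketch sets the preserve_encryption parameter to N for a hypothetical apply process named strm01_apply:

BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'strm01_apply',
    parameter  => 'preserve_encryption',
    value      => 'N');
END;
/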
When an apply process moves implicitly captured row LCRs with encrypted columns to the error queue, the encrypted columns are encrypted when the row LCRs are in the error queue. Row LCRs are implicitly captured using capture processes and synchronous captures.
A messaging client can dequeue implicitly captured row LCRs that contain columns encrypted using transparent data encryption. When row LCRs with encrypted columns are dequeued by a messaging client, the encrypted columns are decrypted.
Oracle Streams supports tables in a flashback data archive. Capture processes can capture data manipulation language (DML) and data definition language (DDL) changes made to these tables. Synchronous captures can capture DML changes made to these tables. Apply processes can apply changes encapsulated in logical change records (LCRs) to these tables.
Oracle Streams capture processes and apply processes also support the following DDL statements:
CREATE FLASHBACK ARCHIVE
ALTER FLASHBACK ARCHIVE
DROP FLASHBACK ARCHIVE
CREATE TABLE with a FLASHBACK ARCHIVE clause
ALTER TABLE with a FLASHBACK ARCHIVE clause
Note:
Oracle Streams does not capture or apply changes made to internal tables used by a flashback data archive.
The following topics describe how Oracle Streams works with Recovery Manager (RMAN):
You can use RMAN to instantiate database objects during the configuration of an Oracle Streams replication environment. The RMAN DUPLICATE and CONVERT DATABASE commands can instantiate an entire database, and the RMAN TRANSPORT TABLESPACE command can instantiate a tablespace or set of tablespaces.
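For example, a minimal RMAN DUPLICATE session might look like the following sketch; the connect strings and the destdb database name are hypothetical, and additional options are typically required in a real environment:

CONNECT TARGET sys@srcdb
CONNECT AUXILIARY sys@destdb
DUPLICATE TARGET DATABASE TO destdb FROM ACTIVE DATABASE NOFILENAMECHECK;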
See Also:
Oracle Streams Replication Administrator's Guide for information about using RMAN for instantiation

Some Recovery Manager (RMAN) deletion policies and commands delete archived redo log files. If one of these RMAN policies or commands is used on a database that generates redo log files for one or more capture processes, then ensure that the RMAN commands do not delete archived redo log files that are required by a capture process.
The following sections describe the behavior of RMAN deletion policies and commands for local capture processes and downstream capture processes.
See Also:
"Are Required Redo Log Files Missing?" for information about determining whether a capture process is missing required archived redo log files and for information correcting this problem. This section also contains information about fast recovery area and local capture processes.
Oracle Database Backup and Recovery User's Guide and Oracle Database Backup and Recovery Reference for more information about RMAN
When a local capture process is configured, RMAN does not delete archived redo log files that are required by the local capture process unless there is space pressure in the fast recovery area. Specifically, RMAN does not delete archived redo log files that contain changes with system change number (SCN) values that are equal to or greater than the required checkpoint SCN for the local capture process. This is the default RMAN behavior for all RMAN deletion policies and DELETE commands, including DELETE ARCHIVELOG and DELETE OBSOLETE.
When there is not enough space in the fast recovery area to write a new log file, RMAN automatically deletes one or more archived redo log files. Oracle Database writes warnings to the alert log when RMAN automatically deletes an archived redo log file that is required by a local capture process.
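You can check the required checkpoint SCN for each local capture process by querying the DBA_CAPTURE data dictionary view; archived redo log files that contain changes with SCN values at or above this value are still required:

SELECT capture_name, required_checkpoint_scn
  FROM dba_capture;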
When backups of the archived redo log files are taken on the local capture process database, Oracle recommends the following RMAN deletion policy:
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP integer TIMES TO DEVICE TYPE deviceSpecifier;
This deletion policy requires that a log file be backed up integer times before it is considered for deletion.
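For example, the following policy requires two backups to a tape device before a log file is eligible for deletion; the integer and device type shown are illustrative:

CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO DEVICE TYPE sbt;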
When no backups of the archived redo log files are taken on the local capture process database, no specific deletion policy is recommended. By default, RMAN does not delete archived redo log files that are required by a local capture process.
When a downstream capture process captures database changes made at a source database, ensure that no RMAN deletion policy or command deletes an archived redo log file until after it is transferred from the source database to the downstream capture process database.
The following are considerations for specific RMAN deletion policies and commands that delete archived redo log files:
The RMAN command CONFIGURE ARCHIVELOG DELETION POLICY sets a deletion policy that determines when archived redo log files in the fast recovery area are eligible for deletion. The deletion policy also applies to all RMAN DELETE commands, including DELETE ARCHIVELOG and DELETE OBSOLETE.
The following settings determine the behavior at the source database:
A deletion policy set TO SHIPPED TO STANDBY does not delete a log file until after it is transferred to a downstream capture process database that requires the file. These log files might or might not have been processed by the downstream capture process. Automatic deletion occurs when there is not enough space in the fast recovery area to write a new log file.

A deletion policy set TO APPLIED ON STANDBY does not delete a log file until after it is transferred to a downstream capture process database that requires the file and the source database marks the log file as applied. The source database marks a log file as applied when the minimum required checkpoint SCN of all of the downstream capture processes for the source database is greater than the highest SCN in the log file.

A deletion policy set TO BACKED UP integer TIMES TO DEVICE TYPE requires that a log file be backed up integer times before it is considered for deletion. A log file can be deleted even if the log file has not been processed by a downstream capture process that requires it.

A deletion policy set TO NONE means that a log file can be deleted when there is space pressure on the fast recovery area, even if the log file has not been processed by a downstream capture process that requires it.
The RMAN command DELETE ARCHIVELOG deletes archived redo log files that meet all of the following conditions:

The log files satisfy the condition specified in the DELETE ARCHIVELOG command.

The log files can be deleted according to the CONFIGURE ARCHIVELOG DELETION POLICY. For example, if the policy is set TO SHIPPED TO STANDBY, then this command does not delete a log file until after it is transferred to any downstream capture process database that requires it.
This behavior applies when the database is mounted or open.
If archived redo log files are not deleted because they contain changes required by a downstream capture process, then RMAN displays a warning message about skipping the delete operation for these files.
The RMAN command DELETE OBSOLETE permanently purges the archived redo log files that meet all of the following conditions:

The log files are obsolete according to the retention policy.

The log files can be deleted according to the CONFIGURE ARCHIVELOG DELETION POLICY. For example, if the policy is set TO SHIPPED TO STANDBY, then this command does not delete a log file until after it is transferred to any downstream capture process database that requires it.
This behavior applies when the database is mounted or open.
The RMAN command BACKUP ARCHIVELOG ALL DELETE INPUT copies the archived redo log files and deletes the original files after completing the backup. This command does not delete the log file until after it is transferred to a downstream capture process database when the following conditions are met:

The database is mounted or open.

The log file is required by a downstream capture process.

The deletion policy is set TO SHIPPED TO STANDBY.
If archived redo log files are not deleted because they contain changes required by a downstream capture process, then RMAN displays a warning message about skipping the delete operation for these files.
Oracle recommends one of the following RMAN deletion policies at the source database for a downstream capture process:
When backups of the archived redo log files are taken on the source database, set the deletion policy to the following:
CONFIGURE ARCHIVELOG DELETION POLICY TO SHIPPED TO STANDBY BACKED UP integer TIMES TO DEVICE TYPE deviceSpecifier;
When no backups of the archived redo log files are taken on the source database, set the deletion policy to the following:
CONFIGURE ARCHIVELOG DELETION POLICY TO SHIPPED TO STANDBY;
Note:
At a downstream capture process database, archived redo log files transferred from a source database are not managed by RMAN.

Oracle Streams supports replicating a recovery catalog in a one-way replication environment. Bi-directional replication of a recovery catalog is not supported.
See Also:
Oracle Database 2 Day + Data Replication and Integration Guide for information about one-way and bi-directional replication
Oracle Database Backup and Recovery User's Guide for information about the recovery catalog
You can perform distributed transactions using either of the following methods:
Modify tables in multiple databases in a coordinated manner using database links.
Use the XA interface, as exposed by the DBMS_XA supplied PL/SQL package or by the OCI or JDBC libraries. The XA interface implements the X/Open Distributed Transaction Processing (DTP) architecture.
Oracle Streams replicates changes made to the source database during a distributed transaction using either of these two methods to the destination database. An apply process at the destination database applies the changes in a transaction after the transaction has committed.
However, the distributed transaction state is not replicated or sent. The destination database or client application does not inherit the in-doubt or prepared state of such a transaction. Also, Oracle Streams does not replicate or send the changes using the same global transaction identifier used at the source database for XA transactions.
XA transactions can be performed in two ways:
Tightly coupled, where different XA branches share locks
Loosely coupled, where different XA branches do not share locks
Oracle Streams supports replication of changes made by loosely coupled XA branches regardless of the COMPATIBLE initialization parameter value. Oracle Streams supports replication of changes made by tightly coupled branches on an Oracle RAC source database only if the COMPATIBLE initialization parameter is set to 11.2.0 or higher.
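For reference, a minimal loosely coupled XA transaction issued through the DBMS_XA package might look like the following sketch; the global transaction identifier and the table being updated are hypothetical:

DECLARE
  rc  PLS_INTEGER;
  xid DBMS_XA_XID := DBMS_XA_XID(100);  -- hypothetical global transaction id
BEGIN
  rc := DBMS_XA.XA_START(xid, DBMS_XA.TMNOFLAGS);   -- start the transaction branch
  UPDATE hr.employees SET salary = salary * 1.1
   WHERE employee_id = 100;                         -- work done in the branch
  rc := DBMS_XA.XA_END(xid, DBMS_XA.TMSUCCESS);     -- detach from the branch
  rc := DBMS_XA.XA_COMMIT(xid, TRUE);               -- one-phase commit
END;
/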
See Also:
Oracle Database Administrator's Guide for more information about distributed transactions
Oracle Database Advanced Application Developer's Guide for more information about Oracle XA
Oracle Database Vault restricts access to specific areas in an Oracle database from any user, including users who have administrative access. If you are using Oracle Streams in an Oracle Database Vault environment, then the following privileges and roles are required:
The Streams administrator must be granted the DV_STREAMS_ADMIN role to perform the following tasks: create a capture process, create an apply process, and modify the capture user for a capture process. When the Streams administrator is not performing these tasks, you can revoke the DV_STREAMS_ADMIN role from the Streams administrator.
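For example, assuming the Streams administrator is strmadmin, a user authorized to grant Database Vault roles might grant and later revoke the role as follows:

GRANT DV_STREAMS_ADMIN TO strmadmin;

REVOKE DV_STREAMS_ADMIN FROM strmadmin;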
The apply user for an apply process must be authorized to apply changes to realms that include replicated database objects. The replicated database objects are the objects to which the apply process applies changes.
To authorize an apply user for a realm, run the DBMS_MACADM.ADD_AUTH_TO_REALM procedure and specify the realm and the apply user. For example, to authorize apply user strmadmin for the sales realm, run the following procedure:
BEGIN
  DBMS_MACADM.ADD_AUTH_TO_REALM(
    realm_name => 'sales',
    grantee    => 'strmadmin');
END;
/
In addition, the user who performs the following actions must be granted the BECOME USER system privilege:
Creates or alters a capture process
Creates or alters an outbound server
Creates or alters an inbound server
Granting the BECOME USER system privilege to the user who performs these actions is not required if Oracle Database Vault is not installed. You can revoke the BECOME USER system privilege from the user after completing one of these actions, if necessary.
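For example, assuming the user is strmadmin, the grant and the subsequent revoke might look like this:

GRANT BECOME USER TO strmadmin;

REVOKE BECOME USER FROM strmadmin;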