This chapter describes how to configure and start up sample replication schemes. It includes these topics:
You must have the ADMIN privilege to complete the procedures in this chapter.
This section describes how to create an active standby pair with one subscriber. The active database is master1. The standby database is master2. The subscriber database is subscriber1. To keep the example simple, all databases reside on the same computer, server1.
Figure 2-1 shows this configuration.
Figure 2-1 Active standby pair with one subscriber
This section includes the following topics:
Step 1: Create the DSNs for the master and the subscriber databases
Step 7: Duplicate the active database to the standby database
Create DSNs named master1, master2 and subscriber1 as described in "Managing TimesTen Databases" in Oracle TimesTen In-Memory Database Operations Guide.

On UNIX systems, use a text editor to create the following odbc.ini file:

[master1]
DRIVER=install_dir/lib/libtten.so
DataStore=/tmp/master1
DatabaseCharacterSet=AL32UTF8
ConnectionCharacterSet=AL32UTF8
[master2]
DRIVER=install_dir/lib/libtten.so
DataStore=/tmp/master2
DatabaseCharacterSet=AL32UTF8
ConnectionCharacterSet=AL32UTF8
[subscriber1]
DRIVER=install_dir/lib/libtten.so
DataStore=/tmp/subscriber1
DatabaseCharacterSet=AL32UTF8
ConnectionCharacterSet=AL32UTF8
On Windows, use the ODBC Administrator to set the same connection attributes. Use defaults for all other settings.
Use the ttIsql utility to connect to the master1 database:

% ttIsql master1
Copyright (c) 1996-2011, Oracle. All rights reserved.
Type ? or "help" for help, type "exit" to quit ttIsql.
connect "DSN=master1";
Connection successful: DSN=master1;UID=terry;DataStore=/tmp/master1;
DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;TypeMode=0;
(Default setting AutoCommit=1)
Command>
Create a table called tab with columns a and b:

Command> CREATE TABLE tab (a NUMBER NOT NULL,
       > b CHAR(18),
       > PRIMARY KEY (a));
Define the active standby pair on master1:

Command> CREATE ACTIVE STANDBY PAIR master1, master2
       > SUBSCRIBER subscriber1;
For more information about defining an active standby pair, see Chapter 3, "Defining an Active Standby Pair Replication Scheme".
Start the replication agent on master1:
Command> CALL ttRepStart;
The state of a new database in an active standby pair is 'IDLE' until the active database has been set.
Use the ttRepStateSet built-in procedure to designate master1 as the active database:
Command> CALL ttRepStateSet('ACTIVE');
Verify the state of master1:

Command> CALL ttRepStateGet;
< ACTIVE, NO GRID >
1 row found.
Create a user terry with a password of terry and grant terry the ADMIN privilege. Creating a user with the ADMIN privilege is required by Access Control for the next step.

Command> CREATE USER terry IDENTIFIED BY terry;
User created.
Command> GRANT admin TO terry;
Exit ttIsql and use the ttRepAdmin utility with the -duplicate option to duplicate the active database to the standby database. If you are using two different hosts, enter the ttRepAdmin command from the target host.
% ttRepAdmin -duplicate -from master1 -host server1 -uid terry -pwd terry "dsn=master2"
Use ttIsql to connect to master2 and start the replication agent:

% ttIsql master2
Copyright (c) 1996-2011, Oracle. All rights reserved.
Type ? or "help" for help, type "exit" to quit ttIsql.
connect "DSN=master2";
Connection successful: DSN=master2;UID=terry;DataStore=/tmp/master2;
DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;TypeMode=0;
(Default setting AutoCommit=1)
Command> CALL ttRepStart;
Starting the replication agent for the standby database automatically sets its state to 'STANDBY'. Verify the state of master2:

Command> CALL ttRepStateGet;
< STANDBY, NO GRID >
1 row found.
Use the ttRepAdmin utility to duplicate the standby database to the subscriber database:
% ttRepAdmin -duplicate -from master2 -host server1 -uid terry -pwd terry "dsn=subscriber1"
Use ttIsql to connect to subscriber1 and start the replication agent. Verify the state of subscriber1.

% ttIsql subscriber1
Copyright (c) 1996-2011, Oracle. All rights reserved.
Type ? or "help" for help, type "exit" to quit ttIsql.
connect "DSN=subscriber1";
Connection successful: DSN=subscriber1;UID=terry;DataStore=/tmp/subscriber1;
DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;TypeMode=0;
(Default setting AutoCommit=1)
Command> CALL ttRepStart;
Command> call ttRepStateGet;
< IDLE, NO GRID >
1 row found.
Insert a row into the tab table on master1.

Command> INSERT INTO tab VALUES (1,'Hello');
1 row inserted.
Command> SELECT * FROM tab;
< 1, Hello >
1 row found.
Verify that the insert is replicated to master2 and subscriber1.

Command> SELECT * FROM tab;
< 1, Hello >
1 row found.
Stop the replication agents on each database:
Command> CALL ttRepStop;
Drop the active standby pair on each database. You can then drop the table tab on any database in which you have dropped the active standby pair.
Command> DROP ACTIVE STANDBY PAIR;
Command> DROP TABLE tab;
This section describes how to configure a replication scheme that replicates the contents of a single table in a master database (masterds) to a table in a subscriber database (subscriberds). To keep the example simple, both databases reside on the same computer.
This section includes the following topics:
Step 2: Create a table and replication scheme on the master database
Step 3: Create a table and replication scheme on the subscriber database
Create DSNs named masterds and subscriberds as described in "Managing TimesTen Databases" in Oracle TimesTen In-Memory Database Operations Guide.

On UNIX systems, use a text editor to create the following odbc.ini file:

[masterds]
DataStore=/tmp/masterds
DatabaseCharacterSet=AL32UTF8
ConnectionCharacterSet=AL32UTF8
[subscriberds]
DataStore=/tmp/subscriberds
DatabaseCharacterSet=AL32UTF8
ConnectionCharacterSet=AL32UTF8
On Windows, use the ODBC Administrator to set the same connection attributes. Use defaults for all other settings.
Connect to masterds with the ttIsql utility:

% ttIsql masterds
Copyright (c) 1996-2011, Oracle. All rights reserved.
Type ? or "help" for help, type "exit" to quit ttIsql.
connect "DSN=masterds";
Connection successful: DSN=masterds;UID=ttuser;
DataStore=/tmp/masterds;DatabaseCharacterSet=AL32UTF8;
ConnectionCharacterSet=AL32UTF8;TypeMode=0;
(Default setting AutoCommit=1)
Command>
Create a table named tab with columns named a, b and c:

Command> CREATE TABLE tab (a NUMBER NOT NULL,
       > b NUMBER,
       > c CHAR(8),
       > PRIMARY KEY (a));
Create a replication scheme called repscheme to replicate the tab table from masterds to subscriberds.

Command> CREATE REPLICATION repscheme
       > ELEMENT e TABLE tab
       > MASTER masterds
       > SUBSCRIBER subscriberds;
Connect to subscriberds and create the same table and replication scheme, using the same procedure described in Step 2.
Start the replication agents on masterds and subscriberds:
Command> call ttRepStart;
Exit ttIsql. Use the ttStatus utility to verify that the replication agents are running for both databases:

% ttStatus
TimesTen status report as of Thu Aug 11 17:05:23 2011
Daemon pid 18373 port 4134 instance ttuser
TimesTen server pid 18381 started on port 4136
------------------------------------------------------------------------
Data store /tmp/masterds
There are 16 connections to the data store
Shared Memory KEY 0x0201ab43 ID 5242889
PL/SQL Memory KEY 0x0301ab43 ID 5275658 Address 0x10000000
Type            PID     Context     Connection Name       ConnID
Process         20564   0x081338c0  masterds              1
Replication     20676   0x08996738  LOGFORCE              5
Replication     20676   0x089b69a0  REPHOLD               2
Replication     20676   0x08a11a58  FAILOVER              3
Replication     20676   0x08a7cd70  REPLISTENER           4
Replication     20676   0x08ad7e28  TRANSMITTER           6
Subdaemon       18379   0x080a11f0  Manager               2032
Subdaemon       18379   0x080fe258  Rollback              2033
Subdaemon       18379   0x081cb818  Checkpoint            2036
Subdaemon       18379   0x081e6940  Log Marker            2035
Subdaemon       18379   0x08261e70  Deadlock Detector     2038
Subdaemon       18379   0xae100470  AsyncMV               2040
Subdaemon       18379   0xae11b508  HistGC                2041
Subdaemon       18379   0xae300470  Aging                 2039
Subdaemon       18379   0xae500470  Flusher               2034
Subdaemon       18379   0xae55b738  Monitor               2037
Replication policy  : Manual
Replication agent is running.
Cache Agent policy  : Manual
PL/SQL enabled.
------------------------------------------------------------------------
Data store /tmp/subscriberds
There are 16 connections to the data store
Shared Memory KEY 0x0201ab41 ID 5177351
PL/SQL Memory KEY 0x0301ab41 ID 5210120 Address 0x10000000
Type            PID     Context     Connection Name       ConnID
Process         20594   0x081338f8  subscriberds          1
Replication     20691   0x0893c550  LOGFORCE              5
Replication     20691   0x089b6978  REPHOLD               2
Replication     20691   0x08a11a30  FAILOVER              3
Replication     20691   0x08a6cae8  REPLISTENER           4
Replication     20691   0x08ad7ba8  RECEIVER              6
Subdaemon       18376   0x080b1450  Manager               2032
Subdaemon       18376   0x0810e4a8  Rollback              2033
Subdaemon       18376   0x081cb8b0  Flusher               2034
Subdaemon       18376   0x08246de0  Monitor               2035
Subdaemon       18376   0x082a20a8  Deadlock Detector     2036
Subdaemon       18376   0x082fd370  Checkpoint            2037
Subdaemon       18376   0x08358638  Aging                 2038
Subdaemon       18376   0x083b3900  Log Marker            2040
Subdaemon       18376   0x083ce998  AsyncMV               2039
Subdaemon       18376   0x08469e90  HistGC                2041
Replication policy  : Manual
Replication agent is running.
Cache Agent policy  : Manual
PL/SQL enabled.
Use ttIsql to connect to the master database and insert some rows into the tab table:

% ttIsql masterds
Command> INSERT INTO tab VALUES (1, 22, 'Hello');
1 row inserted.
Command> INSERT INTO tab VALUES (3, 86, 'World');
1 row inserted.
Open a second command prompt window for the subscriber. Connect to the subscriber database and check the contents of the tab table:

% ttIsql subscriberds
Command> SELECT * FROM tab;
< 1, 22, Hello>
< 3, 86, World>
2 rows found.
Figure 2-3 shows that the rows that are inserted into masterds are replicated to subscriberds.
Figure 2-3 Replicating changes to the subscriber database
After you have completed your replication tests, stop the replication agents on both masterds and subscriberds:
Command> CALL ttRepStop;
To remove the tab table and repscheme replication scheme from the master and subscriber databases, enter these statements on each database:

Command> DROP REPLICATION repscheme;
Command> DROP TABLE tab;
This preface summarizes the new features of Oracle TimesTen In-Memory Database release 11.2.2 that are documented in this guide. It provides links to more information.
You can now specify an alias or the IP address of the network interface when you want to use a specific local or remote network interface over which database duplication occurs. For details, see "Duplicating a database".
By default, replication is performed with a single thread. You can improve performance by configuring parallel replication, which uses multiple threads to send updates from the source database to the target database and to apply the updates on the target database.
There are two types of parallel replication: automatic and user-defined. In this release, automatic parallel replication is introduced. For more information, see "Configuring parallel replication".
You have additional control over TimesTen application behavior when Oracle Clusterware is managing a TimesTen active standby pair. The AppFailureInterval, AppRestartAttempts and AppUptimeThreshold Clusterware attributes are new. See "Implementing application failover".
Durable commit behavior has changed. See "DURABLE COMMIT" for active standby pairs and "DURABLE COMMIT" for other replication schemes.
LOB columns can be replicated. See "Table requirements and restrictions for active standby pairs" and "Table requirements and restrictions for replication schemes".
TimesTen provides in-memory columnar compression. However, you cannot replicate tables with compressed columns. This restriction is mentioned in "Table requirements and restrictions for active standby pairs" and "Table requirements and restrictions for replication schemes".
The following sections describe how to design a highly available system and define replication schemes:
To reduce the amount of bandwidth required for replication, see "Compressing replicated traffic".
When you are planning an active standby pair, keep in mind the following restrictions:
You can specify at most 127 subscriber databases.
Each master and subscriber database must be on a different node to ensure high availability.
The active database and the standby database should be on the same LAN.
The clock skew between the active node and the standby node cannot exceed 250 milliseconds.
For the initial setup, you can create a standby database only by duplicating the active database with the ttRepAdmin -duplicate utility or the ttRepDuplicateEx C function.
Read-only subscribers can be created only by duplicating the standby database. If the standby database is unavailable, then the read-only subscribers can be created by duplicating the active database. See "Duplicating a database".
After failover, the new standby database can only be recovered from the active database by duplicating the active database unless return twosafe replication is used between the active and the standby databases. If return twosafe replication is used, the automated master catch-up feature may be used instead. See "Automatic catch-up of a failed master database".
Writes on replicated tables are not allowed on the standby database and the subscriber databases. However, operations on sequences and XLA bookmarks are allowed on the standby database and the subscriber databases. Reads are also allowed.
Replication from the standby database to the read-only subscribers occurs asynchronously.
ALTER ACTIVE STANDBY PAIR statements can be executed only on the active database. If ALTER ACTIVE STANDBY PAIR is executed on the active database, then the standby database must be regenerated by duplicating the active database. All subscribers must also be regenerated from the standby database. See "Duplicating a database".
You cannot replicate tables with compressed columns.
Before you define the active standby pair, define the DSNs for the active, standby and read-only subscriber databases. On UNIX, create an odbc.ini file. On Windows, use the ODBC Administrator to name the databases and set connection attributes. See "Step 1: Create the DSNs for the master and the subscriber databases" for an example.
Each database "name" specified in a replication scheme must match the prefix of the database file name, without the path, given for the DataStore data store attribute in the DSN definition for the database. To avoid confusion, use the same name for both the DataStore and Data Source Name data store attributes in each DSN definition. Values for DataStore are case-sensitive. If the database path is directory/subdirectory/foo.ds0, then foo is the database name that you should use.
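As an illustration of this naming rule, a DSN definition along these lines (the path here is hypothetical) keeps the two attributes consistent:

```
[foo]
DataStore=/disk1/databases/foo
DatabaseCharacterSet=AL32UTF8
ConnectionCharacterSet=AL32UTF8
```

The checkpoint files for this database are created as foo.ds0 and foo.ds1 under /disk1/databases, so foo is the name to use in replication scheme statements.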
Use the CREATE ACTIVE STANDBY PAIR SQL statement to create an active standby pair replication scheme. The complete syntax for the CREATE ACTIVE STANDBY PAIR statement is provided in the Oracle TimesTen In-Memory Database SQL Reference.
You must have the ADMIN privilege to use the CREATE ACTIVE STANDBY PAIR statement and to perform other replication operations. Only the instance administrator can duplicate databases.
Table 3-1 shows the components of an active standby pair replication scheme and identifies the parameters associated with the topics in this chapter.
Table 3-1 Components of an active standby pair replication scheme

| Component | See... |
|---|---|
| The active and standby master databases | "Identifying the databases in the active standby pair" |
| The SUBSCRIBER clause | "Identifying the databases in the active standby pair" |
| Network operation clauses | "Configuring network operations" |
| The INCLUDE and EXCLUDE clauses | "Including or excluding database objects from replication" |
Use the full database name described in "Defining the DSNs for the databases". The first database name designates the active database. The second database name designates the standby database. Read-only subscriber databases are indicated by the SUBSCRIBER clause.
You can also specify the hosts where the databases reside by using an IP address or a literal host name surrounded by double quotes.
The active database and the standby database should be on separate hosts to achieve a highly available system. Read-only subscribers can be either local or remote. A remote subscriber provides protection from site-specific disasters.
Provide a host ID as part of FullDatabaseName:

DatabaseName [ON Host]

Host can be either an IP address or a literal host name. Use the value returned by the hostname operating system command. It is good practice to surround a host name with double quotes. For example:
CREATE ACTIVE STANDBY PAIR repdb1_1122 ON "host1", repdb2_1122 ON "host2";
Tables that are replicated in an active standby pair must have one of the following:
A primary key
A unique index over non-nullable columns
Replication uses the primary key or unique index to identify each row in the replicated table. Replication always selects the first usable index that turns up in a sequential check of the table's index array. If there is no primary key, replication selects the first unique index without NULL columns it encounters. The selected index on the replicated table in the active database must also exist on its counterpart table in the standby database.
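For example, a table without a primary key could still satisfy this requirement with a unique index over a non-nullable column; a sketch (the table and index names here are illustrative, not from this guide):

```sql
CREATE TABLE account (acct_id NUMBER NOT NULL, balance NUMBER);
CREATE UNIQUE INDEX acct_ix ON account (acct_id);
```

Replication would use acct_ix to identify rows of account, so the same index must also exist on the counterpart table in the standby database.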
Note: The keys on replicated tables are transmitted in each update record to the subscribers. Smaller keys are transmitted more efficiently.
Replicated tables have these data type restrictions:
VARCHAR2, NVARCHAR2, VARBINARY and TT_VARCHAR columns in replicated tables are limited to a size of 4 megabytes. For a VARCHAR2 column, the maximum length when using character length semantics depends on the number of bytes each character occupies when using a particular database character set. For example, if the character set requires four bytes for each character, the maximum possible length is one million characters. For an NVARCHAR2 column, which requires two bytes for each character, the maximum length when using character length semantics is two million characters.
Columns with the BLOB data type in replicated tables are limited to a size of 16 megabytes. Columns with the CLOB or NCLOB data type in replicated tables are limited to a size of 4 megabytes.
A primary key column cannot have a LOB data type.
You can configure your replication scheme with a return service to ensure a higher level of confidence that your replicated data is consistent on the active and standby databases. See "Copying updates between databases". This section describes how to configure and manage the return receipt and return twosafe services. NO RETURN (asynchronous replication) is the default and provides the fastest performance.
The following sections describe the following return service clauses:
TimesTen provides an optional return receipt service to loosely couple or synchronize your application with the replication mechanism.
You can specify the RETURN RECEIPT clause to enable the return receipt service for the standby database. With return receipt enabled, when your application commits a transaction for an element on the active database, the application remains blocked until the standby acknowledges receipt of the transaction update.

If the standby is unable to acknowledge receipt of the transaction within a configurable timeout period, your application receives a tt_ErrRepReturnFailed (8170) warning on its commit request. See "Setting the return service timeout period" for more information on the return service timeout period.
You can use the ttRepXactStatus procedure to check on the status of a return receipt transaction. See "Checking the status of return service transactions" for details.
You can also configure the replication agent to disable the return receipt service after a specific number of timeouts. See "Setting the return service timeout period" for details.
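As a sketch, the return receipt service could be enabled when defining the pair; this example reuses the database names from the earlier chapter and follows the CREATE ACTIVE STANDBY PAIR clause ordering:

```sql
CREATE ACTIVE STANDBY PAIR master1, master2
  RETURN RECEIPT
  SUBSCRIBER subscriber1;
```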
RETURN RECEIPT enables notification of receipt for all transactions. You can use the RETURN RECEIPT BY REQUEST clause to enable receipt notification only for specific transactions identified by your application.
If you specify RETURN RECEIPT BY REQUEST, you must use the ttRepSyncSet built-in procedure to enable the return receipt service for a transaction. The call to enable the return receipt service must be part of the transaction (autocommit must be off).
If the standby database is unable to acknowledge receipt of the transaction update within a configurable timeout period, the application receives a tt_ErrRepReturnFailed (8170) warning on its commit request. See "Setting the return service timeout period" for more information on the return service timeout period.
You can use ttRepSyncGet to check if a return service is enabled and obtain the timeout value. For example:

Command> CALL ttRepSyncGet();
< 01, 45, 1>
1 row found.
TimesTen provides a return twosafe service to fully synchronize your application with the replication mechanism. The return twosafe service ensures that each replicated transaction is committed on the standby database before it is committed on the active database. If replication is unable to verify the transaction has been committed on the standby, it returns notification of the error. Upon receiving an error, the application can either take a unique action or fall back on preconfigured actions, depending on the type of failure.
When replication is configured with RETURN TWOSAFE, you must disable autocommit mode.
A transaction that contains operations that are replicated with RETURN TWOSAFE cannot have a PassThrough setting greater than 0. If PassThrough is greater than 0, an error is returned and the transaction must be rolled back.
If the standby is unable to acknowledge commit of the transaction update within a configurable timeout period, the application receives a tt_ErrRepReturnFailed (8170) warning on its commit request. See "Setting the return service timeout period" for more information on the return service timeout period.
RETURN TWOSAFE enables notification of commit on the standby database for all transactions. You can use the RETURN TWOSAFE BY REQUEST clause to enable notification of a commit on the standby only for specific transactions identified by your application.
A transaction that contains operations that are replicated with RETURN TWOSAFE cannot have a PassThrough setting greater than 0. If PassThrough is greater than 0, an error is returned and the transaction must be rolled back.
If you specify RETURN TWOSAFE BY REQUEST for a standby database, you must use the ttRepSyncSet built-in procedure to enable the return twosafe service for a transaction. The call to enable the return twosafe service must be part of the transaction (autocommit must be off).
If the standby is unable to acknowledge commit of the transaction within the timeout period, the application receives a tt_ErrRepReturnFailed (8170) warning on its commit request. The application can then choose how to handle the timeout, in the same manner as described for "RETURN TWOSAFE".
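A sketch of requesting return twosafe for a single transaction. As with the return receipt example, the parameter interpretation is an assumption: 0x02 is taken to request return twosafe, 45 is the return wait time in seconds, and 1 is the local commit action.

```
Command> SET AUTOCOMMIT 0;
Command> CALL ttRepSyncSet(0x02, 45, 1);
Command> UPDATE tab SET b = 'Goodbye' WHERE a = 1;
Command> COMMIT;
```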
The ALTER TABLE statement cannot be used to alter a replicated table that is part of a RETURN TWOSAFE BY REQUEST transaction. If DDLCommitBehavior=0 (the default), the ALTER TABLE operation succeeds because a commit is performed before the ALTER TABLE operation, resulting in the ALTER TABLE operation executing in a new transaction which is not part of the RETURN TWOSAFE BY REQUEST transaction. If DDLCommitBehavior=1, the ALTER TABLE operation results in error 8051.
See "Setting the return service timeout period" for more information on setting the return service timeout period.
You can use ttRepSyncGet to check if a return service is enabled and obtain the timeout value. For example:

Command> CALL ttRepSyncGet();
< 01, 45, 1>
1 row found.
Table 3-2 lists the optional STORE attributes for the CREATE ACTIVE STANDBY PAIR statement.
Table 3-2 STORE attribute descriptions

| STORE attribute | Description |
|---|---|
| DISABLE RETURN | Set the return service policy so that return service blocking is disabled after the specified number of timeouts. See "Establishing return service failure/recovery policies". |
| RETURN SERVICES {ON \| OFF} WHEN [REPLICATION] STOPPED | Set return services on or off when replication is disabled. See "Establishing return service failure/recovery policies". |
| RESUME RETURN | If return service blocking has been disabled by DISABLE RETURN, set the policy for re-enabling the return service. See "Establishing return service failure/recovery policies". |
| RETURN WAIT TIME | Specifies the number of seconds to wait for return service acknowledgement. A value of 0 means that there is no waiting. The default value is 10 seconds. The application can override this timeout setting by using the returnWait parameter in the ttRepSyncSet built-in procedure. |
| DURABLE COMMIT | Overrides the DurableCommits general connection attribute setting. See "DURABLE COMMIT". |
| LOCAL COMMIT ACTION | Specifies the default action (COMMIT or NO ACTION) to be taken for a return service transaction in the event of a timeout. This default setting can be overridden for specific transactions by using the localAction parameter in the ttRepSyncSet built-in procedure. |
| COMPRESS TRAFFIC | Compresses replicated traffic to reduce the amount of network bandwidth used. |
| PORT | Sets the port number used by a database to listen for updates from another database. In an active standby pair, the standby database listens for updates from the active database. Read-only subscribers listen for updates from the standby database. See "Port assignments". |
| TIMEOUT | Set the maximum number of seconds the replication agent waits for a response from the database. |
| FAILTHRESHOLD | Sets the log failure threshold. |
The rest of this section includes these topics:
If a replication scheme is configured with one of the return services described in "Using a return service", a timeout occurs if the standby database is unable to send an acknowledgement back to the active database within the time period specified by RETURN WAIT TIME. If the standby database is unable to acknowledge the transaction update from the active database within the timeout period, the application receives a tt_ErrRepReturnFailed warning on its commit request.
The default return service timeout period is 10 seconds. You can specify a different return service timeout period by:
Specifying the RETURN WAIT TIME in the CREATE ACTIVE STANDBY PAIR statement or ALTER ACTIVE STANDBY PAIR statement. A RETURN WAIT TIME of 0 indicates no waiting.
Specifying a different return service timeout period programmatically by calling the ttRepSyncSet procedure with a new value for the returnWait parameter. Once set, the timeout period applies to all subsequent return service transactions until you either reset the timeout period or terminate the application session.
A return service may time out because of a replication failure or because replication is so far behind that the return service transaction times out before it is replicated. However, unless there is a simultaneous replication failure, failure to obtain a return service confirmation from the standby does not necessarily mean the transaction has not been or will not be replicated.
You can respond to return service timeouts by:
You may want to respond if replication is stopped or return service timeout failures begin to adversely impact the performance of your replicated system. Your "tolerance threshold" for return service timeouts may depend on the historical frequency of timeouts and the performance/availability equation for your particular application, both of which should be factored into your response to the problem.
When using the return receipt service, you can manually respond by:
Using the ALTER ACTIVE STANDBY PAIR statement to disable return receipt blocking. See "Making other changes to an active standby pair".
Calling the ttDurableCommit built-in procedure to durably commit transactions on the active database that you can no longer verify as being received by the standby.
If you decide to disable return receipt blocking, your decision to re-enable it depends on your confidence level that the return receipt transaction is no longer likely to time out.
An alternative to manually responding to return service timeout failures is to establish return service failure and recovery policies in the replication scheme. These policies direct the replication agents to detect changes to the replication state and to keep track of return service timeouts and then automatically respond in a predefined manner.
The following attributes in the CREATE ACTIVE STANDBY PAIR statement set the failure and recovery policies when using a RETURN RECEIPT or RETURN TWOSAFE service:
The policies set by these attributes are applicable until changed. The replication agent must be running to enforce these policies, with the exception of DURABLE COMMIT.
The RETURN SERVICES {ON | OFF} WHEN [REPLICATION] STOPPED attribute determines whether a return receipt or return twosafe service continues to be enabled or is disabled when replication is stopped. "Stopped" in this context means that either the active replication agent is stopped (for example, by ttAdmin -repStop active) or the replication state of the standby database is set to stop or pause with respect to the active database (for example, by ttRepAdmin -state stop standby). A failed standby that has exceeded the specified FAILTHRESHOLD value is set to the failed state, but is eventually set to the stop state by the master replication agent.
Note: A standby database may become unavailable for a period of time that exceeds the timeout period specified by RETURN WAIT TIME but still be considered by the master replication agent to be in the start state. Failure policies related to timeouts are set by the DISABLE RETURN attribute.
RETURN SERVICES OFF WHEN REPLICATION STOPPED disables the return service when replication is stopped and is the default when using the RETURN RECEIPT service. RETURN SERVICES ON WHEN REPLICATION STOPPED allows the return service to continue to be enabled when replication is stopped and is the default when using the RETURN TWOSAFE service.
When a DISABLE RETURN value is set, the database keeps track of the number of return receipt or return twosafe transactions that have exceeded the timeout period set by RETURN WAIT TIME. If the number of timeouts exceeds the maximum value set by DISABLE RETURN, the application reverts to a default replication cycle in which it no longer waits for the standby to acknowledge the replicated updates.
Specifying SUBSCRIBER is the same as specifying ALL. Both settings refer to the standby database.
The DISABLE RETURN failure policy is only enabled when the replication agent is running. If DISABLE RETURN is specified without RESUME RETURN, the return services remain off until the replication agent for the database has been restarted. You can cancel this failure policy by stopping the replication agent and specifying DISABLE RETURN with a zero value for NumFailures. The count of timeouts to trigger the failure policy is reset either when you restart the replication agent, when you set the DISABLE RETURN value to 0, or when return service blocking is re-enabled by RESUME RETURN.
When we say return service blocking is "disabled," we mean that the applications on the master database no longer block execution while waiting to receive acknowledgements from the subscribers that they received or committed the replicated updates. Note, however, that the master still listens for an acknowledgement of each batch of replicated updates from the standby database.
You can establish a return service recovery policy by setting the RESUME RETURN attribute and specifying a resume latency value. When this attribute is set and return service blocking has been disabled for the standby database, the return receipt or return twosafe service is re-enabled when the commit-to-acknowledge time for a transaction falls below the value set by RESUME RETURN. The commit-to-acknowledge time is the latency between when the application issues a commit and when the master receives acknowledgement from the subscriber.
The RESUME RETURN
policy is enabled only when the replication agent is running. You can cancel a return receipt resume policy by stopping the replication agent and then using ALTER ACTIVE STANDBY PAIR
to set RESUME RETURN
to zero.
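As an illustration only, a CREATE ACTIVE STANDBY PAIR statement that combines a return service with both failure and recovery policies might look like the following sketch. The database names, host names and threshold values here are hypothetical, not taken from this chapter's examples:

```sql
-- Sketch: return receipt with a failure policy (disable blocking after
-- 5 timeouts) and a recovery policy (re-enable blocking once the
-- commit-to-acknowledge latency falls below 8 milliseconds).
CREATE ACTIVE STANDBY PAIR master1 ON "host1", master2 ON "host2"
  RETURN RECEIPT
  STORE master1 ON "host1"
    DISABLE RETURN SUBSCRIBER 5
    RESUME RETURN 8;
```

Because DISABLE RETURN on an active standby pair always refers to the standby database, SUBSCRIBER and ALL are interchangeable here.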
You can set the DURABLE COMMIT
attribute to specify the durable commit policy for applications that have return service blocking disabled by DISABLE RETURN
. When DURABLE COMMIT
is set to ON
, it overrides the DurableCommits
general connection attribute on the master database and forces durable commits for those transactions that have had return service blocking disabled.
When DURABLE COMMIT
is ON
, durable commits are issued when return service blocking is disabled regardless of whether the replication agent is running or stopped. They are also issued for an active standby pair in which the ttRepStateSave
built-in procedure has marked the standby database as failed.
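A hedged sketch of how DURABLE COMMIT might be combined with a DISABLE RETURN failure policy follows; the names and values are hypothetical:

```sql
-- Sketch: once return service blocking is disabled (after 5 timeouts),
-- force durable commits on the master regardless of DurableCommits.
CREATE ACTIVE STANDBY PAIR master1 ON "host1", master2 ON "host2"
  RETURN TWOSAFE
  STORE master1 ON "host1"
    DISABLE RETURN ALL 5
    DURABLE COMMIT ON;
```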
When you are using the return twosafe service, you can specify how the master replication agent responds to timeouts by setting LOCAL COMMIT ACTION
. You can override this setting for specific transactions by specifying the localAction
parameter of the ttRepSyncSet
built-in procedure.
The possible actions upon receiving a timeout during replication of a twosafe transaction are:
COMMIT
- On timeout, the commit function attempts to perform a commit to end the transaction locally. No more operations are possible on the same transaction.
NO ACTION
- On timeout, the commit function returns to the application, leaving the transaction in the same state it was in when it entered the commit call, with the exception that the application is not able to update any replicated tables. The application can reissue the commit. This is the default.
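For illustration, a statement that selects the non-default COMMIT action on twosafe timeout might look like the following sketch (names are hypothetical):

```sql
-- Sketch: on a twosafe timeout, commit the transaction locally on the
-- master instead of returning control with the transaction still open.
CREATE ACTIVE STANDBY PAIR master1 ON "host1", master2 ON "host2"
  RETURN TWOSAFE
  STORE master1 ON "host1"
    LOCAL COMMIT ACTION COMMIT;
```

Individual transactions can still override this store-level setting through the localAction parameter of ttRepSyncSet.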
If you are replicating over a low-bandwidth network, or if you are replicating massive amounts of data, you can set the COMPRESS TRAFFIC
attribute to reduce the amount of bandwidth required for replication. The COMPRESS TRAFFIC
attribute compresses the replicated data from the database specified by the STORE
parameter in the CREATE ACTIVE STANDBY PAIR
or ALTER ACTIVE STANDBY PAIR
statement. TimesTen does not compress traffic from other databases.
Though the compression algorithm is optimized for speed, enabling the COMPRESS TRAFFIC
attribute affects replication throughput and latency.
Example 3-1 Compressing traffic from an active database
For example, to compress replicated traffic from active database dsn1
and leave the replicated traffic from standby database dsn2
uncompressed, the CREATE ACTIVE STANDBY PAIR
statement looks like:
CREATE ACTIVE STANDBY PAIR dsn1 ON "host1", dsn2 ON "host2"
  SUBSCRIBER dsn3 ON "host3"
  STORE dsn1 ON "host1" COMPRESS TRAFFIC ON;
Example 3-2 Compressing traffic from both master databases
To compress the replicated traffic from the dsn1
and dsn2
databases, use:
CREATE ACTIVE STANDBY PAIR dsn1 ON "host1", dsn2 ON "host2"
  SUBSCRIBER dsn3 ON "host3"
  STORE dsn1 ON "host1" COMPRESS TRAFFIC ON
  STORE dsn2 ON "host2" COMPRESS TRAFFIC ON;
Static port assignment is recommended. If you do not assign a PORT
attribute, the TimesTen daemon dynamically selects the port. When ports are assigned dynamically in this manner for the replication agents, then the ports of the TimesTen daemons have to match as well.
You must assign static ports if you want to do online upgrades.
When statically assigning ports, it is important to specify the full host name, DSN and port in the STORE
attribute of the CREATE ACTIVE STANDBY PAIR
statement.
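A minimal sketch of static port assignment follows; the fully qualified host names and port number are hypothetical:

```sql
-- Sketch: assign a fixed replication port to each master's agent,
-- specifying the full host name in each STORE clause.
CREATE ACTIVE STANDBY PAIR master1 ON "host1.example.com",
                          master2 ON "host2.example.com"
  STORE master1 ON "host1.example.com" PORT 21000
  STORE master2 ON "host2.example.com" PORT 21000;
```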
You can establish a threshold value that, when exceeded, sets an unavailable standby database or a read-only subscriber to the failed
state before the available log space is exhausted.
Set the log threshold by specifying the STORE
clause with a FAILTHRESHOLD
value in the CREATE ACTIVE STANDBY PAIR
or ALTER ACTIVE STANDBY PAIR
statement. The default threshold value is 0, which means "no limit."
If an active database sets the standby database or a read-only subscriber to the failed
state, it drops all of the data for the failed database from its log and transmits a message to the failed database. If the active replication agent can communicate with the replication agent of the failed database, then the message is transmitted immediately. Otherwise, the message is transmitted when the connection is reestablished.
Any application that connects to the failed database receives a tt_ErrReplicationInvalid
(8025) warning indicating that the database has been marked failed
by a replication peer. Once the database has been informed of its failed status, its state on the active database is changed from failed
to stop
.
An application can use the ODBC SQLGetInfo
function to check if the database the application is connected to has been set to the failed
state.
For more information about database states, see Table 10-2, "Database states" .
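As a sketch, a FAILTHRESHOLD setting might be declared as follows; the database names and the threshold of 10 transaction log files are hypothetical:

```sql
-- Sketch: mark a peer as failed once it falls 10 transaction log
-- files behind, rather than letting log space be exhausted.
CREATE ACTIVE STANDBY PAIR master1 ON "host1", master2 ON "host2"
  STORE master1 ON "host1" FAILTHRESHOLD 10
  STORE master2 ON "host2" FAILTHRESHOLD 10;
```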
If a replication host has more than one network interface, you may wish to configure replication to use an interface other than the default interface. Although you must specify the host name returned by the operating system's hostname
command when you specify the database name, you can configure replication to send or receive traffic over a different interface using the ROUTE
clause.
The syntax of the ROUTE
clause is:
ROUTE MASTER FullDatabaseName SUBSCRIBER FullDatabaseName
  {{MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost} PRIORITY Priority} [...]
In the context of the ROUTE
clause, each master database is a subscriber of the other master database and each read-only subscriber is a subscriber of both master databases. This means that the CREATE ACTIVE STANDBY PAIR
statement should include ROUTE
clauses in multiples of two to specify a route in both directions. See Example 3-4.
Example 3-4 Configuring multiple network interfaces
If host host1
is configured with a second interface accessible by the host name host1fast
, and host2
is configured with a second interface at IP address 192.168.1.100, you may specify that the secondary interfaces are used with the replication scheme.
CREATE ACTIVE STANDBY PAIR dsn1, dsn2
  ROUTE MASTER dsn1 ON "host1" SUBSCRIBER dsn2 ON "host2"
    MASTERIP "host1fast" PRIORITY 1
    SUBSCRIBERIP "192.168.1.100" PRIORITY 1
  ROUTE MASTER dsn2 ON "host2" SUBSCRIBER dsn1 ON "host1"
    MASTERIP "192.168.1.100" PRIORITY 1
    SUBSCRIBERIP "host1fast" PRIORITY 1;
Alternately, on a replication host with more than one interface, you may wish to configure replication to use one or more interfaces as backups, in case the primary interface fails or the connection from it to the receiving host is broken. You can use the ROUTE
clause to specify two or more interfaces for each master or subscriber that are used by replication in order of priority.
If replication on the master host is unable to bind to the MASTERIP
with the highest priority, it will try to connect using subsequent MASTERIP
addresses in order of priority immediately. However, if the connection to the subscriber fails for any other reason, replication will try to connect using each of the SUBSCRIBERIP
addresses in order of priority before it tries the MASTERIP
address with the next highest priority.
Example 3-5 Configuring network priority
If host host1
is configured with two network interfaces at IP addresses 192.168.1.100 and 192.168.1.101, and host host2
is configured with two interfaces at IP addresses 192.168.1.200 and 192.168.1.201, you may specify that replication use IP addresses 192.168.1.100 and 192.168.200 to transmit and receive traffic first, and to try IP addresses 192.168.1.101 or 192.168.1.201 if the first connection fails.
CREATE ACTIVE STANDBY PAIR dsn1, dsn2
  ROUTE MASTER dsn1 ON "host1" SUBSCRIBER dsn2 ON "host2"
    MASTERIP "192.168.1.100" PRIORITY 1
    MASTERIP "192.168.1.101" PRIORITY 2
    SUBSCRIBERIP "192.168.1.200" PRIORITY 1
    SUBSCRIBERIP "192.168.1.201" PRIORITY 2;
Automatic client failover is for use in high availability scenarios with a TimesTen active standby pair replication configuration. If failure of the active TimesTen node results in the original standby node becoming the new active node, then the automatic client failover feature transfers the application connection to the new active node.
For full details on how to configure and use automatic client failover, see "Using automatic client failover" in the Oracle TimesTen In-Memory Database Operations Guide.
Note: Automatic client failover is complementary to Oracle Clusterware in situations where Oracle Clusterware is used, but the two features are not dependent on each other. For information about Oracle Clusterware, see Chapter 7, "Using Oracle Clusterware to Manage Active Standby Pairs".
An active standby pair replicates an entire database by default. Use the INCLUDE
clause to replicate only the tables, cache groups and sequences that are listed in the INCLUDE
clause. No other database objects will be replicated in an active standby pair that is defined with an INCLUDE
clause. For example, this INCLUDE
clause specifies three tables to be replicated by the active standby pair:
INCLUDE TABLE employees, departments, jobs
You can choose to exclude specific tables, cache groups or sequences from replication by using the EXCLUDE
clause of the CREATE ACTIVE STANDBY PAIR
statement. Use one EXCLUDE
clause for each object type. For example:
EXCLUDE TABLE ttuser.tab1, ttuser.tab2
EXCLUDE CACHE GROUP ttuser.cg1, ttuser.cg2
EXCLUDE SEQUENCE ttuser.seq1, ttuser.seq2
Note: Sequences with the CYCLE attribute cannot be replicated.
When you replicate a database containing a materialized or nonmaterialized view, only the detail tables associated with the view are replicated. The view itself is not replicated. A matching view can be defined on the standby database, but it is not required. If detail tables are replicated, TimesTen automatically updates the corresponding view. However, TimesTen replication verifies only that the replicated detail tables have the same structure on both databases. It does not enforce that the materialized views are the same on each database.
Sequences are replicated unless you exclude them from the active standby pair or unless they have the CYCLE
attribute. See "Including or excluding database objects from replication". Replication of sequences is optimized by reserving a range of sequence numbers on the standby database each time a sequence is updated on the active database. Reserving a range of sequence numbers reduces the number of updates to the transaction log. The range of sequence numbers is called a cache. Sequence updates on the active database are replicated only when they are followed by or used in replicated transactions.
Consider a sequence named my.seq
with a MINVALUE
of 1, an INCREMENT
of 1 and the default Cache
of 20. The very first time that you reference my.seq.NEXTVAL
, the current value of the sequence on the active database is changed to 2, and a new current value of 21 (20+1) is replicated to the standby database. The next 19 references to my.seq.NEXTVAL
on the active database result in no new current value being replicated, because the current value of 21 on the standby database is still ahead of the current value on the active database. On the twenty-first reference to my.seq.NEXTVAL
, a new current value of 41 (21+20) is transmitted to the standby database because the previous current value of 21 on the standby database is now behind the value of 22 on the active database.
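The reservation behavior described above can be modeled with a short sketch. This is not TimesTen code; it simply reproduces the arithmetic of the example, assuming a sequence that starts at 1, increments by 1 and caches 20 values:

```python
# Sketch (not TimesTen code): model which "current value" updates the
# active database replicates to the standby as NEXTVAL is referenced.
def replicated_values(num_refs, cache=20, start=1, increment=1):
    """Return the list of current values sent to the standby while the
    active database serves num_refs NEXTVAL references."""
    active_next = start
    standby_current = None          # nothing replicated yet
    sent = []
    for _ in range(num_refs):
        active_next += increment    # the active consumes one value
        # Replicate a new reserved range only when the active catches
        # up with the value already reserved on the standby.
        if standby_current is None or active_next > standby_current:
            standby_current = active_next - increment + cache * increment
            sent.append(standby_current)
    return sent
```

With the defaults, the first reference replicates 21, references 2 through 20 replicate nothing, and the twenty-first reference replicates 41, matching the walkthrough above.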
Operations on sequences such as SELECT my.seq.NEXTVAL FROM sys.dual
, while incrementing the sequence value, are not replicated until they are followed by transactions on replicated tables. A side effect of this behavior is that these sequence updates are not purged from the log until they are followed by transactions on replicated tables, which causes ttRepSubscriberWait
and ttRepAdmin
-wait
to fail when only these sequence updates are present at the end of the log.
The following sections provide an overview of TimesTen replication:
Replication is the process of maintaining copies of data in multiple databases. The purpose of replication is to make data highly available to applications with minimal performance impact. TimesTen recommends the active standby pair configuration for highest availability. In an active standby pair replication scheme, the data is copied from the active database to the standby database before being copied to read-only subscribers.
In addition to providing recovery from failures, replication schemes can also distribute application workloads across multiple databases for maximum performance and facilitate online upgrades and maintenance.
Replication is the process of copying data from a master database to a subscriber database. Replication is controlled by replication agents for each database. The replication agent on the master database reads the records from the transaction log for the master database. It forwards changes to replicated elements to the replication agent on the subscriber database. The replication agent on the subscriber database then applies the updates to its database. If the subscriber replication agent is not running when the updates are forwarded by the master, the master retains the updates in its transaction log until they can be applied at the subscriber database.
An entity that is replicated between databases is called a replication element. TimesTen supports databases, cache groups, tables and sequences as replication elements. TimesTen also replicates XLA bookmarks. An active standby pair is the only supported replication scheme for databases with cache groups.
TimesTen replication is supported only between identical platforms and bit-levels. Although you can replicate between databases that reside on the same host, replication is generally used for copying updates into a database that resides on another host. This helps prevent data loss from host failure.
The databases must have DSNs with identical DatabaseCharacterSet
and TypeMode
database attributes.
Replication between databases is controlled by a replication agent. Each database is identified by:
A database name derived from the file system's path name for the database
A host name
The replication agent on the master database reads the records from the transaction log and forwards any detected changes to replicated elements to the replication agent on the subscriber database. The replication agent on the subscriber database then applies the updates to its database. If the subscriber agent is not running when the updates are forwarded by the master, the master retains the updates in the log until they can be transmitted.
The replication agents communicate through TCP/IP stream sockets. The replication agents obtain the TCP/IP address, host name, and other configuration information from the replication tables described in Oracle TimesTen In-Memory Database System Tables and Views Reference.
Updates are copied between databases asynchronously by default. Asynchronous replication provides the best performance, but it does not provide the application with confirmation that the replicated updates have been committed on the subscriber databases. For applications that need higher levels of confidence that the replicated data is consistent between the master and subscriber databases, you can enable either return receipt or return twosafe service.
The return receipt service loosely synchronizes the application with the replication mechanism by blocking the application until replication confirms that the update has been received by the subscriber. The return twosafe service provides a fully synchronous option by blocking the application until replication confirms that the update has been both received and committed on the subscriber.
Return receipt replication has less performance impact than return twosafe at the expense of less synchronization. The operational details for asynchronous, return receipt, and return twosafe replication are discussed in these sections:
When using default TimesTen replication, an application updates a master database and continues working without waiting for the updates to be received and applied by the subscribers. The master and subscriber databases have internal mechanisms to confirm that the updates have been successfully received and committed by the subscriber. These mechanisms ensure that updates are applied at a subscriber only once, but they are completely independent of the application.
Default TimesTen replication provides maximum performance, but the application is completely decoupled from the receipt process of the replicated elements on the subscriber.
Figure 1-1 Basic asynchronous replication cycle
The default TimesTen replication cycle is:
The application commits a local transaction to the master database and is free to continue with other transactions.
During the commit, the TimesTen daemon writes the transaction update records to the transaction log buffer.
The replication agent on the master database directs the daemon to flush a batch of update records for the committed transactions from the log buffer to a transaction log file. This step ensures that, if the master fails and you need to recover the database from the checkpoint and transaction log files, the recovered master contains all the data it replicated to the subscriber.
The master replication agent forwards the batch of transaction update records to the subscriber replication agent, which applies them to the subscriber database. Update records are flushed to disk and forwarded to the subscriber in batches of 256K or less, depending on the master database's transaction load. A batch is created when there is no more log data in the transaction log buffer or when the current batch is roughly 256K bytes.
The subscriber replication agent sends an acknowledgement back to the master replication agent that the batch of update records was received. The acknowledgement includes information on which batch of records the subscriber last flushed to disk. The master replication agent is now free to purge from the transaction log the update records that have been received, applied, and flushed to disk by all subscribers and to forward another batch of update records, while the subscriber replication agent asynchronously continues on to Step 6.
The replication agent at the subscriber updates the database and directs the daemon to write the transaction update records to the transaction log buffer.
The replication agent at the subscriber database uses a separate thread to direct the daemon to flush the update records to a transaction log file.
The return receipt service provides a level of synchronization between the master and a subscriber database by blocking the application after commit on the master until the updates of the committed transaction have been received by the subscriber.
An application requesting return receipt updates the master database in the same manner as in the basic asynchronous case. However, when the application commits a transaction that updates a replicated element, the master database blocks the application until it receives confirmation that the updates for the completed transaction have been received by the subscriber.
Return receipt replication trades some performance in order to provide applications with the ability to ensure higher levels of data integrity and consistency between the master and subscriber databases. In the event of a master failure, the application has a high degree of confidence that a transaction committed at the master persists in the subscribing database.
Figure 1-2 shows that the return receipt replication cycle is the same as shown for the basic asynchronous cycle in Figure 1-1, only the master replication agent blocks the application thread after it commits a transaction (Step 1) and retains control of the thread until the subscriber acknowledges receipt of the update batch (Step 5). Upon receiving the return receipt acknowledgement from the subscriber, the master replication agent returns control of the thread to the application (Step 6), freeing it to continue executing transactions.
If the subscriber is unable to acknowledge receipt of the transaction within a configurable timeout period (default is 10 seconds), the master replication agent returns a warning stating that it did not receive acknowledgement of the update from the subscriber and returns control of the thread to the application. The application is then free to commit another transaction to the master, which continues replication to the subscriber as before. Return receipt transactions may time out for many reasons. The most likely causes are a slow or failed network, a failed replication agent, or a master replication agent that is so far behind with respect to the transaction load that it cannot replicate the return receipt transaction before its timeout expires. For information on how to manage return-receipt timeouts, see "Managing return service timeout errors and replication state changes".
See "RETURN RECEIPT" for information on how to configure replication for return receipt.
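As a minimal sketch, enabling the return receipt service for an active standby pair might look like the following; the database and host names are hypothetical:

```sql
-- Sketch: block the committing application until the standby
-- acknowledges receipt of each replicated transaction.
CREATE ACTIVE STANDBY PAIR master1 ON "host1", master2 ON "host2"
  RETURN RECEIPT;
```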
The return twosafe service provides fully synchronous replication between the master and subscriber. Unlike the previously described replication modes, where transactions are transmitted to the subscriber after being committed on the master, transactions in twosafe mode are first committed on the subscriber before they are committed on the master.
The following describes the replication behavior between a master and subscriber configured for return twosafe replication:
The application commits the transaction on the master database.
The master replication agent writes the transaction records to the log and inserts a special precommit log record before the commit record. This precommit record acts as a placeholder in the log until the master replication agent receives an acknowledgement that indicates the status of the commit on the subscriber.
Note: Transmission of return twosafe transactions is nondurable, so the master replication agent does not flush the log records to disk before sending them to the subscriber, as it does by default when replication is configured for asynchronous or return receipt replication.
The master replication agent transmits the batch of update records to the subscriber.
The subscriber replication agent commits the transaction on the subscriber database.
The subscriber replication agent returns an acknowledgement back to the master replication agent with notification of whether the transaction was committed on the subscriber and whether the commit was successful.
If the commit on the subscriber was successful, the master replication agent commits the transaction on the master database.
The master replication agent returns control to the application.
If the subscriber is unable to acknowledge commit of the transaction within a configurable timeout period (default is 10 seconds), or if the acknowledgement from the subscriber indicates the commit was unsuccessful, the replication agent returns control to the application without committing the transaction on the master database. The application can then decide whether to unconditionally commit or retry the commit. You can optionally configure your replication scheme to direct the master replication agent to commit all transactions that time out.
See "RETURN TWOSAFE" for information on how to configure replication for return twosafe.
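A corresponding sketch for the fully synchronous service follows; as before, the database and host names are hypothetical:

```sql
-- Sketch: commit each transaction on the standby before it is
-- committed on the master.
CREATE ACTIVE STANDBY PAIR master1 ON "host1", master2 ON "host2"
  RETURN TWOSAFE;
```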
You create a replication scheme to define a specific configuration of master and subscriber databases. This section describes the possible relationships you can define between master and subscriber databases when creating a replication scheme.
When defining a relationship between a master and subscriber, consider these replication schemes:
Figure 1-4 shows an active standby pair replication scheme with an active database, a standby database, and four read-only subscriber databases.
The active standby pair can replicate a whole database or select elements like tables and cache groups.
In an active standby pair, two databases are defined as masters. One is an active database, and the other is a standby database. The application updates the active database directly. Applications cannot update the standby database. It receives the updates from the active database and propagates the changes to as many as 127 read-only subscriber databases. This arrangement ensures that the standby database is always ahead of the subscriber databases and enables rapid failover to the standby database if the active database fails.
Only one of the master databases can function as an active database at a specific time. You can manage failover and recovery of an active standby pair with Oracle Clusterware. See Chapter 7, "Using Oracle Clusterware to Manage Active Standby Pairs". You can also manage failover and recovery manually. See Chapter 4, "Administering an Active Standby Pair Without Cache Groups".
If the standby database fails, the active database can replicate changes directly to the read-only subscribers. After the standby database has been recovered, it contacts the active database to receive any updates that have been sent to the subscribers while the standby was down or was recovering. When the active and the standby databases have been synchronized, then the standby resumes propagating changes to the subscribers.
For details about setting up an active standby pair, see "Setting up an active standby pair with no cache groups".
Figure 1-5 illustrates a full replication scheme in which the entire master database is replicated to the subscriber.
Figure 1-5 Replicating the entire master database
You can also configure your master and subscriber databases to selectively replicate some elements in a master database to subscribers. Figure 1-6 shows examples of selective replication. The left side of the figure shows a master database that replicates the same selected elements to multiple subscribers, while the right side shows a master that replicates different elements to each subscriber.
Figure 1-6 Replicating selected elements to multiple subscribers
So far in this chapter, we have described unidirectional replication, where a master database sends updates to one or more subscriber databases. However, you can also configure databases to operate bidirectionally, where each database is both a master and a subscriber.
These are basic ways to use bidirectional replication:
In a split workload configuration, each database serves as a master for some elements and a subscriber for others.
Consider the example shown in Figure 1-7, where the accounts for Chicago are processed on database A while the accounts for New York are processed on database B.
Figure 1-7 Split workload bidirectional replication
In a distributed workload replication scheme, user access is distributed across duplicate application/database combinations that replicate any update on any element to each other. In the event of a failure, the affected users can be quickly shifted to any application/database combination. The distributed workload configuration is shown in Figure 1-8. Users access duplicate applications on each database, which serves as both master and subscriber for the other database.
Figure 1-8 Distributed workload configuration
When databases are replicated in a distributed workload configuration, it is possible for separate users to concurrently update the same rows and replicate the updates to one another. Your application should ensure that such conflicts cannot occur, that they are acceptable if they do occur, or that they can be successfully resolved using the conflict resolution mechanism described in Chapter 14, "Resolving Replication Conflicts".
Note: Do not use a distributed workload configuration with the return twosafe return service.
You can define a subscriber to serve as a propagator that receives replicated updates from a master and passes them on to subscribers of its own.
Propagators are useful for optimizing replication performance over lower-bandwidth network connections, such as those between servers in an intranet. For example, consider the direct replication configuration illustrated in Figure 1-9, where a master directly replicates to four subscribers over an intranet connection. Replicating to each subscriber over a network connection in this manner is an inefficient use of network bandwidth.
Figure 1-9 Master replicating directly to multiple subscribers over a network
For optimum performance, consider the configuration shown in Figure 1-10, where the master replicates to a single propagator over the network connection. The propagator in turn forwards the updates to each subscriber on its local area network.
Figure 1-10 Master replicating to a single propagator over a network
Propagators are also useful for distributing replication loads in configurations that involve a master database that must replicate to a large number of subscribers. For example, it is more efficient for the master to replicate to three propagators, rather than directly to the 12 subscribers as shown in Figure 1-11.
Figure 1-11 Using propagators to replicate to many subscribers
Note: Each propagator is one-hop, which means that you can forward an update only once. You cannot have a hierarchy of propagators where propagators forward updates to other propagators.
As described in Oracle In-Memory Database Cache User's Guide, a cache group is a group of tables stored in a central Oracle database that are cached in a local Oracle In-Memory Database Cache (IMDB Cache). This section describes how cache groups can be replicated between TimesTen databases. You can achieve high availability by using an active standby pair to replicate asynchronous writethrough cache groups or read-only cache groups.
This section describes the following ways to replicate cache groups:
See Chapter 5, "Administering an Active Standby Pair with Cache Groups" for details about configuring replication of cache groups.
An asynchronous writethrough (AWT) cache group can be configured as part of an active standby pair with optional read-only subscribers to ensure high availability and to distribute the application workload. Figure 1-12 shows this configuration.
Figure 1-12 AWT cache group replicated by an active standby pair
Application updates are made to the active database, the updates are replicated to the standby database, and then the updates are asynchronously written to the Oracle database by the standby. At the same time, the updates are also replicated from the standby to the read-only subscribers, which may be used to distribute the load from reading applications. The tables on the read-only subscribers are not in cache groups.
When there is no standby database, the active database accepts application updates, writes the updates asynchronously to the Oracle database, and replicates them to the read-only subscribers. This situation can occur when the standby has not yet been created, or when the active fails and the standby becomes the new active. TimesTen reconfigures the AWT cache group when the standby becomes the new active.
If a failure occurs on the node where the active database resides, the standby node becomes the new active node. TimesTen automatically reconfigures the AWT cache group so that it can be updated directly by the application and continue to propagate the updates to Oracle asynchronously.
You can recover from a complete failure of a site by creating a special disaster recovery read-only subscriber on a remote site as part of the active standby pair replication configuration. Figure 1-13 shows this configuration.
Figure 1-13 Disaster recovery configuration with active standby pair
The standby database sends updates to cache group tables on the read-only subscriber. This special subscriber is located at a remote disaster recovery site and can propagate updates to a second Oracle database, also located at the disaster recovery site. You can set up more than one disaster recovery site with read-only subscribers and Oracle databases. See "Using a disaster recovery subscriber in an active standby pair".
A read-only cache group enforces caching behavior in which committed updates on the Oracle tables are automatically refreshed to the corresponding TimesTen cache tables. Figure 1-14 shows a read-only cache group replicated by an active standby pair.
Figure 1-14 Read-only cache group replicated by an active standby pair
When the read-only cache group is replicated by an active standby pair, the cache group on the active database is autorefreshed from the Oracle database, and the updates are replicated to the standby, where AUTOREFRESH is also configured on the cache group but is in the PAUSED state. In the event of a failure of the active, TimesTen automatically reconfigures the standby to be autorefreshed when it takes over for the failed master database by setting the AUTOREFRESH STATE to ON. TimesTen also tracks whether updates that have been autorefreshed from the Oracle database to the active database have been replicated to the standby. This ensures that the autorefresh process picks up from the correct point after the active fails, and no autorefreshed updates are lost.

This configuration may also include read-only subscriber databases, which allow the read workload to be distributed across many databases. The cache groups on the standby database replicate to regular (non-cache) tables on the subscribers.
In some replication configurations, you may need to keep sequences synchronized between two or more databases. For example, you may have a master database containing a replicated table that uses a sequence to fill in the primary key value for each row. The subscriber database is used as a hot backup for the master database. If updates to the sequence's current value are not replicated, insertions of new rows on the subscriber after the master has failed could conflict with rows that were originally inserted on the master.
TimesTen replication allows the incremented sequence value to be replicated to subscriber databases, ensuring that rows inserted on either database in this configuration do not conflict. See "Replicating sequences" for details on writing a replication scheme to replicate sequences.
If a table with a foreign key configured with ON DELETE CASCADE is replicated, then the matching foreign key on the subscriber must also be configured with ON DELETE CASCADE. In addition, you must replicate any other table with a foreign key relationship to that table. This requirement prevents foreign key conflicts from occurring on subscriber tables when a cascade deletion occurs on the master database.
TimesTen replicates a cascade deletion as a single operation, rather than replicating to the subscriber each individual row deletion that occurs on the child table when a row is deleted on the parent. As a result, any row on the child table on the subscriber database that contains the foreign key value deleted on the parent table is also deleted, even if that row did not exist on the child table on the master database.
When a table or cache group is configured with least recently used (LRU) or time-based aging, the following rules apply to the interaction with replication:
The aging configuration on replicated tables and cache groups must be identical on every peer database.
If the replication scheme is an active standby pair, then aging is performed only on the active database. Deletes that result from aging are then replicated to the standby database. The aging configuration must be set to ON on both the active and standby databases. TimesTen automatically determines which database actually performs the aging based on its current role as active or standby.
In a replication scheme that is not an active standby pair, aging is performed individually in each database. Deletes performed by aging are not replicated to other databases.
When an asynchronous writethrough cache group is in a database that is replicated by an active standby pair, delete operations that result from aging are not propagated to the Oracle database.
Replication Guide
11g Release 2 (11.2.2)
E21635-04
September 2012
This chapter describes some of the TimesTen utilities and procedures you can use to monitor the replication status of your databases.
You can monitor replication from both the command line and within your programs. The ttStatus and ttRepAdmin utilities described in this chapter are useful for command-line queries. To monitor replication from your programs, you can use the TimesTen built-in procedures described in Oracle TimesTen In-Memory Database Reference, or create your own SQL SELECT statements to query the replication tables described in Oracle TimesTen In-Memory Database System Tables and Views Reference.
Note: You can only access the TimesTen SYS and TTREP tables for queries. Do not try to alter the contents of these tables.
This chapter includes the following topics:
You can display information about the current state of the replication agents:
You can also obtain the state of specific replicated databases as described in "Show subscriber database information" and "Show the configuration of replicated databases".
Use the ttStatus utility to confirm that the replication agent is started for the master database.
Example 12-1 Using ttStatus to obtain replication agent status
> ttStatus
TimesTen status report as of Thu Aug 11 17:05:23 2011

Daemon pid 18373 port 4134 instance ttuser
TimesTen server pid 18381 started on port 4136
------------------------------------------------------------------------
Data store /tmp/masterds
There are 16 connections to the data store
Shared Memory KEY 0x0201ab43 ID 5242889
PL/SQL Memory KEY 0x0301ab43 ID 5275658 Address 0x10000000
Type            PID     Context     Connection Name       ConnID
Process         20564   0x081338c0  masterds                   1
Replication     20676   0x08996738  LOGFORCE                   5
Replication     20676   0x089b69a0  REPHOLD                    2
Replication     20676   0x08a11a58  FAILOVER                   3
Replication     20676   0x08a7cd70  REPLISTENER                4
Replication     20676   0x08ad7e28  TRANSMITTER                6
Subdaemon       18379   0x080a11f0  Manager                 2032
Subdaemon       18379   0x080fe258  Rollback                2033
Subdaemon       18379   0x081cb818  Checkpoint              2036
Subdaemon       18379   0x081e6940  Log Marker              2035
Subdaemon       18379   0x08261e70  Deadlock Detector       2038
Subdaemon       18379   0xae100470  AsyncMV                 2040
Subdaemon       18379   0xae11b508  HistGC                  2041
Subdaemon       18379   0xae300470  Aging                   2039
Subdaemon       18379   0xae500470  Flusher                 2034
Subdaemon       18379   0xae55b738  Monitor                 2037
Replication policy  : Manual
Replication agent is running.
Cache Agent policy  : Manual
PL/SQL enabled.
Use the ttAdmin utility with the -query option to confirm the policy settings for a database, including the replication restart policy described in "Starting and stopping the replication agents".
To obtain the status of the replication agents from a program, use the ttDataStoreStatus built-in procedure.
Example 12-3 Calling ttDataStoreStatus
Call ttDataStoreStatus to obtain the status of the replication agents for the masterds database:
> ttIsql masterds
Command> CALL ttDataStoreStatus('/tmp/masterds');
< /tmp/masterds, 964, 00000000005D8150, subdaemon, Global\DBI3b3234c0.0.SHM.35 >
< /tmp/masterds, 1712, 00000000016A72E0, replication, Global\DBI3b3234c0.0.SHM.35 >
< /tmp/masterds, 1712, 0000000001683DE8, replication, Global\DBI3b3234c0.0.SHM.35 >
< /tmp/masterds, 1620, 0000000000608128, application, Global\DBI3b3234c0.0.SHM.35 >
4 rows found.
The output from ttDataStoreStatus is similar to that shown for the ttStatus utility in "Using ttStatus to obtain replication agent status".
You can display information for a master database:
Using ttRepAdmin to display information about the master database
Querying replication tables to obtain information about a master database
Use the ttRepAdmin utility with the -self -list options to display information about the master database:
ttRepAdmin -dsn masterDSN -self -list
Example 12-4 Using ttRepAdmin to display information about a master database
This example shows the output for the master database described in "Multiple subscriber schemes with return services and a log failure threshold".
> ttRepAdmin -dsn masterds -self -list
Self host "server1", port auto, name "masterds", LSN 0/2114272
The following table describes the fields.
Field | Description |
---|---|
host | The name of the host for the database. |
port | TCP/IP port used by a replication agent of another database to receive updates from this database. A value of 0 (zero) indicates replication has automatically assigned the port. |
name | Name of the database |
Log file/Replication hold LSN | Indicates the oldest location in the transaction log that is held for possible transmission to the subscriber. A value of -1/-1 indicates replication is in the stop state with respect to all subscribers. |
Use the following SELECT statement to query the TTREP.TTSTORES and TTREP.REPSTORES replication tables to obtain information about a master database:
SELECT t.host_name, t.rep_port_number, t.tt_store_name
  FROM ttrep.ttstores t, ttrep.repstores s
 WHERE t.is_local_store = 0x01
   AND t.tt_store_id = s.tt_store_id;
This is the output of the SELECT statement for the master database described in "Multiple subscriber schemes with return services and a log failure threshold". The fields are the host name, the replication port number, and the database name.
< server1, 0, masterds>
Replication uses the TimesTen transaction log to retain information that must be transmitted to subscriber sites. When communication to subscriber databases is interrupted or the subscriber sites are down, the log data accumulates. Part of the output from the queries described in this section allows you to see how much log data has accumulated on behalf of each subscriber database and the amount of time since the last successful communication with each subscriber database.
You can display information for subscriber databases:
To display information about subscribers, use the ttRepAdmin utility with the -receiver -list options:

ttRepAdmin -dsn masterDSN -receiver -list
Example 12-5 Using ttRepAdmin to display information about subscribers
This example shows the output for the subscribers described in "Multiple subscriber schemes with return services and a log failure threshold".
> ttRepAdmin -dsn masterds -receiver -list
Peer name        Host name                Port   State   Proto
---------------- ------------------------ ------ ------- -----
subscriber1ds    server2                  Auto   Start   10

Last Msg Sent Last Msg Recv Latency  TPS  RecordsPS  Logs
------------- ------------- -------  ---  ---------  ----
0:01:12       -             19.41    5    52         2

Peer name        Host name                Port   State   Proto
---------------- ------------------------ ------ ------- -----
subscriber2ds    server3                  Auto   Start   10

Last Msg Sent Last Msg Recv Latency  TPS  RecordsPS  Logs
------------- ------------- -------  ---  ---------  ----
0:01:04       -             20.94    4    48         2
The first line of the display contains the subscriber definition. The following row contains latency and rate information, as well as the number of transaction log files being retained on behalf of this subscriber. The latency for subscriber1ds is 19.41 seconds, and it is 2 logs behind the master. This is a high latency, indicating a problem if it continues to be high and the number of logs continues to increase.
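A simple way to act on these numbers is to flag subscribers whose latency or retained-log count exceeds a threshold. The sketch below is illustrative: the lagging() helper and the threshold values are assumptions, and the sample values come from the example output above.

```python
# Illustrative sketch: flag subscribers whose replication lag looks
# unhealthy. Thresholds are assumed values, not TimesTen defaults.

subscribers = [
    # (peer name, latency in seconds, transaction log files retained)
    ("subscriber1ds", 19.41, 2),
    ("subscriber2ds", 20.94, 2),
]

def lagging(subs, max_latency=5.0, max_logs=10):
    """Return names of subscribers over either threshold.

    A latency of -1 means the master's agent has not yet communicated
    with the subscriber, so that is flagged as well.
    """
    return [name for name, latency, logs in subs
            if latency < 0 or latency > max_latency or logs > max_logs]

print(lagging(subscribers))   # both sample latencies exceed 5 seconds
```

A periodic check like this, fed from ttRepAdmin output or from the replication tables, is one way to catch a subscriber that is falling steadily behind.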
If you have more than one scheme specified in the TTREP.REPLICATIONS table, you must use the -scheme option to specify which scheme you wish to list. Otherwise you receive the following error:
Must specify -scheme to identify which replication scheme to use
You can obtain more detailed status for a specific replicated database by using the ttReplicationStatus built-in procedure.
To obtain information about a master's subscribers from a program, use the following SELECT statement to query the TTREP.REPPEERS, TTREP.TTSTORES, and SYS.MONITOR tables:
SELECT t1.tt_store_name, t1.host_name, t1.rep_port_number,
       p.state, p.protocol, p.timesend, p.timerecv, p.latency,
       p.tps, p.recspersec, t3.last_log_file - p.sendlsnhigh + 1
  FROM ttrep.reppeers p, ttrep.ttstores t1, ttrep.ttstores t2, sys.monitor t3
 WHERE p.tt_store_id = t2.tt_store_id
   AND t2.is_local_store = 0X01
   AND p.subscriber_id = t1.tt_store_id
   AND p.replication_name = 'repscheme'
   AND p.replication_owner = 'repl'
   AND (p.state = 0 OR p.state = 1);
The following is sample output from the statement above:
< subscriber1ds, server2, 0, 0, 7, 1003941635, 0, -1.00000000000000, -1, -1, 1 > < subscriber2ds, server3, 0, 0, 7, 1003941635, 0, -1.00000000000000, -1, -1, 1 >
The output from either the ttRepAdmin utility or the SELECT statement contains the following fields:
Field | Description |
---|---|
Peer name | Name of the subscriber database |
Host name | Name of the machine that hosts the subscriber |
Port | TCP/IP port used by the subscriber agent to receive updates from the master. A value of 0 indicates replication has automatically assigned the port. |
State | Current replication state of the subscriber with respect to its master database (see "Show subscriber database information" for information). |
Protocol | Internal protocol used by replication to communicate between this master and its subscribers. You can ignore this value. |
Last message sent | Time (in seconds) since the master sent the last message to the subscriber. This includes the "heartbeat" messages sent between the databases. |
Last message received | Time (in seconds) since this subscriber received the last message from the master. |
Latency | The average latency time (in seconds) between when the master sends a message and when it receives the final acknowledgement from the subscriber. (See note below.) |
Transactions per second | The average number of transactions per second that are committed on the master and processed by the subscriber. (See note below.) |
Records per second | The average number of transmitted records per second. (See note below.) |
Logs | Number of transaction log files the master database is retaining for a subscriber. |
Note: Latency, TPS, and RecordsPS report averages detected while replicating a batch of records. These values can be unstable if the workload is not relatively constant. A value of -1 indicates the master's replication agent has not yet established communication with its subscriber replication agents or sent data to them.
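When reading the SELECT output rather than the ttRepAdmin display, the columns are positional. The minimal Python sketch below labels a sample row from the earlier output; the field names and the label_row() helper are illustrative, chosen to mirror the table above, and are not part of TimesTen.

```python
# Illustrative sketch: label the positional columns of the subscriber
# status query. label_row() is a hypothetical helper, not a TimesTen API.

FIELDS = ("peer_name", "host_name", "port", "state", "protocol",
          "last_msg_sent", "last_msg_recv", "latency", "tps",
          "records_per_sec", "logs")

row = ("subscriber1ds", "server2", 0, 0, 7, 1003941635, 0, -1.0, -1, -1, 1)

def label_row(values):
    """Pair each value with its field name for readable inspection."""
    return dict(zip(FIELDS, values))

status = label_row(row)
print(status["latency"], status["logs"])   # -1.0 latency: no data sent yet
```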
You can display the configuration of your replicated databases:
Using the ttIsql repschemes command to display configuration information
Querying replication tables to display configuration information
To display the configuration of your replicated databases from the ttIsql prompt, use the repschemes command:
Command> repschemes;
Example 12-6 shows the configuration output from the replication scheme shown in "Propagation scheme".
Example 12-6 Output from ttIsql repschemes command
Replication Scheme PROPAGATOR:

  Element: A
    Type: Table TAB
    Master Store: CENTRALDS on FINANCE Transmit Durable
    Subscriber Store: PROPDS on NETHANDLER

  Element: B
    Type: Table TAB
    Propagator Store: PROPDS on NETHANDLER Transmit Durable
    Subscriber Store: BACKUP1DS on BACKUPSYSTEM1
    Subscriber Store: BACKUP2DS on BACKUPSYSTEM2

  Store: BACKUP1DS on BACKUPSYSTEM1
    Port: (auto)
    Log Fail Threshold: (none)
    Retry Timeout: 120 seconds
    Compress Traffic: Disabled

  Store: BACKUP2DS on BACKUPSYSTEM2
    Port: (auto)
    Log Fail Threshold: (none)
    Retry Timeout: 120 seconds
    Compress Traffic: Disabled

  Store: CENTRALDS on FINANCE
    Port: (auto)
    Log Fail Threshold: (none)
    Retry Timeout: 120 seconds
    Compress Traffic: Disabled

  Store: PROPDS on NETHANDLER
    Port: (auto)
    Log Fail Threshold: (none)
    Retry Timeout: 120 seconds
    Compress Traffic: Disabled
To display the configuration of your replicated databases, use the ttRepAdmin utility with the -showconfig option:
ttRepAdmin -showconfig -dsn masterDSN
Example 12-7 shows the configuration output from the propagated databases configured by the replication scheme shown in "Propagation scheme". The propds propagator shows a latency of 19.41 seconds and is 2 logs behind the master.
Example 12-7 ttRepAdmin output
> ttRepAdmin -showconfig -dsn centralds
Self host "finance", port auto, name "centralds", LSN 0/155656, timeout 120, threshold 0

List of subscribers
-------------------
Peer name        Host name                Port   State   Proto
---------------- ------------------------ ------ ------- -----
propds           nethandler               Auto   Start   10

Last Msg Sent Last Msg Recv Latency  TPS  RecordsPS  Logs
------------- ------------- -------  ---  ---------  ----
0:01:12       -             19.41    5    52         2

List of tables and subscriptions
--------------------------------
Table details
-------------
Table : tab
Timestamp updates : -

Master Name  Subscriber Name
-----------  ---------------
centralds    propds

Table details
-------------
Table : tab
Timestamp updates : -

Master Name  Subscriber Name
-----------  ---------------
propds       backup1ds
propds       backup2ds
See "Querying replication tables to display information about subscribers" for the meaning of the "List of subscribers" fields. The "Table details" fields list the table and the names of its master (Sender) and subscriber databases.
Use the following SELECT statements to query the TTREP.TTSTORES, TTREP.REPSTORES, TTREP.REPPEERS, SYS.MONITOR, TTREP.REPELEMENTS, and TTREP.REPSUBSCRIPTIONS tables for configuration information:
SELECT t.host_name, t.rep_port_number, t.tt_store_name, s.peer_timeout, s.fail_threshold
  FROM ttrep.ttstores t, ttrep.repstores s
 WHERE t.is_local_store = 0X01
   AND t.tt_store_id = s.tt_store_id;

SELECT t1.tt_store_name, t1.host_name, t1.rep_port_number,
       p.state, p.protocol, p.timesend, p.timerecv, p.latency,
       p.tps, p.recspersec, t3.last_log_file - p.sendlsnhigh + 1
  FROM ttrep.reppeers p, ttrep.ttstores t1, ttrep.ttstores t2, sys.monitor t3
 WHERE p.tt_store_id = t2.tt_store_id
   AND t2.is_local_store = 0X01
   AND p.subscriber_id = t1.tt_store_id
   AND (p.state = 0 OR p.state = 1);

SELECT ds_obj_owner, ds_obj_name, t1.tt_store_name, t2.tt_store_name
  FROM ttrep.repelements e, ttrep.repsubscriptions s,
       ttrep.ttstores t1, ttrep.ttstores t2
 WHERE s.element_name = e.element_name
   AND e.master_id = t1.tt_store_id
   AND s.subscriber_id = t2.tt_store_id
 ORDER BY ds_obj_owner, ds_obj_name;
Example 12-8 Output from queries
The output from the queries refers to the databases configured by the replication scheme shown in "Propagation scheme".
The output from the first query might be:
< finance, 0, centralds, 120, 0 >
It shows the host name, port number, and database name. The fourth value (120) is the TIMEOUT value that defines the amount of time a database waits for a response from another database before resending a message. The last value (0) is the log failure threshold value described in "Setting the log failure threshold".
The output from the second query might be:
< propds, nethandler, 0, 0, 7, 1004378953, 0, -1.00000000000000, -1, -1, 1 >
See "Querying replication tables to display information about subscribers" for a description of the fields.
The output from the last query might be:
< repl, tab, centralds, propds > < repl, tab, propds, backup1ds > < repl, tab, propds, backup2ds >
The rows show the replicated table and the names of its master (sender) and subscriber (receiver) databases.
In a replicated database, transactions remain in the log buffer and transaction log files until the master replication agent confirms they have been fully processed by the subscriber. Only then can the master consider purging them from the log buffer and transaction log files. When the log space is exhausted, subsequent updates on the master database are aborted. Use the ttLogHolds built-in procedure to get information about replication log holds. For more information about transaction log growth, see "Monitoring accumulation of transaction log files" in Oracle TimesTen In-Memory Database Operations Guide.
Transactions are stored in the log in the form of log records. You can use bookmarks to detect which log records have or have not been replicated by a master database.
A bookmark consists of log sequence numbers (LSNs) that identify the location of particular records in the transaction log that you can use to gauge replication performance. The LSNs associated with a bookmark are: hold LSN, last written LSN, and last LSN forced to disk. The hold LSN describes the location of the lowest (or oldest) record held in the log for possible transmission to a subscriber. You can compare the hold LSN with the last written LSN to determine the amount of data in the transaction log that has not yet been transmitted to the subscribers. The last LSN forced to disk describes the last records saved in a transaction log file on disk.
A more accurate way to monitor replication to a particular subscriber is to look at the send LSN for the subscriber, which consists of the SENDLSNHIGH and SENDLSNLOW fields in the TTREP.REPPEERS table. In contrast to the send LSN value, the hold LSN returned in a bookmark is computed every 10 seconds to describe the minimum send LSN for all the subscribers, so it provides a more general view of replication progress that does not account for the progress of replication to individual subscribers. Because replication acknowledgements are asynchronous for better performance, the send LSN can also be some distance behind. Nonetheless, the send LSN for a subscriber is the most accurate value available and is always ahead of the hold LSN.
You can display replicated log records:
Use the ttRepAdmin utility with the -bookmark option to display the location of bookmarks:
> ttRepAdmin -dsn masterds -bookmark

Replication hold LSN ...... 10/927692
Last written LSN .......... 10/928908
Last LSN forced to disk ... 10/280540
Each LSN is defined by two values:
Log file number / Offset in log file
The LSNs output from ttRepAdmin -bookmark are:
Line | Description |
---|---|
Replication hold LSN | The location of the lowest (or oldest) record held in the log for possible transmission to a subscriber. A value of -1/-1 indicates replication is in the stop state with respect to all subscribers (or the queried database is not a master database). |
Last written LSN | The location of the most recently generated transaction log record for the database. |
Last LSN forced to disk | The location of the most recent transaction log record written to the disk. |
Use the ttBookmark built-in procedure to display the location of bookmarks.
Example 12-9 Using ttBookmark to display bookmark location
> ttIsql masterds
Command> call ttBookMark();
< 10, 928908, 10, 280540, 10, 927692 >
1 row found.
The first two columns in the returned row define the "Last written LSN," the next two columns define the "Last LSN forced to disk," and the last two columns define the "Replication hold LSN."
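The column order described above can be made explicit in code. This minimal sketch unpacks the sample ttBookmark row; the parse_bookmark() helper and the dictionary keys are illustrative, not part of TimesTen.

```python
# Illustrative sketch: unpack the six ttBookmark columns into the three
# named LSNs, following the column order described above.

def parse_bookmark(row):
    """Return the three bookmark LSNs as (log file, offset) pairs."""
    return {
        "last_written":   (row[0], row[1]),
        "forced_to_disk": (row[2], row[3]),
        "hold":           (row[4], row[5]),
    }

bookmark = parse_bookmark((10, 928908, 10, 280540, 10, 927692))
print(bookmark["hold"])   # (10, 927692), matching the hold LSN above
```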
You can use the ttRepAdmin utility with the -showstatus option to display the current status of the replication agent. The status output includes the bookmark locations, port numbers, and communication protocols used by the replication agent for the queried database.
The output from ttRepAdmin -showstatus includes the status of the main thread and the TRANSMITTER and RECEIVER threads used by the replication agent. A master database has a TRANSMITTER thread and a subscriber database has a RECEIVER thread. A database that serves a master/subscriber role in a bidirectional replication scheme has both a TRANSMITTER and a RECEIVER thread.
Each replication agent has a single REPLISTENER thread that listens on a port for peer connections. On a master database, the REPLISTENER thread starts a separate TRANSMITTER thread for each subscriber database. On a subscriber database, the REPLISTENER thread starts a separate RECEIVER thread for each connection from a master.
If the TimesTen daemon requests that the replication agent stop, or if a fatal error occurs in any of the other threads used by the replication agent, the main thread waits for the other threads to terminate gracefully. The TimesTen daemon may or may not restart the replication agent, depending on the type of fatal error. The REPLISTENER thread never terminates during the lifetime of the replication agent. A TRANSMITTER or RECEIVER thread may stop, but the replication agent may restart it. The RECEIVER thread terminates on errors from which it cannot recover or when the master disconnects.
Example 12-10 shows ttRepAdmin -showstatus output for a unidirectional replication scheme in which the rep1 database is the master and the rep2 database is the subscriber. The first ttRepAdmin -showstatus output shows the status of the rep1 database and its TRANSMITTER thread. The second output shows the status of the rep2 database and its RECEIVER thread.

Following the example are sections that describe the meaning of each field in the ttRepAdmin -showstatus output:
Example 12-10 Unidirectional replication scheme
Consider the unidirectional replication scheme from the rep1 database to the rep2 database:
CREATE REPLICATION r
  ELEMENT e1 TABLE t
    MASTER rep1
    SUBSCRIBER rep2;
The replication status for the rep1 database should look similar to the following:
> ttRepAdmin -showstatus rep1

DSN                         : rep1
Process ID                  : 1980
Replication Agent Policy    : MANUAL
Host                        : MYHOST
RepListener Port            : 1113 (AUTO)
Last write LSN              : 0.1487928
Last LSN forced to disk     : 0.1487928
Replication hold LSN        : 0.1486640

Replication Peers:
  Name                      : rep2
  Host                      : MYHOST
  Port                      : 1154 (AUTO)
  Replication State         : STARTED
  Communication Protocol    : 12

TRANSMITTER thread(s):
  For                       : rep2
    Start/Restart count     : 2
    Send LSN                : 0.1485960
    Transactions sent       : 3
    Total packets sent      : 10
    Tick packets sent       : 3
    MIN sent packet size    : 48
    MAX sent packet size    : 460
    AVG sent packet size    : 167
    Last packet sent at     : 17:41:05
    Total Packets received  : 9
    MIN rcvd packet size    : 48
    MAX rcvd packet size    : 68
    AVG rcvd packet size    : 59
    Last packet rcvd'd at   : 17:41:05
    Earlier errors (max 5):
      TT16060 in transmitter.c (line 3590) at 17:40:41 on 08-25-2004
      TT16122 in transmitter.c (line 2424) at 17:40:41 on 08-25-2004
Note that the Replication hold LSN, the Last write LSN, and the Last LSN forced to disk are very close, which indicates that replication is operating satisfactorily. If the Replication hold LSN falls behind the Last write LSN and the Last LSN forced to disk, then replication is not keeping up with updates to the master.
The replication status for the rep2 database should look similar to the following:
> ttRepAdmin -showstatus rep2

DSN                         : rep2
Process ID                  : 2192
Replication Agent Policy    : MANUAL
Host                        : MYHOST
RepListener Port            : 1154 (AUTO)
Last write LSN              : 0.416464
Last LSN forced to disk     : 0.416464
Replication hold LSN        : -1.-1

Replication Peers:
  Name                      : rep1
  Host                      : MYHOST
  Port                      : 0 (AUTO)
  Replication State         : STARTED
  Communication Protocol    : 12

RECEIVER thread(s):
  For                       : rep1
    Start/Restart count     : 1
    Transactions received   : 0
    Total packets sent      : 20
    Tick packets sent       : 0
    MIN sent packet size    : 48
    MAX sent packet size    : 68
    AVG sent packet size    : 66
    Last packet sent at     : 17:49:51
    Total Packets received  : 20
    MIN rcvd packet size    : 48
    MAX rcvd packet size    : 125
    AVG rcvd packet size    : 52
    Last packet rcvd'd at   : 17:49:51
The following fields are output for the MAIN thread in the replication agent for the queried database.
MAIN Thread | Description |
---|---|
DSN | Name of the database to be queried. |
Process ID | Process ID of the replication agent. |
Replication Agent Policy | The restart policy, as described in "Starting and stopping the replication agents". |
Host | Name of the machine that hosts this database. |
RepListener Port | TCP/IP port used by the replication agent to listen for connections from the TRANSMITTER threads of remote replication agents. A value of 0 indicates that this port has been assigned automatically to the replication agent (the default), rather than being specified as part of a replication scheme. |
Last write LSN | The location of the most recently generated transaction log record for the database. See "Show replicated log records" for more information. |
Last LSN forced to disk | The location of the most recent transaction log record written to the disk. See "Show replicated log records" for more information. |
Replication hold LSN | The location of the lowest (or oldest) record held in the log for possible transmission to a subscriber. A value of -1/-1 indicates replication is in the stop state with respect to all subscribers. See "Show replicated log records" for more information. |
The following fields are output for each replication peer that participates in the replication scheme with the queried database. A "peer" could play the role of master, subscriber, propagator or both master and subscriber in a bidirectional replication scheme.
Replication Peers | Description |
---|---|
Name | Name of a database that is a replication peer to this database. |
Host | Host of the peer database. |
Port | TCP/IP port used by the replication agent for the peer database. A value of 0 indicates this port has been assigned automatically to the replication agent (the default), rather than being specified as part of a replication scheme. |
Replication State | Current replication state of the replication peer with respect to the queried database (see "Show subscriber database information" for information). |
Communication Protocol | Internal protocol used by replication to communicate between the peers. (For internal use only.) |
The following fields are output for each TRANSMITTER
thread used by a master replication agent to send transaction updates to a subscriber. A master with multiple subscribers has multiple TRANSMITTER
threads.
Note: The counts in the TRANSMITTER output begin to accumulate when the replication agent is started. These counters are reset to 0 only when the replication agent is started or restarted. |
TRANSMITTER Thread | Description |
---|---|
For | Name of the subscriber database that is receiving replicated data from this database. |
Start/Restart count | Number of times this TRANSMITTER thread was started or restarted by the replication agent due to a temporary error, such as operation timeout, network failure, and so on. |
Send LSN | The last LSN transmitted to this peer. See "Show replicated log records" for more information. |
Transactions sent | Total number of transactions sent to the subscriber. |
Total packets sent | Total number of packets sent to the subscriber (including tick packets) |
Tick packets sent | Total number of tick packets sent. Tick packets are used to maintain a "heartbeat" between the master and subscriber. You can use this value to determine how many of the 'Total packets sent' packets are not related to replicated data. |
MIN sent packet size | Size of the smallest packet sent to the subscriber. |
MAX sent packet size | Size of the largest packet sent to the subscriber. |
AVG sent packet size | Average size of the packets sent to the subscriber. |
Last packet sent at | Time of day last packet was sent (24-hour clock time) |
Total packets received | Total packets received from the subscriber (tick packets and acknowledgement data) |
MIN rcvd packet size | Size of the smallest packet received |
MAX rcvd packet size | Size of the largest packet received |
AVG rcvd packet size | Average size of the packets received |
Last packet rcvd at | Time of day last packet was received (24-hour clock time) |
Earlier errors (max 5) | Last five errors generated by this thread |
The following fields are output for each RECEIVER
thread used by a subscriber replication agent to receive transaction updates from a master. A subscriber that is updated by multiple masters has multiple RECEIVER
threads.
Note: The counts in the RECEIVER output begin to accumulate when the replication agent is started. These counters are reset to 0 only when the replication agent is started or restarted. |
RECEIVER Thread | Description |
---|---|
For | Name of the master database that is sending replicated data to this database |
Start/Restart count | Number of times this RECEIVER thread was started or restarted by the replication agent due to a temporary error, such as operation timeout, network failure, and so on. |
Transactions received | Total number of transactions received from the master |
Total packets sent | Total number of packets sent to the master (tick packets and acknowledgement data) |
Tick packets sent | Total number of tick packets sent to the master. Tick packets are used to maintain a "heartbeat" between the master and subscriber. You can use this value to determine how many of the 'Total packets sent' packets are not related to acknowledgement data. |
MIN sent packet size | Size of the smallest packet sent to the master |
MAX sent packet size | Size of the largest packet sent to the master |
AVG sent packet size | Average size of the packets sent to the master |
Last packet sent at | Time of day last packet was sent to the master (24-hour clock time) |
Total packets received | Total packets of acknowledgement data received from the master |
MIN rcvd packet size | Size of the smallest packet received |
MAX rcvd packet size | Size of the largest packet received |
AVG rcvd packet size | Average size of the packets received |
Last packet rcvd at | Time of day last packet was received (24-hour clock time) |
You can determine whether the return service for a particular subscriber has been disabled by the DISABLE RETURN
failure policy by calling the ttRepSyncSubscriberStatus
built-in procedure or by means of the SNMP trap, ttRepReturnTransitionTrap
. The ttRepSyncSubscriberStatus
procedure returns a value of '1' to indicate the return service has been disabled for the subscriber, or a value of '0' to indicate that the return service is still enabled.
Example 12-11 Using ttRepSyncSubscriberStatus to obtain return receipt status
To use ttRepSyncSubscriberStatus
to obtain the return receipt status of the subscriberds
database with respect to its master database, masterDSN
, enter:
> ttIsql masterDSN Command> CALL ttRepSyncSubscriberStatus ('subscriberds'); < 0 > 1 row found.
This result indicates that the return service is still enabled.
See "DISABLE RETURN" for more information.
You can check the status of the last return receipt or return twosafe transaction executed on the connection handle by calling the ttRepXactTokenGet
and ttRepXactStatus
procedures.
First, call ttRepXactTokenGet
to get a unique token for the last return service transaction. If you are using return receipt, the token identifies the last return receipt transaction committed on the master database. If you are using return twosafe, the token identifies the last twosafe transaction on the master that, in the event of a successful commit on the subscriber, is committed by the replication agent on the master. However, in the event of a timeout or other error, the twosafe transaction identified by the token is not committed by the replication agent on the master.
Next, pass the token returned by ttRepXactTokenGet
to the ttRepXactStatus
procedure to obtain the return service status. The output of the ttRepXactStatus
procedure reports which subscriber or subscribers are configured to receive the replicated data and the current status of the transaction (not sent, received, committed) with respect to each subscriber. If the subscriber replication agent encountered a problem applying the transaction to the subscriber database, the ttRepXactStatus
procedure also includes the error string. If you are using return twosafe and receive a timeout or other error, you can then decide whether to unconditionally commit or retry the commit, as described in "RETURN TWOSAFE".
Note: If ttRepXactStatus is called without a token from ttRepXactTokenGet, it returns the status of the most recent transaction on the connection that was committed with the return receipt or return twosafe replication service. |
The ttRepXactStatus
procedure returns the return service status for each subscriber as a set of rows formatted as:
subscriberName, status, error
Example 12-12 Reporting the status of each subscriber
You can call the ttRepXactTokenGet
and ttRepXactStatus
built-in procedures in a GetRSXactStatus
function to report the status of each subscriber in your replicated system:
SQLRETURN GetRSXactStatus (HDBC hdbc)
{
  SQLRETURN rc = SQL_SUCCESS;
  HSTMT hstmt = SQL_NULL_HSTMT;
  char xactId [4001] = "";
  char subscriber [62] = "";
  char state [3] = "";

  /* get the last RS xact id executed on this connection */
  SQLAllocStmt (hdbc, &hstmt);
  SQLExecDirect (hstmt, "CALL ttRepXactTokenGet ('R2')", SQL_NTS);

  /* bind the xact id result as a null terminated hex string */
  SQLBindCol (hstmt, 1, SQL_C_CHAR, (SQLPOINTER) xactId,
              sizeof (xactId), NULL);

  /* fetch the first and only row */
  rc = SQLFetch (hstmt);

  /* close the cursor */
  SQLFreeStmt (hstmt, SQL_CLOSE);

  if (rc != SQL_ERROR && rc != SQL_NO_DATA_FOUND)
  {
    /* display the xact id */
    printf ("\nRS Xact ID: 0x%s\n\n", xactId);

    /* get the status of this xact id for every subscriber */
    SQLBindParameter (hstmt, 1, SQL_PARAM_INPUT, SQL_C_CHAR,
                      SQL_VARBINARY, 0, 0, (SQLPOINTER) xactId,
                      strlen (xactId), NULL);

    /* execute */
    SQLExecDirect (hstmt, "CALL ttRepXactStatus (?)", SQL_NTS);

    /* bind the result columns */
    SQLBindCol (hstmt, 1, SQL_C_CHAR, (SQLPOINTER) subscriber,
                sizeof (subscriber), NULL);
    SQLBindCol (hstmt, 2, SQL_C_CHAR, (SQLPOINTER) state,
                sizeof (state), NULL);

    /* fetch the first row */
    rc = SQLFetch (hstmt);

    while (rc != SQL_ERROR && rc != SQL_NO_DATA_FOUND)
    {
      /* report the status of this subscriber */
      printf ("\n\nSubscriber: %s", subscriber);
      printf ("\nState: %s", state);

      /* are there more rows to fetch? */
      rc = SQLFetch (hstmt);
    }
  }

  /* close the statement */
  SQLFreeStmt (hstmt, SQL_DROP);

  return rc;
}
To increase replication performance, consider these tips:
Configure parallel replication. See "Configuring parallel replication".
Use asynchronous replication, which is the default. For more information, see "Making decisions about performance and recovery tradeoffs". However, if you are using active standby pairs, return twosafe (synchronous replication) has better performance than return receipt (semi-synchronous replication).
Set the LogFileSize
and LogBufMB
first connection attributes to their maximum values. For more information, see "Setting connection attributes for logging".
If the workload is heavy enough that replication sometimes falls behind, replicated changes must be captured from the transaction logs on disk rather than from the in-memory log buffer. Using the fastest possible storage for the TimesTen transaction logs reduces I/O contention between transaction log flushing and replication capture and helps replication to catch up more quickly during periods of reduced workload. Consider using a high performance, cached disk array using a RAID-0 stripe across multiple fast disks or solid state storage.
Experiment with the number of connections to the database where the updates are applied. If you need more than 64 concurrent connections, set the Connections
first connection attribute to a higher value. See "Connections" in Oracle TimesTen In-Memory Database Reference.
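For example, on UNIX these attributes are set in the DSN definition in the odbc.ini file. The values below are illustrative only; choose sizes appropriate to your workload and storage, and see Oracle TimesTen In-Memory Database Reference for the permitted ranges:

[master1]
DRIVER=install_dir/lib/libtten.so
DataStore=/tmp/master1
DatabaseCharacterSet=AL32UTF8
ConnectionCharacterSet=AL32UTF8
LogFileSize=1024
LogBufMB=1024
Connections=100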
See also "Poor replication or XLA performance" in Oracle TimesTen In-Memory Database Troubleshooting Guide.
This chapter describes how to administer an active standby pair that does not replicate cache groups.
For information about administering active standby pairs that replicate cache groups, see Chapter 5, "Administering an Active Standby Pair with Cache Groups".
For information about managing failover and recovery automatically, see Chapter 7, "Using Oracle Clusterware to Manage Active Standby Pairs".
This chapter includes the following topics:
This section summarizes the possible states of a master database. These states are referenced in the tasks described in the rest of the chapter.
The master databases can be in one of the following states:
ACTIVE
- A database in this state is the active database. Applications can update its replicated tables.
STANDBY
- A database in this state is the standby database. Applications can update only nonreplicated tables in the standby database. Nonreplicated tables are tables that have been excluded from the replication scheme by using the EXCLUDE TABLE
or EXCLUDE CACHE GROUP
clauses of the CREATE ACTIVE STANDBY PAIR
statement.
FAILED
- A database in this state is a failed master database. No updates can be replicated to it.
IDLE
- A database in this state has not yet had its role in the active standby pair assigned. It cannot be updated. Every database comes up in the IDLE
state.
RECOVERING
- When a previously failed master database is synchronizing updates with the active database, it is in the RECOVERING
state.
You can use the ttRepStateGet
built-in procedure to discover the state of a master database.
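For example, you can call ttRepStateGet from ttIsql to check the current state of a master database. The output shown is illustrative; the call returns whichever of the states listed above currently applies:

> ttIsql master1
Command> CALL ttRepStateGet;
< ACTIVE >
1 row found.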
When you set up a replication scheme or administer a recovery, a common task is duplicating a database. You can use the ttRepAdmin
-duplicate
utility or the ttRepDuplicateEx
C function to duplicate a database.
To duplicate a database, these conditions must be fulfilled:
The instance administrator performs the duplicate operation.
The instance administrator user name must be the same on both instances involved in the duplication.
You must provide the user name and password for a user with the ADMIN
privilege on the source database.
The target DSN cannot include client/server attributes.
On the source database, create a user and grant the ADMIN
privilege to the user:
CREATE USER ttuser IDENTIFIED BY ttuser; User created. GRANT admin TO ttuser;
Assume the user name of the instance administrator is timesten
. Logged in as timesten
on the target host, duplicate database dsn1
on host1
to dsn2
:
ttRepAdmin -duplicate -from dsn1 -host host1 dsn2 Enter internal UID at the remote datastore with ADMIN privileges: ttuser Enter password of the internal Uid at the remote datastore:
Enter ttuser
when prompted for the password of the internal user at the remote database.
If you are duplicating an active database that has cache groups, use the -keepCG
option. You must also specify the cache administration user ID and password with the -cacheUid
and -cachePwd
options. If you do not provide the cache administration user password, ttRepAdmin
prompts for a password. If the cache administration user ID is orauser
and the password is orapwd
, duplicate database dsn1
on host1
:
ttRepAdmin -duplicate -from dsn1 -host host1 -keepCG "DSN=dsn2;UID=;PWD=" Enter internal UID at the remote datastore with ADMIN privileges: ttuser Enter password of the internal Uid at the remote datastore:
Enter ttuser
when prompted for the password. ttRepAdmin
then prompts for the cache administration user and password:
Enter cache administrator UID: orauser Enter cache administrator password:
Enter orapwd
when prompted for the cache administration password.
The UID
and PWD
for dsn2
are specified as null values in the connection string so that the connection is made as the current OS user, which is the instance administrator. Only the instance administrator can run ttRepAdmin -duplicate
. If dsn2
is configured with PWDCrypt
instead of PWD
, then the connection string should be "DSN=dsn2;UID=;PWDCrypt="
.
When you duplicate a standby database with cache groups to a read-only subscriber, use the -nokeepCG
option. In this example, dsn2
is the standby database and sub1
is the read-only subscriber:
ttRepAdmin -duplicate -from dsn2 -host host2 -nokeepCG "DSN=sub1;UID=;PWD="
The ttRepAdmin
utility prompts for values for -uid
and -pwd
.
If you want to use a specific local or remote network interface over which the database duplication occurs, you can optionally specify either by providing an alias or the IP address of the network interface.
You can specify the local and remote network interfaces for the source and target hosts by using the -localIP
and -remoteIP
options of ttRepAdmin -duplicate
. If you do not specify one or both network interfaces, TimesTen chooses them.
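For example, the following command duplicates dsn1 to dsn2 over specific network interfaces. The IP addresses shown are placeholders; substitute the aliases or addresses of the interfaces you want the duplication to use:

ttRepAdmin -duplicate -from dsn1 -host host1 -localIP 192.0.2.20 -remoteIP 192.0.2.10 dsn2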
For more information about the ttRepAdmin
utility, see "ttRepAdmin" in Oracle TimesTen In-Memory Database Reference. For more information about the ttRepDuplicateEx
C function, see "ttRepDuplicateEx" in Oracle TimesTen In-Memory Database C Developer's Guide.
To set up an active standby pair, complete the tasks in this section. See "Configuring an active standby pair with one subscriber" for an example.
If you intend to replicate read-only cache groups or asynchronous writethrough (AWT) cache groups, see Chapter 5, "Administering an Active Standby Pair with Cache Groups".
Before you create a database, see the information in these sections:
Create a database. See "Managing TimesTen Databases" in Oracle TimesTen In-Memory Database Operations Guide.
Create the replication scheme using the CREATE ACTIVE STANDBY PAIR
statement. See Chapter 3, "Defining an Active Standby Pair Replication Scheme".
Call ttRepStateSet
('ACTIVE')
on the active database.
Start the replication agent. See "Starting and stopping the replication agents".
Create a user on the active database and grant the ADMIN
privilege to the user.
Duplicate the active database to the standby database.
Start the replication agent on the standby database. See "Starting and stopping the replication agents".
Wait for the standby database to enter the STANDBY
state. Use the ttRepStateGet
procedure to check the state of the standby database.
Duplicate all of the subscribers from the standby database. See "Duplicating a master database to a subscriber".
Set up the replication agent policy and start the replication agent on each of the subscriber databases. See "Starting and stopping the replication agents".
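As a sketch, the sequence of tasks above might look like the following, reusing the master1 and master2 names from the configuration example on hosts host1 and host2. The commands are illustrative; adapt the names to your installation and see the referenced sections for the full options.

On host1, create the replication scheme, set the active database, and start the replication agent:

> ttIsql master1
Command> CREATE ACTIVE STANDBY PAIR master1 ON "host1", master2 ON "host2";
Command> CALL ttRepStateSet('ACTIVE');
Command> CALL ttRepStart;

On host2, duplicate the active database and start its replication agent:

ttRepAdmin -duplicate -from master1 -host host1 -uid ttuser -pwd ttuser master2
ttAdmin -repStart master2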
This section includes the following topics:
This section describes how to recover the active database when the standby database is available and synchronized with the active database. It includes the following topics:
Complete the following tasks:
Stop the replication agent on the failed database if it has not already been stopped.
On the standby database, execute ttRepStateSet
('ACTIVE')
. This changes the role of the database from STANDBY
to ACTIVE
.
On the new active database, execute ttRepStateSave
('FAILED', '
failed_database
','
host_name
')
, where failed_database
is the former active database that failed. This step is necessary for the new active database to replicate directly to the subscriber databases. During normal operation, only the standby database replicates to the subscribers.
Destroy the failed database.
Duplicate the new active database to the new standby database.
Set up the replication agent policy and start the replication agent on the new standby database. See "Starting and stopping the replication agents".
The standby database contacts the active database. The active database stops sending updates to the subscribers. When the standby database is fully synchronized with the active database, then the standby database enters the STANDBY
state and starts sending updates to the subscribers.
Note: You can verify that the standby database has entered the STANDBY state by using the ttRepStateGet built-in procedure. |
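For example, using the configuration example names, where master2 is the standby database that takes over and master1 is the failed active database on host1, the role change and state save are issued from ttIsql on the new active database (the names are illustrative):

> ttIsql master2
Command> CALL ttRepStateSet('ACTIVE');
Command> CALL ttRepStateSave('FAILED', 'master1', 'host1');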
Complete the following tasks:
On the standby database, execute ttRepStateSet
('ACTIVE')
. This changes the role of the database from STANDBY
to ACTIVE
.
On the new active database, execute ttRepStateSave
('FAILED', '
failed_database
','
host_name
')
, where failed_database
is the former active database that failed. This step is necessary for the new active database to replicate directly to the subscriber databases. During normal operation, only the standby database replicates to the subscribers.
Connect to the failed database. This triggers recovery from the local transaction logs. If database recovery fails, you must continue from Step 5 of the procedure for recovering when replication is return receipt or asynchronous. See "When replication is return receipt or asynchronous".
Verify that the replication agent for the failed database has restarted. If it has not restarted, then start the replication agent. See "Starting and stopping the replication agents".
When the active database determines that it is fully synchronized with the standby database, then the standby database enters the STANDBY
state and starts sending updates to the subscribers.
Note: You can verify that the standby database has entered the STANDBY state by using the ttRepStateGet built-in procedure. |
Consider the following scenarios:
The standby database fails. The active database fails before the standby comes back up or before the standby has been synchronized with the active database.
The active database fails. The standby database becomes ACTIVE
, and the rest of the recovery process begins. (See "Recovering from a failure of the active database".) The new active database fails before the new standby database is fully synchronized with it.
In both scenarios, the subscribers may have had more changes applied than the standby database.
When the active database fails and the standby database has not applied all of the changes that were last sent from the active database, there are two choices for recovery:
Recover the active database from the local transaction logs.
Recover the standby database from the local transaction logs.
The choice depends on which database is available and which is more up to date.
Connect to the failed active database. This triggers recovery from the local transaction logs.
Verify that the replication agent for the failed active database has restarted. If it has not restarted, then start the replication agent. See "Starting and stopping the replication agents".
Execute ttRepStateSet
('ACTIVE')
on the newly recovered database.
Continue with Step 6 in "Setting up an active standby pair with no cache groups".
Connect to the failed standby database. This triggers recovery from the local transaction logs.
If the replication agent for the standby database has automatically restarted, you must stop the replication agent. See "Starting and stopping the replication agents".
Drop the replication configuration using the DROP ACTIVE STANDBY PAIR
statement.
Re-create the replication configuration using the CREATE ACTIVE STANDBY PAIR
statement.
Execute ttRepStateSet
('ACTIVE')
on the master database, giving it the ACTIVE
role.
Set up the replication agent policy and start the replication agent on the new standby database. See "Starting and stopping the replication agents".
Continue from Step 6 in "Setting up an active standby pair with no cache groups".
After a successful failover, you may wish to fail back so that the active database and the standby database are on their original nodes. See "Reversing the roles of the active and standby databases" for instructions.
To recover from a failure of the standby database, complete the following tasks:
Detect the standby database failure.
If return twosafe service is enabled, the failure of the standby database may prevent a transaction in progress from being committed on the active database, resulting in error 8170, "Receipt or commit acknowledgement not returned in the specified timeout interval". If so, then call the ttRepSyncSet
procedure with a localAction
parameter of 2
(COMMIT
) and commit the transaction again. For example:
call ttRepSyncSet( null, null, 2); commit;
Execute ttRepStateSave
('FAILED','
standby_database
','
host_name
')
on the active database. After this, as long as the standby database is unavailable, updates to the active database are replicated directly to the subscriber databases. Subscriber databases may also be duplicated directly from the active.
If the replication agent for the standby database has automatically restarted, stop the replication agent. See "Starting and stopping the replication agents".
Recover the standby database in one of the following ways:
Connect to the standby database. This triggers recovery from the local transaction logs.
Duplicate the standby database from the active database.
The amount of time that the standby database has been down and the amount of transaction logs that need to be applied from the active database determine the method of recovery that you should use.
Set up the replication agent policy and start the replication agent on the new standby database. See "Starting and stopping the replication agents".
The standby database enters the STANDBY
state and starts sending updates to the subscribers after the active database determines that the two master databases have been synchronized and stops sending updates to the subscribers.
Note: You can verify that the standby database has entered the STANDBY state by using the ttRepStateGet procedure. |
If a subscriber database fails, then you can recover it by one of the following methods:
Connect to the failed subscriber. This triggers recovery from the local transaction logs. Start the replication agent and let the subscriber catch up.
Duplicate the subscriber from the standby database.
If the standby database is down or in recovery, then duplicate the subscriber from the active database.
After the subscriber database has been recovered, then set up the replication agent policy and start the replication agent. See "Starting and stopping the replication agents".
To change the role of the active database to standby and vice versa:
Pause any applications that are generating updates on the current active database.
Execute ttRepSubscriberWait
on the active database, with the DSN and host of the current standby database as input parameters. It must return success (<00>
). This ensures that all updates have been transmitted to the current standby database.
Stop the replication agent on the current active database. See "Starting and stopping the replication agents".
Execute ttRepDeactivate
on the current active database. This puts the database in the IDLE
state.
Execute ttRepStateSet
('ACTIVE')
on the current standby database. This database now acts as the active database in the active standby pair.
Set up the replication agent policy and start the replication agent on the old active database.
Use the ttRepStateGet
procedure to determine when the database's state has changed from IDLE
to STANDBY
. The database now acts as the standby database in the active standby pair.
Resume any applications that were paused in Step 1.
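A sketch of this reversal sequence, using the configuration example names with master1 as the current active database and master2 as the current standby database on host2. The commands are illustrative, including the 120-second timeout; verify that each step succeeds before proceeding to the next.

On master1, drain pending updates, stop the replication agent, and deactivate:

> ttIsql master1
Command> CALL ttRepSubscriberWait(null, null, 'master2', 'host2', 120);
Command> CALL ttRepStop;
Command> CALL ttRepDeactivate;

On master2, take the active role; then restart the old active database's agent and poll its state:

> ttIsql master2
Command> CALL ttRepStateSet('ACTIVE');

ttAdmin -repStart master1

Poll with CALL ttRepStateGet on master1 until it reports STANDBY.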
Ordinarily, the designation of the active and standby databases in an active standby pair is explicitly controlled by the user. However, in some circumstances the user may not have the ability to modify both the active and standby databases when changing the role of the standby database to active.
For example, if network communication to the site of an active database is interrupted, the user may need the standby database at a different site to take over the role of the active, but cannot stop replication on the current active or change its role manually. Changing the standby database to active without first stopping replication on the active leads to a situation where both masters are in the ACTIVE
state and accepting transactions. In such a scenario, TimesTen can automatically negotiate the active/standby role of the master databases when network communication between the databases is restored.
If, during the initial handshake between the databases, TimesTen determines that the master databases in an active standby pair replication scheme are both in the ACTIVE
state, TimesTen performs the following operations automatically:
The database which was set to the ACTIVE
state most recently is left in the ACTIVE
state and may continue to be connected to and updated by applications.
The database which was set to the ACTIVE
state least recently is invalidated. All applications are disconnected.
When the invalidated database comes back up, TimesTen determines whether any transactions have occurred on the database that have not yet been replicated to the other master database. If such transactions have occurred, they are now trapped, and the database is left in the IDLE
state. The database needs to be duplicated from the active in order to become a standby. If there are no trapped transactions, the database is safe to use as a standby database and is automatically set to the STANDBY
state.
Oracle TimesTen In-Memory Database is a memory-optimized relational database. Deployed in the application tier, Oracle TimesTen In-Memory Database operates on databases that fit entirely in physical memory using standard SQL interfaces. High availability for the in-memory database is provided through real-time transactional replication.
This document is intended for application developers and system administrators who use and administer TimesTen to TimesTen Replication. To work with this guide, you should understand how database systems work. You should also have knowledge of SQL (Structured Query Language) and either ODBC (Open DataBase Connectivity) or JDBC (Java DataBase Connectivity).
TimesTen documentation is available on the product distribution media and on the Oracle Technology Network:
http://www.oracle.com/technetwork/products/timesten/documentation
TimesTen supports multiple platforms. Unless otherwise indicated, the information in this guide applies to all supported platforms. The term Windows refers to all supported Windows platforms. The term UNIX applies to all supported UNIX and Linux platforms. See "Platforms" in Oracle TimesTen In-Memory Database Release Notes for specific platform versions supported by TimesTen.
Note: In TimesTen documentation, the terms "data store" and "database" are equivalent. Both terms refer to the TimesTen database unless otherwise noted. |
This document uses the following text conventions:
Convention | Meaning |
---|---|
boldface | Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary. |
italic | Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values. |
monospace | Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter. |
italic monospace | Italic monospace type indicates a variable in a code example that you must replace. For example: Driver=install_dir/lib/libtten.so. Replace install_dir with the path of your TimesTen installation directory. |
[ ] | Square brackets indicate that an item in a command line is optional. |
{ } | Curly braces indicate that you must choose one of the items separated by a vertical bar ( | ) in a command line. |
| | A vertical bar (or pipe) separates alternative arguments. |
. . . | An ellipsis (. . .) after an argument indicates that you may use more than one argument on a single command line. |
% | The percent sign indicates the UNIX shell prompt. |
# | The number (or pound) sign indicates the UNIX root prompt. |
TimesTen documentation uses these variables to identify path, file and user names:
Convention | Meaning |
---|---|
install_dir | The path that represents the directory where the current release of TimesTen is installed. |
TTinstance | The instance name for your specific installation of TimesTen. Each installation of TimesTen must be identified at install time with a unique alphanumeric instance name. This name appears in the install path. |
bits or bb | Two digits, either 32 or 64, that represent either the 32-bit or 64-bit operating system. |
release or rr | The first three parts in a release number, with or without dots. The first three parts of a release number represent a major TimesTen release. For example, 1122 or 11.2.2 represents TimesTen 11g Release 2 (11.2.2). |
jdk_version | Two digits that represent the version number of the major JDK release. Specifically, 14 represents JDK 1.4; 5 represents JDK 5. |
DSN | The data source name. |
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc
.
Access to Oracle Support
Oracle customers have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info
or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs
if you are hearing impaired.