Oracle® TimesTen In-Memory Database Replication Guide 11g Release 2 (11.2.2) Part Number E21635-04
Oracle Clusterware monitors and controls applications to provide high availability. This chapter describes how to use Oracle Clusterware to manage availability for a TimesTen active standby pair.
Note:
For more information about Oracle Clusterware, see Oracle Clusterware Administration and Deployment Guide in the Oracle Database documentation. This chapter includes the following topics:
Figure 7-1 shows an active standby pair with one read-only subscriber in the same local network. The active database, the standby database and the read-only subscriber are on different nodes. There are two nodes that are not part of the active standby pair that are also running TimesTen. An application updates the active database. An application reads from the standby and the subscriber. All of the nodes are connected to shared storage.
Figure 7-1 Active standby pair with one subscriber
You can use Oracle Clusterware to start, monitor and automatically fail over TimesTen databases and applications in response to node failures and other events. See "Planned maintenance" and "Recovering from failures" for details.
Oracle Clusterware can be implemented at two levels of availability for TimesTen. The basic level of availability manages two master nodes and up to 127 read-only subscriber nodes in the cluster. The active standby pair is defined with local host names or IP addresses. If both master nodes fail, user intervention is necessary to migrate the active standby scheme to new hosts. When both master nodes fail, Oracle Clusterware notifies the user.
The advanced level of availability uses virtual IP addresses for the active, standby and read-only subscriber databases. Extra nodes can be included in the cluster that are not part of the initial active standby pair. If a failure occurs, the use of virtual IP addresses allows one of the extra nodes to take on the role of a failed node automatically.
If your applications connect to TimesTen in a client/server configuration, automatic client failover enables the client to reconnect automatically to the master database with the active role after a failure. See "Using automatic client failover for an active standby pair" and "TTC_FailoverPortRange" in the Oracle TimesTen In-Memory Database Reference.
The ttCWAdmin utility is used to administer TimesTen active standby pairs in a cluster that is managed by Oracle Clusterware. The configuration for each active standby pair is manually created in an initialization file, called cluster.oracle.ini by default. The information in this file is used to create Oracle Clusterware resources. Resources are used to manage the TimesTen daemon, databases, TimesTen processes, user applications and virtual IP addresses. For more information about the ttCWAdmin utility, see "ttCWAdmin" in Oracle TimesTen In-Memory Database Reference. For more information about the cluster.oracle.ini file, see "The cluster.oracle.ini file".
Use Oracle Clusterware to manage only these configurations:
Active standby pair with or without read-only subscribers
Active standby pair (with or without read-only subscribers) with AWT cache groups, read-only cache groups and global cache groups
See "ttCWAdmin" in Oracle TimesTen In-Memory Database Reference for information about the privileges required to execute ttCWAdmin commands.
Oracle Clusterware release 11.2.0.2.x is supported with TimesTen active standby pair replication, beginning with release 11.2.0.2.0. See Oracle Clusterware Administration and Deployment Guide for network and storage requirements and information about Oracle Clusterware configuration files.
Oracle Clusterware and TimesTen should be installed in the same location on all nodes.
The TimesTen instance administrator must belong to the same UNIX primary group as the Oracle Clusterware installation owner.
Note that the /tmp directory contains essential TimesTen Oracle Clusterware directories. Their names have the prefix crsTT. Do not delete them.
All hosts should use Network Time Protocol (NTP) or a similar system so that clocks on the hosts remain within 250 milliseconds of each other.
When you use Oracle Clusterware with TimesTen, you cannot use these commands and SQL statements:
The CREATE ACTIVE STANDBY PAIR, ALTER ACTIVE STANDBY PAIR and DROP ACTIVE STANDBY PAIR SQL statements
The -repStart and -repStop options of the ttAdmin utility
The -cacheStart and -cacheStop options of the ttAdmin utility after the active standby pair has been created
The -duplicate option of the ttRepAdmin utility
The ttRepStart and ttRepStop built-in procedures
Built-in procedures for managing a cache grid when the active standby pair in a cluster is a member of a grid
In addition, do not call ttDaemonAdmin -stop before calling ttCWAdmin -shutdown.
The TimesTen integration with Oracle Clusterware accomplishes these operations with the ttCWAdmin utility and the attributes in the cluster.oracle.ini file.
For more information about the built-ins and utilities, see Oracle TimesTen In-Memory Database Reference. For more information about the SQL statements, see Oracle TimesTen In-Memory Database SQL Reference.
Create an initialization file called cluster.oracle.ini as a text file. The information in this file is used to create Oracle Clusterware resources that manage TimesTen databases, TimesTen processes, user applications and virtual IP addresses.
Note:
All of the attributes that can be used in the cluster.oracle.ini file are described in Chapter 8, "TimesTen Configuration Attributes for Oracle Clusterware".
The ttCWAdmin -create command reads this file for configuration information, so the location of the text file must be reachable by ttCWAdmin. It is recommended that you place this file in the daemon home directory on the host for the active database. However, you can place this file in any directory or shared drive on the same host as where you will execute the ttCWAdmin -create command.
The default location for this file is in one of the following directories:
The install_dir/info directory on UNIX platforms
The c:\TimesTen\install_dir\srv\info directory on Windows platforms
If you place this file in another location, identify the path of the location with the -ttclusterini option.
The entry name in the cluster.oracle.ini file must be the same as an existing DSN:
In the sys.odbc.ini file on UNIX platforms
In a system DSN on Windows platforms
For example, [basicDSN] is the entry name in the cluster.oracle.ini file described in "Configuring basic availability". [basicDSN] must also be the DataStore and Data Source Name data store attributes in the sys.odbc.ini files on each host. For example, the sys.odbc.ini file for the basicDSN DSN on host1 might be:
[basicDSN]
DataStore=/path1/basicDSN
LogDir=/path1/log
DatabaseCharacterSet=AL32UTF8
ConnectionCharacterSet=AL32UTF8
The sys.odbc.ini file for basicDSN on host2 can have a different path, but all other attributes should be the same:
[basicDSN]
DataStore=/path2/basicDSN
LogDir=/path2/log
DatabaseCharacterSet=AL32UTF8
ConnectionCharacterSet=AL32UTF8
This section includes sample cluster.oracle.ini files for these configurations:
This example shows an active standby pair with no subscribers. The hosts for the active database and the standby database are host1 and host2. The list of hosts is delimited by commas. You can include spaces for readability if desired.
[basicDSN]
MasterHosts=host1,host2
The following is an example of a cluster.oracle.ini file for an active standby pair with one subscriber on host3:
[basicSubscriberDSN]
MasterHosts=host1,host2
SubscriberHosts=host3
In this example, the hosts for the active database and the standby database are host1 and host2. The specified host3 and host4 are extra nodes that can be used for failover. There are no subscriber nodes. MasterVIP specifies the virtual IP addresses defined for the master databases. VIPInterface is the name of the public network adaptor. VIPNetMask defines the netmask of the virtual IP addresses.
[advancedDSN]
MasterHosts=host1,host2,host3,host4
MasterVIP=192.168.1.1, 192.168.1.2
VIPInterface=eth0
VIPNetMask=255.255.255.0
This example has one subscriber on host4. There is one extra node that can be used for failing over the master databases and one extra node that can be used for the subscriber database. MasterVIP and SubscriberVIP specify the virtual IP addresses defined for the master and subscriber databases. VIPInterface is the name of the public network adaptor. VIPNetMask defines the netmask of the virtual IP addresses.
[advancedSubscriberDSN]
MasterHosts=host1,host2,host3
SubscriberHosts=host4,host5
MasterVIP=192.168.1.1, 192.168.1.2
SubscriberVIP=192.168.1.3
VIPInterface=eth0
VIPNetMask=255.255.255.0
Ensure that the extra nodes:
Have TimesTen installed
Have the direct-linked application installed if this is part of the configuration. See "Implementing application failover".
If the active standby pair replicates one or more AWT or read-only cache groups, set the CacheConnect attribute to y.
This example specifies an active standby pair with one subscriber in an advanced availability configuration. The active standby pair replicates one or more cache groups.
[advancedCacheDSN]
MasterHosts=host1,host2,host3
SubscriberHosts=host4, host5
MasterVIP=192.168.1.1, 192.168.1.2
SubscriberVIP=192.168.1.3
VIPInterface=eth0
VIPNetMask=255.255.255.0
CacheConnect=y
If the active standby pair is a member of a cache grid, assign port numbers for the active and standby databases by setting the GridPort attribute.
This example specifies an active standby pair with no subscribers in an advanced availability configuration. The active standby pair is a member of a cache grid.
[advancedGridDSN]
MasterHosts=host1,host2,host3
MasterVIP=192.168.1.1, 192.168.1.2
VIPInterface=eth0
VIPNetMask=255.255.255.0
CacheConnect=y
GridPort=16101, 16102
For more information about using Oracle Clusterware with a cache grid, see "Using Oracle Clusterware with a TimesTen cache grid".
TimesTen integration with Oracle Clusterware can facilitate the failover of a TimesTen application that is linked to any of the databases in the active standby pair. Both direct-linked and client/server applications that are on the same host as Oracle Clusterware and TimesTen can be managed.
The required attributes in the cluster.oracle.ini file for failing over a TimesTen application are:
AppName - Name of the application to be managed by Oracle Clusterware
AppStartCmd - Command line for starting the application
AppStopCmd - Command line for stopping the application
AppCheckCmd - Command line for executing an application that checks the status of the application specified by AppName
AppType - Determines the database to which the application is linked. The possible values are Active, Standby, DualMaster, Subscriber (all) and Subscriber[index].
Optionally, you can also set AppFailureThreshold, DatabaseFailoverDelay, and AppScriptTimeout. These attributes have default values.
The TimesTen application monitor process uses the user-supplied script or program specified by AppCheckCmd to monitor the application. The script that checks the status of the application must be written to return 0 for success and a nonzero number for failure. When Oracle Clusterware detects a nonzero value, it takes action to recover the failed application.
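As an illustration, the heart of such a check script could be a simple process test like the one below. This is a minimal sketch, not part of the TimesTen documentation: the application name reader and the use of pgrep are assumptions.

```shell
#!/bin/sh
# Hypothetical app_check.sh sketch. Oracle Clusterware treats exit status 0
# from AppCheckCmd as "application healthy" and any nonzero status as a
# failure that triggers recovery.

# Succeeds (returns 0) if at least one process with the exact name $1 exists.
check_app() {
    pgrep -x "$1" > /dev/null 2>&1
}

# A real script would end by propagating the status to Clusterware:
#   check_app reader
#   exit $?
```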
This example shows advanced availability configured for an active standby pair with no subscribers. The reader application is an application that queries the data in the standby database. AppStartCmd, AppStopCmd and AppCheckCmd can include arguments such as start, stop and check commands. On UNIX, do not use quotes in the values for AppStartCmd, AppStopCmd and AppCheckCmd.
[appDSN]
MasterHosts=host1,host2,host3,host4
MasterVIP=192.168.1.1, 192.168.1.2
VIPInterface=eth0
VIPNetMask=255.255.255.0
AppName=reader
AppType=Standby
AppStartCmd=/mycluster/reader/app_start.sh start
AppStopCmd=/mycluster/reader/app_stop.sh stop
AppCheckCmd=/mycluster/reader/app_check.sh check
AppStartCmd, AppStopCmd and AppCheckCmd can include arguments. For example, the following is a valid cluster.oracle.ini file on Windows that demonstrates configuration for an application that is directly linked to the active database. The script for starting, stopping, and checking the application takes arguments for the DSN and the action to take (-start, -stop and -check).
Note the double quotes for the specified paths in AppStartCmd, AppStopCmd and AppCheckCmd. The quotes are needed because there are spaces in the path. Enclose only the path in quotes. Do not enclose the DSN or the action in quotes.
[appWinDSN]
MasterHosts=host1,host2,host3,host4
MasterVIP=192.168.1.1, 192.168.1.2
VIPInterface=Local Area Connection
VIPNetMask=255.255.255.0
AppName=UpdateApp
AppType=Active
AppStartCmd="C:\Program Files\UserApps\UpdateApp.exe" -dsn myDSN -start
AppStopCmd="C:\Program Files\UserApps\UpdateApp.exe" -dsn myDSN -stop
AppCheckCmd="C:\Program Files\UserApps\UpdateApp.exe" -dsn myDSN -check
You can configure failover for more than one application. Use AppName to name the application and provide values for AppType, AppStartCmd, AppStopCmd and AppCheckCmd immediately following the AppName attribute. You can include blank lines for readability. For example:
[app2DSN]
MasterHosts=host1,host2,host3,host4
MasterVIP=192.168.1.1, 192.168.1.2
VIPInterface=eth0
VIPNetMask=255.255.255.0

AppName=reader
AppType=Standby
AppStartCmd=/mycluster/reader/app_start.sh
AppStopCmd=/mycluster/reader/app_stop.sh
AppCheckCmd=/mycluster/reader/app_check.sh

AppName=update
AppType=Active
AppStartCmd=/mycluster/update/app2_start.sh
AppStopCmd=/mycluster/update/app2_stop.sh
AppCheckCmd=/mycluster/update/app2_check.sh
The application is considered available if it has been running for 15 times the value of the AppScriptTimeout attribute. The default value of AppScriptTimeout is 60 seconds, so the application's "uptime threshold" is 15 minutes by default. If the application fails after running for more than 15 minutes, it is restarted on the same host. If the application fails within 15 minutes of being started, the failure is considered a failure to start properly, and the application is restarted on another host. If you want to modify the application's uptime threshold after the application has started, use the crs_register -update command. See Oracle Clusterware Administration and Deployment Guide for information about the crs_register -update command.
If you set AppType to DualMaster, the application starts on both the active host and the standby host. The failure of the application on the active host causes the active database and all other applications on the host to fail over to the standby host. You can configure the failure interval, the number of restart attempts and the uptime threshold by setting the AppFailureInterval, AppRestartAttempts and AppUptimeThreshold attributes. These attributes have default values. For example:
[appDualDSN]
MasterHosts=host1,host2,host3,host4
MasterVIP=192.168.1.1, 192.168.1.2
VIPInterface=eth0
VIPNetMask=255.255.255.0
AppName=update
AppType=DualMaster
AppStartCmd=/mycluster/update/app2_start.sh
AppStopCmd=/mycluster/update/app2_stop.sh
AppCheckCmd=/mycluster/update/app2_check.sh
AppRestartAttempts=5
AppUptimeThreshold=300
AppFailureInterval=30
If both master nodes fail and then come back up, Oracle Clusterware can automatically recover the master databases. Automatic recovery of a temporary dual failure requires:
RETURN TWOSAFE is not specified for the active standby pair.
AutoRecover is set to y.
RepBackupDir specifies a directory on shared storage.
RepBackupPeriod is set to a value greater than 0.
If both master nodes fail permanently, Oracle Clusterware can automatically recover the master databases to two new nodes if:
Advanced availability is configured (virtual IP addresses and at least four hosts).
The active standby pair does not replicate cache groups.
A cache grid is not configured.
RETURN TWOSAFE is not specified.
AutoRecover is set to y.
RepBackupDir specifies a directory on shared storage.
RepBackupPeriod is set to a value greater than 0.
TimesTen first performs a full backup of the active database and then performs incremental backups. You can specify the optional attribute RepFullBackupCycle to manage when TimesTen performs subsequent full backups. By default, TimesTen performs a full backup after every five incremental backups.
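For example, to request a full backup after every ten incremental backups instead of the default five, you could add the following line to the DSN entry in the cluster.oracle.ini file (the value 10 is illustrative):

```
RepFullBackupCycle=10
```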
If RepBackupDir and RepBackupPeriod are configured for backups, TimesTen performs backups for any master database that becomes active. It does not delete backups that were performed for a database that used to be active and has become the standby unless the database becomes active again. Ensure that the shared storage has enough space for two complete database backups. ttCWAdmin -restore automatically chooses the correct backup files.
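For example, after both master nodes have come back up following a dual failure, the master databases might be restored from these backups with a command like the following (the DSN is an assumption):

```
ttCWAdmin -restore -dsn myDSN
```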
Incremental backups increase the amount of log records in the transaction log files. Ensure that the values of RepBackupPeriod and RepFullBackupCycle are small enough to prevent a large amount of log records in the transaction log files.
This example shows attribute settings for automatic recovery.
[autorecoveryDSN]
MasterHosts=host1,host2,host3,host4
MasterVIP=192.168.1.1, 192.168.1.2
VIPInterface=eth0
VIPNetMask=255.255.255.0
AutoRecover=y
RepBackupDir=/shared_drive/dsbackup
RepBackupPeriod=3600
If you have cache groups in the active standby pair or prefer to recover manually from failure of both master hosts, ensure that AutoRecover is set to n (the default). Manual recovery requires:
RepBackupDir specifies a directory on shared storage
RepBackupPeriod is set to a value greater than 0
This example shows attribute settings for manual recovery. The default value for AutoRecover is n, so it is not included in the file.
[manrecoveryDSN]
MasterHosts=host1,host2,host3
MasterVIP=192.168.1.1, 192.168.1.2
VIPInterface=eth0
VIPNetMask=255.255.255.0
RepBackupDir=/shared_drive/dsbackup
RepBackupPeriod=3600
The RepDDL attribute represents the SQL statement that creates the active standby pair. The RepDDL attribute is optional. You can use it to exclude tables, cache groups and sequences from the active standby pair.
If you include RepDDL in the cluster.oracle.ini file, do not specify ReturnServiceAttribute, MasterStoreAttribute or SubscriberStoreAttribute in the cluster.oracle.ini file. Include those replication settings in the RepDDL attribute.
When you specify a value for RepDDL, use the <DSN> macro for the database file name prefix. Use the <MASTERHOST[1]> and <MASTERHOST[2]> macros to specify the master host names. TimesTen substitutes the correct values from the MasterHosts or MasterVIP attributes, depending on whether your configuration uses virtual IP addresses. Similarly, use the <SUBSCRIBERHOST[n]> macro to specify subscriber host names, where n is a number from 1 to the total number of SubscriberHosts attribute values, or 1 to the total number of SubscriberVIP attribute values if virtual IP addresses are used.
Use the RepDDL attribute to exclude tables, cache groups and sequences from the active standby pair:
[excludeDSN]
MasterHosts=host1,host2,host3,host4
SubscriberHosts=host5,host6
MasterVIP=192.168.1.1, 192.168.1.2
SubscriberVIP=192.168.1.3
VIPInterface=eth0
VIPNetMask=255.255.255.0
RepDDL=CREATE ACTIVE STANDBY PAIR \
<DSN> ON <MASTERHOST[1]>, <DSN> ON <MASTERHOST[2]> SUBSCRIBER <DSN> ON <SUBSCRIBERHOST[1]> \
EXCLUDE TABLE pat.salaries, \
EXCLUDE CACHE GROUP terry.salupdate, \
EXCLUDE SEQUENCE ttuser.empcount
The replication agent transmitter obtains route information as follows, in order of priority:
From the ROUTE clause in the RepDDL setting, if a ROUTE clause is specified. Do not specify a ROUTE clause if you are configuring advanced availability.
From Oracle Clusterware, which provides the private host names and public host names of the local and remote hosts as well as the remote daemon port number. The private host name is preferred over the public host name. If the replication agent transmitter cannot connect to the IPC socket, it attempts to connect to the remote daemon, using information that Oracle Clusterware maintains about the replication scheme.
From the active and standby hosts. If they fail, then the replication agent chooses the connection method based on host name.
This is an example of specifying the ROUTE clause in RepDDL:
[routeDSN]
MasterHosts=host1,host2,host3,host4
RepDDL=CREATE ACTIVE STANDBY PAIR \
<DSN> ON <MASTERHOST[1]>, <DSN> ON <MASTERHOST[2]> \
ROUTE MASTER <DSN> ON <MASTERHOST[1]> SUBSCRIBER <DSN> ON <MASTERHOST[2]> \
MASTERIP "192.168.1.2" PRIORITY 1 \
SUBSCRIBERIP "192.168.1.3" PRIORITY 1 \
MASTERIP "10.0.0.1" PRIORITY 2 \
SUBSCRIBERIP "10.0.0.2" PRIORITY 2 \
MASTERIP "140.87.11.203" PRIORITY 3 \
SUBSCRIBERIP "140.87.11.204" PRIORITY 3 \
ROUTE MASTER <DSN> ON <MASTERHOST[2]> SUBSCRIBER <DSN> ON <MASTERHOST[1]> \
MASTERIP "192.168.1.3" PRIORITY 1 \
SUBSCRIBERIP "192.168.1.2" PRIORITY 1 \
MASTERIP "10.0.0.2" PRIORITY 2 \
SUBSCRIBERIP "10.0.0.1" PRIORITY 2 \
MASTERIP "140.87.11.204" PRIORITY 3 \
SUBSCRIBERIP "140.87.11.203" PRIORITY 3
To create and initialize a cluster, perform these tasks:
If you plan to have more than one active standby pair in the cluster, see "Including more than one active standby pair in a cluster".
If you want to configure an Oracle database as a remote disaster recovery subscriber, see "Configuring an Oracle database as a disaster recovery subscriber".
If you want to set up a read-only subscriber that is not managed by Oracle Clusterware, see "Configuring a read-only subscriber that is not managed by Oracle Clusterware".
Install Oracle Clusterware. By default, the installation occurs on all hosts concurrently. See Oracle Clusterware installation documentation for your platform.
Oracle Clusterware starts automatically after successful installation.
Install TimesTen in the same location on each host in the cluster, including extra hosts. The instance name must be the same on each host. The user name of the instance administrator must be the same on all hosts. The TimesTen instance administrator must belong to the same UNIX primary group as the Oracle Clusterware installation owner.
On UNIX platforms, the installer prompts you for values for:
The TCP/IP port number associated with the TimesTen cluster agent. The port number can be different on each host. If you do not provide a port number, TimesTen uses the default TimesTen port.
The Oracle Clusterware location. The location must be the same on each host.
The hosts included in the cluster, including spare hosts, with host names separated by commas. This list must be the same on each host.
The installer uses these values to create the ttcrsagent.options file on UNIX platforms. See "TimesTen Installation" in Oracle TimesTen In-Memory Database Installation Guide for details. You can also use ttmodinstall -crs to create the file after installation. Use the -record and -batch options for setup.sh to perform identical installations on additional hosts if desired.
On Windows, execute ttmodinstall -crs on each node after installation to create the ttcrsagent.options file.
For more information about ttmodinstall, see "ttmodinstall" in Oracle TimesTen In-Memory Database Reference.
TimesTen cluster information is stored in the Oracle Cluster Registry (OCR). As the root user on UNIX platforms, or as the instance administrator on Windows, enter this command:
ttCWAdmin -ocrConfig
As long as Oracle Clusterware and TimesTen are installed on the hosts, this step never needs to be repeated.
Start the TimesTen cluster agent by executing the ttCWAdmin -init command on one of the hosts. For example:
ttCWAdmin -init
This command starts the TimesTen cluster agent (ttCRSAgent) and the TimesTen daemon monitor (ttCRSDaemon). There is one TimesTen cluster agent and one TimesTen daemon monitor for the TimesTen installation. When the TimesTen cluster agent has started, Oracle Clusterware begins monitoring the TimesTen daemon and will restart it if it fails.
Create a database on the host where you intend the active database to reside. The DSN must be the same as the database file name.
Create schema objects such as tables, AWT cache groups and read-only cache groups. Do not load the cache groups.
On all hosts that will be in the cluster, create sys.odbc.ini files. The DataStore attribute and the Data Source Name must be the same as the entry name for the cluster.oracle.ini file. See "The cluster.oracle.ini file" for information about the contents of the sys.odbc.ini files.
Create a cluster.oracle.ini file as a text file. See "The cluster.oracle.ini file" for details about its contents and acceptable locations for the file.
For advanced availability, execute the ttCWAdmin -createVIPs command on any host in the cluster. On UNIX, you must execute this command as the root user. For example:
ttCWAdmin -createVIPs -dsn myDSN
Create an active standby pair replication scheme by executing the ttCWAdmin -create command on any host.
Note:
The cluster.oracle.ini file contains the configuration needed to perform the ttCWAdmin -create command and so must be reachable by the ttCWAdmin executable. See "The cluster.oracle.ini file" for details about acceptable locations for the cluster.oracle.ini file.
For example:
ttCWAdmin -create -dsn myDSN
This command prompts for an encryption pass phrase that the user will not need again. The command also prompts for the user ID and password for an internal user with the ADMIN privilege if it does not find this information in the sys.odbc.ini file. This internal user will be used to create the active standby pair.
If the CacheConnect Clusterware attribute is enabled, the command prompts for the user password for the Oracle database. The Oracle password is used to set the autorefresh states for cache groups. See "CacheConnect" for more details on this attribute.
Start the active standby pair replication scheme by executing the ttCWAdmin -start command on any host. For example:
ttCWAdmin -start -dsn myDSN
This command starts the following processes for the active standby pair:
Monitor for application AppName
If the active standby pair includes cache groups, use the LOAD CACHE GROUP statement to load the cache group tables from the Oracle tables.
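For example, assuming a hypothetical read-only cache group named ttuser.readcache, the load might look like the following (a sketch; substitute your own cache group name and commit interval):

```sql
LOAD CACHE GROUP ttuser.readcache COMMIT EVERY 256 ROWS;
```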
If you want to use Oracle Clusterware to manage more than one active standby pair in a cluster, include additional configuration in the cluster.oracle.ini file. Oracle Clusterware can only manage more than one active standby pair in a cluster if all TimesTen databases are a part of the same TimesTen instance on a single host.
For example, the following cluster.oracle.ini file contains configuration information for two active standby pair replication schemes on the same host:
Note:
For details on configuration attributes in the cluster.oracle.ini file, see Chapter 8, "TimesTen Configuration Attributes for Oracle Clusterware".

[advancedSubscriberDSN]
MasterHosts=host1,host2,host3
SubscriberHosts=host4, host5
MasterVIP=192.168.1.1, 192.168.1.2
SubscriberVIP=192.168.1.3
VIPInterface=eth0
VIPNetMask=255.255.255.0

[advSub2DSN]
MasterHosts=host1,host2,host3
SubscriberHosts=host4, host5
MasterVIP=192.168.1.4, 192.168.1.5
SubscriberVIP=192.168.1.6
VIPInterface=eth0
VIPNetMask=255.255.255.0
Perform these tasks for additional replication schemes:
Create and populate the databases.
Create the virtual IP addresses. Use the ttCWAdmin -createVIPs command.
Create the active standby pair replication scheme. Use the ttCWAdmin -create command.
Start the active standby pair. Use the ttCWAdmin -start command.
You can create an active standby pair on the primary site with an Oracle database as a remote disaster recovery subscriber. See "Using a disaster recovery subscriber in an active standby pair". Oracle Clusterware manages the active standby pair but does not manage the disaster recovery subscriber. The user must perform a switchover if the primary site fails.
To use Oracle Clusterware to manage an active standby pair that has a remote disaster recovery subscriber, perform these tasks:
Use the RepDDL or RemoteSubscriberHosts Clusterware attribute to provide information about the remote disaster recovery subscriber. For example:
[advancedDRsubDSN]
MasterHosts=host1,host2,host3
SubscriberHosts=host4, host5
RemoteSubscriberHosts=host6
MasterVIP=192.168.1.1, 192.168.1.2
SubscriberVIP=192.168.1.3
VIPInterface=eth0
VIPNetMask=255.255.255.0
CacheConnect=y
Use ttCWAdmin -create to create the active standby pair replication scheme on the primary site. This does not create the disaster recovery subscriber.
Use ttCWAdmin -start to start the active standby pair replication scheme.
Load the cache groups that are replicated by the active standby pair.
Set up the disaster recovery subscriber using the procedure in "Rolling out a disaster recovery subscriber".
You can include a read-only TimesTen subscriber database that is not managed by Oracle Clusterware. Perform these tasks:
Include the RemoteSubscriberHosts Clusterware attribute in the cluster.oracle.ini file. For example:
[advancedROsubDSN]
MasterHosts=host1,host2,host3
RemoteSubscriberHosts=host6
MasterVIP=192.168.1.1, 192.168.1.2
SubscriberVIP=192.168.1.3
VIPInterface=eth0
VIPNetMask=255.255.255.0
Use ttCWAdmin -create to create the active standby pair replication scheme on the primary site.
Use ttCWAdmin -start to start the active standby pair replication scheme. This does not create the read-only subscriber.
Use the ttRepStateGet procedure to verify that the state of the standby database is STANDBY.
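For example, in a ttIsql session connected to the standby database, calling the built-in procedure should report the STANDBY state:

```sql
CALL ttRepStateGet();
```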
On the subscriber host, use the ttRepAdmin -duplicate option to duplicate the standby database to the read-only subscriber. See "Duplicating a database".
Start the replication agent on the subscriber host.
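Concretely, the last two steps might look like the following on the subscriber host. This is a sketch: the DSN basicDSN, the host name standbyhost and the credentials are assumptions, and ttAdmin -repStart is acceptable here only because this subscriber is not managed by Oracle Clusterware.

```
ttRepAdmin -duplicate -from basicDSN -host standbyhost -uid adminuser -pwd adminpwd "DSN=basicDSN"
ttAdmin -repStart basicDSN
```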
To add a read-only subscriber to an existing configuration, see "Adding a read-only subscriber not managed by Oracle Clusterware".
To rebuild a read-only subscriber, see "Rebuilding a read-only subscriber not managed by Oracle Clusterware".
You can use the TimesTen implementation of Oracle Clusterware to manage a cache grid when each grid member is an active standby pair. TimesTen does not support using Oracle Clusterware to manage standalone grid members.
This section includes:
See "Install TimesTen on each host" for installation requirements. In addition, each grid member must have a DSN that is unique within the cache grid.
Perform the tasks described in "Creating and initializing a cluster" for each grid member. Include the GridPort Clusterware attribute in the cluster.oracle.ini file as described in "Including the active standby pair in a cache grid". Ensure that the specified port numbers are not in use.
The ttCWAdmin -start command automatically attaches a grid member to the cache grid. The ttCWAdmin -stop command automatically detaches a grid member from the cache grid.
If both nodes of an active standby pair grid member fail, then the grid member fails. Oracle Clusterware evicts the failed grid member from the grid automatically. However, when a cache grid is configured, any further automatic recovery after a dual failure, whether temporary or permanent, is not possible. In this case, you can only recover manually. For details, see "Manual recovery of both nodes of an active standby pair grid member".
You can add, drop or change a cache group while the active database is attached to the grid.
Use the ttCWAdmin -beginAlterSchema command to make these schema changes. This command stops replication but allows the active database to remain attached to the grid. The ttCWAdmin -endAlterSchema command duplicates the changes to the standby database, registers the altered replication scheme and starts replication.
To add a table and include it in the active standby pair, see "Making DDL changes in an active standby pair". See the same section for information about dropping a replicated table.
Perform these steps on the active database of each active standby pair grid member.
Enable the addition of the cache group to the active standby pair.
ttCWAdmin -beginAlterSchema advancedGridDSN
Create the cache group.
If the cache group is a read-only cache group, alter the active standby pair to include the cache group.
ALTER ACTIVE STANDBY PAIR INCLUDE CACHE GROUP samplecachegroup;
Duplicate the change to the standby database.
ttCWAdmin -endAlterSchema advancedGridDSN
You can load the cache group at any time after you create the cache group.
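For example, assuming the samplecachegroup cache group created in the steps above, a load might look like the following; the COMMIT EVERY clause is illustrative:

```sql
LOAD CACHE GROUP samplecachegroup COMMIT EVERY 256 ROWS;
```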
Perform these steps to drop a cache group.
Unload the cache group in all members of the cache grid.
CALL ttOptSetFlag('GlobalProcessing', 1);
UNLOAD CACHE GROUP samplecachegroup;
On the active database of an active standby pair grid member, enable dropping the cache group.
ttCWAdmin -beginAlterSchema advancedGridDSN
If the cache group is a read-only cache group, alter the active standby pair to exclude the cache group.
ALTER ACTIVE STANDBY PAIR EXCLUDE CACHE GROUP samplecachegroup;
If the cache group is a read-only cache group, set the autorefresh state to PAUSED.
ALTER CACHE GROUP samplecachegroup SET AUTOREFRESH STATE PAUSED;
Drop the cache group.
DROP CACHE GROUP samplecachegroup;
If the cache group was a read-only cache group, run the TimesTen_install_dir/oraclescripts/cacheCleanUp.sql SQL*Plus script as the cache administration user on the Oracle database to drop the Oracle objects used to implement autorefresh operations.
Duplicate the change to the standby database.
ttCWAdmin -endAlterSchema advancedGridDSN
Repeat steps 2 through 7 on the active database of each active standby pair grid member.
To change an existing cache group, first drop the existing cache group as described in "Drop a cache group". Then add the cache group with the desired changes as described in "Add a cache group".
Oracle Clusterware can recover automatically from many kinds of failures. The following sections describe several failure scenarios and how Oracle Clusterware manages the failures.
How TimesTen performs recovery when Oracle Clusterware is configured
Performing a forced switchover after failure of the active database or host
The TimesTen database monitor (the ttCRSmaster process) performs recovery. It attempts to connect to the failed database without using the forceconnect option. If the connection fails with error 994 (Data store connection terminated), the database monitor tries to connect 10 times. If the connection fails with error 707 (Attempt to connect to a data store that has been manually unloaded from RAM), the database monitor changes the RAM policy and tries to connect again. If the database monitor still cannot connect, it reports a connection failure.
If the database monitor can connect to the database, then it performs these tasks:
It queries the CHECKSUM column in the TTREP.REPLICATIONS replication table.
If the value in the CHECKSUM column matches the checksum stored in the Oracle Cluster Registry, then the database monitor verifies the role of the database. If the role is 'ACTIVE', then recovery is complete.
If the role is not 'ACTIVE', then the database monitor queries the replication Commit Ticket Number (CTN) in the local database and the CTN in the active database to find out whether there are transactions that have not been replicated. If all transactions have been replicated, then recovery is complete.
If the checksum does not match or if some transactions have not been replicated, then the database monitor performs a duplicate operation from the remote database to re-create the local database.
If the database monitor fails to connect with the database because of error 8110 or 8111 (master catchup required or in progress), then it uses the forceconnect=1 option to connect and starts master catchup. Recovery is complete when master catchup has been completed. If master catchup fails with error 8112 (Operation not permitted), then the database monitor performs a duplicate operation from the remote database. For more information about master catchup, see "Automatic catch-up of a failed master database".
If the connection fails because of other errors, then the database monitor tries to perform a duplicate operation from the remote database.
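The decision logic described above can be summarized as a mapping from connection errors to recovery actions. The following Python sketch is a simplified model for illustration only: the function name and action strings are not TimesTen APIs, and only the error numbers come from the text.

```python
# Simplified model of the ttCRSmaster recovery decisions described above.
# The error numbers are from the text; the action strings are illustrative.
RETRY_LIMIT = 10

def recovery_action(error_code):
    """Map a connection error to the database monitor's next step."""
    if error_code == 994:           # Data store connection terminated
        return "retry connection (up to %d times)" % RETRY_LIMIT
    if error_code == 707:           # Data store manually unloaded from RAM
        return "change RAM policy and retry connection"
    if error_code in (8110, 8111):  # master catchup required / in progress
        return "connect with forceconnect=1 and start master catchup"
    if error_code == 8112:          # Operation not permitted during catchup
        return "duplicate from the remote database"
    # Any other connection error also leads to a duplicate operation.
    return "duplicate from the remote database"
```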
The duplicate operation verifies that:
The remote database is available.
The replication agent is running.
The remote database has the correct role. The role must be 'ACTIVE' when the duplicate operation is attempted for creation of a standby database. The role must be 'STANDBY' or 'ACTIVE' when the duplicate operation is attempted for creation of a read-only subscriber.
When the conditions for the duplicate operation are satisfied, the existing failed database is destroyed and the duplicate operation starts.
If there is a failure on the node where the active database resides, Oracle Clusterware automatically changes the state of the standby database to 'ACTIVE'. If application failover is configured, then the application begins updating the new active database.
Figure 7-2 shows that the state of the standby database has changed to 'ACTIVE' and that the application is updating the new active database.
Figure 7-2 Standby database becomes active
Oracle Clusterware tries to restart the database or host where the failure occurred. If it is successful, then that database becomes the standby database.
Figure 7-3 shows a cluster where the former active node becomes the standby node.
Figure 7-3 Standby database starts on former active host
If the failure of the former active node is permanent and advanced availability is configured, Oracle Clusterware starts a standby database on one of the extra nodes.
Figure 7-4 shows a cluster in which the standby database is started on one of the extra nodes.
Figure 7-4 Standby database starts on extra host
If you do not want to wait for these automatic actions to occur, see "Performing a forced switchover after failure of the active database or host".
If there is a failure on the standby node, Oracle Clusterware first tries to restart the database or host. If it cannot restart the standby database on the same host and advanced availability is configured, Oracle Clusterware starts the standby database on an extra node.
Figure 7-5 shows a cluster in which the standby database is started on one of the extra nodes.
If there is a failure on a subscriber node, Oracle Clusterware first tries to restart the database or host. If it cannot restart the database on the same host and advanced availability is configured, Oracle Clusterware starts the subscriber database on an extra node.
This section includes these topics:
Manual recovery of both nodes of an active standby pair grid member
Manual recovery to the same master nodes when databases are corrupt
Oracle Clusterware can achieve automatic recovery from temporary failure on both master nodes after the nodes come back up if:
RETURN TWOSAFE is not specified for the active standby pair.
AutoRecover is set to y.
RepBackupDir specifies a directory on shared storage.
RepBackupPeriod is set to a value greater than 0.
Oracle Clusterware can achieve automatic recovery from permanent failure on both master nodes if:
Advanced availability is configured (virtual IP addresses and at least four hosts).
The active standby pair does not replicate cache groups.
A cache grid is not configured.
RETURN TWOSAFE is not specified for the active standby pair.
AutoRecover is set to y.
RepBackupDir specifies a directory on shared storage.
RepBackupPeriod is set to a value greater than 0.
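As a sketch, a cluster.oracle.ini entry that satisfies the automatic-recovery conditions above might look like the following. The DSN name, hosts, addresses and interface are hypothetical:

```ini
[autoRecoverDSN]
MasterHosts=host1,host2,host3,host4
MasterVIP=192.168.1.1,192.168.1.2
VIPInterface=eth0
VIPNetMask=255.255.255.0
AutoRecover=y
RepBackupDir=/shared_drive/dsbackup
RepBackupPeriod=3600
```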
See "Recovering from permanent failure of both master nodes" for examples of cluster.oracle.ini files.
If both nodes of an active standby pair grid member fail, then the grid member fails. Oracle Clusterware evicts the failed grid member from the grid automatically. After the failed grid member is removed from the grid, you can continue to recover manually. However, when a cache grid is configured, any further automatic recovery after a dual failure, whether temporary or permanent, is not possible.
If the active standby pair grid member is in an asynchronous replication scheme, the grid member is recovered automatically and reattached to the grid. If the active standby pair grid member is in a replication scheme with RETURN TWOSAFE configured, perform these steps to recover the grid member and reattach it to the grid:
Stop the replication agent and the cache agent and disconnect the application from both databases. This step detaches the grid member from the grid.
ttCWAdmin -stop advancedGridDSN
Drop the active standby pair.
ttCWAdmin -drop advancedGridDSN
Create the active standby pair replication scheme.
ttCWAdmin -create advancedGridDSN
Start the active standby pair replication scheme. This step attaches the grid member to the grid.
ttCWAdmin -start advancedGridDSN
This section assumes that the failed master nodes will be recovered to new hosts on which TimesTen and Oracle Clusterware have been installed. These steps use the manrecoveryDSN database and cluster.oracle.ini file for examples.
To perform manual recovery in an advanced availability configuration, perform these tasks:
Ensure that the TimesTen cluster agent is running on the local host.
ttCWAdmin -init -hosts localhost
Restore the backup database. Ensure that there is not already a database on the host with the same DSN as the database you want to restore.
ttCWAdmin -restore -dsn manrecoveryDSN
If there are cache groups in the database, drop and re-create the cache groups.
If the new hosts are not already specified by MasterHosts and SubscriberHosts in the cluster.oracle.ini file, then modify the file to include the new hosts. This step is not necessary for manrecoveryDSN because extra hosts are already specified in its cluster.oracle.ini file.
Re-create the active standby pair replication scheme.
ttCWAdmin -create -dsn manrecoveryDSN
Start the active standby pair replication scheme.
ttCWAdmin -start -dsn manrecoveryDSN
This section assumes that the failed master nodes will be recovered to new hosts on which TimesTen and Oracle Clusterware have been installed. These steps use the basicDSN database and cluster.oracle.ini file for examples.
To perform manual recovery in a basic availability configuration, perform these steps:
Acquire new hosts for the databases in the active standby pair.
Ensure that the TimesTen cluster agent is running on the local host.
ttCWAdmin -init -hosts localhost
Restore the backup database. Ensure that there is not already a database on the host with the same DSN as the database you want to restore.
ttCWAdmin -restore -dsn basicDSN
If there are cache groups in the database, drop and re-create the cache groups.
Update the MasterHosts and SubscriberHosts entries in the cluster.oracle.ini file. This example uses the basicDSN database. The MasterHosts entry changes from host1 to host10. The SubscriberHosts entry changes from host2 to host20.
[basicDSN]
MasterHosts=host10,host20
Re-create the active standby pair replication scheme.
ttCWAdmin -create -dsn basicDSN
Start the active standby pair replication scheme.
ttCWAdmin -start -dsn basicDSN
Failures can occur on both master nodes so that the databases are corrupt. If you want to recover to the same master nodes, perform the following steps:
Ensure that the replication agent and the cache agent are stopped and that applications are disconnected from both databases. This example uses the basicDSN database.
ttCWAdmin -stop -dsn basicDSN
On the node where you want the new active database to reside, destroy the databases by using the ttDestroy utility.
ttDestroy basicDSN
Restore the backup database.
ttCWAdmin -restore -dsn basicDSN
If there are cache groups in the database, drop and re-create the cache groups.
Re-create the active standby pair replication scheme.
ttCWAdmin -create -dsn basicDSN
Start the active standby pair replication scheme.
ttCWAdmin -start -dsn basicDSN
You can configure an active standby pair to have a return service of RETURN TWOSAFE by using the ReturnServiceAttribute Clusterware attribute in the cluster.oracle.ini file. When RETURN TWOSAFE is configured, the database logs may be available on one or both nodes after both nodes fail.
This cluster.oracle.ini example includes backup configuration in case the database logs are not available:
[basicTwosafeDSN]
MasterHosts=host1,host2
ReturnServiceAttribute=RETURN TWOSAFE
RepBackupDir=/shared_drive/dsbackup
RepBackupPeriod=3600
Perform these recovery tasks:
Ensure that the replication agent and the cache agent are stopped and that applications are disconnected from both databases.
ttCWAdmin -stop -dsn basicTwosafeDSN
Drop the active standby pair.
ttCWAdmin -drop -dsn basicTwosafeDSN
Decide whether the former active or standby database is more up to date and re-create the active standby pair using the chosen database. The command prompts you to choose the host on which the active database will reside.
ttCWAdmin -create -dsn basicTwosafeDSN
If neither database is usable, restore the database from backups.
ttCWAdmin -restore -dsn basicTwosafeDSN
Start the active standby pair replication scheme.
ttCWAdmin -start -dsn basicTwosafeDSN
Approach a failure of more than two master hosts as a more extreme case of dual host failure. Use these guidelines:
Address the root cause of the failure if it is something like a power outage or network failure.
Identify or obtain at least two healthy hosts for the active and standby databases.
Update the MasterHosts and SubscriberHosts entries in the cluster.oracle.ini file.
See "Manual recovery for advanced availability" and "Manual recovery for basic availability" for guidelines on subsequent actions to take.
If you want to force a switchover to the standby database without waiting for automatic recovery to be performed by TimesTen and Oracle Clusterware, you can write an application that uses Oracle Clusterware commands. These are the tasks to perform:
Use the crs_stop command to stop the ttCRSmaster resource on the active database. This causes the role of the standby database to change to active.
Use the crs_start command to restart the ttCRSmaster resource on the former active database. This causes the database to recover and become the standby database.
See Oracle Clusterware Administration and Deployment Guide for more information about the crs_stop and crs_start commands.
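As an illustration, such an application could shell out to the Oracle Clusterware commands. The following Python sketch only builds and runs the two command lines; the resource name used in the example is hypothetical, so list the actual ttCRSmaster resource names in your cluster before attempting anything like this:

```python
# Sketch of a forced-switchover helper around crs_stop/crs_start.
# The ttCRSmaster resource name passed in is an assumption; determine the
# real resource names registered in your cluster first.
import subprocess

def switchover_commands(master_resource):
    """Build the commands for a forced switchover: stop the ttCRSmaster
    resource on the active database (the standby becomes active), then
    restart it (the former active recovers as the new standby)."""
    return [["crs_stop", master_resource],
            ["crs_start", master_resource]]

def force_switchover(master_resource, run=subprocess.check_call):
    """Run the two commands in order; `run` is injectable for testing."""
    for cmd in switchover_commands(master_resource):
        run(cmd)
```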
This section includes the following topics:
To include or exclude a table, see "Making DDL changes in an active standby pair".
To include or exclude a cache group, see "Making schema changes to active standby pairs in a grid".
To create PL/SQL procedures, sequences, materialized views and indexes on tables with data, perform these tasks:
Enable the addition of the object to the active standby pair.
ttCWAdmin -beginAlterSchema advancedDSN
Create the object.
If the object is a sequence and you want to include it in the active standby pair replication scheme, alter the active standby pair.
ALTER ACTIVE STANDBY PAIR INCLUDE SEQUENCE samplesequence;
Duplicate the change to the standby database.
ttCWAdmin -endAlterSchema advancedDSN
To add or drop a subscriber database or alter database attributes, perform the following tasks:
Stop the replication agents on the databases in the active standby pair. These commands use the advancedCacheDSN as an example.
ttCWAdmin -stop -dsn advancedCacheDSN
Drop the active standby pair.
ttCWAdmin -drop -dsn advancedCacheDSN
Modify the schema as desired.
Re-create the active standby pair replication scheme.
ttCWAdmin -create -dsn advancedCacheDSN
Start the active standby pair replication scheme.
ttCWAdmin -start -dsn advancedCacheDSN
See "Upgrading TimesTen when using Oracle Clusterware" in Oracle TimesTen In-Memory Database Installation Guide.
To add a read-only subscriber to an active standby pair replication scheme managed by Oracle Clusterware, perform these steps:
Stop the replication agents on all databases. This example uses the advancedSubscriberDSN, which already has a subscriber and is configured for advanced availability.
ttCWAdmin -stop -dsn advancedSubscriberDSN
Drop the active standby pair.
ttCWAdmin -drop -dsn advancedSubscriberDSN
Modify the cluster.oracle.ini file.
Add the subscriber to the SubscriberHosts attribute.
If the cluster is configured for advanced availability, add a virtual IP address to the SubscriberVIP attribute.
See "Configuring advanced availability" for an example using these attributes.
Create the active standby pair replication scheme.
ttCWAdmin -create -dsn advancedSubscriberDSN
Start the active standby pair replication scheme.
ttCWAdmin -start -dsn advancedSubscriberDSN
To remove a read-only subscriber from an active standby pair, perform these steps:
Stop the replication agents on all databases. This example uses the advancedSubscriberDSN, which has a subscriber and is configured for advanced availability.
ttCWAdmin -stop -dsn advancedSubscriberDSN
Drop the active standby pair.
ttCWAdmin -drop -dsn advancedSubscriberDSN
Modify the cluster.oracle.ini file.
Remove the subscriber from the SubscriberHosts attribute or remove the attribute altogether if there are no subscribers left in the active standby pair.
Remove a virtual IP from the SubscriberVIP attribute or remove the attribute altogether if there are no subscribers left in the active standby pair.
Create the active standby pair replication scheme.
ttCWAdmin -create -dsn advancedSubscriberDSN
Start the active standby pair replication scheme.
ttCWAdmin -start -dsn advancedSubscriberDSN
To add an active standby pair (with or without subscribers) to a cluster that is already managing an active standby pair, perform these tasks:
Create and populate a database on the host where you intend the active database to reside initially. See "Create and populate a TimesTen database on one host".
Modify the cluster.oracle.ini file. This example adds advSub2DSN to the cluster.oracle.ini file that already contains the configuration for advancedSubscriberDSN. The new active standby pair is on different hosts from the original active standby pair.
[advancedSubscriberDSN]
MasterHosts=host1,host2,host3
SubscriberHosts=host4,host5
MasterVIP=192.168.1.1,192.168.1.2
SubscriberVIP=192.168.1.3
VIPInterface=eth0
VIPNetMask=255.255.255.0

[advSub2DSN]
MasterHosts=host6,host7,host8
SubscriberHosts=host9,host10
MasterVIP=192.168.1.4,192.168.1.5
SubscriberVIP=192.168.1.6
VIPInterface=eth0
VIPNetMask=255.255.255.0
Create new virtual IP addresses. On UNIX, the user must be root to do this.
ttCWAdmin -createVIPs -dsn advSub2DSN
Create the new active standby pair replication scheme.
ttCWAdmin -create -dsn advSub2DSN
Start the new active standby pair replication scheme.
ttCWAdmin -start -dsn advSub2DSN
You can add a read-only subscriber that is not managed by Oracle Clusterware to an existing active standby pair replication scheme that is managed by Oracle Clusterware. The ttCWAdmin -beginAlterSchema command enables you to add the subscriber without dropping and re-creating the replication scheme. Oracle Clusterware does not manage the subscriber because it is not part of the configuration that was set up for Oracle Clusterware management.
Perform these steps:
Enter the ttCWAdmin -beginAlterSchema command to stop the replication agent on the active and standby databases.
Using ttIsql to connect to the active database, add the subscriber to the replication scheme by using an ALTER ACTIVE STANDBY PAIR statement.
ALTER ACTIVE STANDBY PAIR ADD SUBSCRIBER ROsubDSN ON host6;
Enter the ttCWAdmin -endAlterSchema command to duplicate the standby database, register the altered replication scheme and start replication.
Enter the ttIsql repschemes command to verify that the read-only subscriber has been added to the replication scheme.
Use the ttRepStateGet built-in procedure to verify that the state of the standby database is 'STANDBY'.
On the subscriber host, use ttRepAdmin -duplicate to duplicate the standby database to the read-only subscriber. See "Duplicating a database".
Start the replication agent on the subscriber host.
You can destroy and rebuild a read-only subscriber that is not managed by Oracle Clusterware. Perform these tasks:
Stop the replication agent on the subscriber host.
Use the ttDestroy utility to destroy the subscriber database.
On the subscriber host, use ttRepAdmin -duplicate to duplicate the standby database to the read-only subscriber. See "Duplicating a database".
To remove an active standby pair (with or without subscribers) from a cluster, perform these tasks:
Stop the replication agents on all databases in the active standby pair. This example uses advSub2DSN, which was added in "Adding an active standby pair to a cluster".
ttCWAdmin -stop -dsn advSub2DSN
Drop the active standby replication scheme.
ttCWAdmin -drop -dsn advSub2DSN
Drop the virtual IP addresses for the active standby pair.
ttCWAdmin -dropVIPs -dsn advSub2DSN
Modify the cluster.oracle.ini file (optional). Remove the entries for advSub2DSN.
If you want to destroy the databases, log onto each host that was included in the configuration for this active standby pair and use the ttDestroy utility.
ttDestroy advSub2DSN
For more information about ttDestroy, see "ttDestroy" in Oracle TimesTen In-Memory Database Reference.
Adding a host requires that the cluster be configured for advanced availability. The examples in this section use the advancedSubscriberDSN.
To add two spare master hosts to a cluster, enter a command similar to the following:
ttCWAdmin -addMasterHosts -hosts "host8,host9" -dsn advancedSubscriberDSN
To add a spare subscriber host to a cluster, enter a command similar to the following:
ttCWAdmin -addSubscriberHosts -hosts "subhost1" -dsn advancedSubscriberDSN
Removing a host from the cluster requires that the cluster be configured for advanced availability. MasterHosts must list more than two hosts if one of the master hosts is to be removed. SubscriberHosts must list at least one more host than the number of subscriber databases if one of the subscriber hosts is to be removed.
The examples in this section use the advancedSubscriberDSN.
To remove two spare master hosts from the cluster, enter a command similar to the following:
ttCWAdmin -delMasterHosts "host8,host9" -dsn advancedSubscriberDSN
To remove a spare subscriber host from the cluster, enter a command similar to the following:
ttCWAdmin -delSubscriberHosts "subhost1" -dsn advancedSubscriberDSN
After a failover, the active and standby databases are on different hosts than they were before the failover. You can use the -switch option of the ttCWAdmin utility to restore the original configuration.
For example:
ttCWAdmin -switch -dsn basicDSN
Ensure that there are no open transactions before using the -switch option. If there are open transactions, the command fails.
Figure 7-6 shows the hosts for an active standby pair. The active database resides on host A, and the standby database resides on host B.
Figure 7-6 Hosts for an active standby pair
The ttCWAdmin -switch command performs these tasks:
Deactivates the TimesTen cluster agent (ttCRSAgent) on host A (the active node)
Disables the database monitor (ttCRSmaster) on host A
Calls the ttRepSubscriberWait, ttRepStop and ttRepDeactivate built-in procedures on host A
Stops the active service (ttCRSActiveService) on host A and reports a failure event to the Oracle Clusterware CRSD process
Enables monitoring on host A and moves the active service to host B
Starts the replication agent on host A, stops the standby service (ttCRSsubservice) on host B and reports a failure event to the Oracle Clusterware CRSD process on host B
Starts the standby service (ttCRSsubservice) on host A
When a cluster is configured for advanced availability, you can use the -relocate option of the ttCWAdmin utility to move a database from the local host to the next available spare host specified in the MasterHosts attribute in the cluster.oracle.ini file. If the database on the local host has the active role, the -relocate option first reverses the roles, so that the relocated database becomes the standby and the former standby becomes the active.
The -relocate option is useful for relocating a database if you decide to take the host offline. Ensure that there are no open transactions before you use the command.
For example:
ttCWAdmin -relocate -dsn advancedDSN
If you decide to upgrade the operating system or hardware for a host or perform network maintenance, shut down Oracle Clusterware and disable automatic startup. Execute these Oracle Clusterware commands as root or OS administrator:
# crsctl stop crs
# crsctl disable crs
Shut down TimesTen. See "Shutting down a TimesTen application" in Oracle TimesTen In-Memory Database Operations Guide.
Perform the host maintenance. Then enable automatic startup and start Oracle Clusterware:
# crsctl enable crs
# crsctl start crs
See Oracle Clusterware Administration and Deployment Guide for more information about these commands.
When all of the hosts in the cluster need to be brought down, stop Oracle Clusterware on each host individually. Execute these Oracle Clusterware commands as root or OS administrator:
# crsctl stop crs
# crsctl disable crs
Shut down TimesTen. See "Shutting down a TimesTen application" in Oracle TimesTen In-Memory Database Operations Guide.
Perform the maintenance. Then enable automatic startup and start Oracle Clusterware:
# crsctl enable crs
# crsctl start crs
See Oracle Clusterware Administration and Deployment Guide for more information about these commands.
When you create the active standby pair replication scheme with the ttCWAdmin -create command, Oracle Clusterware prompts for the user name and password of the internal user. If there are cache groups in the active standby pair, Oracle Clusterware also stores the cache administration user name and password. To change the user name or password for the internal user or the cache administration user, you must drop and re-create the active standby pair replication scheme.
To change the user name or password of the internal user that created the active standby pair replication or to change the cache administration user name or password, perform these tasks:
Stop the replication agents on the databases in the active standby pair. These commands use the advancedCacheDSN as an example.
ttCWAdmin -stop -dsn advancedCacheDSN
Drop the active standby pair.
ttCWAdmin -drop -dsn advancedCacheDSN
Change the appropriate user name or password:
Change the internal user name or password by using the CREATE USER or ALTER USER statements. See "Creating or identifying users to the database" in Oracle TimesTen In-Memory Database Operations Guide.
Change the cache administration user name or password by using the ttCacheUidPwdSet built-in procedure. See "Setting the cache administration user name and password" in Oracle In-Memory Database Cache User's Guide.
Re-create the active standby pair replication scheme.
ttCWAdmin -create -dsn advancedCacheDSN
Start the active standby pair replication scheme.
ttCWAdmin -start -dsn advancedCacheDSN
This section includes:
The -status option of the ttCWAdmin utility reports information about all of the active standby pairs in an instance that are managed by the same instance administrator. If you specify the DSN, the utility reports information only for the active standby pair with that DSN.
Example 7-1 Status after creating an active standby pair
After you have created an active standby pair replication scheme but have not yet started replication, ttCWAdmin -status returns information like this. Note that these grid states will be displayed before replication is started regardless of whether there is a cache grid.
$ ttCWAdmin -status
TimesTen Cluster status report as of Thu Nov 11 13:54:35 2010

====================================================================
TimesTen daemon monitors:
Host:HOST1  Status: online
Host:HOST2  Status: online
====================================================================
====================================================================
TimesTen Cluster agents
Host:HOST1  Status: online
Host:HOST2  Status: online
====================================================================

Status of Cluster related to DSN MYDSN:
====================================================================
1. Status of Cluster monitoring components:
Monitor Process for Active datastore:NOT RUNNING
Monitor Process for Standby datastore:NOT RUNNING
Monitor Process for Master Datastore 1 on Host host1: NOT RUNNING
Monitor Process for Master Datastore 2 on Host host2: NOT RUNNING

2.Status of Datastores comprising the cluster
Master Datastore 1:
Host:host1
Status:AVAILABLE
State:ACTIVE
Grid:NO GRID
Master Datastore 2:
Host:host2
Status:UNAVAILABLE
State:UNKNOWN
Grid:UNKNOWN
====================================================================
The cluster containing the replicated DSN is offline
Example 7-2 Status when the active database is running
After you have started the replication scheme and the active database is running but the standby database is not yet running, ttCWAdmin -status returns information like this when a cache grid is not configured.
$ ttcwadmin -status
TimesTen Cluster status report as of Thu Nov 11 13:58:25 2010

====================================================================
TimesTen daemon monitors:
Host:HOST1  Status: online
Host:HOST2  Status: online
====================================================================
====================================================================
TimesTen Cluster agents
Host:HOST1  Status: online
Host:HOST2  Status: online
====================================================================

Status of Cluster related to DSN MYDSN:
====================================================================
1. Status of Cluster monitoring components:
Monitor Process for Active datastore:RUNNING on Host host1
Monitor Process for Standby datastore:RUNNING on Host host1
Monitor Process for Master Datastore 1 on Host host1: RUNNING
Monitor Process for Master Datastore 2 on Host host2: RUNNING

2.Status of Datastores comprising the cluster
Master Datastore 1:
Host:host1
Status:AVAILABLE
State:ACTIVE
Grid:NO GRID
Master Datastore 2:
Host:host2
Status:AVAILABLE
State:IDLE
Grid:NO GRID
====================================================================
The cluster containing the replicated DSN is online
If a cache grid is configured, then the last section appears as follows:
2.Status of Datastores comprising the cluster
Master Datastore 1:
Host:host1
Status:AVAILABLE
State:ACTIVE
Grid:AVAILABLE
Master Datastore 2:
Host:host2
Status:AVAILABLE
State:IDLE
Grid:NO GRID
Example 7-3 Status when the active and the standby databases are running
After you have started the replication scheme and the active database and the standby database are both running, ttCWAdmin -status returns information like this when a cache grid is not configured.
$ ttcwadmin -status
TimesTen Cluster status report as of Thu Nov 11 13:59:20 2010

====================================================================
TimesTen daemon monitors:
Host:HOST1  Status: online
Host:HOST2  Status: online
====================================================================
====================================================================
TimesTen Cluster agents
Host:HOST1  Status: online
Host:HOST2  Status: online
====================================================================

Status of Cluster related to DSN MYDSN:
====================================================================
1. Status of Cluster monitoring components:
Monitor Process for Active datastore:RUNNING on Host host1
Monitor Process for Standby datastore:RUNNING on Host host2
Monitor Process for Master Datastore 1 on Host host1: RUNNING
Monitor Process for Master Datastore 2 on Host host2: RUNNING

2.Status of Datastores comprising the cluster
Master Datastore 1:
Host:host1
Status:AVAILABLE
State:ACTIVE
Grid:NO GRID
Master Datastore 2:
Host:host2
Status:AVAILABLE
State:STANDBY
Grid:NO GRID
====================================================================
The cluster containing the replicated DSN is online
If a cache grid is configured, then the last section appears as follows:
2.Status of Datastores comprising the cluster
Master Datastore 1:
Host:host1
Status:AVAILABLE
State:ACTIVE
Grid:AVAILABLE
Master Datastore 2:
Host:host2
Status:AVAILABLE
State:STANDBY
Grid:AVAILABLE
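A monitoring script can extract the database states from this output. The following Python sketch is a minimal illustration, assuming the output format shown in the examples above; it is not a TimesTen-provided tool:

```python
# Minimal sketch: pull the State: values out of ttCWAdmin -status output.
# Assumes the line-per-field format shown in the examples above.
import re

def datastore_states(status_text):
    """Return the State: values in the order they appear in the report."""
    return re.findall(r"State:(\S+)", status_text)

sample = """Master Datastore 1:
Host:host1
Status:AVAILABLE
State:ACTIVE
Grid:NO GRID
Master Datastore 2:
Host:host2
Status:AVAILABLE
State:STANDBY
Grid:NO GRID"""

print(datastore_states(sample))  # ['ACTIVE', 'STANDBY']
```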
The monitor processes report events and errors to the ttcwerrors.log and ttcwmsg.log files. The files are located in the daemon_home/info directory. The default size of these files is the same as the default maximum size of the user log. The maximum number of log files is the same as the default number of files for the user log. When the maximum number of files has been written, additional errors and messages overwrite the files, beginning with the oldest file.
For the default values for number of log files and log file size, see "Modifying informational messages" in Oracle TimesTen In-Memory Database Operations Guide.