Oracle® TimesTen In-Memory Database Replication Guide, 11g Release 2 (11.2.2)

2 Getting Started

This chapter describes how to configure and start up sample replication schemes. It includes these topics:

  • Configuring an active standby pair with one subscriber

  • Configuring a replication scheme with one master and one subscriber

You must have the ADMIN privilege to complete the procedures in this chapter.

Configuring an active standby pair with one subscriber

This section describes how to create an active standby pair with one subscriber. The active database is master1. The standby database is master2. The subscriber database is subscriber1. To keep the example simple, all databases reside on the same computer, server1.

Figure 2-1 shows this configuration.

Figure 2-1 Active standby pair with one subscriber


The following steps describe how to configure and verify the active standby pair:

Step 1: Create the DSNs for the master and the subscriber databases

Create DSNs named master1, master2 and subscriber1 as described in "Managing TimesTen Databases" in Oracle TimesTen In-Memory Database Operations Guide.

On UNIX systems, use a text editor to create the following odbc.ini file:

[master1]
DRIVER=install_dir/lib/libtten.so
DataStore=/tmp/master1
DatabaseCharacterSet=AL32UTF8
ConnectionCharacterSet=AL32UTF8
[master2]
DRIVER=install_dir/lib/libtten.so
DataStore=/tmp/master2
DatabaseCharacterSet=AL32UTF8
ConnectionCharacterSet=AL32UTF8
[subscriber1]
DRIVER=install_dir/lib/libtten.so
DataStore=/tmp/subscriber1
DatabaseCharacterSet=AL32UTF8
ConnectionCharacterSet=AL32UTF8

On Windows, use the ODBC Administrator to set the same connection attributes. Use defaults for all other settings.

Step 2: Create a table in one of the master databases

Use the ttIsql utility to connect to the master1 database:

% ttIsql master1
 
Copyright (c) 1996-2011, Oracle.  All rights reserved.
Type ? or "help" for help, type "exit" to quit ttIsql.
 
connect "DSN=master1";
Connection successful: DSN=master1;UID=terry;DataStore=/tmp/master1;
DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;TypeMode=0;
(Default setting AutoCommit=1)
Command>

Create a table called tab with columns a and b:

Command> CREATE TABLE tab (a NUMBER NOT NULL,
       > b CHAR(18),
       > PRIMARY KEY (a));

Step 3: Define the active standby pair

Define the active standby pair on master1:

Command> CREATE ACTIVE STANDBY PAIR master1, master2
       > SUBSCRIBER subscriber1;

For more information about defining an active standby pair, see Chapter 3, "Defining an Active Standby Pair Replication Scheme".

Step 4: Start the replication agent on a master database

Start the replication agent on master1:

Command> CALL ttRepStart;

Step 5: Set the state of a master database to 'ACTIVE'

The state of a new database in an active standby pair is 'IDLE' until the active database has been set.

Use the ttRepStateSet built-in procedure to designate master1 as the active database:

Command> CALL ttRepStateSet('ACTIVE');

Verify the state of master1:

Command> CALL ttRepStateGet;
< ACTIVE, NO GRID >
1 row found.

Step 6: Create a user on the active database

Create a user terry with a password of terry, and grant terry the ADMIN privilege. A user with the ADMIN privilege is required by Access Control for the next step.

Command> CREATE USER terry IDENTIFIED BY terry;
User created.
Command> GRANT admin TO terry;

Step 7: Duplicate the active database to the standby database

Exit ttIsql and use the ttRepAdmin utility with the -duplicate option to duplicate the active database to the standby database. If you are using two different hosts, enter the ttRepAdmin command from the target host.

% ttRepAdmin -duplicate -from master1 -host server1 -uid terry -pwd terry "dsn=master2"

Step 8: Start the replication agent on the standby database

Use ttIsql to connect to master2 and start the replication agent:

% ttIsql master2
Copyright (c) 1996-2011, Oracle.  All rights reserved.
Type ? or "help" for help, type "exit" to quit ttIsql.
 
connect "DSN=master2";
Connection successful: DSN=master2;UID=terry;DataStore=/tmp/master2;
DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;TypeMode=0;
(Default setting AutoCommit=1)
Command> CALL ttRepStart;

Starting the replication agent for the standby database automatically sets its state to 'STANDBY'. Verify the state of master2:

Command> CALL ttRepStateGet;
< STANDBY, NO GRID >
1 row found.

Step 9: Duplicate the standby database to the subscriber

Use the ttRepAdmin utility to duplicate the standby database to the subscriber database:

% ttRepAdmin -duplicate -from master2 -host server1 -uid terry -pwd terry "dsn=subscriber1"

Step 10: Start the replication agent on the subscriber

Use ttIsql to connect to subscriber1 and start the replication agent. Verify the state of subscriber1.

% ttIsql subscriber1
 
Copyright (c) 1996-2011, Oracle.  All rights reserved.
Type ? or "help" for help, type "exit" to quit ttIsql.
 
connect "DSN=subscriber1";
Connection successful: DSN=subscriber1;UID=terry;DataStore=/tmp/subscriber1;
DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;TypeMode=0;
(Default setting AutoCommit=1)
Command> CALL ttRepStart;
Command> call ttRepStateGet;
< IDLE, NO GRID >
1 row found.

Step 11: Insert data into the table on the active database

Insert a row into the tab table on master1.

Command> INSERT INTO tab VALUES (1,'Hello');
1 row inserted.
Command> SELECT * FROM tab;
< 1, Hello              >
1 row found.

Verify that the insert is replicated to master2 and subscriber1. In ttIsql sessions connected to master2 and to subscriber1, query the table:

Command> SELECT * FROM tab;
< 1, Hello              >
1 row found.

Step 12: Drop the active standby pair and the table

Stop the replication agents on each database:

Command> CALL ttRepStop;

Drop the active standby pair on each database. You can then drop the table tab on any database in which you have dropped the active standby pair.

Command> DROP ACTIVE STANDBY PAIR;
Command> DROP TABLE tab;

Configuring a replication scheme with one master and one subscriber

This section describes how to configure a replication scheme that replicates the contents of a single table in a master database (masterds) to a table in a subscriber database (subscriberds). To keep the example simple, both databases reside on the same computer.

Figure 2-2 Simple replication scheme


The following steps describe how to configure and verify the replication scheme:

Step 1: Create the DSNs for the master and the subscriber

Create DSNs named masterds and subscriberds as described in "Managing TimesTen Databases" in Oracle TimesTen In-Memory Database Operations Guide.

On UNIX systems, use a text editor to create the following odbc.ini file, which defines both DSNs:

[masterds]
DataStore=/tmp/masterds
DatabaseCharacterSet=AL32UTF8
ConnectionCharacterSet=AL32UTF8
[subscriberds]
DataStore=/tmp/subscriberds
DatabaseCharacterSet=AL32UTF8
ConnectionCharacterSet=AL32UTF8

On Windows, use the ODBC Administrator to set the same connection attributes. Use defaults for all other settings.

Step 2: Create a table and replication scheme on the master database

Connect to masterds with the ttIsql utility:

% ttIsql masterds
Copyright (c) 1996-2011, Oracle.  All rights reserved.
Type ? or "help" for help, type "exit" to quit ttIsql.

connect "DSN=masterds";
Connection successful: DSN=masterds;UID=ttuser;
DataStore=/tmp/masterds;DatabaseCharacterSet=AL32UTF8;
ConnectionCharacterSet=AL32UTF8;TypeMode=0;
(Default setting AutoCommit=1)
Command>

Create a table named tab with columns named a, b and c:

Command> CREATE TABLE tab (a NUMBER NOT NULL,
       > b NUMBER,
       > c CHAR(8),
       > PRIMARY KEY (a));

Create a replication scheme called repscheme to replicate the tab table from masterds to subscriberds.

Command> CREATE REPLICATION repscheme
       > ELEMENT e TABLE tab
       > MASTER masterds
       > SUBSCRIBER subscriberds;

Step 3: Create a table and replication scheme on the subscriber database

Connect to subscriberds and create the same table and replication scheme, using the same procedure described in Step 2.
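
For example, repeating the statements from Step 2 against the subscriber:

% ttIsql subscriberds
Command> CREATE TABLE tab (a NUMBER NOT NULL,
       > b NUMBER,
       > c CHAR(8),
       > PRIMARY KEY (a));
Command> CREATE REPLICATION repscheme
       > ELEMENT e TABLE tab
       > MASTER masterds
       > SUBSCRIBER subscriberds;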

Step 4: Start the replication agent on each database

Start the replication agents on masterds and subscriberds:

Command> call ttRepStart;

Exit ttIsql. Use the ttStatus utility to verify that the replication agents are running for both databases:

% ttStatus
TimesTen status report as of Thu Aug 11 17:05:23 2011
 
Daemon pid 18373 port 4134 instance ttuser
TimesTen server pid 18381 started on port 4136
------------------------------------------------------------------------
Data store /tmp/masterds
There are 16 connections to the data store
Shared Memory KEY 0x0201ab43 ID 5242889
PL/SQL Memory KEY 0x0301ab43 ID 5275658 Address 0x10000000
Type            PID     Context     Connection Name              ConnID
Process         20564   0x081338c0  masterds                          1
Replication     20676   0x08996738  LOGFORCE                          5
Replication     20676   0x089b69a0  REPHOLD                           2
Replication     20676   0x08a11a58  FAILOVER                          3
Replication     20676   0x08a7cd70  REPLISTENER                       4
Replication     20676   0x08ad7e28  TRANSMITTER                       6
Subdaemon       18379   0x080a11f0  Manager                        2032
Subdaemon       18379   0x080fe258  Rollback                       2033
Subdaemon       18379   0x081cb818  Checkpoint                     2036
Subdaemon       18379   0x081e6940  Log Marker                     2035
Subdaemon       18379   0x08261e70  Deadlock Detector              2038
Subdaemon       18379   0xae100470  AsyncMV                        2040
Subdaemon       18379   0xae11b508  HistGC                         2041
Subdaemon       18379   0xae300470  Aging                          2039
Subdaemon       18379   0xae500470  Flusher                        2034
Subdaemon       18379   0xae55b738  Monitor                        2037
Replication policy  : Manual
Replication agent is running.
Cache Agent policy  : Manual
PL/SQL enabled.
------------------------------------------------------------------------
Data store /tmp/subscriberds
There are 16 connections to the data store
Shared Memory KEY 0x0201ab41 ID 5177351
PL/SQL Memory KEY 0x0301ab41 ID 5210120 Address 0x10000000
Type            PID     Context     Connection Name              ConnID
Process         20594   0x081338f8  subscriberds                      1
Replication     20691   0x0893c550  LOGFORCE                          5
Replication     20691   0x089b6978  REPHOLD                           2
Replication     20691   0x08a11a30  FAILOVER                          3
Replication     20691   0x08a6cae8  REPLISTENER                       4
Replication     20691   0x08ad7ba8  RECEIVER                          6
Subdaemon       18376   0x080b1450  Manager                        2032
Subdaemon       18376   0x0810e4a8  Rollback                       2033
Subdaemon       18376   0x081cb8b0  Flusher                        2034
Subdaemon       18376   0x08246de0  Monitor                        2035
Subdaemon       18376   0x082a20a8  Deadlock Detector              2036
Subdaemon       18376   0x082fd370  Checkpoint                     2037
Subdaemon       18376   0x08358638  Aging                          2038
Subdaemon       18376   0x083b3900  Log Marker                     2040
Subdaemon       18376   0x083ce998  AsyncMV                        2039
Subdaemon       18376   0x08469e90  HistGC                         2041
Replication policy  : Manual
Replication agent is running.
Cache Agent policy  : Manual
PL/SQL enabled.

Step 5: Insert data into the table on the master database

Use ttIsql to connect to the master database and insert some rows into the tab table:

% ttIsql masterds
Command> INSERT INTO tab VALUES (1, 22, 'Hello');
1 row inserted.
Command> INSERT INTO tab VALUES (3, 86, 'World');
1 row inserted.

Open a second command prompt window for the subscriber. Connect to the subscriber database and check the contents of the tab table:

% ttIsql subscriberds
Command> SELECT * FROM tab;
< 1, 22, Hello>
< 3, 86, World>
2 rows found.

Figure 2-3 shows that the rows that are inserted into masterds are replicated to subscriberds.

Figure 2-3 Replicating changes to the subscriber database


Step 6: Drop the replication scheme and table

After you have completed your replication tests, stop the replication agents on both masterds and subscriberds:

Command> CALL ttRepStop;

To remove the tab table and repscheme replication scheme from the master and subscriber databases, enter these statements on each database:

Command> DROP REPLICATION repscheme;
Command> DROP TABLE tab;

What's New

This preface summarizes the new features of Oracle TimesTen In-Memory Database release 11.2.2 that are documented in this guide. It provides links to more information.

New features in Release 11.2.2.4.0

You can now specify an alias or the IP address of the network interface when you want to use a specific local or remote network interface over which database duplication occurs. For details, see "Duplicating a database".

New features in Release 11.2.2.2.0

New features in Release 11.2.2.1.0

New features in Release 11.2.2.0.0


3 Defining an Active Standby Pair Replication Scheme

The following sections describe how to design a highly available system and define an active standby pair replication scheme.

To reduce the amount of bandwidth required for replication, see "Compressing replicated traffic".

Restrictions on active standby pairs

When you are planning an active standby pair, keep in mind the following restrictions:

Defining the DSNs for the databases

Before you define the active standby pair, define the DSNs for the active, standby and read-only subscriber databases. On UNIX, create an odbc.ini file. On Windows, use the ODBC Administrator to name the databases and set connection attributes. See "Step 1: Create the DSNs for the master and the subscriber databases" for an example.

Each database "name" specified in a replication scheme must match the prefix of the database file name without the path given for the DataStore data store attribute in the DSN definition for the database. To avoid confusion, use the same name for both the DataStore and Data Source Name data store attributes in each DSN definition. Values for DataStore are case-sensitive. If the database path is directory/subdirectory/foo.ds0, then foo is the database name that you should use.

Defining an active standby pair replication scheme

Use the CREATE ACTIVE STANDBY PAIR SQL statement to create an active standby pair replication scheme. The complete syntax for the CREATE ACTIVE STANDBY PAIR statement is provided in the Oracle TimesTen In-Memory Database SQL Reference.

You must have the ADMIN privilege to use the CREATE ACTIVE STANDBY PAIR statement and to perform other replication operations. Only the instance administrator can duplicate databases.

Table 3-1 shows the components of an active standby pair replication scheme and identifies the parameters associated with the topics in this chapter.

Table 3-1 Components of an active standby pair replication scheme

CREATE ACTIVE STANDBY PAIR FullDatabaseName, FullDatabaseName
See "Identifying the databases in the active standby pair"

[ReturnServiceAttribute]
See "Using a return service"

[SUBSCRIBER FullDatabaseName [,...]]
See "Identifying the databases in the active standby pair"

[STORE FullDatabaseName [StoreAttribute [...]]]
See "Setting STORE attributes"

[NetworkOperation [...]]
See "Configuring network operations"

[{INCLUDE|EXCLUDE}
  {TABLE [[Owner.]TableName[,...]]|
   CACHE GROUP [[Owner.]CacheGroupName[,...]]|
   SEQUENCE [[Owner.]SequenceName[,...]]}
  [,...]]
See "Including or excluding database objects from replication"



Identifying the databases in the active standby pair

Use the full database name described in "Defining the DSNs for the databases". The first database name designates the active database. The second database name designates the standby database. Read-only subscriber databases are indicated by the SUBSCRIBER clause.

You can also specify the hosts where the databases reside by using an IP address or a literal host name surrounded by double quotes.

The active database and the standby database should be on separate hosts to achieve a highly available system. Read-only subscribers can be either local or remote. A remote subscriber provides protection from site-specific disasters.

Provide a host ID as part of FullDatabaseName:

DatabaseName [ON Host]

Host can be either an IP address or a literal host name. Use the value returned by the hostname operating system command. It is good practice to surround a host name with double quotes. For example:

CREATE ACTIVE STANDBY PAIR 
  repdb1_1122 ON "host1", 
  repdb2_1122 ON "host2";

Table requirements and restrictions for active standby pairs

Tables that are replicated in an active standby pair must have one of the following:

  • A primary key

  • A unique index over non-nullable columns

Replication uses the primary key or unique index to identify each row in the replicated table. Replication always selects the first usable index that turns up in a sequential check of the table's index array. If there is no primary key, replication selects the first unique index without NULL columns it encounters. The selected index on the replicated table in the active database must also exist on its counterpart table in the standby database.


Note:

The keys on replicated tables are transmitted in each update record to the subscribers. Smaller keys are transmitted more efficiently.

Replicated tables have these data type restrictions:

You cannot replicate tables with compressed columns.

Using a return service

You can configure your replication scheme with a return service to ensure a higher level of confidence that your replicated data is consistent on the active and standby databases. See "Copying updates between databases". This section describes how to configure and manage the return receipt and return twosafe services. NO RETURN (asynchronous replication) is the default and provides the fastest performance.

The following sections describe these return service clauses:

  • RETURN RECEIPT

  • RETURN RECEIPT BY REQUEST

  • RETURN TWOSAFE

  • RETURN TWOSAFE BY REQUEST

  • NO RETURN

RETURN RECEIPT

TimesTen provides an optional return receipt service to loosely couple or synchronize your application with the replication mechanism.

You can specify the RETURN RECEIPT clause to enable the return receipt service for the standby database. With return receipt enabled, when your application commits a transaction for an element on the active database, the application remains blocked until the standby acknowledges receipt of the transaction update.

If the standby is unable to acknowledge receipt of the transaction within a configurable timeout period, your application receives a tt_ErrRepReturnFailed (8170) warning on its commit request. See "Setting the return service timeout period" for more information on the return service timeout period.

You can use the ttRepXactStatus procedure to check on the status of a return receipt transaction. See "Checking the status of return service transactions" for details.

You can also configure the replication agent to disable the return receipt service after a specific number of timeouts. See "Setting the return service timeout period" for details.

RETURN RECEIPT BY REQUEST

RETURN RECEIPT enables notification of receipt for all transactions. You can use the RETURN RECEIPT BY REQUEST clause to enable receipt notification only for specific transactions identified by your application.

If you specify RETURN RECEIPT BY REQUEST, you must use the ttRepSyncSet built-in procedure to enable the return receipt service for a transaction. The call to enable the return receipt service must be part of the transaction (autocommit must be off).

If the standby database is unable to acknowledge receipt of the transaction update within a configurable timeout period, the application receives a tt_ErrRepReturnFailed (8170) warning on its commit request. See "Setting the return service timeout period" for more information on the return service timeout period.

You can use ttRepSyncGet to check if a return service is enabled and obtain the timeout value. For example:

Command> CALL ttRepSyncGet();
< 01, 45, 1>
1 row found.
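
For example, a minimal sketch of requesting return receipt for a single transaction. The parameter values are illustrative; here the first parameter requests the return service for the current transaction, the second sets the return service timeout in seconds, and the third sets the local commit action:

Command> autocommit 0;
Command> CALL ttRepSyncSet(0x01, 45, 1);
Command> UPDATE tab SET b = 'World' WHERE a = 1;
Command> COMMIT;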

RETURN TWOSAFE

TimesTen provides a return twosafe service to fully synchronize your application with the replication mechanism. The return twosafe service ensures that each replicated transaction is committed on the standby database before it is committed on the active database. If replication is unable to verify the transaction has been committed on the standby, it returns notification of the error. Upon receiving an error, the application can either take a unique action or fall back on preconfigured actions, depending on the type of failure.

When replication is configured with RETURN TWOSAFE, you must disable autocommit mode.

A transaction that contains operations that are replicated with RETURN TWOSAFE cannot have a PassThrough setting greater than 0. If PassThrough is greater than 0, an error is returned and the transaction must be rolled back.

If the standby is unable to acknowledge commit of the transaction update within a configurable timeout period, the application receives a tt_ErrRepReturnFailed (8170) warning on its commit request. See "Setting the return service timeout period" for more information on the return service timeout period.
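
For example, a minimal sketch of a scheme created with return twosafe enabled (database and host names are illustrative):

CREATE ACTIVE STANDBY PAIR master1 ON "host1", master2 ON "host2"
  RETURN TWOSAFE;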

RETURN TWOSAFE BY REQUEST

RETURN TWOSAFE enables notification of commit on the standby database for all transactions. You can use the RETURN TWOSAFE BY REQUEST clause to enable notification of a commit on the standby only for specific transactions identified by your application.

A transaction that contains operations that are replicated with RETURN TWOSAFE cannot have a PassThrough setting greater than 0. If PassThrough is greater than 0, an error is returned and the transaction must be rolled back.

If you specify RETURN TWOSAFE BY REQUEST for a standby database, you must use the ttRepSyncSet built-in procedure to enable the return twosafe service for a transaction. The call to enable the return twosafe service must be part of the transaction (autocommit must be off).

If the standby is unable to acknowledge commit of the transaction within the timeout period, the application receives a tt_ErrRepReturnFailed (8170) warning on its commit request. The application can then choose how to handle the timeout, in the same manner as described for "RETURN TWOSAFE".

The ALTER TABLE statement cannot be used to alter a replicated table that is part of a RETURN TWOSAFE BY REQUEST transaction. If DDLCommitBehavior=0 (the default), the ALTER TABLE operation succeeds because a commit is performed before the ALTER TABLE operation, resulting in the ALTER TABLE operation executing in a new transaction which is not part of the RETURN TWOSAFE BY REQUEST transaction. If DDLCommitBehavior=1, the ALTER TABLE operation results in error 8051.

See "Setting the return service timeout period" for more information on setting the return service timeout period.

You can use ttRepSyncGet to check if a return service is enabled and obtain the timeout value. For example:

Command> CALL ttRepSyncGet();
< 01, 45, 1>
1 row found.
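
A sketch of requesting return twosafe for a single transaction follows the same pattern as the return receipt case, assuming a localAction value of 2 selects NO ACTION on timeout:

Command> autocommit 0;
Command> CALL ttRepSyncSet(0x01, 30, 2);
Command> UPDATE tab SET b = 'Goodbye' WHERE a = 1;
Command> COMMIT;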

NO RETURN

You can use the NO RETURN clause to explicitly disable either the return receipt or return twosafe service, depending on which one you have enabled. NO RETURN is the default condition.

Setting STORE attributes

Table 3-2 lists the optional STORE attributes for the CREATE ACTIVE STANDBY PAIR statement.

Table 3-2 STORE attribute descriptions


DISABLE RETURN {SUBSCRIBER|ALL} NumFailures

Set the return service policy so that return service blocking is disabled after the number of timeouts specified by NumFailures.

See "Establishing return service failure/recovery policies".

RETURN SERVICES {ON|OFF} WHEN [REPLICATION] STOPPED

Set return services on or off when replication is disabled.

See "Establishing return service failure/recovery policies".

RESUME RETURN Milliseconds

If DISABLE RETURN has disabled return service blocking, this attribute sets the policy for re-enabling the return service.

See "Establishing return service failure/recovery policies".

RETURN WAIT TIME Seconds

Specifies the number of seconds to wait for return service acknowledgement. A value of 0 means that there is no waiting. The default value is 10 seconds.

The application can override this timeout setting by using the returnWait parameter in the ttRepSyncSet built-in procedure.

See "Setting the return service timeout period".

DURABLE COMMIT {ON|OFF}

Overrides the DurableCommits general connection attribute setting. DURABLE COMMIT ON enables durable commits regardless of whether the replication agent is running or stopped. It also enables durable commits when the ttRepStateSave built-in procedure has marked the standby database as failed.

See "DURABLE COMMIT".

LOCAL COMMIT ACTION {NO ACTION|COMMIT}

Specifies the default action to be taken for a return service transaction in the event of a timeout. The options are:

NO ACTION - On timeout, the commit function returns to the application, leaving the transaction in the same state it was in when it entered the commit call, with the exception that the application is not able to update any replicated tables. The application can reissue the commit. This is the default.

COMMIT - On timeout, the commit function attempts to perform a commit to end the transaction locally. No more operations are possible on the same transaction.

This default setting can be overridden for specific transactions by using the localAction parameter in the ttRepSyncSet procedure.

See "LOCAL COMMIT ACTION".

COMPRESS TRAFFIC {ON|OFF}

Compresses replicated traffic to reduce the amount of network bandwidth used.

See "Compressing replicated traffic".

PORT PortNumber

Sets the port number used by a database to listen for updates from another database.

In an active standby pair, the standby database listens for updates from the active database. Read-only subscribers listen for updates from the standby database.

If no PORT attribute is specified, the TimesTen daemon dynamically selects the port. Static port assignment is recommended.

See "Port assignments".

TIMEOUT Seconds

Set the maximum number of seconds the replication agent waits for a response from the database.

FAILTHRESHOLD Value

Sets the log failure threshold.

See "Setting the log failure threshold".


The rest of this section includes these topics:

  • Setting the return service timeout period

  • Compressing replicated traffic

  • Port assignments

  • Setting the log failure threshold

Setting the return service timeout period

If a replication scheme is configured with one of the return services described in "Using a return service", a timeout occurs if the standby database is unable to send an acknowledgement back to the active database within the time period specified by RETURN WAIT TIME. If the standby database is unable to acknowledge the transaction update from the active database within the timeout period, the application receives an errRepReturnFailed warning on its commit request.

The default return service timeout period is 10 seconds. You can specify a different return service timeout period by:

  • Specifying the RETURN WAIT TIME in the CREATE ACTIVE STANDBY PAIR statement or ALTER ACTIVE STANDBY PAIR statement. A RETURN WAIT TIME of 0 indicates no waiting.

  • Specifying a different return service timeout period programmatically by calling the ttRepSyncSet procedure with a new value for the returnWait parameter. Once set, the timeout period applies to all subsequent return service transactions until you either reset the timeout period or terminate the application session.
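
For example, a sketch of setting a 30-second return service timeout on both master databases when the scheme is created (names and values are illustrative):

CREATE ACTIVE STANDBY PAIR master1, master2
  RETURN RECEIPT
  STORE master1 RETURN WAIT TIME 30
  STORE master2 RETURN WAIT TIME 30;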

A return service may time out because of a replication failure or because replication is so far behind that the return service transaction times out before it is replicated. However, unless there is a simultaneous replication failure, failure to obtain a return service confirmation from the standby does not necessarily mean the transaction has not been or will not be replicated.

You can respond to return service timeouts by:

  • Disabling return service blocking manually

  • Establishing return service failure and recovery policies

Disabling return service blocking manually

You may want to respond if replication is stopped or if return service timeout failures begin to adversely impact the performance of your replicated system. Your "tolerance threshold" for return service timeouts may depend on the historical frequency of timeouts and the performance/availability equation for your particular application, both of which should be factored into your response to the problem.

When using the return receipt service, you can manually respond by:

  • Using the ALTER ACTIVE STANDBY PAIR statement to disable return receipt blocking. See "Making other changes to an active standby pair".

  • Calling the ttDurableCommit built-in procedure to durably commit transactions on the active database that you can no longer verify as being received by the standby.

If you decide to disable return receipt blocking, your decision to re-enable it depends on your confidence level that the return receipt transaction is no longer likely to time out.

Establishing return service failure/recovery policies

An alternative to manually responding to return service timeout failures is to establish return service failure and recovery policies in the replication scheme. These policies direct the replication agents to detect changes to the replication state and to keep track of return service timeouts and then automatically respond in a predefined manner.

The following attributes in the CREATE ACTIVE STANDBY PAIR statement set the failure and recovery policies when using a RETURN RECEIPT or RETURN TWOSAFE service:

  • RETURN SERVICES {ON | OFF} WHEN [REPLICATION] STOPPED

  • DISABLE RETURN

  • RESUME RETURN

  • DURABLE COMMIT

  • LOCAL COMMIT ACTION

The policies set by these attributes are applicable until changed. The replication agent must be running to enforce these policies, with the exception of DURABLE COMMIT.
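
For example, a sketch that combines a failure policy with a recovery policy. The values are illustrative; DISABLE RETURN counts return service timeouts, and RESUME RETURN is specified in milliseconds:

CREATE ACTIVE STANDBY PAIR master1, master2
  RETURN RECEIPT
  STORE master1 DISABLE RETURN ALL 5 RESUME RETURN 30
  STORE master2 DISABLE RETURN ALL 5 RESUME RETURN 30;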

RETURN SERVICES {ON | OFF} WHEN [REPLICATION] STOPPED

The RETURN SERVICES {ON | OFF} WHEN [REPLICATION] STOPPED attribute determines whether a return receipt or return twosafe service continues to be enabled or is disabled when replication is stopped. "Stopped" in this context means that either the active replication agent is stopped (for example, by ttAdmin -repStop active) or the replication state of the standby database is set to stop or pause with respect to the active database (for example, by ttRepAdmin -state stop standby). A failed standby that has exceeded the specified FAILTHRESHOLD value is set to the failed state, but is eventually set to the stop state by the master replication agent.


Note:

A standby database may become unavailable for a period of time that exceeds the timeout period specified by RETURN WAIT TIME but still be considered by the master replication agent to be in the start state. Failure policies related to timeouts are set by the DISABLE RETURN attribute.

RETURN SERVICES OFF WHEN REPLICATION STOPPED disables the return service when replication is stopped and is the default when using the RETURN RECEIPT service. RETURN SERVICES ON WHEN REPLICATION STOPPED allows the return service to continue to be enabled when replication is stopped and is the default when using the RETURN TWOSAFE service.

DISABLE RETURN

When a DISABLE RETURN value is set, the database keeps track of the number of return receipt or return twosafe transactions that have exceeded the timeout period set by RETURN WAIT TIME. If the number of timeouts exceeds the maximum value set by DISABLE RETURN, the application reverts to a default replication cycle in which it no longer waits for the standby to acknowledge the replicated updates.

Specifying SUBSCRIBER is the same as specifying ALL. Both settings refer to the standby database.

The DISABLE RETURN failure policy is only enabled when the replication agent is running. If DISABLE RETURN is specified without RESUME RETURN, the return services remain off until the replication agent for the database has been restarted. You can cancel this failure policy by stopping the replication agent and specifying DISABLE RETURN with a zero value for NumFailures. The count of timeouts to trigger the failure policy is reset either when you restart the replication agent, when you set the DISABLE RETURN value to 0, or when return service blocking is re-enabled by RESUME RETURN.

RESUME RETURN

When we say return service blocking is "disabled," we mean that the applications on the master database no longer block execution while waiting to receive acknowledgements from the subscribers that they received or committed the replicated updates. Note, however, that the master still listens for an acknowledgement of each batch of replicated updates from the standby database.

You can establish a return service recovery policy by setting the RESUME RETURN attribute and specifying a resume latency value. When this attribute is set and return service blocking has been disabled for the standby database, the return receipt or return twosafe service is re-enabled when the commit-to-acknowledge time for a transaction falls below the value set by RESUME RETURN. The commit-to-acknowledge time is the latency between when the application issues a commit and when the master receives acknowledgement from the subscriber.

The RESUME RETURN policy is enabled only when the replication agent is running. You can cancel a return receipt resume policy by stopping the replication agent and then using ALTER ACTIVE STANDBY PAIR to set RESUME RETURN to zero.

DURABLE COMMIT

You can set the DURABLE COMMIT attribute to specify the durable commit policy for applications that have return service blocking disabled by DISABLE RETURN. When DURABLE COMMIT is set to ON, it overrides the DurableCommits general connection attribute on the master database and forces durable commits for those transactions that have had return service blocking disabled.

When DURABLE COMMIT is ON, durable commits are issued when return service blocking is disabled, regardless of whether the replication agent is running or stopped. They are also issued for an active standby pair in which the ttRepStateSave built-in procedure has marked the standby database as failed.

LOCAL COMMIT ACTION

When you are using the return twosafe service, you can specify how the master replication agent responds to timeouts by setting LOCAL COMMIT ACTION. You can override the setting for specific transactions by setting the localAction parameter in a call to the ttRepSyncSet procedure, as shown in the sketch after the following list.

The possible actions upon receiving a timeout during replication of a twosafe transaction are:

  • COMMIT - On timeout, the commit function attempts to perform a commit to end the transaction locally. No more operations are possible on the same transaction.

  • NO ACTION - On timeout, the commit function returns to the application, leaving the transaction in the same state it was in when it entered the commit call, with the exception that the application is not able to update any replicated tables. The application can reissue the commit. This is the default.
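
For example, a sketch of overriding the default action for the current transaction, assuming that a NULL parameter leaves the corresponding setting unchanged and that a localAction value of 1 selects COMMIT:

Command> CALL ttRepSyncSet(NULL, NULL, 1);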

Compressing replicated traffic

If you are replicating over a low-bandwidth network, or if you are replicating massive amounts of data, you can set the COMPRESS TRAFFIC attribute to reduce the amount of bandwidth required for replication. The COMPRESS TRAFFIC attribute compresses the replicated data from the database specified by the STORE parameter in the CREATE ACTIVE STANDBY PAIR or ALTER ACTIVE STANDBY PAIR statement. TimesTen does not compress traffic from other databases.

Though the compression algorithm is optimized for speed, enabling the COMPRESS TRAFFIC attribute affects replication throughput and latency.

Example 3-1 Compressing traffic from an active database

For example, to compress replicated traffic from active database dsn1 and leave the replicated traffic from standby database dsn2 uncompressed, the CREATE ACTIVE STANDBY PAIR statement looks like:

CREATE ACTIVE STANDBY PAIR dsn1 ON "host1", dsn2 ON "host2"
  SUBSCRIBER dsn3 ON "host3"
  STORE dsn1 ON "host1" COMPRESS TRAFFIC ON;

Example 3-2 Compressing traffic from both master databases

To compress the replicated traffic from the dsn1 and dsn2 databases, use:

CREATE ACTIVE STANDBY PAIR dsn1 ON "host1", dsn2 ON "host2"
  SUBSCRIBER dsn3 ON "host3"
STORE dsn1 ON "host1" COMPRESS TRAFFIC ON
STORE dsn2 ON "host2" COMPRESS TRAFFIC ON;

Port assignments

Static port assignment is recommended. If you do not assign a PORT attribute, the TimesTen daemon dynamically selects the port. When ports are assigned dynamically in this manner for the replication agents, the ports of the TimesTen daemons on the replication hosts must match as well.

You must assign static ports if you want to do online upgrades.

When statically assigning ports, it is important to specify the full host name, DSN and port in the STORE attribute of the CREATE ACTIVE STANDBY PAIR statement.

Example 3-3 Assigning static ports

CREATE ACTIVE STANDBY PAIR dsn1 ON "host1", dsn2 ON "host2"
  SUBSCRIBER dsn3 ON "host3"
STORE dsn1 ON "host1" PORT 16080
STORE dsn2 ON "host2" PORT 16083
STORE dsn3 ON "host3" PORT 16084;

Setting the log failure threshold

You can establish a threshold value that, when exceeded, sets an unavailable standby database or a read-only subscriber to the failed state before the available log space is exhausted.

Set the log threshold by specifying the STORE clause with a FAILTHRESHOLD value in the CREATE ACTIVE STANDBY PAIR or ALTER ACTIVE STANDBY PAIR statement. The default threshold value is 0, which means "no limit."
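
For example, a sketch that marks the standby database or a read-only subscriber as failed once the threshold is exceeded (the value and names are illustrative; the threshold is a count of accumulated transaction log files):

CREATE ACTIVE STANDBY PAIR master1, master2
  STORE master1 FAILTHRESHOLD 10
  STORE master2 FAILTHRESHOLD 10;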

If an active database sets the standby database or a read-only subscriber to the failed state, it drops all of the data for the failed database from its log and transmits a message to the failed database. If the active replication agent can communicate with the replication agent of the failed database, then the message is transmitted immediately. Otherwise, the message is transmitted when the connection is reestablished.

Any application that connects to the failed database receives a tt_ErrReplicationInvalid (8025) warning indicating that the database has been marked failed by a replication peer. Once the database has been informed of its failed status, its state on the active database is changed from failed to stop.

An application can use the ODBC SQLGetInfo function to check if the database the application is connected to has been set to the failed state.

For more information about database states, see Table 10-2, "Database states".

Configuring network operations

If a replication host has more than one network interface, you may wish to configure replication to use an interface other than the default interface. Although you must specify the host name returned by the operating system's hostname command when you specify the database name, you can configure replication to send or receive traffic over a different interface using the ROUTE clause.

The syntax of the ROUTE clause is:

ROUTE MASTER FullDatabaseName SUBSCRIBER FullDatabaseName
  {{MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost}
    PRIORITY Priority} [...]

In the context of the ROUTE clause, each master database is a subscriber of the other master database and each read-only subscriber is a subscriber of both master databases. This means that the CREATE ACTIVE STANDBY PAIR statement should include ROUTE clauses in multiples of two to specify a route in both directions. See Example 3-4.

Example 3-4 Configuring multiple network interfaces

If host host1 is configured with a second interface accessible by the host name host1fast, and host2 is configured with a second interface at IP address 192.168.1.100, you may specify that the secondary interfaces are used with the replication scheme.

CREATE ACTIVE STANDBY PAIR dsn1, dsn2
ROUTE MASTER dsn1 ON "host1" SUBSCRIBER dsn2 ON "host2"
    MASTERIP "host1fast" PRIORITY 1
    SUBSCRIBERIP "192.168.1.100" PRIORITY 1
ROUTE MASTER dsn2 ON "host2" SUBSCRIBER dsn1 ON "host1"
    MASTERIP "192.168.1.100" PRIORITY 1
    SUBSCRIBERIP "host1fast" PRIORITY 1;

Alternately, on a replication host with more than one interface, you may wish to configure replication to use one or more interfaces as backups, in case the primary interface fails or the connection from it to the receiving host is broken. You can use the ROUTE clause to specify two or more interfaces for each master or subscriber that are used by replication in order of priority.

If replication on the master host is unable to bind to the MASTERIP with the highest priority, it immediately tries subsequent MASTERIP addresses in order of priority. However, if the connection to the subscriber fails for any other reason, replication tries to connect using each of the SUBSCRIBERIP addresses in order of priority before it tries the MASTERIP address with the next highest priority.

Example 3-5 Configuring network priority

If host host1 is configured with two network interfaces at IP addresses 192.168.1.100 and 192.168.1.101, and host host2 is configured with two interfaces at IP addresses 192.168.1.200 and 192.168.1.201, you may specify that replication use IP addresses 192.168.1.100 and 192.168.1.200 to transmit and receive traffic first, and try IP addresses 192.168.1.101 or 192.168.1.201 if the first connection fails.

CREATE ACTIVE STANDBY PAIR dsn1, dsn2
ROUTE MASTER dsn1 ON "host1" SUBSCRIBER dsn2 ON "host2"
  MASTERIP "192.168.1.100" PRIORITY 1
  MASTERIP "192.168.1.101" PRIORITY 2
  SUBSCRIBERIP "192.168.1.200" PRIORITY 1
  SUBSCRIBERIP "192.168.1.201" PRIORITY 2;

Using automatic client failover for an active standby pair

Automatic client failover is for use in high availability scenarios with a TimesTen active standby pair replication configuration. If failure of the active TimesTen node results in the original standby node becoming the new active node, the automatic client failover feature automatically transfers the application connection to the new active node.

For full details on how to configure and use automatic client failover, see "Using automatic client failover" in the Oracle TimesTen In-Memory Database Operations Guide.


Note:

Automatic client failover is complementary to Oracle Clusterware in situations where Oracle Clusterware is used, but the two features are not dependent on each other. For information about Oracle Clusterware, you can refer to Chapter 7, "Using Oracle Clusterware to Manage Active Standby Pairs".

Including or excluding database objects from replication

An active standby pair replicates an entire database by default. Use the INCLUDE clause to replicate only the tables, cache groups and sequences that are listed in the INCLUDE clause. No other database objects will be replicated in an active standby pair that is defined with an INCLUDE clause. For example, this INCLUDE clause specifies three tables to be replicated by the active standby pair:

INCLUDE TABLE employees, departments, jobs

You can choose to exclude specific tables, cache groups or sequences from replication by using the EXCLUDE clause of the CREATE ACTIVE STANDBY PAIR statement. Use one EXCLUDE clause for each object type. For example:

EXCLUDE TABLE ttuser.tab1, ttuser.tab2
EXCLUDE CACHE GROUP ttuser.cg1, ttuser.cg2
EXCLUDE SEQUENCE ttuser.seq1, ttuser.seq2

Note:

Sequences with the CYCLE attribute cannot be replicated.

Materialized views in an active standby pair

When you replicate a database containing a materialized or nonmaterialized view, only the detail tables associated with the view are replicated. The view itself is not replicated. A matching view can be defined on the standby database, but it is not required. If detail tables are replicated, TimesTen automatically updates the corresponding view. However, TimesTen replication verifies only that the replicated detail tables have the same structure on both databases. It does not enforce that the materialized views are the same on each database.

Replicating sequences in an active standby pair

Sequences are replicated unless you exclude them from the active standby pair or unless they have the CYCLE attribute. See "Including or excluding database objects from replication". Replication of sequences is optimized by reserving a range of sequence numbers on the standby database each time a sequence is updated on the active database. Reserving a range of sequence numbers reduces the number of updates to the transaction log. The range of sequence numbers is called a cache. Sequence updates on the active database are replicated only when they are followed by or used in replicated transactions.

Consider a sequence named my.sequence with a MINVALUE of 1, an INCREMENT of 1 and the default Cache of 20. The very first time that you reference my.sequence.NEXTVAL, the current value of the sequence on the active database is changed to 2, and a new current value of 21 (20+1) is replicated to the standby database. The next 19 references to my.sequence.NEXTVAL on the active database result in no new current value being replicated, because the current value of 21 on the standby database is still ahead of the current value on the active database. On the twenty-first reference to my.sequence.NEXTVAL, a new current value of 41 (21+20) is transmitted to the standby database because the previous current value of 21 on the standby database is now behind the value of 22 on the active database.
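
A sketch of the sequence described above (my is the owner name in this example):

Command> CREATE SEQUENCE my.sequence MINVALUE 1 INCREMENT BY 1 CACHE 20;
Command> SELECT my.sequence.NEXTVAL FROM sys.dual;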

Operations on sequences such as SELECT my.seq.NEXTVAL FROM sys.dual, while incrementing the sequence value, are not replicated until they are followed by transactions on replicated tables. A side effect of this behavior is that these sequence updates are not purged from the log until followed by transactions on replicated tables. This causes ttRepSubscriberWait and ttRepAdmin -wait to fail when only these sequence updates are present at the end of the log.


1 Overview of TimesTen Replication

The following sections provide an overview of TimesTen replication:

  • What is replication?

  • Requirements for replication compatibility

  • Replication agents

  • Copying updates between databases

  • Types of replication schemes

What is replication?

Replication is the process of maintaining copies of data in multiple databases. The purpose of replication is to make data highly available to applications with minimal performance impact. TimesTen recommends the active standby pair configuration for highest availability. In an active standby pair replication scheme, the data is copied from the active database to the standby database before being copied to read-only subscribers.

In addition to providing recovery from failures, replication schemes can also distribute application workloads across multiple databases for maximum performance and facilitate online upgrades and maintenance.

Replication is the process of copying data from a master database to a subscriber database. Replication is controlled by replication agents for each database. The replication agent on the master database reads the records from the transaction log for the master database. It forwards changes to replicated elements to the replication agent on the subscriber database. The replication agent on the subscriber database then applies the updates to its database. If the subscriber replication agent is not running when the updates are forwarded by the master, the master retains the updates in its transaction log until they can be applied at the subscriber database.

An entity that is replicated between databases is called a replication element. TimesTen supports databases, cache groups, tables and sequences as replication elements. TimesTen also replicates XLA bookmarks. An active standby pair is the only supported replication scheme for databases with cache groups.

Requirements for replication compatibility

TimesTen replication is supported only between identical platforms and bit-levels. Although you can replicate between databases that reside on the same host, replication is generally used for copying updates into a database that resides on another host. This helps prevent data loss from host failure.

The databases must have DSNs with identical DatabaseCharacterSet and TypeMode database attributes.

Replication agents

Replication between databases is controlled by a replication agent. Each database is identified by:

  • A database name derived from the file system's path name for the database

  • A host name

The replication agent on the master database reads the records from the transaction log and forwards any detected changes to replicated elements to the replication agent on the subscriber database. The replication agent on the subscriber database then applies the updates to its database. If the subscriber agent is not running when the updates are forwarded by the master, the master retains the updates in the log until they can be transmitted.

The replication agents communicate through TCP/IP stream sockets. The replication agents obtain the TCP/IP address, host name, and other configuration information from the replication tables described in Oracle TimesTen In-Memory Database System Tables and Views Reference.

Copying updates between databases

Updates are copied between databases asynchronously by default. Asynchronous replication provides the best performance, but it does not provide the application with confirmation that the replicated updates have been committed on the subscriber databases. For applications that need higher levels of confidence that the replicated data is consistent between the master and subscriber databases, you can enable either return receipt or return twosafe service.

The return receipt service loosely synchronizes the application with the replication mechanism by blocking the application until replication confirms that the update has been received by the subscriber. The return twosafe service provides a fully synchronous option by blocking the application until replication confirms that the update has been both received and committed on the subscriber.

Return receipt replication has less performance impact than return twosafe at the expense of less synchronization. The operational details for asynchronous, return receipt, and return twosafe replication are discussed in these sections:

  • Default replication

  • Return receipt replication

  • Return twosafe replication

Default replication

When using default TimesTen replication, an application updates a master database and continues working without waiting for the updates to be received and applied by the subscribers. The master and subscriber databases have internal mechanisms to confirm that the updates have been successfully received and committed by the subscriber. These mechanisms ensure that updates are applied at a subscriber only once, but they are completely independent of the application.

Default TimesTen replication provides maximum performance, but the application is completely decoupled from the receipt process of the replicated elements on the subscriber.

Figure 1-1 Basic asynchronous replication cycle


The default TimesTen replication cycle is:

  1. The application commits a local transaction to the master database and is free to continue with other transactions.

  2. During the commit, the TimesTen daemon writes the transaction update records to the transaction log buffer.

  3. The replication agent on the master database directs the daemon to flush a batch of update records for the committed transactions from the log buffer to a transaction log file. This step ensures that, if the master fails and you need to recover the database from the checkpoint and transaction log files, the recovered master contains all the data it replicated to the subscriber.

  4. The master replication agent forwards the batch of transaction update records to the subscriber replication agent, which applies them to the subscriber database. Update records are flushed to disk and forwarded to the subscriber in batches of 256K or less, depending on the master database's transaction load. A batch is created when there is no more log data in the transaction log buffer or when the current batch is roughly 256K bytes.

  5. The subscriber replication agent sends an acknowledgement back to the master replication agent that the batch of update records was received. The acknowledgement includes information on which batch of records the subscriber last flushed to disk. The master replication agent is now free to purge from the transaction log the update records that have been received, applied, and flushed to disk by all subscribers and to forward another batch of update records, while the subscriber replication agent asynchronously continues on to Step 6.

  6. The replication agent at the subscriber updates the database and directs the daemon to write the transaction update records to the transaction log buffer.

  7. The replication agent at the subscriber database uses a separate thread to direct the daemon to flush the update records to a transaction log file.

Return receipt replication

The return receipt service provides a level of synchronization between the master and a subscriber database by blocking the application after commit on the master until the updates of the committed transaction have been received by the subscriber.

An application requesting return receipt updates the master database in the same manner as in the basic asynchronous case. However, when the application commits a transaction that updates a replicated element, the master database blocks the application until it receives confirmation that the updates for the completed transaction have been received by the subscriber.

Return receipt replication trades some performance in order to provide applications with the ability to ensure higher levels of data integrity and consistency between the master and subscriber databases. In the event of a master failure, the application has a high degree of confidence that a transaction committed at the master persists in the subscribing database.

Figure 1-2 Return receipt replication


Figure 1-2 shows that the return receipt replication cycle is the same as the basic asynchronous cycle shown in Figure 1-1, except that the master replication agent blocks the application thread after it commits a transaction (Step 1) and retains control of the thread until the subscriber acknowledges receipt of the update batch (Step 5). Upon receiving the return receipt acknowledgement from the subscriber, the master replication agent returns control of the thread to the application (Step 6), freeing it to continue executing transactions.

If the subscriber is unable to acknowledge receipt of the transaction within a configurable timeout period (default is 10 seconds), the master replication agent returns a warning stating that it did not receive acknowledgement of the update from the subscriber and returns control of the thread to the application. The application is then free to commit another transaction to the master, which continues replication to the subscriber as before. Return receipt transactions may time out for many reasons. The most likely causes are a network failure, a failed replication agent, or a master replication agent that is so far behind with respect to the transaction load that it cannot replicate the return receipt transaction before its timeout expires. For information on how to manage return-receipt timeouts, see "Managing return service timeout errors and replication state changes".

See "RETURN RECEIPT" for information on how to configure replication for return receipt.

Return twosafe replication

The return twosafe service provides fully synchronous replication between the master and subscriber. Unlike the previously described replication modes, where transactions are transmitted to the subscriber after being committed on the master, transactions in twosafe mode are first committed on the subscriber before they are committed on the master.

Figure 1-3 Return twosafe replication


The following describes the replication behavior between a master and subscriber configured for return twosafe replication:

  1. The application commits the transaction on the master database.

  2. The master replication agent writes the transaction records to the log and inserts a special precommit log record before the commit record. This precommit record acts as a place holder in the log until the master replication agent receives an acknowledgement that indicates the status of the commit on the subscriber.


    Note:

    Transmission of return twosafe transactions is nondurable, so the master replication agent does not flush the log records to disk before sending them to the subscriber, as it does by default when replication is configured for asynchronous or return receipt replication.

  3. The master replication agent transmits the batch of update records to the subscriber.

  4. The subscriber replication agent commits the transaction on the subscriber database.

  5. The subscriber replication agent returns an acknowledgement to the master replication agent indicating whether the commit on the subscriber succeeded or failed.

  6. If the commit on the subscriber was successful, the master replication agent commits the transaction on the master database.

  7. The master replication agent returns control to the application.

    If the subscriber is unable to acknowledge commit of the transaction within a configurable timeout period (default is 10 seconds), or if the acknowledgement from the subscriber indicates the commit was unsuccessful, the replication agent returns control to the application without committing the transaction on the master database. The application can then decide whether to unconditionally commit or to retry the commit. You can optionally configure your replication scheme to direct the master replication agent to commit all transactions that time out.

    See "RETURN TWOSAFE" for information on how to configure replication for return twosafe.

Types of replication schemes

You create a replication scheme to define a specific configuration of master and subscriber databases. This section describes the possible relationships you can define between master and subscriber databases when creating a replication scheme.

When defining a relationship between a master and subscriber, consider these replication schemes:

Active standby pair with read-only subscribers

Figure 1-4 shows an active standby pair replication scheme with an active database, a standby database, and four read-only subscriber databases.

Figure 1-4 Active standby pair

Description of Figure 1-4 follows
Description of "Figure 1-4 Active standby pair"

The active standby pair can replicate a whole database or selected elements such as tables and cache groups.

In an active standby pair, two databases are defined as masters. One is an active database, and the other is a standby database. The application updates the active database directly. Applications cannot update the standby database. It receives the updates from the active database and propagates the changes to as many as 127 read-only subscriber databases. This arrangement ensures that the standby database is always ahead of the subscriber databases and enables rapid failover to the standby database if the active database fails.

Only one of the master databases can function as an active database at a specific time. You can manage failover and recovery of an active standby pair with Oracle Clusterware. See Chapter 7, "Using Oracle Clusterware to Manage Active Standby Pairs". You can also manage failover and recovery manually. See Chapter 4, "Administering an Active Standby Pair Without Cache Groups".

If the standby database fails, the active database can replicate changes directly to the read-only subscribers. After the standby database has been recovered, it contacts the active database to receive any updates that have been sent to the subscribers while the standby was down or was recovering. When the active and the standby databases have been synchronized, then the standby resumes propagating changes to the subscribers.

For details about setting up an active standby pair, see "Setting up an active standby pair with no cache groups".

Full database replication or selective replication

Figure 1-5 illustrates a full replication scheme in which the entire master database is replicated to the subscriber.

Figure 1-5 Replicating the entire master database

Description of Figure 1-5 follows
Description of "Figure 1-5 Replicating the entire master database"

You can also configure your master and subscriber databases to selectively replicate some elements in a master database to subscribers. Figure 1-6 shows examples of selective replication. The left side of the figure shows a master database that replicates the same selected elements to multiple subscribers, while the right side shows a master that replicates different elements to each subscriber.

Figure 1-6 Replicating selected elements to multiple subscribers

Description of Figure 1-6 follows
Description of "Figure 1-6 Replicating selected elements to multiple subscribers"

Unidirectional or bidirectional replication

So far in this chapter, we have described unidirectional replication, where a master database sends updates to one or more subscriber databases. However, you can also configure databases to operate bidirectionally, where each database is both a master and a subscriber.

These are the basic ways to use bidirectional replication:

Split workload configuration

In a split workload configuration, each database serves as a master for some elements and a subscriber for others.

Consider the example shown in Figure 1-7, where the accounts for Chicago are processed on database A while the accounts for New York are processed on database B.

Figure 1-7 Split workload bidirectional replication

Description of Figure 1-7 follows
Description of "Figure 1-7 Split workload bidirectional replication"

Distributed workload

In a distributed workload replication scheme, user access is distributed across duplicate application/database combinations that replicate any update on any element to each other. In the event of a failure, the affected users can be quickly shifted to any application/database combination. The distributed workload configuration is shown in Figure 1-8. Users access duplicate applications on each database, which serves as both master and subscriber for the other database.

Figure 1-8 Distributed workload configuration

Description of Figure 1-8 follows
Description of "Figure 1-8 Distributed workload configuration"

When databases are replicated in a distributed workload configuration, it is possible for separate users to concurrently update the same rows and replicate the updates to one another. Your application should ensure that such conflicts cannot occur, that they are acceptable if they do occur, or that they can be resolved using the conflict resolution mechanism described in Chapter 14, "Resolving Replication Conflicts".


Note:

Do not use a distributed workload configuration with the return twosafe return service.
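
The following sketch outlines a distributed workload scheme for a single table, with each database acting as both master and subscriber. All names are assumptions, the table is assumed to have a BINARY(8) column named tstamp, and the optional CHECK CONFLICTS clause requests the timestamp-based conflict resolution described in Chapter 14:

CREATE REPLICATION distscheme
ELEMENT e1 TABLE accounts
  CHECK CONFLICTS BY TIMESTAMP
    COLUMN tstamp
    UPDATE BY SYSTEM
    ON EXCEPTION ROLLBACK WORK
  MASTER dbA ON "host1"
  SUBSCRIBER dbB ON "host2"
ELEMENT e2 TABLE accounts
  CHECK CONFLICTS BY TIMESTAMP
    COLUMN tstamp
    UPDATE BY SYSTEM
    ON EXCEPTION ROLLBACK WORK
  MASTER dbB ON "host2"
  SUBSCRIBER dbA ON "host1";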

Direct replication or propagation

You can define a subscriber to serve as a propagator that receives replicated updates from a master and passes them on to subscribers of its own.

Propagators are useful for optimizing replication performance over lower-bandwidth network connections, such as those between servers in an intranet. For example, consider the direct replication configuration illustrated in Figure 1-9, where a master directly replicates to four subscribers over an intranet connection. Replicating to each subscriber over a network connection in this manner is an inefficient use of network bandwidth.

Figure 1-9 Master replicating directly to multiple subscribers over a network

Description of Figure 1-9 follows
Description of "Figure 1-9 Master replicating directly to multiple subscribers over a network"

For optimum performance, consider the configuration shown in Figure 1-10, where the master replicates to a single propagator over the network connection. The propagator in turn forwards the updates to each subscriber on its local area network.

Figure 1-10 Master replicating to a single propagator over a network

Description of Figure 1-10 follows
Description of "Figure 1-10 Master replicating to a single propagator over a network"

Propagators are also useful for distributing replication loads in configurations that involve a master database that must replicate to a large number of subscribers. For example, it is more efficient for the master to replicate to three propagators, rather than directly to the 12 subscribers as shown in Figure 1-11.

Figure 1-11 Using propagators to replicate to many subscribers

Description of Figure 1-11 follows
Description of "Figure 1-11 Using propagators to replicate to many subscribers"


Note:

Each propagator is one-hop, which means that an update can be forwarded only once. You cannot have a hierarchy of propagators in which propagators forward updates to other propagators.
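
The following sketch shows how a propagator is declared. It mirrors the propagation scheme whose repschemes output appears in Chapter 12; the database and host names are assumptions carried over from that example:

CREATE REPLICATION propagator
ELEMENT a TABLE tab
  MASTER centralds ON "finance"
  SUBSCRIBER propds ON "nethandler"
ELEMENT b TABLE tab
  PROPAGATOR propds ON "nethandler"
  SUBSCRIBER backup1ds ON "backupsystem1",
             backup2ds ON "backupsystem2";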

Cache groups and replication

As described in Oracle In-Memory Database Cache User's Guide, a cache group is a group of tables stored in a central Oracle database that are cached in a local Oracle In-Memory Database Cache (IMDB Cache). This section describes how cache groups can be replicated between TimesTen databases. You can achieve high availability by using an active standby pair to replicate asynchronous writethrough cache groups or read-only cache groups.

This section describes the following ways to replicate cache groups:

See Chapter 5, "Administering an Active Standby Pair with Cache Groups" for details about configuring replication of cache groups.

Replicating an AWT cache group

An asynchronous writethrough (AWT) cache group can be configured as part of an active standby pair with optional read-only subscribers to ensure high availability and to distribute the application workload. Figure 1-12 shows this configuration.

Figure 1-12 AWT cache group replicated by an active standby pair

Description of Figure 1-12 follows
Description of "Figure 1-12 AWT cache group replicated by an active standby pair"

Application updates are made to the active database, the updates are replicated to the standby database, and then the updates are asynchronously written to the Oracle database by the standby. At the same time, the updates are also replicated from the standby to the read-only subscribers, which may be used to distribute the load from reading applications. The tables on the read-only subscribers are not in cache groups.

When there is no standby database, the active database accepts application updates, writes the updates asynchronously to the Oracle database, and replicates them to the read-only subscribers. This situation can occur when the standby has not yet been created, or when the active fails and the standby becomes the new active. TimesTen reconfigures the AWT cache group when the standby becomes the new active.

If a failure occurs on the node where the active database resides, the standby node becomes the new active node. TimesTen automatically reconfigures the AWT cache group so that it can be updated directly by the application and continue to propagate the updates to Oracle asynchronously.
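
As a sketch of how such a configuration is declared, an AWT cache group is created on what will become the active database, and the active standby pair then replicates it. The cache group, table, and database names below are assumptions, and the usual cache administration setup is omitted:

CREATE ASYNCHRONOUS WRITETHROUGH CACHE GROUP awtcg
FROM ttuser.customer
  (cust_id NUMBER NOT NULL PRIMARY KEY,
   name    VARCHAR2(100));

CREATE ACTIVE STANDBY PAIR awtdb1, awtdb2
  SUBSCRIBER rosub1;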

Replicating an AWT cache group with a subscriber propagating to an Oracle database

You can recover from a complete failure of a site by creating a special disaster recovery read-only subscriber on a remote site as part of the active standby pair replication configuration. Figure 1-13 shows this configuration.

Figure 1-13 Disaster recovery configuration with active standby pair

Description of Figure 1-13 follows
Description of "Figure 1-13 Disaster recovery configuration with active standby pair"

The standbyX database sends updates to cache group tables on the read-only subscriber. This special subscriber is located at a remote disaster recovery site and can propagate updates to a second Oracle database, also located at the disaster recovery site. You can set up more than one disaster recovery site with read-only subscribers and Oracle databases. See "Using a disaster recovery subscriber in an active standby pair".

Replicating a read-only cache group

A read-only cache group enforces caching behavior in which committed updates on the Oracle tables are automatically refreshed to the corresponding TimesTen cache tables. Figure 1-14 shows a read-only cache group replicated by an active standby pair.

Figure 1-14 Read-only cache group replicated by an active standby pair

Description of Figure 1-14 follows
Description of "Figure 1-14 Read-only cache group replicated by an active standby pair"

When the read-only cache group is replicated by an active standby pair, the cache group on the active database is autorefreshed from the Oracle database and replicates the updates to the standby, where AUTOREFRESH is also configured on the cache group but is in the PAUSED state. In the event of a failure of the active, TimesTen automatically reconfigures the standby to be autorefreshed when it takes over for the failed master database by setting the AUTOREFRESH STATE to ON. TimesTen also tracks whether updates that have been autorefreshed from the Oracle database to the active database have been replicated to the standby. This ensures that the autorefresh process picks up from the correct point after the active fails, and no autorefreshed updates are lost. This configuration may also include read-only subscriber databases. This allows the read workload to be distributed across many databases. The cache groups on the standby database replicate to regular (non-cache) tables on the subscribers.
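
For illustration, a read-only cache group with autorefresh might be declared as follows on the active database; the cache group name, refresh interval, and table definition are assumptions:

CREATE READONLY CACHE GROUP rocg
  AUTOREFRESH MODE INCREMENTAL INTERVAL 5 SECONDS
FROM ttuser.orders
  (order_id NUMBER NOT NULL PRIMARY KEY,
   status   VARCHAR2(10));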

Sequences and replication

In some replication configurations, you may need to keep sequences synchronized between two or more databases. For example, you may have a master database containing a replicated table that uses a sequence to fill in the primary key value for each row. The subscriber database is used as a hot backup for the master database. If updates to the sequence's current value are not replicated, insertions of new rows on the subscriber after the master has failed could conflict with rows that were originally inserted on the master.

TimesTen replication allows the incremented sequence value to be replicated to subscriber databases, ensuring that rows inserted on either database in this configuration do not conflict. See "Replicating sequences" for details on writing a replication scheme to replicate sequences.
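
A sequence is replicated by declaring a SEQUENCE element in the scheme, sketched below with assumed database, host, and sequence names:

CREATE REPLICATION seqscheme
ELEMENT e_seq SEQUENCE ttuser.rowid_seq
  MASTER masterds ON "server1"
  SUBSCRIBER subscriberds ON "server2";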

Foreign keys and replication

If a table with a foreign key configured with ON DELETE CASCADE is replicated, then the matching foreign key on the subscriber must also be configured with ON DELETE CASCADE. In addition, you must replicate any other table with a foreign key relationship to that table. This requirement prevents foreign key conflicts from occurring on subscriber tables when a cascade deletion occurs on the master database.

TimesTen replicates a cascade deletion as a single operation, rather than replicating to the subscriber each individual row deletion that occurs on the child table when a row is deleted on the parent. As a result, any row in the child table on the subscriber database that contains the foreign key value deleted from the parent table is also deleted, even if that row did not exist in the child table on the master database.
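
As a sketch, both of the tables below would have to appear in the replication scheme, and the ON DELETE CASCADE clause on the child table must be declared identically on the master and the subscriber; the table definitions are assumptions:

CREATE TABLE parent
  (id NUMBER NOT NULL PRIMARY KEY);

CREATE TABLE child
  (id        NUMBER NOT NULL PRIMARY KEY,
   parent_id NUMBER,
   FOREIGN KEY (parent_id) REFERENCES parent (id) ON DELETE CASCADE);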

Aging and replication

When a table or cache group is configured with least recently used (LRU) or time-based aging, the following rules apply to the interaction with replication:

Oracle TimesTen In-Memory Database Replication Guide, 11g Release 2 (11.2.2)

Oracle® TimesTen In-Memory Database

Replication Guide

11g Release 2 (11.2.2)

E21635-04

September 2012


Oracle TimesTen In-Memory Database Replication Guide, 11g Release 2 (11.2.2)

E21635-04

Copyright © 2012, Oracle and/or its affiliates. All rights reserved.

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.

The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable:

U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, duplication, disclosure, modification, and adaptation shall be subject to the restrictions and license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License (December 2007). Oracle America, Inc., 500 Oracle Parkway, Redwood City, CA 94065.

This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.

This software or hardware and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.

Monitoring Replication

12 Monitoring Replication

This chapter describes some of the TimesTen utilities and procedures you can use to monitor the replication status of your databases.

You can monitor replication from both the command line and within your programs. The ttStatus and ttRepAdmin utilities described in this chapter are useful for command line queries. To monitor replication from your programs, you can use the TimesTen built-in procedures described in Oracle TimesTen In-Memory Database Reference or create your own SQL SELECT statements to query the replication tables described in Oracle TimesTen In-Memory Database System Tables and Views Reference.


Note:

You can only access the TimesTen SYS and TTREP tables for queries. Do not try to alter the contents of these tables.

This chapter includes the following topics:

Show state of replication agents

You can display information about the current state of the replication agents:

You can also obtain the state of specific replicated databases as described in "Show subscriber database information" and "Show the configuration of replicated databases".

Using ttStatus to obtain replication agent status

Use the ttStatus utility to confirm that the replication agent is started for the master database.

Example 12-1 Using ttStatus to obtain replication agent status

> ttStatus
TimesTen status report as of Thu Aug 11 17:05:23 2011
Daemon pid 18373 port 4134 instance ttuser
TimesTen server pid 18381 started on port 4136
------------------------------------------------------------------------
Data store /tmp/masterds
There are 16 connections to the data store
Shared Memory KEY 0x0201ab43 ID 5242889
PL/SQL Memory KEY 0x0301ab43 ID 5275658 Address 0x10000000
Type            PID     Context     Connection Name              ConnID
Process         20564   0x081338c0  masterds                          1
Replication     20676   0x08996738  LOGFORCE                          5
Replication     20676   0x089b69a0  REPHOLD                           2
Replication     20676   0x08a11a58  FAILOVER                          3
Replication     20676   0x08a7cd70  REPLISTENER                       4
Replication     20676   0x08ad7e28  TRANSMITTER                       6
Subdaemon       18379   0x080a11f0  Manager                        2032
Subdaemon       18379   0x080fe258  Rollback                       2033
Subdaemon       18379   0x081cb818  Checkpoint                     2036
Subdaemon       18379   0x081e6940  Log Marker                     2035
Subdaemon       18379   0x08261e70  Deadlock Detector              2038
Subdaemon       18379   0xae100470  AsyncMV                        2040
Subdaemon       18379   0xae11b508  HistGC                         2041
Subdaemon       18379   0xae300470  Aging                          2039
Subdaemon       18379   0xae500470  Flusher                        2034
Subdaemon       18379   0xae55b738  Monitor                        2037
Replication policy  : Manual
Replication agent is running.
Cache Agent policy  : Manual
PL/SQL enabled.

Using ttAdmin -query to confirm policy settings

Use the ttAdmin utility with the -query option to confirm the policy settings for a database, including the replication restart policy described in "Starting and stopping the replication agents".

Example 12-2 Using ttAdmin to confirm policy settings

> ttAdmin -query masterDSN
RAM Residence Policy : inUse
Manually Loaded In Ram : False
Replication Agent Policy : manual
Replication Manually Started : True
Cache Agent Policy : manual
Cache Agent Manually Started : False

Using ttDataStoreStatus to obtain replication agent status

To obtain the status of the replication agents from a program, use the ttDataStoreStatus built-in procedure.

Example 12-3 Calling ttDataStoreStatus

Call ttDataStoreStatus to obtain the status of the replication agents for the masterds databases:

> ttIsql masterds
Command> CALL ttDataStoreStatus('/tmp/masterds');
< /tmp/masterds, 964, 00000000005D8150, subdaemon, Global\DBI3b3234c0.0.SHM.35 >
< /tmp/masterds, 1712, 00000000016A72E0, replication, Global\DBI3b3234c0.0.SHM.35 >
< /tmp/masterds, 1712, 0000000001683DE8, replication, Global\DBI3b3234c0.0.SHM.35 >
< /tmp/masterds, 1620, 0000000000608128, application, Global\DBI3b3234c0.0.SHM.35 >
4 rows found.

The output from ttDataStoreStatus is similar to that shown for the ttStatus utility in "Using ttStatus to obtain replication agent status".

Show master database information

You can display information for a master database:

Using ttRepAdmin to display information about the master database

Use the ttRepAdmin utility with the -self -list options to display information about the master database:

ttRepAdmin -dsn masterDSN -self -list

Example 12-4 Using ttRepAdmin to display information about a master database

This example shows the output for the master database described in "Multiple subscriber schemes with return services and a log failure threshold".

> ttRepAdmin -dsn masterds -self -list
Self host "server1", port auto, name "masterds", LSN 0/2114272

The following table describes the fields.

host
  The name of the host for the database.

port
  TCP/IP port used by a replication agent of another database to receive updates from this database. A value of 0 (zero) indicates replication has automatically assigned the port.

name
  Name of the database.

Log file/Replication hold LSN
  Indicates the oldest location in the transaction log that is held for possible transmission to the subscriber. A value of -1/-1 indicates replication is in the stop state with respect to all subscribers.

Querying replication tables to obtain information about a master database

Use the following SELECT statement to query the TTREP.TTSTORES and TTREP.REPSTORES replication tables to obtain information about a master database:

SELECT t.host_name, t.rep_port_number, t.tt_store_name
  FROM ttrep.ttstores t, ttrep.repstores s
    WHERE t.is_local_store = 0x01
      AND t.tt_store_id = s.tt_store_id;

This is the output of the SELECT statement for the master database described in "Multiple subscriber schemes with return services and a log failure threshold". The fields are the host name, the replication port number, and the database name.

< server1, 0, masterds>

Show subscriber database information

Replication uses the TimesTen transaction log to retain information that must be transmitted to subscriber sites. When communication to subscriber databases is interrupted or the subscriber sites are down, the log data accumulates. Part of the output from the queries described in this section allows you to see how much log data has accumulated on behalf of each subscriber database and the amount of time since the last successful communication with each subscriber database.

You can display information for subscriber databases:

Using ttRepAdmin to display subscriber status

To display information about subscribers, use the ttRepAdmin utility with the -receiver -list options:

ttRepAdmin -dsn masterDSN -receiver -list

Example 12-5 Using ttRepAdmin to display information about subscribers

This example shows the output for the subscribers described in "Multiple subscriber schemes with return services and a log failure threshold".

> ttRepAdmin -dsn masterds -receiver -list
Peer name        Host name                Port   State   Proto
---------------- ------------------------ ------ ------- -----
subscriber1ds    server2                  Auto   Start      10

Last Msg Sent Last Msg Recv Latency TPS     RecordsPS Logs
------------- ------------- ------- ------- --------- ----
0:01:12       -             19.41         5        52    2

Peer name        Host name                Port   State   Proto
---------------- ------------------------ ------ ------- -----
subscriber2ds    server3                  Auto   Start      10

Last Msg Sent Last Msg Recv Latency TPS     RecordsPS Logs
------------- ------------- ------- ------- --------- ----
0:01:04       -             20.94         4        48    2

The first line of the display contains the subscriber definition. The following row of the display contains latency and rate information, as well as the number of transaction log files being retained on behalf of this subscriber. The latency for subscriber1ds is 19.41 seconds, and it is 2 logs behind the master. This latency is high; if it remains high and the number of retained log files continues to increase, replication is not keeping up and there is a problem to investigate.

If you have more than one scheme specified in the TTREP.REPLICATIONS table, you must use the -scheme option to specify which scheme you wish to list. Otherwise you receive the following error:

Must specify -scheme to identify which replication scheme to use
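
For example, assuming a replication scheme named repscheme:

ttRepAdmin -dsn masterds -receiver -list -scheme repscheme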

Using ttReplicationStatus to display subscriber status

You can obtain more detailed status for a specific replicated database by using the ttReplicationStatus built-in procedure.
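
For example, from ttIsql on the master you might call the built-in procedure with the name of the subscriber (the database names here are assumed; the result columns are described in Oracle TimesTen In-Memory Database Reference):

> ttIsql masterds
Command> CALL ttReplicationStatus('subscriber1ds');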

Querying replication tables to display information about subscribers

To obtain information about a master's subscribers from a program, use the following SELECT statement to query the TTREP.REPPEERS, TTREP.TTSTORES, and SYS.MONITOR tables:

SELECT t1.tt_store_name, t1.host_name, t1.rep_port_number,
p.state, p.protocol, p.timesend, p.timerecv, p.latency,
p.tps, p.recspersec, t3.last_log_file - p.sendlsnhigh + 1
  FROM ttrep.reppeers p, ttrep.ttstores t1, ttrep.ttstores t2, sys.monitor t3
  WHERE p.tt_store_id = t1.tt_store_id
    AND t2.is_local_store = 0X01
    AND p.subscriber_id = t2.tt_store_id
    AND p.replication_name = 'repscheme'
    AND p.replication_owner = 'repl'
    AND (p.state = 0 OR p.state = 1);

The following is sample output from the SELECT statement above:

< subscriber1ds, server2, 0, 0, 7, 1003941635, 0, -1.00000000000000, -1, -1, 1 >
< subscriber2ds, server3, 0, 0, 7, 1003941635, 0, -1.00000000000000, -1, -1, 1 >

The output from either the ttRepAdmin utility or the SELECT statement contains the following fields:

Peer name
  Name of the subscriber database.

Host name
  Name of the machine that hosts the subscriber.

Port
  TCP/IP port used by the subscriber agent to receive updates from the master. A value of 0 indicates replication has automatically assigned the port.

State
  Current replication state of the subscriber with respect to its master database (see "Show subscriber database information" for information).

Protocol
  Internal protocol used by replication to communicate between this master and its subscribers. You can ignore this value.

Last message sent
  Time (in seconds) since the master sent the last message to the subscriber. This includes the "heartbeat" messages sent between the databases.

Last message received
  Time (in seconds) since this subscriber received the last message from the master.

Latency
  The average latency time (in seconds) between when the master sends a message and when it receives the final acknowledgement from the subscriber. (See note below.)

Transactions per second
  The average number of transactions per second that are committed on the master and processed by the subscriber. (See note below.)

Records per second
  The average number of transmitted records per second. (See note below.)

Logs
  Number of transaction log files the master database is retaining for a subscriber.


Note:

Latency, TPS, and RecordsPS report averages detected while replicating a batch of records. These values can be unstable if the workload is not relatively constant. A value of -1 indicates the master's replication agent has not yet established communication with its subscriber replication agents or sent data to them.

Show the configuration of replicated databases

You can display the configuration of your replicated databases:

Using the ttIsql repschemes command to display configuration information

To display the configuration of your replicated databases from the ttIsql prompt, use the repschemes command:

Command> repschemes;

Example 12-6 shows the configuration output from the replication scheme shown in "Propagation scheme".

Example 12-6 Output from ttIsql repschemes command

Replication Scheme PROPAGATOR:

  Element: A
    Type: Table TAB
    Master Store: CENTRALDS on FINANCE Transmit Durable
    Subscriber Store: PROPDS on NETHANDLER

  Element: B
    Type: Table TAB
    Propagator Store: PROPDS on NETHANDLER Transmit Durable
    Subscriber Store: BACKUP1DS on BACKUPSYSTEM1
    Subscriber Store: BACKUP2DS on BACKUPSYSTEM2

Store: BACKUP1DS on BACKUPSYSTEM1
  Port: (auto)
  Log Fail Threshold: (none)
  Retry Timeout: 120 seconds
  Compress Traffic: Disabled

Store: BACKUP2DS on BACKUPSYSTEM2
  Port: (auto)
  Log Fail Threshold: (none)
  Retry Timeout: 120 seconds
  Compress Traffic: Disabled

Store: CENTRALDS on FINANCE
  Port: (auto)
  Log Fail Threshold: (none)
  Retry Timeout: 120 seconds
  Compress Traffic: Disabled

Store: PROPDS on NETHANDLER
  Port: (auto)
  Log Fail Threshold: (none)
  Retry Timeout: 120 seconds
  Compress Traffic: Disabled

Using ttRepAdmin to display configuration information

To display the configuration of your replicated databases, use the ttRepAdmin utility with the -showconfig option:

ttRepAdmin -showconfig -dsn masterDSN

Example 12-7 shows the configuration output from the propagated databases configured by the replication scheme shown in "Propagation scheme". The propds propagator shows a latency of 19.41 seconds and is 2 logs behind the master.

Example 12-7 ttRepAdmin output

> ttRepAdmin -showconfig -dsn centralds
Self host "finance", port auto, name "centralds", LSN 0/155656, timeout 120, 
threshold 0

List of subscribers
-----------------
Peer name        Host name                Port   State   Proto
---------------- ------------------------ ------ ------- -----
propds           nethandler               Auto   Start      10

Last Msg Sent Last Msg Recv Latency TPS     RecordsPS Logs
------------- ------------- ------- ------- --------- ----
0:01:12       -             19.41         5        52    2

List of tables and subscriptions
--------------------------------
Table details
-------------
Table : tab          Timestamp updates : -

Master Name                 Subscriber Name
-----------                 -------------
centralds                   propds

Table details
-------------
Table : tab          Timestamp updates : -

Master Name                 Subscriber name
-----------                 -------------
propds                      backup1ds
propds                      backup2ds

See "Querying replication tables to display information about subscribers" for the meaning of the "List of subscribers" fields. The "Table details" fields list the table and the names of its master (Sender) and subscriber databases.

Querying replication tables to display configuration information

Use the following SELECT statements to query the TTREP.TTSTORES, TTREP.REPSTORES, TTREP.REPPEERS, SYS.MONITOR, TTREP.REPELEMENTS, and TTREP.REPSUBSCRIPTIONS tables for configuration information:

SELECT t.host_name, t.rep_port_number, t.tt_store_name, s.peer_timeout, 
s.fail_threshold
  FROM ttrep.ttstores t, ttrep.repstores s
    WHERE t.is_local_store = 0X01
      AND t.tt_store_id = s.tt_store_id;

SELECT t1.tt_store_name, t1.host_name, t1.rep_port_number,
       p.state, p.protocol, p.timesend, p.timerecv, p.latency,
       p.tps, p.recspersec, t3.last_log_file - p.sendlsnhigh + 1
  FROM ttrep.reppeers p, ttrep.ttstores t1, ttrep.ttstores t2, sys.monitor t3
    WHERE p.tt_store_id = t2.tt_store_id
      AND t2.is_local_store = 0X01
      AND p.subscriber_id = t1.tt_store_id
      AND (p.state = 0 OR p.state = 1);

SELECT ds_obj_owner, ds_obj_name, t1.tt_store_name, t2.tt_store_name
  FROM ttrep.repelements e, ttrep.repsubscriptions s, 
      ttrep.ttstores t1, ttrep.ttstores t2
    WHERE s.element_name = e.element_name
      AND e.master_id = t1.tt_store_id
      AND s.subscriber_id = t2.tt_store_id
    ORDER BY ds_obj_owner, ds_obj_name;

Example 12-8 Output from queries

The output from the queries refers to the databases configured by the replication scheme shown in "Propagation scheme".

The output from the first query might be:

< finance, 0, centralds, 120, 0 >

It shows the host name, port number and the database name. The fourth value (120) is the TIMEOUT value that defines the amount of time a database waits for a response from another database before resending a message. The last value (0) is the log failure threshold value described in "Setting the log failure threshold".

The output from the second query might be:

< propds, nethandler, 0, 0, 7, 1004378953, 0, -1.00000000000000, -1, -1, 1 >

See "Querying replication tables to display information about subscribers" for a description of the fields.

The output from the last query might be:

< repl, tab, centralds, propds >
< repl, tab, propds, backup1ds >
< repl, tab, propds, backup2ds >

The rows show the replicated table and the names of its master (sender) and subscriber (receiver) databases.

Show replicated log records

In a replicated database, transactions remain in the log buffer and transaction log files until the master replication agent confirms they have been fully processed by the subscriber. Only then can the master consider purging them from the log buffer and transaction log files. When the log space is exhausted, subsequent updates on the master database are aborted. Use the ttLogHolds built-in procedure to get information about replication log holds. For more information about transaction log growth, see "Monitoring accumulation of transaction log files" in Oracle TimesTen In-Memory Database Operations Guide.
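
For example, the following call lists each log hold; the values and the format of the replication hold description shown here are illustrative, not exact:

Command> CALL ttLogHolds;
< 10, 927692, Replication     , MASTERDS:SUBSCRIBER1DS >
1 row found.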

Transactions are stored in the log in the form of log records. You can use bookmarks to detect which log records have or have not been replicated by a master database.

A bookmark consists of log sequence numbers (LSNs) that identify the location of particular records in the transaction log that you can use to gauge replication performance. The LSNs associated with a bookmark are: hold LSN, last written LSN, and last LSN forced to disk. The hold LSN describes the location of the lowest (or oldest) record held in the log for possible transmission to a subscriber. You can compare the hold LSN with the last written LSN to determine the amount of data in the transaction log that has not yet been transmitted to the subscribers. The last LSN forced to disk describes the last records saved in a transaction log file on disk.

A more accurate way to monitor replication to a particular subscriber is to look at the send LSN for the subscriber, which consists of the SENDLSNHIGH and SENDLSNLOW fields in the TTREP.REPPEERS table. In contrast to the send LSN value, the hold LSN returned in a bookmark is computed every 10 seconds to describe the minimum send LSN for all the subscribers, so it provides a more general view of replication progress that does not account for the progress of replication to the individual subscribers. Because replication acknowledgements are asynchronous for better performance, the send LSN can also be some distance behind. Nonetheless, the send LSN for a subscriber is the most accurate value available and is always ahead of the hold LSN.
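
A minimal sketch of querying the per-subscriber send LSN directly follows; the join mirrors the subscriber queries shown earlier in this chapter:

SELECT t.tt_store_name, p.sendlsnhigh, p.sendlsnlow
  FROM ttrep.reppeers p, ttrep.ttstores t
    WHERE p.subscriber_id = t.tt_store_id;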

You can display replicated log records:

Using ttRepAdmin to display bookmark location

Use the ttRepAdmin utility with the -bookmark option to display the location of bookmarks:

> ttRepAdmin -dsn masterds -bookmark
Replication hold LSN ...... 10/927692
Last written LSN .......... 10/928908
Last LSN forced to disk ... 10/280540
Each LSN is defined by two values:
Log file number / Offset in log file

The LSNs output from ttRepAdmin -bookmark are:

Replication hold LSN
  The location of the lowest (or oldest) record held in the log for possible transmission to a subscriber. A value of -1/-1 indicates replication is in the stop state with respect to all subscribers (or the queried database is not a master database).

Last written LSN
  The location of the most recently generated transaction log record for the database.

Last LSN forced to disk
  The location of the most recent transaction log record written to the disk.

Using ttBookMark to display bookmark location

Use the ttBookmark built-in procedure to display the location of bookmarks.

Example 12-9 Using ttBookmark to display bookmark location

> ttIsql masterds

Command> call ttBookMark();
< 10, 928908, 10, 280540, 10, 927692 >
1 row found.

The first two columns in the returned row define the "Last written LSN," the next two columns define the "Last LSN forced to disk," and the last two columns define the "Replication hold LSN."

Using ttRepAdmin to show replication status

You can use the ttRepAdmin utility with the -showstatus option to display the current status of the replication agent. The status output includes the bookmark locations, port numbers, and communication protocols used by the replication agent for the queried database.

The output from ttRepAdmin -showstatus includes the status of the main thread and the TRANSMITTER and RECEIVER threads used by the replication agent. A master database has a TRANSMITTER thread and a subscriber database has a RECEIVER thread. A database that serves a master/subscriber role in a bidirectional replication scheme has both a TRANSMITTER and a RECEIVER thread.

Each replication agent has a single REPLISTENER thread that listens on a port for peer connections. On a master database, the REPLISTENER thread starts a separate TRANSMITTER thread for each subscriber database. On a subscriber database, the REPLISTENER thread starts a separate RECEIVER thread for each connection from a master.

If the TimesTen daemon requests that the replication agent stop or if a fatal error occurs in any of the other threads used by the replication agent, the main thread waits for the other threads to gracefully terminate. The TimesTen daemon may or may not restart the replication agent, depending upon certain fatal errors. The REPLISTENER thread never terminates during the lifetime of the replication agent. A TRANSMITTER or RECEIVER thread may stop but the replication agent may restart it. The RECEIVER thread terminates on errors from which it cannot recover or when the master disconnects.

Example 12-10 shows ttRepAdmin -showstatus output for a unidirectional replication scheme in which the rep1 database is the master and the rep2 database is the subscriber. The first ttRepAdmin -showstatus output shows the status of the rep1 database and its TRANSMITTER thread. The second output shows the status of the rep2 database and its RECEIVER thread.

Following the example are sections that describe the meaning of each field in the ttRepAdmin -showstatus output:

Example 12-10 Unidirectional replication scheme

Consider the unidirectional replication scheme from the rep1 database to the rep2 database:

CREATE REPLICATION r
ELEMENT e1 TABLE t
  MASTER rep1
  SUBSCRIBER rep2;

The replication status for the rep1 database should look similar to the following:

> ttRepAdmin -showstatus rep1

DSN                      : rep1
Process ID               : 1980
Replication Agent Policy : MANUAL
Host                     : MYHOST
RepListener Port         : 1113 (AUTO)
Last write LSN           : 0.1487928
Last LSN forced to disk  : 0.1487928
Replication hold LSN     : 0.1486640

Replication Peers:
  Name                   : rep2
  Host                   : MYHOST
  Port                   : 1154 (AUTO)
  Replication State      : STARTED
  Communication Protocol : 12

TRANSMITTER thread(s):
  For                     : rep2
    Start/Restart count   : 2
    Send LSN              : 0.1485960
    Transactions sent     : 3
    Total packets sent    : 10
    Tick packets sent     : 3
    MIN sent packet size  : 48
    MAX sent packet size  : 460
    AVG sent packet size  : 167
    Last packet sent at   : 17:41:05
    Total Packets received: 9
    MIN rcvd packet size  : 48
    MAX rcvd packet size  : 68
    AVG rcvd packet size  : 59
    Last packet rcvd'd at : 17:41:05
    Earlier errors (max 5):
    TT16060 in transmitter.c (line 3590) at 17:40:41 on 08-25-2004
    TT16122 in transmitter.c (line 2424) at 17:40:41 on 08-25-2004

Note that the Replication hold LSN, the Last write LSN and the Last LSN forced to disk are very close, which indicates that replication is operating satisfactorily. If the Replication hold LSN falls significantly behind the Last write LSN and the Last LSN forced to disk, replication is not keeping up with updates to the master.

The replication status for the rep2 database should look similar to the following:

> ttRepAdmin -showstatus rep2

DSN                      : rep2
Process ID               : 2192
Replication Agent Policy : MANUAL
Host                     : MYHOST
RepListener Port         : 1154 (AUTO)
Last write LSN           : 0.416464
Last LSN forced to disk  : 0.416464
Replication hold LSN     : -1.-1

Replication Peers:
  Name              : rep1
  Host              : MYHOST
  Port              : 0 (AUTO)
  Replication State : STARTED
  Communication Protocol : 12

RECEIVER thread(s):
  For                   : rep1
  Start/Restart count   : 1
  Transactions received : 0
  Total packets sent    : 20
  Tick packets sent     : 0
  MIN sent packet size  : 48
  MAX sent packet size  : 68
  AVG sent packet size  : 66
  Last packet sent at   : 17:49:51
  Total Packets received: 20
  MIN rcvd packet size  : 48
  MAX rcvd packet size  : 125
  AVG rcvd packet size  : 52
  Last packet rcvd'd at : 17:49:51

MAIN thread status fields

The following fields are output for the MAIN thread in the replication agent for the queried database.

DSN
  Name of the database to be queried.

Process ID
  Process ID of the replication agent.

Replication Agent Policy
  The restart policy, as described in "Starting and stopping the replication agents".

Host
  Name of the machine that hosts this database.

RepListener Port
  TCP/IP port used by the replication agent to listen for connections from the TRANSMITTER threads of remote replication agents. A value of 0 indicates that this port has been assigned automatically to the replication agent (the default), rather than being specified as part of a replication scheme.

Last write LSN
  The location of the most recently generated transaction log record for the database. See "Show replicated log records" for more information.

Last LSN forced to disk
  The location of the most recent transaction log record written to the disk. See "Show replicated log records" for more information.

Replication hold LSN
  The location of the lowest (or oldest) record held in the log for possible transmission to a subscriber. A value of -1/-1 indicates replication is in the stop state with respect to all subscribers. See "Show replicated log records" for more information.

Replication peer status fields

The following fields are output for each replication peer that participates in the replication scheme with the queried database. A "peer" could play the role of master, subscriber, propagator or both master and subscriber in a bidirectional replication scheme.

Name
  Name of a database that is a replication peer to this database.

Host
  Host of the peer database.

Port
  TCP/IP port used by the replication agent for the peer database. A value of 0 indicates this port has been assigned automatically to the replication agent (the default), rather than being specified as part of a replication scheme.

Replication State
  Current replication state of the replication peer with respect to the queried database (see "Show subscriber database information" for information).

Communication Protocol
  Internal protocol used by replication to communicate between the peers. (For internal use only.)

TRANSMITTER thread status fields

The following fields are output for each TRANSMITTER thread used by a master replication agent to send transaction updates to a subscriber. A master with multiple subscribers has multiple TRANSMITTER threads.


Note:

The counts in the TRANSMITTER output begin to accumulate when the replication agent is started. These counters are reset to 0 only when the replication agent is started or restarted.

For
  Name of the subscriber database that is receiving replicated data from this database.

Start/Restart count
  Number of times this TRANSMITTER thread was started or restarted by the replication agent due to a temporary error, such as an operation timeout, a network failure, and so on.

Send LSN
  The last LSN transmitted to this peer. See "Show replicated log records" for more information.

Transactions sent
  Total number of transactions sent to the subscriber.

Total packets sent
  Total number of packets sent to the subscriber (including tick packets).

Tick packets sent
  Total number of tick packets sent. Tick packets are used to maintain a "heartbeat" between the master and subscriber. You can use this value to determine how many of the 'Total packets sent' packets are not related to replicated data.

MIN sent packet size
  Size of the smallest packet sent to the subscriber.

MAX sent packet size
  Size of the largest packet sent to the subscriber.

AVG sent packet size
  Average size of the packets sent to the subscriber.

Last packet sent at
  Time of day the last packet was sent (24-hour clock time).

Total packets received
  Total packets received from the subscriber (tick packets and acknowledgement data).

MIN rcvd packet size
  Size of the smallest packet received.

MAX rcvd packet size
  Size of the largest packet received.

AVG rcvd packet size
  Average size of the packets received.

Last packet rcvd at
  Time of day the last packet was received (24-hour clock time).

Earlier errors (max 5)
  Last five errors generated by this thread.

RECEIVER thread status fields

The following fields are output for each RECEIVER thread used by a subscriber replication agent to receive transaction updates from a master. A subscriber that is updated by multiple masters has multiple RECEIVER threads.


Note:

The counts in the RECEIVER output begin to accumulate when the replication agent is started. These counters are reset to 0 only when the replication agent is started or restarted.

For
  Name of the master database that is sending replicated data to this database.

Start/Restart count
  Number of times this RECEIVER thread was started or restarted by the replication agent due to a temporary error, such as an operation timeout, a network failure, and so on.

Transactions received
  Total number of transactions received from the master.

Total packets sent
  Total number of packets sent to the master (tick packets and acknowledgement data).

Tick packets sent
  Total number of tick packets sent to the master. Tick packets are used to maintain a "heartbeat" between the master and subscriber. You can use this value to determine how many of the 'Total packets sent' packets are not related to acknowledgement data.

MIN sent packet size
  Size of the smallest packet sent to the master.

MAX sent packet size
  Size of the largest packet sent to the master.

AVG sent packet size
  Average size of the packets sent to the master.

Last packet sent at
  Time of day the last packet was sent to the master (24-hour clock time).

Total packets received
  Total packets of acknowledgement data received from the master.

MIN rcvd packet size
  Size of the smallest packet received.

MAX rcvd packet size
  Size of the largest packet received.

AVG rcvd packet size
  Average size of the packets received.

Last packet rcvd at
  Time of day the last packet was received (24-hour clock time).

Checking the status of return service transactions

You can determine whether the return service for a particular subscriber has been disabled by the DISABLE RETURN failure policy by calling the ttRepSyncSubscriberStatus built-in procedure or by means of the SNMP trap, ttRepReturnTransitionTrap. The ttRepSyncSubscriberStatus procedure returns a value of '1' to indicate the return service has been disabled for the subscriber, or a value of '0' to indicate that the return service is still enabled.

Example 12-11 Using ttRepSyncSubscriberStatus to obtain return receipt status

To use ttRepSyncSubscriberStatus to obtain the return receipt status of the subscriberds database with respect to its master database, masterDSN, enter:

> ttIsql masterDSN

Command> CALL ttRepSyncSubscriberStatus ('subscriberds');
< 0 >
1 row found.

This result indicates that the return service is still enabled.

See "DISABLE RETURN" for more information.

You can check the status of the last return receipt or return twosafe transaction executed on the connection handle by calling the ttRepXactTokenGet and ttRepXactStatus procedures.

First, call ttRepXactTokenGet to get a unique token for the last return service transaction. If you are using return receipt, the token identifies the last return receipt transaction committed on the master database. If you are using return twosafe, the token identifies the last twosafe transaction on the master that, in the event of a successful commit on the subscriber, is committed by the replication agent on the master. However, in the event of a timeout or other error, the twosafe transaction identified by the token is not committed by the replication agent on the master.

Next, pass the token returned by ttRepXactTokenGet to the ttRepXactStatus procedure to obtain the return service status. The output of the ttRepXactStatus procedure reports which subscriber or subscribers are configured to receive the replicated data and the current status of the transaction (not sent, received, committed) with respect to each subscriber. If the subscriber replication agent encountered a problem applying the transaction to the subscriber database, the ttRepXactStatus procedure also includes the error string. If you are using return twosafe and receive a timeout or other error, you can then decide whether to unconditionally commit or retry the commit, as described in "RETURN TWOSAFE".


Note:

If ttRepXactStatus is called without a token from ttRepXactTokenGet, it returns the status of the most recent transaction on the connection which was committed with the return receipt or return twosafe replication service.

The ttRepXactStatus procedure returns the return service status for each subscriber as a set of rows formatted as:

subscriberName, status, error

Example 12-12 Reporting the status of each subscriber

You can call the ttRepXactTokenGet and ttRepXactStatus built-in procedures in a GetRSXactStatus function to report the status of each subscriber in your replicated system:

SQLRETURN GetRSXactStatus (HDBC hdbc)
{
  SQLRETURN rc = SQL_SUCCESS;
  HSTMT hstmt = SQL_NULL_HSTMT;
  char xactId [4001] = "";
  char subscriber [62] = "";
  char state [3] = "";

  /* get the last RS xact id executed on this connection */
  SQLAllocStmt (hdbc, &hstmt);
  SQLExecDirect (hstmt, "CALL ttRepXactTokenGet ('R2')", SQL_NTS);

  /* bind the xact id result as a null terminated hex string */
  SQLBindCol (hstmt, 1, SQL_C_CHAR, (SQLPOINTER) xactId,
    sizeof (xactId), NULL);

  /* fetch the first and only row */
  rc = SQLFetch (hstmt);

  /* close the cursor */
  SQLFreeStmt (hstmt, SQL_CLOSE);

  if (rc != SQL_ERROR && rc != SQL_NO_DATA_FOUND)
  {
    /* display the xact id */
    printf ("\nRS Xact ID: 0x%s\n\n", xactId);

    /* get the status of this xact id for every subscriber */
    SQLBindParameter (hstmt, 1, SQL_PARAM_INPUT, SQL_C_CHAR,
      SQL_VARBINARY, 0, 0,
     (SQLPOINTER) xactId, strlen (xactId), NULL);

    /* execute */
    SQLExecDirect (hstmt, "CALL ttRepXactStatus (?)", SQL_NTS);

   /* bind the result columns */
   SQLBindCol (hstmt, 1, SQL_C_CHAR, (SQLPOINTER) subscriber,
     sizeof (subscriber), NULL);

   SQLBindCol (hstmt, 2, SQL_C_CHAR, (SQLPOINTER) state,
     sizeof (state), NULL);

   /* fetch the first row */
   rc = SQLFetch (hstmt);

   while (rc != SQL_ERROR && rc != SQL_NO_DATA_FOUND)
   {
     /* report the status of this subscriber */
     printf ("\n\nSubscriber: %s", subscriber);
     printf ("\nState: %s", state);

     /* are there more rows to fetch? */
     rc = SQLFetch (hstmt);
   }
  }

  /* close the statement */
  SQLFreeStmt (hstmt, SQL_DROP);

  return rc;
}

Improving replication performance

To increase replication performance, consider these tips:

See also "Poor replication or XLA performance" in Oracle TimesTen In-Memory Database Troubleshooting Guide.

Administering an Active Standby Pair Without Cache Groups

4 Administering an Active Standby Pair Without Cache Groups

This chapter describes how to administer an active standby pair that does not replicate cache groups.

For information about administering active standby pairs that replicate cache groups, see Chapter 5, "Administering an Active Standby Pair with Cache Groups".

For information about managing failover and recovery automatically, see Chapter 7, "Using Oracle Clusterware to Manage Active Standby Pairs".

This chapter includes the following topics:

Overview of master database states

This section summarizes the possible states of a master database. These states are referenced in the tasks described in the rest of the chapter.

The master databases can be in one of the following states:

You can use the ttRepStateGet built-in procedure to discover the state of a master database.

Duplicating a database

When you set up a replication scheme or administer a recovery, a common task is duplicating a database. You can use the ttRepAdmin -duplicate utility or the ttRepDuplicateEx C function to duplicate a database.

To duplicate a database, these conditions must be fulfilled:

On the source database, create a user and grant the ADMIN privilege to the user:

CREATE USER ttuser IDENTIFIED BY ttuser;
User created.

GRANT admin TO ttuser;

Assume the user name of the instance administrator is timesten. Logged in as timesten on the target host, duplicate database dsn1 on host1 to dsn2:

ttRepAdmin -duplicate -from dsn1 -host host1 dsn2

Enter internal UID at the remote datastore with ADMIN privileges: ttuser 
Enter password of the internal Uid at the remote datastore:

Enter ttuser when prompted for the password of the internal user at the remote database.

If you are duplicating an active database that has cache groups, use the -keepCG option. You must also specify the cache administration user ID and password with the -cacheUid and -cachePwd options. If you do not provide the cache administration user password, ttRepAdmin prompts for a password. If the cache administration user ID is orauser and the password is orapwd, duplicate database dsn1 on host1:

ttRepAdmin -duplicate -from dsn1 -host host1 -keepCG "DSN=dsn2;UID=;PWD="

Enter internal UID at the remote datastore with ADMIN privileges: ttuser 
Enter password of the internal Uid at the remote datastore:

Enter ttuser when prompted for the password. ttRepAdmin then prompts for the cache administration user and password:

Enter cache administrator UID: orauser
Enter cache administrator password: 

Enter orapwd when prompted for the cache administration password.

The UID and PWD for dsn2 are specified as null values in the connection string so that the connection is made as the current OS user, which is the instance administrator. Only the instance administrator can run ttRepAdmin -duplicate. If dsn2 is configured with PWDCrypt instead of PWD, then the connection string should be "DSN=dsn2;UID=;PWDCrypt=".

When you duplicate a standby database with cache groups to a read-only subscriber, use the -nokeepCG option. In this example, dsn2 is the standby database and sub1 is the read-only subscriber:

ttRepAdmin -duplicate -from dsn2 -host host2 -nokeepCG "DSN=sub1;UID=;PWD="

The ttRepAdmin utility prompts for values for -uid and -pwd.

If you want the database duplication to occur over a specific local or remote network interface, you can optionally specify either interface by providing its alias or IP address.

You can specify the local and remote network interfaces for the source and target hosts by using the -localIP and -remoteIP options of ttRepAdmin -duplicate. If you do not specify one or both network interfaces, TimesTen chooses them.
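For example, the following sketch duplicates dsn1 to dsn2 over specific network interfaces; the IP addresses are illustrative:

ttRepAdmin -duplicate -from dsn1 -host host1 -localIP 192.0.2.20 -remoteIP 192.0.2.10 dsn2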

For more information about the ttRepAdmin utility, see "ttRepAdmin" in Oracle TimesTen In-Memory Database Reference. For more information about the ttRepDuplicateEx C function, see "ttRepDuplicateEx" in Oracle TimesTen In-Memory Database C Developer's Guide.

Setting up an active standby pair with no cache groups

To set up an active standby pair, complete the tasks in this section; a condensed command sketch follows the list. See "Configuring an active standby pair with one subscriber" for an example.

If you intend to replicate read-only cache groups or asynchronous writethrough (AWT) cache groups, see Chapter 5, "Administering an Active Standby Pair with Cache Groups".

Before you create a database, see the information in these sections:

  1. Create a database. See "Managing TimesTen Databases" in Oracle TimesTen In-Memory Database Operations Guide.

  2. Create the replication scheme using the CREATE ACTIVE STANDBY PAIR statement. See Chapter 3, "Defining an Active Standby Pair Replication Scheme".

  3. Call ttRepStateSet('ACTIVE') on the active database.

  4. Start the replication agent. See "Starting and stopping the replication agents".

  5. Create a user on the active database and grant the ADMIN privilege to the user.

  6. Duplicate the active database to the standby database.

  7. Start the replication agent on the standby database. See "Starting and stopping the replication agents".

  8. Wait for the standby database to enter the STANDBY state. Use the ttRepStateGet procedure to check the state of the standby database.

  9. Duplicate all of the subscribers from the standby database. See "Duplicating a master database to a subscriber".

  10. Set up the replication agent policy and start the replication agent on each of the subscriber databases. See "Starting and stopping the replication agents".
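The following condensed sketch covers steps 2 through 10. The names are illustrative: master1 on host1 is the intended active, master2 on host2 is the standby, and subscriber1 on host3 is the subscriber; step 1 (creating the databases) is assumed to be complete.

On master1, create the scheme, set the active state, start the replication agent, and create the user for duplication:

Command> CREATE ACTIVE STANDBY PAIR master1, master2 SUBSCRIBER subscriber1;
Command> CALL ttRepStateSet('ACTIVE');
Command> CALL ttRepStart;
Command> CREATE USER ttuser IDENTIFIED BY ttuser;
Command> GRANT ADMIN TO ttuser;

On host2, as the instance administrator, duplicate the active database and start the standby replication agent, then wait until ttRepStateGet on master2 reports STANDBY:

% ttRepAdmin -duplicate -from master1 -host host1 "DSN=master2;UID=;PWD="
% ttAdmin -repStart master2

On host3, duplicate the subscriber from the standby, set the replication agent policy, and start the agent:

% ttRepAdmin -duplicate -from master2 -host host2 "DSN=subscriber1;UID=;PWD="
% ttAdmin -repPolicy always subscriber1
% ttAdmin -repStart subscriber1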

Recovering from a failure of the active database

This section includes the following topics:

Recovering when the standby database is ready

This section describes how to recover the active database when the standby database is available and synchronized with the active database. It includes the following topics:

When replication is return receipt or asynchronous

Complete the following tasks (a condensed sketch follows the note below):

  1. Stop the replication agent on the failed database if it has not already been stopped.

  2. On the standby database, execute ttRepStateSet('ACTIVE'). This changes the role of the database from STANDBY to ACTIVE.

  3. On the new active database, execute ttRepStateSave('FAILED', 'failed_database','host_name'), where failed_database is the former active database that failed. This step is necessary for the new active database to replicate directly to the subscriber databases. During normal operation, only the standby database replicates to the subscribers.

  4. Destroy the failed database.

  5. Duplicate the new active database to the new standby database.

  6. Set up the replication agent policy and start the replication agent on the new standby database. See "Starting and stopping the replication agents".

The standby database contacts the active database. The active database stops sending updates to the subscribers. When the standby database is fully synchronized with the active database, then the standby database enters the STANDBY state and starts sending updates to the subscribers.


Note:

You can verify that the standby database has entered the STANDBY state by using the ttRepStateGet built-in procedure.
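A condensed sketch of this procedure, assuming master1 on host1 is the failed active and master2 becomes the new active (names illustrative):

% ttAdmin -repStop master1

On master2:

Command> CALL ttRepStateSet('ACTIVE');
Command> CALL ttRepStateSave('FAILED', 'master1', 'host1');

On host1, destroy the failed database, duplicate the new active, and restart replication:

% ttDestroy master1
% ttRepAdmin -duplicate -from master2 -host host2 "DSN=master1;UID=;PWD="
% ttAdmin -repPolicy always master1
% ttAdmin -repStart master1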

When replication is return twosafe

Complete the following tasks (a brief example follows the note below):

  1. On the standby database, execute ttRepStateSet('ACTIVE'). This changes the role of the database from STANDBY to ACTIVE.

  2. On the new active database, execute ttRepStateSave('FAILED', 'failed_database','host_name'), where failed_database is the former active database that failed. This step is necessary for the new active database to replicate directly to the subscriber databases. During normal operation, only the standby database replicates to the subscribers.

  3. Connect to the failed database. This triggers recovery from the local transaction logs. If database recovery fails, you must continue from Step 5 of the procedure for recovering when replication is return receipt or asynchronous. See "When replication is return receipt or asynchronous".

  4. Verify that the replication agent for the failed database has restarted. If it has not restarted, then start the replication agent. See "Starting and stopping the replication agents".

When the active database determines that it is fully synchronized with the standby database, then the standby database enters the STANDBY state and starts sending updates to the subscribers.


Note:

You can verify that the standby database has entered the STANDBY state by using the ttRepStateGet built-in procedure.
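For example, assuming the failed database is master1 (name illustrative), connecting triggers recovery from the local transaction logs, and the ttStatus utility shows whether the replication agent restarted:

% ttIsql master1
Command> exit
% ttStatus

If the agent did not restart, start it:

% ttAdmin -repStart master1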

Recovering when the standby database is not ready

Consider the following scenarios:

  • The standby database fails. The active database fails before the standby comes back up or before the standby has been synchronized with the active database.

  • The active database fails. The standby database becomes ACTIVE, and the rest of the recovery process begins. (See "Recovering from a failure of the active database".) The new active database fails before the new standby database is fully synchronized with it.

In both scenarios, the subscribers may have had more changes applied than the standby database.

When the active database fails and the standby database has not applied all of the changes that were last sent from the active database, there are two choices for recovery:

  • Recover the active database from the local transaction logs.

  • Recover the standby database from the local transaction logs.

The choice depends on which database is available and which is more up to date.

Recover the active database

  1. Connect to the failed active database. This triggers recovery from the local transaction logs.

  2. Verify that the replication agent for the failed active database has restarted. If it has not restarted, then start the replication agent. See "Starting and stopping the replication agents".

  3. Execute ttRepStateSet('ACTIVE') on the newly recovered database.

  4. Continue with Step 6 in "Setting up an active standby pair with no cache groups".

Recover the standby database

  1. Connect to the failed standby database. This triggers recovery from the local transaction logs.

  2. If the replication agent for the standby database has automatically restarted, you must stop the replication agent. See "Starting and stopping the replication agents".

  3. Drop the replication configuration using the DROP ACTIVE STANDBY PAIR statement.

  4. Re-create the replication configuration using the CREATE ACTIVE STANDBY PAIR statement.

  5. Execute ttRepStateSet('ACTIVE') on the master database, giving it the ACTIVE role.

  6. Set up the replication agent policy and start the replication agent on the new standby database. See "Starting and stopping the replication agents".

  7. Continue from Step 6 in "Setting up an active standby pair with no cache groups".
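A condensed sketch, assuming master2 is the failed standby being recovered and taking over the ACTIVE role, with master1 and subscriber1 as the other databases (names illustrative):

% ttIsql master2
Command> CALL ttRepStop;
Command> DROP ACTIVE STANDBY PAIR;
Command> CREATE ACTIVE STANDBY PAIR master2, master1 SUBSCRIBER subscriber1;
Command> CALL ttRepStateSet('ACTIVE');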

Failing back to the original nodes

After a successful failover, you may wish to fail back so that the active database and the standby database are on their original nodes. See "Reversing the roles of the active and standby databases" for instructions.

Recovering from a failure of the standby database

To recover from a failure of the standby database, complete the following tasks (a condensed sketch follows the note below):

  1. Detect the standby database failure.

  2. If return twosafe service is enabled, the failure of the standby database may prevent a transaction in progress from being committed on the active database, resulting in error 8170, "Receipt or commit acknowledgement not returned in the specified timeout interval". If so, then call the ttRepSyncSet procedure with a localAction parameter of 2 (COMMIT) and commit the transaction again. For example:

    call ttRepSyncSet( null, null, 2);
    commit;
    
  3. Execute ttRepStateSave('FAILED','standby_database','host_name') on the active database. After this, as long as the standby database is unavailable, updates to the active database are replicated directly to the subscriber databases. Subscriber databases may also be duplicated directly from the active.

  4. If the replication agent for the standby database has automatically restarted, stop the replication agent. See "Starting and stopping the replication agents".

  5. Recover the standby database in one of the following ways:

    • Connect to the standby database. This triggers recovery from the local transaction logs.

    • Duplicate the standby database from the active database.

    The amount of time that the standby database has been down and the amount of transaction logs that need to be applied from the active database determine the method of recovery that you should use.

  6. Set up the replication agent policy and start the replication agent on the new standby database. See "Starting and stopping the replication agents".

The standby database enters the STANDBY state and starts sending updates to the subscribers after the active database determines that the two master databases have been synchronized and stops sending updates to the subscribers.


Note:

You can verify that the standby database has entered the STANDBY state by using the ttRepStateGet procedure.
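A condensed sketch of steps 3 through 6, assuming master1 on host1 is the active and master2 on host2 is the failed standby, recovered here by duplication (names illustrative):

On master1:

Command> CALL ttRepStateSave('FAILED', 'master2', 'host2');

On host2:

% ttAdmin -repStop master2
% ttRepAdmin -duplicate -from master1 -host host1 "DSN=master2;UID=;PWD="
% ttAdmin -repPolicy always master2
% ttAdmin -repStart master2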

Recovering from the failure of a subscriber database

If a subscriber database fails, then you can recover it by one of the following methods:

If the standby database is down or in recovery, then duplicate the subscriber from the active database.

After the subscriber database has been recovered, then set up the replication agent policy and start the replication agent. See "Starting and stopping the replication agents".
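For example, to recover subscriber1 by duplicating it from the standby master2 on host2 (names illustrative):

% ttRepAdmin -duplicate -from master2 -host host2 "DSN=subscriber1;UID=;PWD="
% ttAdmin -repPolicy always subscriber1
% ttAdmin -repStart subscriber1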

Reversing the roles of the active and standby databases

To change the role of the active database to standby and vice versa (a condensed sketch follows these steps):

  1. Pause any applications that are generating updates on the current active database.

  2. Execute ttRepSubscriberWait on the active database, with the DSN and host of the current standby database as input parameters. It must return success (<00>). This ensures that all updates have been transmitted to the current standby database.

  3. Stop the replication agent on the current active database. See "Starting and stopping the replication agents".

  4. Execute ttRepDeactivate on the current active database. This puts the database in the IDLE state.

  5. Execute ttRepStateSet('ACTIVE') on the current standby database. This database now acts as the active database in the active standby pair.

  6. Set up the replication agent policy and start the replication agent on the old active database.

  7. Use the ttRepStateGet procedure to determine when the database's state has changed from IDLE to STANDBY. The database now acts as the standby database in the active standby pair.

  8. Resume any applications that were paused in Step 1.
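A condensed sketch of steps 3 through 7, with master1 as the current active and master2 as the current standby (names illustrative). The ttRepSubscriberWait call in step 2 is omitted here; see "ttRepSubscriberWait" in Oracle TimesTen In-Memory Database Reference for its full parameter list.

% ttAdmin -repStop master1

On master1:

Command> CALL ttRepDeactivate;

On master2:

Command> CALL ttRepStateSet('ACTIVE');

Restart the old active as the new standby, then poll its state on master1 until it changes from IDLE to STANDBY:

% ttAdmin -repPolicy always master1
% ttAdmin -repStart master1

Command> CALL ttRepStateGet;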

Detection of dual active databases

Ordinarily, the designation of the active and standby databases in an active standby pair is explicitly controlled by the user. However, in some circumstances the user may not have the ability to modify both the active and standby databases when changing the role of the standby database to active.

For example, if network communication to the site of an active database is interrupted, the user may need the standby database at a different site to take over the role of the active, but cannot stop replication on the current active or change its role manually. Changing the standby database to active without first stopping replication on the active leads to a situation where both masters are in the ACTIVE state and accepting transactions. In such a scenario, TimesTen can automatically negotiate the active/standby role of the master databases when network communication between the databases is restored.

If, during the initial handshake between the databases, TimesTen determines that the master databases in an active standby pair replication scheme are both in the ACTIVE state, TimesTen performs the following operations automatically:


Preface

Oracle TimesTen In-Memory Database is a memory-optimized relational database. Deployed in the application tier, Oracle TimesTen In-Memory Database operates on databases that fit entirely in physical memory using standard SQL interfaces. High availability for the in-memory database is provided through real-time transactional replication.

Audience

This document is intended for application developers and system administrators who use and administer TimesTen to TimesTen replication. To work with this guide, you should understand how database systems work. You should also have knowledge of SQL (Structured Query Language) and either ODBC (Open DataBase Connectivity) or JDBC (Java DataBase Connectivity).

Related documents

TimesTen documentation is available on the product distribution media and on the Oracle Technology Network:

http://www.oracle.com/technetwork/products/timesten/documentation

Conventions

TimesTen supports multiple platforms. Unless otherwise indicated, the information in this guide applies to all supported platforms. The term Windows refers to all supported Windows platforms. The term UNIX applies to all supported UNIX and Linux platforms. See "Platforms" in Oracle TimesTen In-Memory Database Release Notes for specific platform versions supported by TimesTen.


Note:

In TimesTen documentation, the terms "data store" and "database" are equivalent. Both terms refer to the TimesTen database unless otherwise noted.

This document uses the following text conventions:

boldface: Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.

italic: Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.

monospace: Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.

italic monospace: Italic monospace type indicates a variable in a code example that you must replace. For example:

Driver=install_dir/lib/libtten.sl

Replace install_dir with the path of your TimesTen installation directory.

[ ]: Square brackets indicate that an item in a command line is optional.

{ }: Curly braces indicate that you must choose one of the items separated by a vertical bar ( | ) in a command line.

|: A vertical bar (or pipe) separates alternative arguments.

. . .: An ellipsis (. . .) after an argument indicates that you may use more than one argument on a single command line.

%: The percent sign indicates the UNIX shell prompt.

#: The number (or pound) sign indicates the UNIX root prompt.

TimesTen documentation uses these variables to identify path, file and user names:

install_dir: The path that represents the directory where the current release of TimesTen is installed.

TTinstance: The instance name for your specific installation of TimesTen. Each installation of TimesTen must be identified at install time with a unique alphanumeric instance name. This name appears in the install path.

bits or bb: Two digits, either 32 or 64, that represent either the 32-bit or 64-bit operating system.

release or rr: The first three parts in a release number, with or without dots. The first three parts of a release number represent a major TimesTen release. For example, 1122 or 11.2.2 represents TimesTen 11g Release 2 (11.2.2).

jdk_version: Two digits that represent the version number of the major JDK release. Specifically, 14 represents JDK 1.4; 5 represents JDK 5.

DSN: The data source name.

Documentation Accessibility

For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.

Access to Oracle Support

Oracle customers have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.


Index

A  B  C  D  E  F  G  H  I  L  M  N  O  P  R  S  T  U  V  W 

A

active database
change to standby, 4.7
detecting dual active masters, 4.8
active standby pair, 6.1
add or drop table column, 6.1
adding host to cluster, 7.6.10
adding or dropping a subscriber, 6.2
adding or dropping cache groups, 6.2
adding sequences and cache groups, 6.2
altering, 6.2
changing PORT or TIMEOUT connection attributes, 6.2
configuring network interfaces, 3.8
create or drop index, 6.1
create or drop synonym, 6.1
defined, 1.5.1
detecting dual active masters, 4.8
disaster recovery, 5.9
dropping sequences and cache groups, 6.2
DSN, 3.2
examples of altering, 6.2.1
failback, 4.4.3, 5.4.3
overview, 1.5.1
recover active when standby not ready, 4.4.2
recovering active database, 4.4
replicating a global AWT cache group, 5.3
replicating a local read-only cache group, 5.2
replicating materialized views, 3.11
replicating sequences, 3.12
restrictions, 3.1
return service, 3.6
reverse roles, 4.7
setting up, 4.3
states, 4.1
SUBSCRIBER clause, 3.4
subscriber failure, 4.6
active standby pair with cache groups
recover active when standby not ready, 5.4.2
recovering active database, 5.4
subscriber failure, 5.6
ADD ELEMENT clause
DATASTORE, 13.1.3
ADMIN privilege, 3.3, 9.2
aging
replication, 1.9
ALTER ELEMENT clause, 13.1.1
ALTER REPLICATION statement
using, 13.1
ALTER TABLE
and replication, 13.2
ALTER USER statement, 13.1
AppCheckCmd Clusterware attribute, 8.1
AppFailoverDelay Clusterware attribute, 8.1
AppFailureInterval attribute
Oracle Clusterware, 8.1
AppFailureThreshold Clusterware attribute, 8.1
application failover
Oracle Clusterware, 7.2.5
AppName Clusterware attribute, 8.1
AppRestartAttempts attribute
Oracle Clusterware, 8.1
AppScriptTimeout Clusterware attribute, 8.1
AppStartCmd Clusterware attribute, 8.1
AppStopCmd Clusterware attribute, 8.1
AppType Clusterware attribute, 8.1
AppUptimeThreshold attribute
Oracle Clusterware, 8.1
asynchronous writethrough cache group
propagating to Oracle database, 1.6.2
replication, 1.6.1
attributes, connection
required, 10.2.2
autocommit
and RETURN RECEIPT BY REQUEST, 9.7.2
and RETURN TWOSAFE BY REQUEST, 9.7.3
RETURN RECEIPT BY REQUEST, 3.6.2
RETURN TWOSAFE BY REQUEST, 3.6.4
autocommit mode
RETURN TWOSAFE, 3.6.3
automatic catch-up, 11.2.3
automatic client failover, 3.9, 7.1
AutoRecover Clusterware attribute, 8.1
AWT cache group
propagating to Oracle Database, 1.6.2
replicating, 1.6.1, 5.3
AWT cache groups
parallel threads, 10.2.3.1

B

bidirectional general workload
syntax example, 9.10.6
update conflicts, 9.10.6
bidirectional replication, 1.5.3
bidirectional replication scheme
recovery, 9.1.1
return twosafe service, 9.7.4
bidirectional split workload
syntax example, 9.10.5
bookmarks in log, 10.2.4.3, 12.5

C

cache grid
active standby pairs, 5.3
add cache group with Oracle Clusterware, 7.4.3.1
creating a cluster, 7.4.1
drop cache group with Oracle Clusterware, 7.4.3.2
managing with Oracle Clusterware, 7.4
recovery with Oracle Clusterware, 7.4.2, 7.5.5.2
schema changes with Oracle Clusterware, 7.4.3
cache groups
replicating, 1.6
replicating a global AWT cache group, 5.3
replicating a read-only cache group, 5.2
replicating a user managed cache group, 5.1
CacheConnect Clusterware attribute, 7.2.3, 8.1
catch-up feature
replication, 11.2.3
CHECK CONFLICTS clause
examples, 14.3.2
in CREATE REPLICATION statement, 9.5
client failover, 7.1
automatic client failover, 3.9
cluster
virtual IP addresses, 7.1
cluster agent, 7.3.2
cluster manager
role, 9.1.1
cluster status, 7.7
cluster.oracle.ini file, 7.1
advanced availability, 7.2.2
advanced availability, one subscriber, 7.2.2
and sys.odbc.ini file, 7.2
application failover, 7.2.5
attribute descriptions, 8
automatic recovery from failure of both master nodes, 7.2.6
basic availability, 7.2.1
basic availability, one subscriber, 7.2.1
cache grid, 7.2.4
cache groups, 7.2.3
examples, 7.2
excluding tables, cache groups and sequences, 7.2.7
location, 7.2
manual recovery from failure of both master nodes, 7.2.6
specify route, 7.2.7
Windows example, 7.2.5
Clusterware
required privileges, 7.1.2
columns
compressed, 3.5, 9.3
COMPRESS TRAFFIC clause
in CREATE ACTIVE STANDBY PAIR statement, 3.7.2
in CREATE REPLICATION statement, 9.8, 9.8.3
compression
table columns, 3.5, 9.3
configuring replication, 9.1
configuring the network, 10.1
conflict report
XML Document Type Definition, 14.5
conflict reporting, 14.4
CONFLICT REPORTING clause
in CREATE REPLICATION statement, 9.8
conflict resolution, 14.1
update rules, 14.2
conflict types, 14.1
controlling replication, 10.8
copying a database
privileges, 4.2
copying a master database, 10.4
CREATE ACTIVE STANDBY PAIR statement, 4.3, 5.2
syntax, 3.3
create or drop table, 6.1
CREATE REPLICATION statement
defining data store element, 9.4.1
defining table element, 9.4.2
use of, 9.2
CREATE USER statement, 13.1
crsTT directories, 7.1.3

D

data source name, 9.2.2
data types
size limits, 3.5, 9.3
database
duplicating, 10.4
failed, 3.7.4
ForceConnect connection attribute, 11.2.3
temporary, 3.1, 9.4.1
database name, 9.2.2
database objects
excluding from active standby pair, 3.10
DatabaseCharacterSet data store attribute, 10.2.2
DatabaseFailoverDelay Clusterware attribute, 8.1
databases
establishing, 10.2.1
failed, 9.8.5
managing logs, 10.2.4
recovering, 9.1.1
required connection attributes for replication, 10.2.2
setting state, 10.8
DATASTORE element, 9.4, 9.4.1
adding to replication scheme, 13.1.3
and materialized views, 9.4.5
and nonmaterialized views, 9.4.5
DDLReplicationAction connection attribute, 6.1
DDLReplicationLevel connection attribute, 6.1
default column values
changing, 13.2
DISABLE RETURN clause
in CREATE ACTIVE STANDBY PAIR statement, 3.7.1.2.2
in CREATE REPLICATION statement, 9.8
DISABLE RETURN policy, 9.8.2.2.2
active standby pair, 3.7.1.2.2, 3.7.1.2.4
disaster recovery
active standby pair with AWT cache group, 5.9
disaster recovery subscriber
Oracle Clusterware, 7.3.13
distributed workload configuration, 1.5.3.2
recovery issues, 9.1.1
DNS server
UNIX, 10.1.3.2
Windows, 10.1.3.3
DROP REPLICATION statement, 2.2.6, 13.4
DROP USER statement, 13.1
dropping replication scheme, 2.2.6, 13.4
DSN
creating, 2.2.1, 10.2.1
define for active standby pair, 3.2
defining, 9.2.2
DualMaster
Oracle Clusterware, 7.2.5
duplicating a database
privileges, 4.2
with cache groups, 4.2
duplicating a master database, 10.4
DURABLE COMMIT clause
in CREATE ACTIVE STANDBY PAIR statement, 3.7.1.2.4
in CREATE REPLICATION statement, 9.8
DURABLE COMMIT policy, 9.8.2.2.4

E

element
DATASTORE, 9.4.1
defined, 1.1
ELEMENT descriptions, 9.4
EXACT
table definition, 9.8
example
replicating tables to different subscribers, 9.10.3
EXCLUDE clause
in CREATE ACTIVE STANDBY PAIR statement, 3.10
EXCLUDE SEQUENCE clause
in ALTER REPLICATION statement, 13.1.3.2
in CREATE REPLICATION statement, 9.4.1
EXCLUDE TABLE clause
in ALTER REPLICATION statement, 13.1.3.2
in CREATE REPLICATION statement, 9.4.1

F

failback, 4.4.3
active standby pair, 5.4.3
failed database
connecting to, 3.7.4, 9.8.5
failed state
log space, 9.8.5
log threshold, 3.7.4
replication, 10.8
failover, 3.9
failover and recovery
issues, 9.1.1
FAILTHRESHOLD attribute, 11.2.1
FAILTHRESHOLD clause
active standby pair, 3.7.1.2.1
altering, 13.1.13
example, 9.10.2
in CREATE ACTIVE STANDBY PAIR statement, 3.7.4
in CREATE REPLICATION statement, 9.8, 9.8.2.2.1, 9.8.5
subscriber failures, 11.2.1
failure
return service, 3.7.1.2
subscriber, 4.6, 11.2.1
failure recovery script, 11.5
failure threshold
description, 10.2.4.3
displaying, 12.4.1
example, 9.10.2
subscriber failures, 11.2.1
FailureThreshold Clusterware attribute, 8.1
ForceConnect connection attribute, 11.2.3, 11.4
foreign keys
replication, 1.8, 9.4.3
full replication, 1.5.2
full store name
active standby pair, 3.4

G

general workload
syntax example, 9.10.6
GRANT statement, 13.1
GridPort Clusterware attribute, 7.2.4, 8.1

H

host name
identifying, 9.4.1, 10.1.3
hostname command, 9.4.1

I

INCLUDE clause
in CREATE ACTIVE STANDBY PAIR statement, 3.10
INCLUDE SEQUENCE clause
in ALTER REPLICATION statement, 13.1.3.1
in CREATE REPLICATION statement, 9.4.1
INCLUDE TABLE
in CREATE REPLICATION statement, 9.4.1
INCLUDE TABLE clause
in ALTER REPLICATION statement, 13.1.3.1
IP addresses
replication, 10.1.3

L

LOB columns
size limit, 3.5, 9.3
LOCAL COMMIT ACTION clause
active standby pair, 3.7.1.2.5
in CREATE REPLICATION statement, 9.8
LOCAL COMMIT ACTION policy, 9.8.2.2.5
log
locating bookmarks, 10.2.4.3, 12.5
management, 10.2.4
size and persistence, 10.2.4.1
threshold value, 9.8.5, 10.2.4.3
log failure threshold, 9.8.5
log sequence number, 12.5
log threshold value
active standby pair, 3.7.4
LogBufMB connection attribute, 10.2.4.3
LogBufParallelism first connection attribute, 10.2.3.1
LogFileSize connection attribute, 10.2.4.3
logging, 10.2.4.3
logs
setting the size, 10.2.4.3
LSN, 12.5

M

master catch-up, 11.2.3
master database
defined, 9.2.2
MasterHosts Clusterware attribute, 8.1
MasterStoreAttribute Clusterware attribute, 8.1
MasterVIP Clusterware attribute, 8.1
materialized views
active standby pair, 3.11
replicating, 9.4.5
monitoring replication, 12

N

network configuration
replication, 10.1
NO RETURN clause
in CREATE ACTIVE STANDBY PAIR statement, 3.6.5
in CREATE REPLICATION statement, 9.7.5
NVARCHAR columns
size limit, 3.5, 9.3
NVARCHAR2 columns
size limit, 3.5, 9.3

O

ocrConfig option, 7.3.3
ON DELETE CASCADE clause
replication, 1.8
Oracle Cluster Registry
configuring for TimesTen cluster, 7.3.3
Oracle Clusterware, 7
add cache group to cache grid, 7.4.3.1
add subscriber not managed by Oracle Clusterware, 7.6.7
adding active standby pair to cluster, 7.6.6
adding subscriber to active standby pair, 7.6.4
altering tables and cache groups, 7.6.1
AppFailureInterval attribute, 8.1
application failure, 7.2.5
AppRestartAttempts, 8.1
AppScriptTimeout attribute, 7.2.5
AppType=DualMaster, 7.2.5
AppUptimeThreshold, 8.1
automatic recovery, 7.2.6
automatic recovery from dual failure, 7.5.5.1
cache grid, 7.2.4
cache grid recovery, 7.4.2, 7.5.5.2
changing cache administration user name or password, 7.6.16
changing internal user name or password, 7.6.16
cluster.oracle.ini and sys.odbc.ini files, 7.2
creating a cluster of cache grid members, 7.4.1
creating or dropping tables and cache groups, 7.6.1
crs_start command, 7.5.7
crs_stop command, 7.5.7
crsTT directories, 7.1.3
drop cache group from cache grid, 7.4.3.2
failure of both master nodes, 7.2.6
failure of more than two master hosts, 7.5.6
forced switchover, 7.5.7
GridPort attribute, 7.2.4
host maintenance, 7.6.14
machine room maintenance, 7.6.15
manual recovery for advanced availability, 7.5.5.3
manual recovery for basic availability, 7.5.5.4
message log files, 7.7.2
moving a database to another host, 7.6.13
network maintenance, 7.6.14
rebuild subscriber not managed by Oracle Clusterware, 7.6.8
recovery process, 7.5.1
recovery when RETURN TWOSAFE, 7.5.5.6
remote disaster recovery subscriber, 7.3.13
removing active standby pair from cluster, 7.6.9
removing host from cluster, 7.6.11
removing subscriber from active standby pair, 7.6.5
required privileges, 7.1.2
restricted commands, 7.1.4
rolling upgrade, 7.6.2
routing, 7.2.7
schema changes in cache grid, 7.4.3
status, 7.7
stopping the TimesTen daemon, 7.3.4
storage for backups, 7.2.6
subscriber not managed by Oracle Clusterware, 7.3.14
switching the active and the standby, 7.6.12
TimesTen advanced level, 7.1
TimesTen basic level, 7.1
TimesTen cluster agent, 7.3.4
TimesTen daemon monitor, 7.3.4
tmp directory, 7.1.3
ttDaemonAdmin, 7.3.4
upgrading TimesTen, 7.6.3
using RepDDL attribute, 7.2.7
using with cache grid, 7.4
virtual IP addresses, 7.1
Oracle Clusterware attributes
AppCheckCmd, 8.1
AppFailoverDelay, 8.1
AppFailureThreshold, 8.1
AppName, 8.1
AppScriptTimeout, 8.1
AppStartCmd, 8.1
AppStopCmd, 8.1
AppType, 8.1
AutoRecover, 8.1
CacheConnect, 8.1
conditional, 8.1
DatabaseFailoverDelay, 8.1
FailureThreshold, 8.1
GridPort, 8.1
MasterHosts, 8.1
MasterStoreAttribute, 8.1
MasterVIP, 8.1
optional, 8.1
RemoteSubscriberHosts, 8.1
RepBackupDir, 8.1
RepBackupPeriod, 8.1
RepDDL, 7.2.7, 8.1
RepFullBackupCycle, 8.1
required, 8.1
ReturnServiceAttribute, 8.1
SubscriberHosts, 8.1
SubscriberStoreAttribute, 8.1
SubscriberVIP, 8.1
TimesTenScriptTimeout, 8.1
VIPInterface, 8.1
VIPNetMask, 8.1
owner
replication scheme, 9.2.1

P

parallel replication, 10.2.3
attributes, 10.2.2
automatic, 10.2.3.1
AWT, 10.2.3.1
DDL statements, 10.2.3
DML statements, 10.2.3
partitions
in a table, 9.8.6
PassThrough connection attribute
and RETURN TWOSAFE, 3.6.3
and RETURN TWOSAFE BY REQUEST, 3.6.4
pause state
replication, 10.8
performance
logging attributes, 10.2.4.3
replication, 12.8
PL/SQL object
replicating in an active standby pair, 6.1.1
PL/SQL objects
replicating, 13.1.2
PORT assignment
active standby pair, 3.7.3
PORT attribute
in CREATE REPLICATION statement, 9.8, 9.8.4
ports
dynamic, 9.8.4
static, 9.8.4
privilege
create a replication scheme, 9.2
create an active standby pair, 3.3
propagation, 1.5.4
example, 9.10.4
PROPAGATOR clause
example, 9.10.4
propagator database
defined, 9.2.2
definition, 1.5.4

R

read-only cache group
replicating, 1.6.3
ReceiverThreads first connection attribute, 10.2.2
recovering failed databases, 9.1.1
recovery
return service, 3.7.1.2
RELAXED
table definition, 9.8, 9.8.6
RemoteSubscriberHosts Clusterware attribute, 8.1
RepBackupDir Clusterware attribute, 8.1
RepBackupPeriod Clusterware attribute, 8.1
RepDDL Clusterware attribute, 7.2.7, 8.1
example, 7.2.7
RepFullBackupCycle Clusterware attribute, 8.1
replicated tables
requirements, 9.3
replicating over a network, 1.5.4, 10.1
replication
across releases, 10.6
aging, 1.9
and ttAdmin, 10.7
bidirectional, 1.5.3
configuring timestamp comparison, 14.3
conflict reporting, 14.4
conflict resolution, 14.1
controlling, 10.8
described, 1.1
design decisions, 9.1
element, 1.1, 9.4
failed state, 10.8
FAILTHRESHOLD clause in CREATE ACTIVE STANDBY PAIR statement, 3.7.1.2.1
FAILTHRESHOLD clause in CREATE REPLICATION statement, 9.8.2.2.1
foreign keys, 1.8, 9.4.3
gauging performance, 12.5
host IP addresses, 10.1.3
monitoring, 12
of materialized views, 9.4.5
of sequences, 9.4.4
ON DELETE CASCADE clause, 1.8
parallelism, see parallel replication
pause state, 10.8
relaxed checking, 9.8.6
restart policy, 10.7
return receipt, 1.4.2
start state, 10.8
starting, 10.7
state, 10.8
stop state, 10.8
stopping, 10.7
tables with different definitions, 9.8.6
timestamp column maintenance, 14.3.2.1
unidirectional, 1.5.3
replication agent
defined, 1.3
starting, 2.2.4, 10.7
stopping, 2.2.6, 10.7
replication daemon, see "replication agent"
replication scheme
active standby pair, 1.5.1
applying to DSNs, 10.3
configuring, 9.1
defining, 9.2
examples, 9.10
for cache groups, 1.6
naming, 9.2.1
owner, 9.2.1
replication schemes
types, 1.5
replication stopped
return services policy, 9.8.2.2.1
ReplicationApplyOrdering data store attribute, 10.2.2, 10.2.3.1
ReplicationParallelism data store attribute, 10.2.2, 10.2.3.1
repschemes
ttIsql command, 12.4.1
resource
defined, 7.1
restart policy, 10.7
restrictions
active standby pairs, 3.1
RESUME RETURN clause
in CREATE ACTIVE STANDBY PAIR statement, 3.7.1.2.3
in CREATE REPLICATION statement, 9.8
RESUME RETURN policy, 9.8.2.2.3
active standby pair, 3.7.1.2.3
return receipt
definition, 1.4
RETURN RECEIPT BY REQUEST clause
example, 9.10.2
in CREATE ACTIVE STANDBY PAIR statement, 3.6.2
in CREATE REPLICATION statement, 9.7.2
RETURN RECEIPT clause
active standby pair, 3.6.1
example, 9.10.1, 9.10.2
in CREATE REPLICATION statement, 9.7.1
RETURN RECEIPT failure policy
report settings, 12.4.1
return receipt replication, 1.4.2
RETURN RECEIPT timeout errors, 1.4.2, 9.8
return service
active standby pair, 3.6
failure policy, 3.7.1.2
in CREATE REPLICATION statement, 9.7
performance and recovery tradeoffs, 9.1.2
recovery policy, 3.7.1.2
return service blocking
disabling, 9.8.2.2
return service failure policy, 9.8.2
return service timeout errors, 9.8.2
RETURN SERVICES clause
in CREATE ACTIVE STANDBY PAIR statement, 3.7.1.2.1
return services policy
when replication stopped, 9.8.2.2.1
RETURN SERVICES WHEN REPLICATION STOPPED clause
in CREATE REPLICATION statement, 9.8
RETURN TWOSAFE
Oracle Clusterware recovery, 7.5.5.6
return twosafe
bidirectional replication scheme, 9.7.4
definition, 1.4
RETURN TWOSAFE BY REQUEST clause
in CREATE ACTIVE STANDBY PAIR statement, 3.6.4
in CREATE REPLICATION statement, 9.7.3
RETURN TWOSAFE clause
in CREATE ACTIVE STANDBY PAIR statement, 3.6.3
in CREATE REPLICATION statement, 9.7.4
RETURN WAIT TIME clause
in CREATE REPLICATION statement, 9.8
ReturnServiceAttribute Clusterware attribute, 8.1
example, 7.5.5.6
REVOKE statement, 13.1
roles
reverse, 4.7
ROUTE clause
in CREATE ACTIVE STANDBY PAIR statement, 3.8
in replication scheme, 9.9

S

selective replication, 1.5.2
sequence
adding to replication scheme, 13.1.1
changing element name, 13.1.7
dropping from replication scheme, 13.1.4
SEQUENCE element, 9.4
sequences
replicating, 1.7, 9.4.4
replicating in an active standby pair, 3.12
split workload, 1.5.3.1
syntax example, 9.10.5
split workload replication scheme
recovery, 9.1.1
SQLGetInfo function, 9.8.5
checking database state, 3.7.4
monitoring subscriber, 11.2.1
standby database
change to active, 4.7
recover from failure, 4.5, 5.5
start state
replication, 10.8
starting the replication agent, 2.2.4, 10.7
status
cluster, 7.7
Oracle Clusterware, 7.7
stop state
replication, 10.8
stopping the replication agent, 2.2.6, 10.7
STORE attributes
in CREATE ACTIVE STANDBY PAIR statement, 3.7
in CREATE REPLICATION statement, 9.8
subscriber
adding to replication scheme, 13.1.5
dropping from replication scheme, 13.1.6
SUBSCRIBER clause
and return service, 9.7
in CREATE ACTIVE STANDBY PAIR statement, 3.4
subscriber database
defined, 9.2.2
subscriber failure, 4.6, 11.2.1
active standby pair with cache groups, 5.6
SubscriberHosts Clusterware attribute, 8.1
subscribers
maximum number, 9.10.2
SubscriberStoreAttribute Clusterware attribute, 8.1
SubscriberVIP Clusterware attribute, 8.1

T

table
adding to replication scheme, 13.1.1
changing element name, 13.1.7
dropping from replication scheme, 13.1.4
excluding from database, 13.1.3.2
including in database, 13.1.3.1
partitioned, 9.8.6
relaxed checking, 9.8.6
TABLE DEFINITION CHECKING clause
examples, 9.8.6
in CREATE REPLICATION statement, 9.8
table definitions, 9.8
TABLE element, 9.4
table element, 9.4.2
table requirements
active standby pairs, 3.5
replication schemes, 9.3
tables
altering and replication, 13.2
threshold log setting, 9.8.5, 10.2.4.3
active standby pair, 3.7.4
timeout
return service for an active standby pair, 3.7.1
TIMEOUT clause
in CREATE REPLICATION statement, 9.8
timestamp
from operating system, 14.2
timestamp column maintenance
by user, 14.3.2.2
system, 14.3.2.1
timestamp comparison
configuring, 14.3
local transactions, 14.2.1
TimesTen cluster agent, 7.3.2, 7.3.4
TimesTen daemon monitor, 7.3.4
TimesTenScriptTimeout Clusterware attribute, 8.1
track
parallel replication, 10.2.3.2
TRANSMIT DURABLE clause
in CREATE REPLICATION statement, 9.6
TRANSMIT NONDURABLE clause
and recovery, 11.4
in CREATE REPLICATION statement, 9.6
trapped transaction, 11.2.3.1
TRUNCATE TABLE statement, 13.3
truncating a replicated table, 13.3
TT_VARCHAR columns
size limit, 3.5, 9.3
ttAdmin utility
-ramPolicy option, 11.3.1, 11.3.2
-repPolicy option, 10.7
-repStart option, 10.7
-repStop option, 10.7
ttCkpt built-in procedure, 10.2.4.2
ttCkptBlocking built-in procedure, 10.2.4.2
ttCRSActiveService process, 7.3.10
ttCRSAgent process, 7.3.4
ttcrsagent.options file, 7.3.2
ttCRSDaemon process, 7.3.4
ttCRSMaster process, 7.3.10
ttCRSsubservice process, 7.3.10
ttCWAdmin
-beginAlterSchema option, 7.4.3
ttCWAdmin -beginAlterSchema command, 7.4.3
ttCWAdmin -endAlterSchema command, 7.4.3
ttCWAdmin utility, 7.1, 7.2.1
-endAlterSchema option, 7.4.3
ocrConfig option, 7.3.3
-relocate option, 7.6.13
required privileges, 7.1.2
-status option, 7.7
-switch option, 7.6.12
ttcwerrors.log file, 7.7.2
ttcwmsg.log file, 7.7.2
ttDestroy utility, 11.3.1
ttDestroyDataStore built-in procedure, 11.3.2
ttDurableCommit built-in procedure, 3.7.1.1, 9.8.2.1
ttIsql utility
-f option, 10.3
ttRepAdmin utility
-bookmark option, 12.5.1
-duplicate option, 9.6, 9.8.6, 10.4, 11.2.3, 11.3.1, 11.4
privileges for -duplicate options, 4.2
-ramLoad option, 11.3.1
-receiver -list options, 12.3.1
-self -list options, 12.2.1
-showconfig option, 12.4.2
-state option, 10.8
ttRepDuplicate built-in procedure, 11.3.2
ttRepDuplicateEx C function
privileges, 4.2
ttReplicationStatus built-in procedure, 12.3.2
ttRepReturnTransitionTrap SNMP trap, 9.8.2.2.2
ttRepStart built-in procedure, 10.7, 13.1
ttRepStop built-in procedure, 10.7, 13.1
ttRepSubscriberStateSet built-in procedure, 10.8
ttRepSubscriberWait built-in procedure
replicating sequences, 3.12, 9.4.4
ttRepSyncGet built-in procedure, 3.6.2, 3.6.4, 9.7.3
ttRepSyncSet built-in procedure, 3.6.2, 3.6.4, 3.7, 3.7.1, 3.7.1.2.5, 9.8.1, 9.8.2
and RETURN RECEIPT BY REQUEST, 9.7.2
and RETURN TWOSAFE BY REQUEST, 9.7.2, 9.7.3
different return services, 9.10.2
local action policy, 9.8.2.2.5
overriding LOCAL COMMIT ACTION, 9.8
overriding RETURN WAIT TIME, 9.8
setting return service timeout, 9.8.1
ttRepSyncSubscriberStatus built-in procedure, 12.7
DISABLE RETURN clause, 9.8.2.2.2
ttRepXactStatus built-in procedure, 3.6.1, 9.7.1, 9.8.2.2.5, 12.7
ttRepXactTokenGet built-in procedure, 12.7
TypeMode data store attribute, 10.2.2

U

unidirectional replication, 1.5.3
update conflicts
example, 9.10.6
syntax example, 9.10.6

V

VARBINARY columns
size limit, 3.5, 9.3
VARCHAR2 columns
size limit, 3.5, 9.3
views
active standby pair, 3.11
VIPInterface Clusterware attribute, 8.1
VIPNetMask Clusterware attribute, 8.1
virtual IP address
Oracle Clusterware, 7.1

W

WINS server
UNIX, 10.1.3.2
Windows, 10.1.3.3

14 Resolving Replication Conflicts

This chapter includes these topics:

How replication conflicts occur

Tables in databases configured in a bidirectional replication scheme may be subject to replication conflicts. A replication conflict occurs when applications on bidirectionally replicated databases initiate an update, insert or delete operation on the same data item at the same time. If no special steps are taken, each database can end up in disagreement with the last update made by the other database.

These types of replication conflicts can occur:

See "Reporting conflicts" for example reports generated by TimesTen upon detecting update, uniqueness, and delete conflicts.


Note:

TimesTen does not detect conflicts involving TRUNCATE TABLE statements.

Update and insert conflicts

Figure 14-1 shows the results from an update conflict, which would occur for the value of X under the following circumstances:

Step 1, initial condition: On database A, X is 1. On database B, X is 1.
Step 2, the application on each database updates X simultaneously: Database A sets X=2. Database B sets X=100.
Step 3, the replication agent on each database sends its update to the other database: Database A replicates X to database B. Database B replicates X to database A.
Step 4, each database now has the other's update: On database A, replication says to set X=100. On database B, replication says to set X=2.


Note:

Uniqueness conflicts resulting from conflicting inserts follow a pattern similar to update conflicts, but the conflict involves the whole row.

Figure 14-1 Update conflict


If update or insert conflicts remain unchecked, the master and subscriber databases fall out of synchronization with each other. It may be difficult or even impossible to determine which database is correct.

With update conflicts, it is possible for a transaction to update many data items but have a conflict on a few of them. Most of the transaction's effects survive the conflict, with only a few being overwritten by replication. If you decide to ignore such conflicts, the transactional consistency of the application data is compromised.

If an update conflict occurs, and if the updated columns for each version of the row are different, then the non-primary key fields for the row may diverge between the replicated tables.


Note:

Within a single database, update conflicts are prevented by the locking protocol: only one transaction at a time can update a specific row in the database. However, update conflicts can occur in replicated systems due to the ability of each database to operate independently.

TimesTen replication uses timestamp-based conflict resolution to cope with simultaneous updates or inserts. Through the use of timestamp-based conflict resolution, you may be able to keep the replicated databases synchronized and transactionally consistent.

Delete/update conflicts

Figure 14-2 shows the results from a delete/update conflict, which would occur for Row 4 under the following circumstances:

Step 1, initial condition: Row 4 exists on database A and on database B.
Step 2, the applications issue a conflicting update and delete on Row 4 simultaneously: Database A updates Row 4. Database B deletes Row 4.
Step 3, the replication agent on each database sends the delete or update to the other: Database A replicates the update to database B. Database B replicates the delete to database A.
Step 4, each database now has the delete or update from the other database: On database A, replication says to delete Row 4. On database B, replication says to update Row 4.

Figure 14-2 Delete/update conflict


Although TimesTen can detect and report delete/update conflicts, it cannot resolve them. Under these circumstances, the master and subscriber databases fall out of synchronization with each other.

Although TimesTen cannot ensure synchronization between databases following such a conflict, it does ensure that the most recent transaction is applied to each database. If the timestamp for the delete is more recent than that for the update, the row is deleted on each database. If the timestamp for the update is more recent than that for the delete, the row is updated on the local database. However, because the row was deleted on the other database, the replicated update is discarded. See "Reporting delete/update conflicts" for example reports.


Note:

There is an exception to this behavior when timestamp comparison is enabled on a table using UPDATE BY USER. See "Enabling user timestamp column maintenance" for details.

Using a timestamp to resolve conflicts

For replicated tables that are subject to conflicts, create the table with a special column of type BINARY(8) to hold a timestamp value that indicates the time the row was inserted or last updated. You can then configure TimesTen to automatically insert a timestamp value into this column each time a particular row is changed, as described in "Configuring timestamp comparison".


Note:

TimesTen does not support conflict resolution between cached tables in a cache group and an Oracle database.

How replication computes the timestamp column depends on your system:

TimesTen uses the time value returned by the system at the time the transaction performs each update as the record's insert or update time. Therefore, rows that are inserted or updated by a single transaction may receive different timestamp values.

When applying an update received from a master, the replication agent at the subscriber database performs timestamp resolution in the following manner:

Timestamp comparisons for local updates

To maintain synchronization of tables between replicated sites, TimesTen also performs timestamp comparisons for updates performed by local transactions. If an updated table is declared to have automatic timestamp maintenance, then updates to records that have timestamps exceeding the current system time are prohibited.

Normally, clocks on replicated systems are synchronized sufficiently to ensure that a locally updated record is given a later timestamp than that in the same record stored on the other systems. Perfect synchronization may not be possible or affordable, but by protecting record timestamps from "going backwards," replication can help to ensure that the tables on replicated systems stay synchronized.

Configuring timestamp comparison

To configure timestamp comparison:

Including a timestamp column in replicated tables

To use timestamp comparison on replicated tables, you must specify a nullable column of type BINARY(8) to hold the timestamp value. The timestamp column must be created along with the table as part of a CREATE TABLE statement. It cannot be added later as part of an ALTER TABLE statement. In addition, the timestamp column cannot be part of a primary key or index. Example 14-1 shows that the rep.tab table contains a column named tstamp of type BINARY(8) to hold the timestamp value.

Example 14-1 Including a timestamp column when creating a table

CREATE TABLE rep.tab (col1 NUMBER NOT NULL,
                      col2 NUMBER NOT NULL,
                      tstamp BINARY(8),
                      PRIMARY KEY (col1));

If no timestamp column is defined in the replicated table, timestamp comparison cannot be performed to detect conflicts. Instead, at each site, the value of a row in the database reflects the most recent update applied to the row, either by local applications or by replication.

Configuring the CHECK CONFLICTS clause

When configuring your replication scheme, you can set up timestamp comparison for a TABLE element by including a CHECK CONFLICTS clause in the table's element description in the CREATE REPLICATION statement.


Note:

A CHECK CONFLICTS clause cannot be specified for DATASTORE elements.

The syntax of the CREATE REPLICATION statement is described in Oracle TimesTen In-Memory Database SQL Reference. Example 14-2 shows how CHECK CONFLICTS might be used when configuring your replication scheme.

Example 14-2 Automatic timestamp comparison

In this example, we establish automatic timestamp comparison for the bidirectional replication scheme defined in Example 9-29. The DSNs, west_dsn and east_dsn, define the westds and eastds databases that replicate the repl.accounts table, which contains the tstamp timestamp column. In the event of a comparison failure, discard the transaction that includes an update with the older timestamp.

CREATE REPLICATION r1
ELEMENT elem_accounts_1 TABLE accounts
  CHECK CONFLICTS BY ROW TIMESTAMP
    COLUMN tstamp
    UPDATE BY SYSTEM
    ON EXCEPTION ROLLBACK WORK
  MASTER westds ON "westcoast"
  SUBSCRIBER eastds ON "eastcoast"
ELEMENT elem_accounts_2 TABLE accounts
  CHECK CONFLICTS BY ROW TIMESTAMP
    COLUMN tstamp
    UPDATE BY SYSTEM
    ON EXCEPTION ROLLBACK WORK
  MASTER eastds ON "eastcoast"
  SUBSCRIBER westds ON "westcoast";

When bidirectionally replicating databases with conflict resolution, the replicated tables on each database must be set with the same CHECK CONFLICTS attributes. If you need to disable or change the CHECK CONFLICTS settings for the replicated tables, use the ALTER REPLICATION statement described in "Eliminating conflict detection" and apply it to each replicated database.

Enabling system timestamp column maintenance

Enable system timestamp comparison by using:

CHECK CONFLICTS BY ROW TIMESTAMP
  COLUMN ColumnName
  UPDATE BY SYSTEM

TimesTen automatically maintains the value of the timestamp column using the current time returned by the underlying operating system. This is the default setting.

When you specify UPDATE BY SYSTEM, TimesTen:

  • Initializes the timestamp column to the current time when a new record is inserted into the table.

  • Updates the timestamp column to the current time when an existing record is modified.

During initial load, the timestamp column values should be left NULL, and applications should not give a value for the timestamp column when inserting or updating a row.

When you use the ttBulkCp or ttMigrate utility to save TimesTen tables, the saved rows maintain their current timestamp values. When the table is subsequently copied or migrated back into TimesTen, the timestamp column retains the values it had when the copy or migration file was created.


Note:

If you configure TimesTen for timestamp comparison after using the ttBulkCp or ttMigrate to copy or migrate your tables, the initial values of the timestamp columns remain NULL, which is considered by replication to be the earliest possible time.

Enabling user timestamp column maintenance

Enable user timestamp column maintenance on a table by using:

CHECK CONFLICTS BY ROW TIMESTAMP
  COLUMN ColumnName
  UPDATE BY USER

When you configure UPDATE BY USER, your application is responsible for maintaining timestamp values. The timestamp values used by your application can be arbitrary, but the time values cannot decrease. In cases where the user explicitly sets or updates the timestamp column, the application-provided value is used instead of the current time.

Replicated delete operations always carry a system-generated timestamp. If replication has been configured with UPDATE BY USER and an update/delete conflict occurs, the conflict is resolved by comparing the two timestamp values and the operation with the larger timestamp wins. If the basis for the user timestamp varies from that of the system-generated timestamp, the results may not be as expected. Therefore, if you expect delete conflicts to occur, use system-generated timestamps.
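For example, with the rep.tab table from Example 14-1 configured with UPDATE BY USER, the application supplies the 8-byte timestamp value itself and must never supply a value smaller than the one already stored. This is a sketch only; the hexadecimal literals and their syntax are illustrative:

INSERT INTO rep.tab (col1, col2, tstamp)
VALUES (1, 42, 0x0000000000000001);

UPDATE rep.tab
SET col2 = 43, tstamp = 0x0000000000000002
WHERE col1 = 1;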

Reporting conflicts

TimesTen conflict checking may be configured to report conflicts to a human-readable plain text file, or to an XML file for use by user applications. This section includes the topics:

Reporting conflicts to a text file

To configure replication to report conflicts to a human-readable text file (the default), use:

CHECK CONFLICTS BY ROW TIMESTAMP
  COLUMN ColumnName
  ...
  REPORT TO 'FileName' FORMAT STANDARD

An entry is added to the report file FileName that describes each conflict. The phrase FORMAT STANDARD is optional and may be omitted, as the standard report format is the default.
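For example, the element description from Example 14-2 can be extended to write a standard-format report; the file name is illustrative:

CHECK CONFLICTS BY ROW TIMESTAMP
  COLUMN tstamp
  UPDATE BY SYSTEM
  ON EXCEPTION ROLLBACK WORK
  REPORT TO '/tmp/conflicts.rpt' FORMAT STANDARD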

Each failed operation logged in the report consists of an entry that starts with a header, followed by information specific to the conflicting operation. Each entry is separated by a number of blank lines in the report.

The header contains:

  • The time the conflict was discovered.

  • The databases that sent and received the conflicting update.

  • The table in which the conflict occurred.

The header has the following format:

Conflict detected at time on date
Datastore : subscriber_database
Transmitting name : master_database
Table : username.tablename

For example:

Conflict detected at 20:08:37 on 05-17-2004
Datastore : /tmp/subscriberds
Transmitting name : MASTERDS
Table : USER1.T1

Following the header is the information specific to the conflict. Data values are shown in ASCII format. Binary data is translated into hexadecimal before display, and floating-point values are shown with appropriate precision and scale.

For further description of the conflict report file, see "Reporting uniqueness conflicts", "Reporting update conflicts" and "Reporting delete/update conflicts".

Reporting conflicts to an XML file

To configure replication to report conflicts to an XML file, use:

CHECK CONFLICTS BY ROW TIMESTAMP
  COLUMN ColumnName
  ...
  REPORT TO 'FileName' FORMAT XML

Replication uses the base file name FileName to create two files. FileName.xml is a header file that contains the XML Document Type Definition for the conflict report structure, as well as the root element, defined as <ttrepconflictreport>. Inside the root element is an XML directive to include the file FileName.include, and it is to this file that all conflicts are written. Each conflict is written as a single element of type <conflict>.

For further description of the conflict report file XML elements, see "The conflict report XML Document Type Definition".


Note:

When performing log maintenance on an XML conflict report file, only the file FileName.include should be truncated or moved. For conflict reporting to continue to function correctly, the file FileName.xml should be left untouched.

Reporting uniqueness conflicts

A uniqueness conflict record is issued when a replicated insert fails because of a conflict.

A uniqueness conflict record in the report file contains:

  • The timestamp and values for the existing tuple, which is the tuple that the conflicting tuple is in conflict with.

  • The timestamp and values for the conflicting insert tuple, which is the tuple of the insert that failed.

  • The key column values used to identify the record.

  • The action that was taken when the conflict was detected (discard the single row insert or the entire transaction)


    Note:

    If the transaction was discarded, the contents of the entire transaction are logged in the report file.

The format of a uniqueness conflict record is:

Conflicting insert tuple timestamp : <timestamp in binary format>
Existing tuple timestamp : <timestamp in binary format>
The existing tuple :
<<column value> [,<column value>. ..]>
The conflicting tuple :
<<column value> [,<column value> ...]>
The key columns for the tuple:
<<key column name> : <key column value>>
Transaction containing this insert skipped
Failed transaction:
Insert into table <user>.<table> <<columnvalue> [,<columnvalue>...]>
End of failed transaction

Example 14-3 shows the output from a uniqueness conflict on the row identified by the primary key value, '2'. The older insert replicated from subscriberds conflicts with the newer insert in masterds, so the replicated insert is discarded.

Example 14-3 Output from uniqueness conflict

Conflict detected at 13:36:00 on 03-25-2002
Datastore : /tmp/masterds
Transmitting name : SUBSCRIBERDS
Table : TAB
Conflicting insert tuple timestamp : 3C9F983D00031128
Existing tuple timestamp : 3C9F983E000251C0
The existing tuple :
< 2, 2, 3C9F983E000251C0>
The conflicting tuple :
< 2, 100, 3C9F983D00031128>
The key columns for the tuple:
<COL1 : 2>
Transaction containing this insert skipped
Failed transaction:
Insert into table TAB < 2, 100, 3C9F983D00031128>
End of failed transaction
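
A report like Example 14-3 can result from a sequence such as the following hypothetical sketch. It assumes a bidirectional scheme similar to Example 14-7, in which tab has primary key col1 and a system-maintained timestamp column (UPDATE BY SYSTEM), so the applications do not set tstamp themselves:

-- On subscriberds (assigned the earlier system timestamp):
INSERT INTO tab (col1, col2) VALUES (2, 100);
COMMIT;

-- On masterds, a moment later (a newer timestamp), before the
-- insert on subscriberds has replicated:
INSERT INTO tab (col1, col2) VALUES (2, 2);
COMMIT;

-- When the insert from subscriberds arrives at masterds, its row
-- timestamp is older than that of the existing row, so the
-- replicated insert is discarded and reported as shown above.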

Reporting update conflicts

An update conflict record is issued when a replicated update fails because of a conflict. This record reports:

  • The timestamp and values for the existing tuple, which is the tuple that the conflicting tuple is in conflict with.

  • The timestamp and values for the conflicting update tuple, which is the tuple of the update that failed.

  • The old values, which are the original values of the conflicting tuple before the failed update.

  • The key column values used to identify the record.

  • The action that was taken when the conflict was detected (discard the single row update or the entire transaction).


    Note:

    If the transaction was discarded, the contents of the entire transaction are logged in the report file.

The format of an update conflict record is:

Conflicting update tuple timestamp : <timestamp in binary format>
Existing tuple timestamp : <timestamp in binary format>
The existing tuple :
<<column value> [,<column value> ...]>
The conflicting update tuple :
TSTAMP :<timestamp> :<<column value> [,<column value> ...]>
The old values in the conflicting update:
TSTAMP :<timestamp> :<<column value> [,<column value> ...]>
The key columns for the tuple:
<<key column name> : <key column value>>
Transaction containing this update skipped
Failed transaction:
Update table <user>.<table> with keys:
<<key column name> : <key column value>>
New tuple value:
<TSTAMP :<timestamp> :<<column value> [,<column value> ...]>
End of failed transaction

Example 14-4 shows the output from an update conflict on the col2 value in the row identified by the primary key value, '6'. The older update replicated from the masterds database conflicts with the newer update in subscriberds, so the replicated update is discarded.

Example 14-4 Output from an update conflict

Conflict detected at 15:03:18 on 03-25-2002
Datastore : /tmp/subscriberds
Transmitting name : MASTERDS
Table : TAB
Conflicting update tuple timestamp : 3C9FACB6000612B0
Existing tuple timestamp : 3C9FACB600085CA0
The existing tuple :
< 6, 99, 3C9FACB600085CA0>
The conflicting update tuple :
<TSTAMP :3C9FACB6000612B0, COL2 : 50>
The old values in the conflicting update:
<TSTAMP :3C9FAC85000E01F0, COL2 : 2>
The key columns for the tuple:
<COL1 : 6>
Transaction containing this update skipped
Failed transaction:
Update table TAB with keys:
<COL1 : 6>
New tuple value: <TSTAMP :3C9FACB6000612B0, COL2 : 50>
End of failed transaction
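
The following hypothetical sketch shows a sequence that would produce a report like Example 14-4, under the same assumptions as the uniqueness example. Both databases start with the row (6, 2):

-- On masterds (assigned the earlier system timestamp):
UPDATE tab SET col2 = 50 WHERE col1 = 6;
COMMIT;

-- On subscriberds, a moment later (a newer timestamp):
UPDATE tab SET col2 = 99 WHERE col1 = 6;
COMMIT;

-- When the update from masterds arrives at subscriberds, its row
-- timestamp is older than that of the existing row, so the
-- replicated update is discarded and reported as shown above.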

Reporting delete/update conflicts

A delete/update conflict record is issued when an update is attempted on a row that has more recently been deleted. This record reports:

  • The timestamp and values for the conflicting update tuple or conflicting delete tuple, whichever tuple failed.

  • If the delete tuple failed, the report also includes the timestamp and values for the existing tuple, which is the surviving update tuple with which the delete tuple was in conflict.

  • The key column values used to identify the record.

  • The action that was taken when the conflict was detected (discard the single row update or the entire transaction).


    Note:

    If the transaction was discarded, the contents of the entire transaction are logged in the report file. TimesTen cannot detect delete/insert conflicts.

The format of a record that indicates a delete conflict with a failed update is:

Conflicting update tuple timestamp : <timestamp in binary format>
The conflicting update tuple :
TSTAMP :<timestamp> :<<column value> [,<column value> ...]>
The tuple does not exist
Transaction containing this update skipped
Failed transaction:
Update table <user>.<table> with keys:
<<key column name> : <key column value>>
New tuple value:
<TSTAMP :<timestamp> :<<column value> [,<column value> ...]>
End of failed transaction

Example 14-5 shows the output from a delete/update conflict caused by an update on a row that has more recently been deleted. Because there is no row to update, the update from SUBSCRIBERDS is discarded.

Example 14-5 Output from a delete/update conflict: delete is more recent

Conflict detected at 15:27:05 on 03-25-2002
Datastore : /tmp/masterds
Transmitting name : SUBSCRIBERDS
Table : TAB
Conflicting update tuple timestamp : 3C9FB2460000AFC8
The conflicting update tuple :
<TSTAMP :3C9FB2460000AFC8, COL2 : 99>
The tuple does not exist
Transaction containing this update skipped
Failed transaction:
Update table TAB with keys:
<COL1 : 2>
New tuple value: <TSTAMP :3C9FB2460000AFC8,
COL2 : 99>
End of failed transaction

The format of a record that indicates an update conflict with a failed delete is:

Conflicting binary delete tuple timestamp : <timestamp in binary format>
Existing binary tuple timestamp : <timestamp in binary format>
The existing tuple :
<<column value> [,<column value> ...]>
The key columns for the tuple:
<<key column name> : <key column value>>
Transaction containing this delete skipped
Failed transaction:
Delete table <user>.<table> with keys:
<<key column name> : <key column value>>
End of failed transaction

Example 14-6 shows the output from a delete/update conflict caused by a delete on a row that has more recently been updated. Because the row was updated more recently than the delete, the delete from masterds is discarded.

Example 14-6 Output from a delete/update conflict: update is more recent

Conflict detected at 15:27:20 on 03-25-2002
Datastore : /tmp/subscriberds
Transmitting name : MASTERDS
Table : TAB
Conflicting binary delete tuple timestamp : 3C9FB258000708C8
Existing binary tuple timestamp : 3C9FB25800086858
The existing tuple :
< 147, 99, 3C9FB25800086858>
The key columns for the tuple:
<COL1 : 147>
Transaction containing this delete skipped
Failed transaction:
Delete table TAB with keys:
<COL1 : 147>
End of failed transaction
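
A report like Example 14-6 can result from a sequence such as the following hypothetical sketch, under the same assumptions as the earlier examples:

-- On masterds (assigned the earlier system timestamp):
DELETE FROM tab WHERE col1 = 147;
COMMIT;

-- On subscriberds, a moment later (a newer timestamp):
UPDATE tab SET col2 = 99 WHERE col1 = 147;
COMMIT;

-- When the delete from masterds arrives at subscriberds, the
-- existing row's timestamp is newer than the delete's, so the
-- replicated delete is discarded and reported as shown above.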

Suspending and resuming the reporting of conflicts

Provided your applications are well-behaved, replication usually encounters and reports only sporadic conflicts. Under heavy load, however, a flurry of conflicts can occur in a short amount of time, particularly while applications are still in development and such errors are expected. This can degrade the performance of the host because of excessive writes to the conflict report file and the large number of SNMP traps that can be generated.

To avoid overwhelming a host with replication conflicts, you can configure replication to suspend conflict reporting when the number of conflicts per second has exceeded a user-specified threshold. Conflict reporting may also be configured to resume once the conflicts per second have fallen below a user-specified threshold.

An application can detect that conflict reporting has been suspended or resumed by catching the SNMP traps ttRepConflictReportStoppingTrap and ttRepConflictReportStartingTrap, respectively. See "Diagnostics through SNMP Traps" in Oracle TimesTen In-Memory Database Error Messages and SNMP Traps for more information.

To configure conflict reporting to be suspended and resumed based on the number of conflicts per second, use the CONFLICT REPORTING SUSPEND AT and CONFLICT REPORTING RESUME AT attributes for the STORE clause of a replication scheme.

If the replication agent is stopped while conflict reporting is suspended, conflict reporting is re-enabled when the replication agent is restarted. The SNMP trap ttRepConflictReportStartingTrap is not sent when this occurs. This means that an application that monitors the conflict report suspension traps must also monitor the traps for the replication agent stopping and starting.

If you set CONFLICT REPORTING RESUME AT to 0, reporting does not resume until the replication agent is restarted.

Example 14-7 demonstrates the configuration of a replication scheme in which conflict reporting is suspended when the number of conflicts exceeds 20 per second and resumes when the number of conflicts drops below 10 per second.

Example 14-7 Configuring conflict reporting thresholds

CREATE REPLICATION r1
ELEMENT elem_accounts_1 TABLE accounts
      CHECK CONFLICTS BY ROW TIMESTAMP
        COLUMN tstamp
        UPDATE BY SYSTEM
        ON EXCEPTION ROLLBACK WORK
        REPORT TO 'conflicts' FORMAT XML
  MASTER westds ON "westcoast"
  SUBSCRIBER eastds ON "eastcoast"
ELEMENT elem_accounts_2 TABLE accounts
      CHECK CONFLICTS BY ROW TIMESTAMP
        COLUMN tstamp
        UPDATE BY SYSTEM
        ON EXCEPTION ROLLBACK WORK
        REPORT TO 'conflicts' FORMAT XML
  MASTER eastds ON "eastcoast"
  SUBSCRIBER westds ON "westcoast"
STORE westds ON "westcoast"
  CONFLICT REPORTING SUSPEND AT 20
  CONFLICT REPORTING RESUME AT 10
STORE eastds ON "eastcoast"
  CONFLICT REPORTING SUSPEND AT 20
  CONFLICT REPORTING RESUME AT 10;

The conflict report XML Document Type Definition

The TimesTen XML format conflict report is based on the XML 1.0 specification (http://www.w3.org/TR/REC-xml). The XML Document Type Definition (DTD) for the replication conflict report is a set of markup declarations that describes the elements and structure of a valid XML file containing a log of replication conflicts. This DTD can be found in the XML header file, identified by the suffix .xml, that is created when replication is configured to report conflicts to an XML file. User applications that understand XML use the DTD to parse the rest of the XML replication conflict report. For more information on reading and understanding XML Document Type Definitions, see http://www.w3.org/TR/REC-xml.

<?xml version="1.0"?>
<!DOCTYPE ttreperrorlog [
    <!ELEMENT ttrepconflictreport(conflict*) >
    <!ELEMENT repconflict        (header, conflict, scope, failedtransaction) > 
    <!ELEMENT header             (time, datastore, transmitter, table) >
    <!ELEMENT time               (hour, min, sec, year, month, day) >
    <!ELEMENT hour               (#PCDATA) >
    <!ELEMENT min                (#PCDATA) >
    <!ELEMENT sec                (#PCDATA) >
    <!ELEMENT year               (#PCDATA) >
    <!ELEMENT month              (#PCDATA) >
    <!ELEMENT day                (#PCDATA) >
    <!ELEMENT datastore          (#PCDATA) >
    <!ELEMENT transmitter        (#PCDATA) >
    <!ELEMENT table              (tableowner, tablename) >
    <!ELEMENT tableowner          (#PCDATA) >
    <!ELEMENT tablename          (#PCDATA) >
    <!ELEMENT scope              (#PCDATA) >
    <!ELEMENT failedtransaction  ((insert | update | delete)+) >
    <!ELEMENT insert             (sql) >
    <!ELEMENT update             (sql, keyinfo, newtuple) >
    <!ELEMENT delete             (sql, keyinfo) >
    <!ELEMENT sql                (#PCDATA) >
    <!ELEMENT keyinfo            (column+) >
    <!ELEMENT newtuple           (column+) >
    <!ELEMENT column             (columnname, columntype, columnvalue) >
    <!ATTLIST column               
        pos CDATA #REQUIRED >
    <!ELEMENT columnname         (#PCDATA) >
    <!ELEMENT columnvalue        (#PCDATA) >
    <!ATTLIST columnvalue 
        isnull (true | false) "false">
    <!ELEMENT existingtuple       (column+) >
    <!ELEMENT conflictingtuple    (column+) >
    <!ELEMENT conflictingtimestamp(#PCDATA) >
    <!ELEMENT existingtimestamp   (#PCDATA) >
    <!ELEMENT oldtuple            (column+) >
    <!ELEMENT conflict            (conflictingtimestamp, existingtimestamp*,
                                   existingtuple*, conflictingtuple*, 
                                   oldtuple*, keyinfo*) > 
<!ATTLIST conflict
    type (insert | update | deletedupdate | updatedeleted) #REQUIRED>
<!ENTITY logFile                  SYSTEM "Filename.include">
]>
<ttrepconflictreport>
  &logFile;
</ttrepconflictreport>

The main body of the document

The .xml file for the XML replication conflict report is merely a header: it contains the XML Document Type Definition that describes the report format and links to a file with the suffix .include. This include file is the main body of the report, containing each replication conflict as a separate element. There are three possible element structures, one for each conflict type: uniqueness (insert), update, and delete/update.

The uniqueness conflict element

A uniqueness conflict occurs when a replicated insert fails because a row with the same key column value was inserted more recently. See "Reporting uniqueness conflicts" for a description of the information that is written to the conflict report for a uniqueness conflict.

Example 14-8 illustrates the format of a uniqueness conflict XML element, using the values from Example 14-3.

Example 14-8 Uniqueness conflict element

<repconflict>
    <header>
     <time>
          <hour>13</hour>
          <min>36</min>
          <sec>00</sec>
          <year>2002</year>
          <month>03</month>
          <day>25</day>
      </time>
      <datastore>/tmp/masterds</datastore>
      <transmitter>SUBSCRIBERDS</transmitter>
      <table>
          <tableowner>REPL</tableowner>
          <tablename>TAB</tablename>
     </table>
   </header>
   <conflict type="insert">
     <conflictingtimestamp>3C9F983D00031128</conflictingtimestamp>
     <existingtimestamp>3C9F983E000251C0</existingtimestamp>
     <existingtuple>
         <column pos="1">
           <columnname>COL1</columnname>
           <columntype>NUMBER(38)</columntype>
           <columnvalue>2</columnvalue>
         </column>
         <column pos="2">
           <columnname>COL2</columnname>
           <columntype>NUMBER(38)</columntype>
           <columnvalue>2</columnvalue>
         </column>
         <column pos="3">
           <columnname>TSTAMP</columnname>
           <columntype>BINARY(8)</columntype>
           <columnvalue>3C9F983E000251C0</columnvalue>
         </column>
      </existingtuple>
      <conflictingtuple>
         <column pos="1">
           <columnname>COL1</columnname>
           <columntype>NUMBER(38)</columntype>
           <columnvalue>2</columnvalue>
        </column>
        <column pos="2">
           <columnname>COL2</columnname>
           <columntype>NUMBER(38)</columntype>
           <columnvalue>100</columnvalue>
        </column>
        <column pos="3">
           <columnname>TSTAMP</columnname>
           <columntype>BINARY(8)</columntype>
           <columnvalue>3C9F983D00031128</columnvalue>
        </column>
     </conflictingtuple>
     <keyinfo>
        <column pos="1">
          <columnname>COL1</columnname>
          <columntype>NUMBER(38)</columntype>
          <columnvalue>2</columnvalue>
        </column>
    </keyinfo>
 </conflict>
 <scope>TRANSACTION</scope>
 <failedtransaction>
   <insert>
      <sql>Insert into table TAB </sql>
      <column pos="1">
         <columnname>COL1</columnname>
         <columntype>NUMBER(38)</columntype>
         <columnvalue>2</columnvalue>
      </column>
      <column pos="2">
         <columnname>COL2</columnname>
         <columntype>NUMBER(38)</columntype>
        <columnvalue>100</columnvalue>
      </column>
      <column pos="3">
         <columnname>TSTAMP</columnname>
          <columntype>BINARY(8)</columntype>
         <columnvalue>3C9F983D00031128</columnvalue>
      </column>
    </insert>
  </failedtransaction>
</repconflict>

The update conflict element

An update conflict occurs when a replicated update fails because the row was updated more recently. See "Reporting update conflicts" for a description of the information that is written to the conflict report for an update conflict.

Example 14-9 illustrates the format of an update conflict XML element, using the values from Example 14-4.

Example 14-9 Update conflict element

<repconflict>
    <header>
       <time>
          <hour>15</hour>
          <min>03</min>
          <sec>18</sec>
          <year>2002</year>
          <month>03</month>
          <day>25</day>
      </time>
      <datastore>/tmp/subscriberds</datastore>
      <transmitter>MASTERDS</transmitter>
      <table>
         <tableowner>REPL</tableowner>
         <tablename>TAB</tablename>
      </table>
   </header>
   <conflict type="update">
      <conflictingtimestamp>
          3C9FACB6000612B0
      </conflictingtimestamp>
      <existingtimestamp>3C9FACB600085CA0</existingtimestamp>
      <existingtuple>
        <column pos="1">
          <columnname>COL1</columnname>
          <columntype>NUMBER(38)</columntype>
          <columnvalue>6</columnvalue>
        </column>
        <column pos="2">
          <columnname>COL2</columnname>
          <columntype>NUMBER(38)</columntype>
          <columnvalue>99</columnvalue>
        </column>
        <column pos="3">
          <columnname>TSTAMP</columnname>
          <columntype>BINARY(8)</columntype>
          <columnvalue>3C9FACB600085CA0</columnvalue>
        </column>
     </existingtuple>
     <conflictingtuple>
        <column pos="3">
          <columnname>TSTAMP</columnname>
          <columntype>BINARY(8)</columntype>
          <columnvalue>3C9FACB6000612B0</columnvalue>
        </column>
        <column pos="2">
          <columnname>COL2</columnname>
          <columntype>NUMBER(38)</columntype>
          <columnvalue>50</columnvalue>
        </column>
    </conflictingtuple>
    <oldtuple>
        <column pos="3">
          <columnname>TSTAMP</columnname>
          <columntype>BINARY(8)</columntype>
          <columnvalue>3C9FAC85000E01F0</columnvalue>
       </column>
       <column pos="2">
          <columnname>COL2</columnname>
          <columntype>NUMBER(38)</columntype>
          <columnvalue>2</columnvalue>
       </column>
   </oldtuple>
   <keyinfo>
       <column pos="1">
         <columnname>COL1</columnname>
         <columntype>NUMBER(38)</columntype>
         <columnvalue>6</columnvalue>
       </column>
  </keyinfo>
</conflict>
<scope>TRANSACTION</scope>
<failedtransaction>
   <update>
      <sql>Update table TAB</sql>
      <keyinfo>
         <column pos="1">
           <columnname>COL1</columnname>
           <columntype>NUMBER(38)</columntype>
           <columnvalue>6</columnvalue>
         </column>
      </keyinfo>
      <newtuple>
         <column pos="3">
           <columnname>TSTAMP</columnname>
           <columntype>BINARY(8)</columntype>
           <columnvalue>3C9FACB6000612B0</columnvalue>
         </column>
         <column pos="2">
           <columnname>COL2</columnname>
           <columntype>NUMBER(38)</columntype>
           <columnvalue>50</columnvalue>
         </column>
      </newtuple>
   </update>
   </failedtransaction>
</repconflict>

The delete/update conflict element

A delete/update conflict occurs when a replicated update fails because the row to be updated has already been deleted on the database receiving the update, or when a replicated deletion fails because the row has been updated more recently. See "Reporting delete/update conflicts" for a description of the information that is written to the conflict report for a delete/update conflict.

Example 14-10 illustrates the format of a delete/update conflict XML element in which an update fails because the row has been deleted more recently, using the values from Example 14-5.

Example 14-10 Delete/update conflict element: delete is more recent

<repconflict>
   <header>
       <time>
          <hour>15</hour>
          <min>27</min>
          <sec>05</sec>
          <year>2002</year>
          <month>03</month>
          <day>25</day>
       </time>
       <datastore>/tmp/masterds</datastore>
       <transmitter>SUBSCRIBERDS</transmitter>
       <table>
          <tableowner>REPL</tableowner>
          <tablename>TAB</tablename>
       </table>
   </header>
   <conflict type="update">
      <conflictingtimestamp>
          3C9FB2460000AFC8
      </conflictingtimestamp>
      <conflictingtuple>
        <column pos="3">
          <columnname>TSTAMP</columnname>
          <columntype>BINARY(8)</columntype>
          <columnvalue>3C9FB2460000AFC8</columnvalue>
        </column>
        <column pos="2">
          <columnname>COL2</columnname>
          <columntype>NUMBER(38)</columntype>
          <columnvalue>99</columnvalue>
        </column>
     </conflictingtuple>
     <keyinfo>
        <column pos="1">
          <columnname>COL1</columnname>
          <columntype>NUMBER(38)</columntype>
          <columnvalue>2</columnvalue>
        </column>
    </keyinfo>
  </conflict>
  <scope>TRANSACTION</scope>
  <failedtransaction>
     <update>
       <sql>Update table TAB</sql>
   <keyinfo>
       <column pos="1">
         <columnname>COL1</columnname>
         <columntype>NUMBER(38)</columntype>
         <columnvalue>2</columnvalue>
       </column>
    </keyinfo>
    <newtuple>
       <column pos="3">
         <columnname>TSTAMP</columnname>
         <columntype>BINARY(8)</columntype>
         <columnvalue>3C9FB2460000AFC8</columnvalue>
       </column>
       <column pos="2">
         <columnname>COL2</columnname>
         <columntype>NUMBER(38)</columntype>
         <columnvalue>99</columnvalue>
       </column>
    </newtuple>
     </update>
  </failedtransaction>
</repconflict>

Example 14-11 illustrates the format of a delete/update conflict XML element in which a deletion fails because the row has been updated more recently, using the values from Example 14-6.

Example 14-11 Delete/update conflict element: update is more recent

<repconflict>
   <header>
       <time>
          <hour>15</hour>
          <min>27</min>
          <sec>20</sec>
          <year>2002</year>
          <month>03</month>
          <day>25</day>
       </time>
       <datastore>/tmp/subscriberds</datastore>
       <transmitter>MASTERDS</transmitter>
       <table>
         <tableowner>REPL</tableowner>
         <tablename>TAB</tablename>
       </table>
   </header>
   <conflict type="delete">
       <conflictingtimestamp>
            3C9FB258000708C8
       </conflictingtimestamp>
       <existingtimestamp>3C9FB25800086858</existingtimestamp>
    <existingtuple>
       <column pos="1">
          <columnname>COL1</columnname>
          <columntype>NUMBER(38)</columntype>
          <columnvalue>147</columnvalue>
       </column>
       <column pos="2">
          <columnname>COL2</columnname>
          <columntype>NUMBER(38)</columntype>
          <columnvalue>99</columnvalue>
       </column>
       <column pos="3">
          <columnname>TSTAMP</columnname>
          <columntype>BINARY(8)</columntype>
          <columnvalue>3C9FB25800086858</columnvalue>
       </column>
    </existingtuple>
    <keyinfo>
       <column pos="1">
         <columnname>COL1</columnname>
         <columntype>NUMBER(38)</columntype>
         <columnvalue>147</columnvalue>
       </column>
    </keyinfo>
  </conflict>
  <scope>TRANSACTION</scope>
  <failedtransaction>
     <delete>
        <sql>Delete from table TAB</sql>
    <keyinfo>
       <column pos="1">
         <columnname>COL1</columnname>
         <columntype>NUMBER(38)</columntype>
         <columnvalue>147</columnvalue>
       </column>
      </keyinfo>
    </delete>
  </failedtransaction>
</repconflict>
/o'򿃡D/Wogw.|o觯2/݇?`OiH ws'& [ 7AA /(L W0 gHa8̡w#J|x1Vt1͎#]v1yHW4ZR3FTO=8ƴ/>j? JR`~gC-r("VKKjE^;GV]1#)$6e|QM^[ob?Ϸ}fćO 'jPյV>k%4FHSm3Qe ¤U: l[{hp9 *ІUgH(]x8xr(by@𧦢η\K {K 33_iy<<^ GڐNb G (E:RBJEXF f<2Qrj&9Rq'=JKh|C+s NJ~]ǎ:Z2Da{D|$a0pWJU>Ev5&$殠㒊3FTͫZ{vHX7ڋw,pvɁO^$?ֆC:ܥ<3:CFB1HB%qůAM3*&#XH[3XcC$/"Sa3*cC׫FnE):ΐIKfAjsJUb$y1)Ԃ5hpCㆂPI`Nj;Ⱥ`u}P@EAvIJ6f4lϋ.|.ؑO B x2¨c5 _ QOʵb[c~aөgI>]KP5oܘLytRKCS$q/-՗ԍ`.IٶB#uX9%. $F#\)  V9gn 6P≈0GbIȡ"!DdB>hc!H8=5 I 1 r*%bR饙v:ɦ jH~꫄J XkzJA @jV³ P~XЬ#`뭷#l+|-+B nV( Gc+JQoA, 7 7 Wl p0?z $t,,/m02r<*דּ4n4~;*G~MN?MNԐL!)a&\k;`?"vՆPM8m;!d_6 8y70~D.GՕg_޵DАZf)B8X@Z7^%`sԖ*0ZM`x8%xR+'Br<%$bijPMZ @Y.Up bjd9NӜ$:wIUd*]U|RT\?ԃXc:Xe!,ONUp'(d!oL*PvZފ'M A} b\%*euB!Q&52<IC4x:PJ jD(,|+?t(ScyQnB X*CKei;EcsU8["T`KV+b~PBY)הDFrȲ,++3vl]\W(QWpSD^!L3C1["'kh8"pk"[~WӶ~udf nMI1 -ª*ƚ~~z:Z!7ʬP < Z?jE㪮zJsʥ*窯ʯzZhz]`͙ _ ˮLJ,f_`Q7 {w ۮ[yae`J5 z '~9 ;۰!XPfOĴ墛%$6_'/"&-~eZr`Phz+갳Nq<-Y TrD}D]v0YZO&V(VSeyʶW @dw (bu7H5NQ *Cx@j^} l%j9a"Aw^yhW,\_; {AnbjT4{:FX~[ b+R`vSBV<*;+0FV YaP9Ef*DSaբVe \{7 k`@}0Tfj5ԔQ\ç_/xtSw RBhd- fTOFBH.< {2|xbeN}%ܔd:3kke45eY#f%պ{_6\ xh+  dS+cwa'ͶJ;M*|-1kb:ͩfRp`8T⪲/kц~9鈾:mҙ^I۸=꤮)ْn뱞DmN%7nҽn.~@n-^~^^վ׾ٮnNឡ.ώ^N~ݚN*[nWYs`_隲 _ʙj% ")o+?-/ߙ135O79;ߘ=?AOCEGIK_M/OQϖSUoW`$|p~.` Ѕ-\{@n; +i/0Jnr^~qZy$/.p.q)+/! (.q?//??#POVP+0ȟʿ ? +T۰o ?ܯO"~-J~-~Ì }}~ ܔΠ׾7 HA7[Ȑa?xHHŋ%kȱ#u@ɓ(˗$xISج]ne3ƚ@J3 *]T J*UBX.YN9z$A\y,WZnc]jw-G;Q K[ o0Nػj3+8u*zJ"ӨS7/ %CL5OڟC]4Si?z& ~>M*}ܘpX9UA_FXݱ>πAȄbBHك-M$1#XR8|(AezX0n=gnzKU7X$Θz١/O؇y04[ ~7LP/ `'9!5%I?2Yp~"Kmj!O? ~HiSW<DXRhlbJd[~(hY ~-,` O‰_!4VU;-E|[]d3 Hm @Nf3uG1jCp J LPld9"`6Y dn=g0LK\#ZѢ ^7d*X@Q[, QǶlj{+ ?&p&H)Q\d\ v$"tq8wHqITܓc:4*uҘ8ť2Rpi*#01C#>IeICYmpx0}@F@dVFPdC'>Gh m=je$V&>pbNyd@KۉF P($K PSZsk KWTHJW `(mLWR̔/$!Ƈ3Y>K"~=lr$&."r$=L^8'=ЗQG; +|WUa/$%W9D HJKiM0 +η +nf!=L{Ug5Nhc $B,a# h Wڙ2ΖγvghŨҲ 6BB[!TJ̦8SEj#*R)$hZH7tF0v \%,b&ů]afڍ@ 9po=caU­b43 Ŀ}D:K bp\6ʵZ[ލBLnD'Ŧ hjS 72g :(7ejL٥'T4š6)э5U5qt&_rqyk!xy8c|6U1`sV*Rs e9í䤕JcӘjԨGM;_ҩƎG]j pOhַL%bO VrNjͲΞ63M׺Om'}nf{ۮs9B>ugS7^7z9Agy˥f@N8,oe#SB9a;'T3G{Ivw5L~r\`y?nOZ3vs<Ntjy͗qtEKOFӝGSí~uxd]Zտv=d/{P@ӛ=iW{M$rٗ#]/G~'c;I['3;PKCSz u PK$AOEBPS/img/split_wkload.gif,GIF89a@墥ج???@@@/28&6:_eqMlt￿sLLL GLTw~#&*333kr222SXbeee;?F 000砠```999PPP()*&&&666ppp+,/-/3fffrrr<<?Brx;<=C^eEIPfimBEKru{0CHooo,,,NOQi!"%Vz ژ9b ʾt 妝~iUvv<e0eͱjV L44ݐ}4O(#\mSp@+,]"t{'M (auq׮yoj<(B5_׼`@n٠h~v>z7k4NKsu]A7 ×O5ØUl}O =he h(WkeUk]O2McT̶>$^< A qv` !#N] }j1%gRCQJTa AbN7eDH25% W>~f8 FKgw%9= . rI 5=QN~jdC±ΏC@FjRQbAI2$Wj#DU?x-%ɐPVI@2ETؓϥl/v1Is݂~DwEv9±\IfO𣀩 P&?/U )m ֍#Eﲑp dO ù rsS)^Wa䦤En0R+)xɨ2usx}Dyx HF;h-RǦ+ hJx(ˀZ)$U'xQ 2B C8h̃6ʩUڤ'JjF&Tru>E%`5":T:c'*lwf:j,pLV:ӍgOZ8 |*HOTN|lji%s>p a,)^"Iźuq]d zbm#Mn{T V6Uh.;s!)uƭo%T[+`Zu&\W`kȠ ݧԏ ^nW'rFd:TQoM"6ik%_9,f kgWbddbID pu\Ld8 I[b,綷VMFpBaɐTz` ` 2/0m읃;g 90vo{4[en E_DW֮V.4XͲRA8RY"V؍&HۭT l`<ju{A,S%^H\-:sTU5{+k][R:"-~JK&nS8*L7"H!9-0jNqTPFP^HO6bO[OwhP>2AtUPV; qQ.{ThaX"! [B.YFgHyJGp{e)c9၉"Vd\9p\Z<a- zZiahJPCa.G%q1Exq@h_CXxBa!9gjQEqx_Q*qtC.AX  p Iِn YJ!q" 9&%,I+2ɐ1YQ08 7ٓ@ii&P Q PBIDI0@/@%)$Ti6 ?a3 3 3Óli 8?`OY?`>P$4jyq ٘8$I&@i>^5әY{yyǒYj$ДY3 y YY3a $Y38i>! 
Pyؙڹٝ@9YڙIg)9YyٟII) js ڠ`>:Zz)  ":$IY*,j>002:4JS~Y:0<B:DE N:ZpcDVzXZ*7> bu`dZfzhzGr:tZvZq0L Z!$ZHҥ)$Ykڦ   ٧ !!8 c[ JJJ{ {6uZZ >ګ: S Țʺ*:Z0ZzzU ZjAZzC0ڮ**zIA ;[{K ]d[kꦧ J{p +t#3, kZx!kl#[URZ-7$Tq:qƨlJH|ʪRZvA(ٴճ/U  jl۶k 0Pr;t[vf|ҷ(GVb{d0;[;͓ kڬj[ +;[{ ;;> KBP[p[ ʯjڽ;Kڰ;jK˱;W+Y %qCY; KKw>rS, < , \l,>TóOc+A1t[3Fk!<$IzQ[kG;f1(eY |>,L yN먹仯߻{\ǂ[˰el\ĝ[{:p\v`'9* B[ڭ\Ȇn,\;ʳ첥ɹL̲Lck]| ƻzǼz<j $P}0YZjD*:Ƞꭇ͸L $ų{ѷL̜lT쳵|Ɩ,S!7Ӱ "[*,&l̸ .=##KGzj{2G &>ʩ)v]r}PLmF9VQpA~b~ι-h`ۿpmj^=QqqZ~vMA^-nlϭ?Ix;"$6N鵡1fß>..B 8zGN~~1b얾^Ve.NI>^ҮzN ~߾pN^H=O~®ʜ>Y\>NL작^=M!n{?~[ o"$&_(p)O=;-/1`3#ٛ /'A:9XHf wn;r?t/)hz& o^~fuO.-A`>??k 8vp{_y P@}MnonL{_/CU.b O/AvfUWoI IE/$X O C?Xŋ>-Ǐ`c(0 +W"dYѓO5uVGA|pN?mIӁ 6dy4UCI)V}ĉg̹P=zK(EXhJ7T5歙_`Q$\%K##e@'r"~j':52u `+?akرeFsrfӝO Y#yJ'Dr($%8Lr'$tOŏ'_{7wAԂϧo^Cw3^,1pH!$ ) 8'BIB'0DSTqEMX(>qG{G ud@"0H$TrI& brJ*7{h@`PB: sJbA E.0B໅N<3O|O@r>i&q\FuQH#ч> BL3tSN;T*`TRK5TT(P!&2·fmˡ'n M!x&x>b!PUvYfIOTіJqb2- .ND\*Kb 5L69wB|w_~Oۊfto[j/$va#8 o[no=U-5"'"tuB,W^ټ9v0: `k.PAIآ{Hnc츢(D.|ա\3sjexYoXI#~v螉㭿#J÷M}ZM"| &|LxӡYY=!2FPT 7# Zh 7(C Fl,ynu'|1Ni^8@^륜lsq*s8F'=ic B{Γ3 .~//Q@ JDtCXIOҘ H%k^@+`r`D($QA%h$IR"a`P ƚBS%P' d`D,TC uR=4:yz֣V:oɰ&Hp] I?ɍ+Hƃ8O(Jtf=@Qiڳb>old"AF` hm$!D?L y3 @dd#HHFRlrCfR;'0IP HD&2>hD0kRe/}K`^ #Ab37OpYx`s|TH`R<3ٟhVnV@XI$)dL yJӞlb,)3/7yx,TBѦ@!!LMKA/娑 *\RJX'Ei1H$mB<xF9:qҒ)INMk藛-hiczӝUgbbpy 8߳T50OjMEϪЀT~S)YVtb}BeZɹƫĚrkU͆Ok9ۛU lg}hMϔ(TYe(-N hGt%iHGS?1)rRԹ|iSr*A&{EuHfXή5?G|WhxkQ5En6X$*8|YY:oVڈB R['0aX#1+WL$D")bXƖ1WJWwSC&r|b$tTg{,W$h@T;U fI׳d4m w[ mO6[h+R'wƳn 7s04'@:׹}tJWJ5Ԣ4at`rX9s۲uRpZe7VldڋⱯn¬;F߷}#N\'x ^p4W gx΄Lxavx+ n#'y' pJ(@]r\1WX>sV؏|s]CǸb҄']KotGY(έsYԆ-xl|E7\ݒp#f6*7RWhs{~w] dlw&w\J1lbg5~]9xUy ƈ=Hc<޵ ^g}]}m{5d I_7m.Ig~'J}:H~{{@}o/+~(7_?$#u4*;@L@l92J2쳈1қJ;e2#s @@,\2ͫ( (A TAc_sA$AөA,/A$$| A!"4DB n@S @/B00L|@5\l`Ø##S?:C;8C;3$)X8 :@,DC484C<B%!2G /+ %'(<"D& LD,-Atċ #DЪTMdNO@Y>Z PУ]%t-AX@H)$EX8e^5hF, PmFkB2GSG{[{GH,H0xiƻPC6tOȗ=H39HHEIKĦ ɂkkI%OpI>II0 HJ,J`Bvg|hq$Br4[, elJ{J'JFQAk lFRk~sJ FJIZD TKPVDAEKO &,EՀKKTD; UFL4l) )C B 44\$2>!M<'h3j54ᔖjK@bLcLXK\N˪NHL|N dB D>46$|tFWlO{rOdKKϷN<NOlДg2جPO{MٴM-L-ݔ9S9S=S>S}3 TATBKTE]TFmӃ0݊1T` TL%SP J)Yx%pY%H@DIـ,Y8%XCUӕZ=D ݁4Z] Z=5Zm^me؋m[U8e[u E]5555_ ߌ}Z_s[Xѭ_YY `>D-F ^N`;E^~`Sbd` [Տ7 aa.a޺[^ana~acaY\a><ԃ= b!b".b#.b75Xc4K84nh4ؑ`x7poY3e44@^}D:E;& =v B @f @:d@]T^!HI K0̑@P"Y&/P~c=vIY<82%fpe~8^V "hJXX@i&d>vs~>nPeW2{vP#~TDIpV$vg#PH Og gpgmFdO(d@s NXiO0]_ft^|gbV$y.O(+6&e6OF&lvcT>5g8겶^p@cUf~PhR 4}X@֑hB{6 ŶOifxlbkfO8TXҶPKɶT"0hO}]&m"픸TEmHi(d*en=^@0rFg]P`D6_^|oCo0n@j(>@Fcpkh>m!I)npH^hƁ鍁.^p~G爾hKfɮnO2F jjj$p8n?qeFۖ#H^n./&1/fO6Uxne!HnlޖipkfCk]^2iq"@O}ljuQjefJGclIX?wnn8O$UfPhRudkpE)>y>jx貮n3 돾Gid=_ӦhБbcc04wk^pvwx/x?xOx_ ;PK!,,PK$AOEBPS/img/as_awt.gifMGIF89afuuuuGLTPPP[\]lllLLL000++- sss;;;invw~ꄊ @@@;?FVVV```krY^fzzz}EFFcej&&&555qu|fffrrr^bicccRU]BBBh"49/28Eir򏘩_eqర֐ypppV4OUp'*<\d+AG_ӈ̳õ#&*}}}۪ƿ~~~׺yyyķŃNv|||ҶxxxȸɾӻΦ㑑χSXb|y|VY]-/3prvӠrxRRS79<碡gikNRYBEK333ywy{777III~}}@BGϬ""#Ω!,fu H*\ȰÇ#JHŋ3jȱLJBBIɓ(S\ɲ˖cʜI͛8sܹӟϟ@ JѣH*]ӧPJJՂLjʵWWÊKٳ]˶mQpʝKnLxej߿Ka+^̸ǐ6Le3kzϠnMhӨSXuհcVٸs^l7d>NmG8,ȸfk[H~G4:(p{㯭B[p^"> dhN%M dCЅwaA@!BVhP )"g3!awH x}'@z\p @^A@]x93@hOR:^ET])(/MT6stJf G~FT09EpЏV$@ bޘi槦 s()*C)ύ"t#!&XcPڏc?H\@QIPiy@V@ 9"n{ LMN9gm2'@j'Tfn&+j?NOCl%&S^9${@ pǜ[aI`smO[<[L2HV K#~ *c䝸0@qZu2wMaS~k̳Ϗ!Ay# ,gbz%+P^iƵjw#ziO;\QE;l7:%ez ,Μ]N)#rE>OZP;C!ovQM쀳P5imA}#nlP0IȔ HOEUE/v)dsX@z+ȾG $}Hж~@!ΒeO~AY$*-NA&h".rU,7t A-P , pMg _g-j-N1G$@͈pZx HbϮ T2H.ӑ>0ϊZ 9gx .gRh@$yIhT AK8L(L Ytwў&bzifJɲԧHPB+Pf9@Z0KmDl{ںȶ-, `]BZ6׹ u 5Z `ګ}-lZ_nYNY."8. 
i]~Ci x>11@J4 p)Zy$bh_#S0Lcmjm0DA"h(@2Bdz`t% ]*[54+RLH^JGn.z sPj&(S<% = 0R.ٯ0S5qsѮ|dKE N?^txH$>I4R}meVrBFk[[J0:lba܆+L0?\!҅>u`)dyGhFCt47vjQ*U2WYpc16!]Z%xc (ސ\XnO2SpsHRTLuNqA|cahӤʃh?]8+t~ҳpPϳuA/(OȳгT IU:Q(8Q L)HfB #ljGҒCh'ghGsiկq7vhxU s7~xcc1^88UQ],'6$X0G 0,o.xU!^_"FRN$uqos_lDXcFa8Jh;MXO9;$J[UlWU!#S8W]h_X=a:_R^x1I8%6džVo81t؂vds4և~HR6XxV/7XxXx0B^B,aYCp؊a|QHX8L8XxȘC`Kaxؘ،\!U7Q8Xȍ[M:( ! ؄2% 00FC= =P(@MDdSx_?0iK+"9$Y&y,0gr4Y6y8:<ٓ>@@i1 ɐxVyXZ\ٕ^`b9b Ȉ!HY N0ki=pVVPVD>Ti8pv!J4oyPPCPV|9kC YI蘶9D`}9ɗzy )ihacb9C=|)V`Y)V@\hk`HM0jѝ*}!7)E}穜PY tpD M9PD)ɚ,鉝 z :]!D5H29Oiu'~yK3ZXiI@ڍIZJFW=h"!$X: :` V :  p1 p:  @ڪF1tFjH:5h/+UsA:0+j+0ːzj 0r9z̚9@ƚ ZZBj%nbZ'8g YЉZ`:ICP CМY)oJie;L&UQ8~馿=pi*3t FೆMIn IŘMIѲ^,B7 q#aB nȵg2G!tT$ 1kwp[+ Eayk Pk!l[nBDt;rw%ȹC!cL:%OJ~ꪼۻj+뵉ѯ_5Q:[{؛ZC!벂:3?1)s˾kk}3;7jܿ<Lh ꫌9*!,HB;Xj\qL#ZK˽'l{i##{SøL$OJ2R=<[@&,DK qPR\h 8?Qў{ї]0S L4U[h!D`i Q (RvFr;qB*17JS1x.JS+= M<)M<wjx-N|~.nO 7L#MjQۉԲOR_߁!1W`b++'+`q!&%. C2 D\;>^~ȞɾWY~M>K3ʈ6>^wc1l&nTgi$ lZ 98-,KawlߟǑX*蹋> mY?U_ ynGnInw !WQ3To߇Ӻ!n>gG~8!<|'oDgVFYn ?P߁$oE176ZÓR>6o&MOF02Tn;fW_WlA?A ]1pKi6{:CGd6?Ne3sg1 4efV[gQE<@n9sC~'F~nd-r^n)Uk:ePjDL&Uyd6R<|/5B>_$(A .d/˕;[tPJ~ ~ZQG);nlK/qx@|!'n  3 73П,"*3s‰+B 7;0ӯh!f{o/ [I ȈGH%i,,jBD2r/!T 1 ۢD"[-DCȰsO>?E('H@'Tх`$H9*n*`<bmԝӠ Dh+zUxB+Oˆ$G!4,JH\@8=+2sZjuQڋ>+0Du綃6JI/ݮR*r<{$T_Q v,MUH'*ܴXBl5]eCr{7&bBT4}ͳZSx (x`+ *cI*9!-x5ݚbc5H9F~'!GWHXNw" G'2\߃iWtbh4/r:ZӁH e~ M8o@G+iU|s'ɭ;w2ݻPJ O_!\ډp2Π U 5+USѢ~~^İ@_npdi1 ܻAKm#wBӧPJ\OÝs$fw@$,WY%TI@R4X N9jRAրu lԴָ,]yVS<mIpzb&(ʆ IKxq)6Ah:vjy`oGI*bC0Qt[SFLp FɍG }QTY) XD\ !ħ|J<ʌGG@PJ <)ETҐ2TjxVfH#x3JM|9I-S$Ȳ N / wI 0D-.W,y(0?s$0@8ZX`PqSf?[M'AO%a%Q;$pk|];H9Tƣ%,hI+P &ASҺ!cA|']HzOSHZ%}C&@SL R3.UĠ,k )Tt; T Yr̼GVZ*8BWp/MXb=bY(ֲmlF1}J%8 a7{X1B=&(|imm ?DIA[߾'" Bs+>zR.Z8h!GfwjRjŒ,-5\M»+#|ယǚ/ݥyzRˑP&]Z%ԝv |Di`& w[4H.'Zr4Iz,€-Qߍ!n`8,Q2q^˘絥@&-r{I;0AЭLxrAA <€~P*f[]dg;@s,I"dX:vIX"-B=hU?,i"|&4foz#XOs`?Ceܥ.Up},hҦNn/]4؊[Qlg`e٣K,%KaNvx?SGsl +VQF%0|@h0`L6>͕+u(!,P!l vM㢴|I5 "Wɏi"}2Zڷ#.t`k w^0oHJ:3m#NzuʇL>&;rL@ElЙSzխ>u[u \Ez7 ǹ~ΛZ%db{hX\e=gӜQEΰ~Ix:Vy ~P n 3nok9cNyЇ^ }̌%_A !0'1_MYs8C 3v I@~AW3✅ynl̪.˚>Qa?cy*5*ɏ[HcR \"2E+(@!< ( !ډ@B)) . \,N s @ ] 0CxS?S@<;} >RN[,SSG}THTHҗ"TMTNSJ]SͿRTdTN]UVeO5 aMV-LX5maXB`[M~a5Zub*ϥ_f%b4"L@Tj[A0{@*0 I#MfpE[mmɹdVp[meRfnE@VK~P+`mUiiHQn[> YXX+p`NO(c@eUfVV@~Xdlfne+XQ>Zewf`]vЅb_fiMEA07t@c~2vS dnr~0gnSV msio[ `_fVdfq΅fh@.Sw~W]Vqiddioo갾0~mewVVz}SSp!Fs4#Ȧ*[I`Dy\A&}_6=ҵݧUS}_&faVcEm ٹ]m"[~38FZ_ " \V=>ٮ NS6\Ub4%nj.W~  s˾&8>Im+3Kd %dG  aa5^tuua6|}X.a*!Xaee'Lb}=}ۅ_&}GBP%]5}ἧIa:S!Ya,h B vk!D8Qh"ƌ7r#"0$ʔ*Wf&^U? 
|Zha\Ow!e%ڄ9`-?Y8w#g",!=7AnTa% J $UMTUZy%Yj(~:e~f#S`YSH ЏiEC<% :(D[%:(zy.u$Ĝ x矄j(*QJjd@: *0M+ŝ*`:ac;tPTCM@UBO;d QB{.~Dz&,:.BD=mC޹H=1CRV<<@ϟqOR8ԏWYnzԯ%Ċ|,ɏe/n|g|61U gUI\qT:@-ҏ@P?UHAn|D'5i lPcm %H6(-7@HqԀ?iu5Ma.**qk;m 0YNR?9\6s< u/:ײ8=٧ۄUP5C~/)7:>l7֫{C_vU1C[,9-DۃwkA ZcȝV֐Uu":Yv&h"B C(&fM,/rG\ ?&?r?C\ S6Kt5g5aٴp}kd#ٷNh hpa\)eƷ(퇥Sި"(jQdb?mSۤ]WC .`mC29 @`יB?l+sMW(v.!sn|\jl:qEK_ @Y@ Nwc 8fVk{ߋ71P$&'z)  `5sjl +xy5n ^CR[ 6ba.Lgz-&q<}p76Hpyۿ[ :QT @hrsmP>A?<@(@]l.@ Ay]h} X>_}!zַ>LQB+{/c VJwM;Ao` 5y``ȝ;à|Q02TZAXX݀02V0 `^?8: R^F`]cĝL5Zdf?DfB?XrbdfڽCC>!$!fdI Tp84`y@?țClP!؂af agȡzE&Z!CH!fB?hn!/ɕ5JzX\iz?eQ(mFV)Vˑ)b"hH"xQ0\& &b!Ad](2fb 4"gf$c,b@7b.Ơ0YT6Bf]ޡ(f6n#?d(#?8"-> hhr([+7üv,7U@ʹnc(7BDXANٰ6Dj&RLA:x,gLȊ,nc^r2ꛭF *ŶE,*XB%=&7HC.66U&lbbm?[2j!n-O}-NmDA<-"#-Rfj՛)b!N-ʮZd -v:.J+B h>hv4AH|nV/6fN)?D$Z )x]z>lמj..06.#$TBB,A79d&8 =:LBZ/.e0 0 0 p/MFnAo)H%p&B A 7Dh|02(C"P;0v6m5`6,A;3BC;,Ȃ|#hpS8!!2"'"/2#72"pM0/ +pqqB,tB (B&A-<0)xBx!$Bh)4llB !(* tAqqqo0 wkOُ$/Zo#q)*+,2-2.2/20 3132+33;34K35[36111q887XUN&%%krw>ϴ?t@4AABuC5DDkEwsFS{0TG{LbVs&gu<2==>t?M tN4O#O/P;tQG4RS6_7k]WoG3G7_#_߰IsJo5KW'6?4@@AtB5CCD'fuS6h^#93m UVta{5btll;Y߶dn[[f5S{6qr$%LvI3JG7awKu5cumCvZ6euocv\+5gy w)G7`c5<|awX/6Yc5nO6[e5pwg2EC 88 1xG8s`s5}cxvmGZveog\/uҸ6Kvu_l76w{w8/-?9I÷`SykS+vs77p7yy`G8kOw}c7w7xwp CjK7}W_w~kv 8xwyy:ot:t':lwkw ӺO:c9K:9'3sWsKy?_s9[8;v4z9?y9;÷/6:[;9c:6<S[c|9O—<_zxW{Gzz{<|W:S=<#;w$?:|O{ڻwk{[c~3LO}=c;׀1H~w[>A p@-w}A[~ot^Cx~˾o}~- 9:|KMn8,l|@>\S?>c?&L(R#r `M+V(JE^*E%N!(L>tJ!JL"QfL yJx1(&RlY8Rf UtWA>v|Q?&pn\p{o^{p` 6|qbŋ7vrdɂ{18K 3Iʁ~p *QXJ;̰c+1 XqF:πNu1b=d 9D~$2hSx LB6+I&|EcR@jJ]k -ڲ.t~Oو%-Yg54G)~F*$NJP~.FFNË G.'䗬~A\*h.s/4gLA(V9Õ1 )F/pȗI)zA!j*4fWrF<L7 &]v*W;CjW8A"=#m9Q0R$wnjg8Ix~uHE+ ˝X0^֚u-2c2J8h0P H?X;lŒ!&j%-ԁF6_.2t!`B%ksWC%=(u$ tN X[)n l@#N(XZ8 md>R`X%V!-@S T "Vl*!.!JQ qPt-ϸ3䃎YŠ:NGNr j'#2]# ZVX%pZB]2eQ4e *eh+],V|2:YRu'=%B>#PO"o1(B+K`6T)̉D+Z*ve H2RQTҍ%|ib~tzЄz1Jf%;<>u4^W+%bKU}ΔaXJ֜ uC*QըqM*]*UzhI[X@XVeX~UauX%ٞJkeuQu,uegA ?њU^P[>iۛ6fOZٶlp;{Uz\h==,l;[Զ9mw!j٢V%Kyz^ {U[Obĵ^n}*>V.eݷUJg |`%e3](gh9&mo.0u; ezVSx^^]o9I.st7MfS6hwmo?/4g9u{.GZ"qYS?3>uڿsMrMl='( (mEjݓdmIďUG&Hl`W,s}%WZ鰂\Z fY}_ E%͟xouXv6< yNaqL)(@ Y* 6)IZT-(/Q'UH&TJTjKzլB!Tx*fO3BI)uMP2T -H7էU pѸTTp@ ĵ?xlAUE.OSbӰj[FlB2+*2fumeYZn3TR KzY) 5lSoT:5 AU wH-kn/~@ADZދR LUPH@ 70&_)EcG!'ދ< Sݥ?b:7nV;4luU@hӹLMc&0; n%Sz$gam hG]x[f>sV Ʀq(mJxg)e;-ԸNFOƹs~x GN'0x;@8Ϲw@ouHO:ғ;'N[O_ѱu+];ʥ.'zˎ "pNHOߝdGˇ;񐏼'O[W;ˎ2HGOҋ Ix ֻgO/>p-O;_!HEx_#{D=2R'|8o`_oMyOoC~ח}1W~wW#Gx  ӗ&x(*,`%؂284X6x4}p:@B8DXC FL؄NP"v79pXZ\؅Z8h^8dXfxhh:{;Іnpr8q IHxz|؇m8}'ܷgH<È舤(PQX x8?ȉ(}T8H8H8ׁIxȊH}ʸ،،`xؘڸ* ׃=8Xxh I؎ᘋyոӨz hH~x討Ç ِy-H;XH2O!(ޘp=ȀǑhIXw˜*)- ɋ0Iiy;!(DF*'hx]gXٕ^]?9fIv8|glٖs8y1xtɂ/zsYz|ɂ98׃Q8FGYpi*HJiCRɒ!Y1 Gi)'ٙ88<( X9'铫 ٛbpyFP9Y9F`yٝ1kh8&1N1e=hYyِ:J&y_qn&1`"TQG`)&)ɩGqM6є2\ZQ,c8 RJ$eU*#%'EL¢5ᢷ٠+A:`ecKLeBe6M D!KB4aLI,OS0Z1fFgC([mnb&i\ae!]EKJbvqaLEbaWmeRLltefeWF[U(QLYduqRTJĪYtLțr:Y I Ċjd%htTh*U#L FZSpdeXGJJOiOZULp5eZx5i-OOVNtbv`ᯚfi WTJ@VYsYjo Q9Y Z.S?K즥QUFa%^dZfRY(^OT^jaF KŮT KnVTf+[a[YRyd0Ѳ0+22ڷ9֙'9XTR>K_`mC\WsZ 5\,Fcu`,M[cM5_lVKKbεYWEfvk]VƦ(1zٟÇۼΫt$v:x0ZE,ULJ\6ctLfj@`nN,mChM+Z[Dzv{TOFa/}7KY=yz9Z\?ۿ@ֹ`PǦKva4gTTm6dhUI״b,lW+6L-M7$JU[i+5\pZ'*d:|۷K#+kEE?WafSPjJZn3šT滜KDl ZO=IFVzUSiW4bd廽ı&P_# X5S@zĉU<|Q\U䛰`lVb]Fb@`g+\]<#_acLfP4gR& NlPjPo;[eSjRM6u^ J&EP n.RZPn SOދP{R|QݪPR̝ _Pԡ,/4+6 OKi\г }ڗ}Iw!]&MY?pq_Qʦ4q!I~9>,#U _UHHW)lg˼6 =le\{1r0˚C:jP\m}T Wqn z|~׀؂=؄]؆}؈؊zk]m}`!bd , |ٞ٠ڢ=ڤ]ڦMڎMl0;R.!gz]۶}۸wfi}\M|]}-!mWw=]nv xO omxy=ޗq}OGTɐ s(ȨcWMm- >r=$gvN *w×.zݠ fڐ݀D@:Ny r{rݢHe{Hi;\u౼7X|}D?!9UZpLO6. 
8 dA[^^ʍ!HengioN )7x\eh@&_΅Lj~^ZC ~0qX"'-ENΗL+Қ ۞~oNA~Ũ8qzko>~A`b~d.G8SUnQo[ɸk)&E,ϝ[ۼ[P\>:xUj1,FPbJL}nNrnRW5̫nXO[@cR22qTjOoQSoUoNY/[]Y86DVo1Q&qSzv ǒ֭VKTGƶĵMKZSڭ`۔ڋKRV|Y!j?UOF*Sg3cO_oJy_{\^VYZZQw[3-MJx P$/ʮtU Q?&@)QLɈBJ :@D3d ?~IYMBGDNAQQDSQNZ՛K`WaŎ d!HȮe떝=+(%,)(JF *QJE/2ba3*8c +%gd"JGY? Q2F\4͔\ 3ʬ/;x]:З1 ^qɕ'ѥO7~?ٓX wŏ'_:ԯg{}9W||ϺVJӀ⇊hk g  Pȱ( , /l D3j%SXedEKb4H$TrI$prJ*Ο*T҄1K0sg4TsM6t^g^޴N<ӜƿYMYrA~,D C}1M1S =%'(dDgTFe"X<&!gS'|"+S]vXb3.d2 -u,`"bg t%(BhQ% tB3b"V#NS+ѕeܖ${[rtYmL\i^ջDUDEvl2Xbߌx*% > Y:ZYkVۺv% ","p c" @EDȟ(2J`PP] p)q@lf IKͬvjPPo}raaa/}(Ṕ%Q%I3̡xCFj *Cq6Dl49&fA=yO|Ş-4[S%=aT#  ]d#sHfԨ%3 ,Yjk'@Y %X JbQppcTɴpi[(Bɖh7fKQNUE[KƲg48A-AoNfe+ e@U<:W6{ C]1((6T " CӴeL&^g$K٠#Q>/J,AhٚmAkZ9VOs+vT?Hܱ+_+ 2 cڠ1+QeȥN(-r iGr zּuJ V}S#¤7mi3Uv NUfY"ye]XX(}n/Mw~uXaw,3^,#^JŢy2Ec B૑,lYʶ]fOi#^I1܏ Y,&.A2*jjf8Y`BX`K ͈WCRc@(,[Pz[k|l_M(@C&)]TȌt@}7UW^Iq+Uu}k`[qv!dC^0e,66]{6w?IZ}ߗmrQIX&v *و.J)ݕD扡@bxTM!^}m{\ZNs%;Pg BFCE ;OxrG_'}tGZ_ -Y g΅`/!2]gpt95Ԉ*Raa1#Eᯌ8 @ @ @ @n!o uK '1J<]aAAp ()B*q S8cT)$4  !qqAU1'{[ Ai$72 H y*&(S;/IS"b A̗" N* يDĩ!j#33.2O7CV :4ܟv!0pZ"qÞ !pCjy4B1BJE2$R;+1 S"cY)ڞDYRdBVd8E 9[< ?Ĭ$I|,DQ–DJ\0WD-Jԩ`›lm3A l)IѐL1+K +˚ZBwP1Fq'*pg 1}i k٥1Gkp Pawa<ZTE1EE8tIRCd1Ш {BCBZl#*, }q.UÍ0dE!1"hL|LȌLɜLȌLǼLɴo $!**ͩ`M0BC4E1R:BX>2#NNNlO,O8ONY ! )V"*@LZz%4X@DI#$WOnHJP|#2 n Q{S I,,ٴڤT)ײH,XBh bMB"}sØ~KzZcE;H3@ST UǰւLaB,ZUڜb-Cܲ *g J!NHS* ՚{R|RڡқPUb} bdD>S|ūUh  xF\ pBYrR`H" I֚")1)nGoCpqEDXV Ձ%[3)<];=_=œ/_____uQi<%#;3-W۱o5n凛 м=m3HmvA޼EpXǕ)5 RYX  .`X͕Z}W6 R9 aZ3=[Dxpۜi_^`_a^WA ؽ5awqG(q$+_+(,  %J *Y+9,-., /0^C1`2FT͖1ǺZ)H(ٴ8- ((Uւ`h @.* $Ҩ7Z _.V2w'з$Z7ŪavF<ǺET-C!x,/G߈(u(i(u~g(x^$v6 [K ^\Mf֋(8E#,e[}LX)(ܳdD7 !v2 3 z+øri+DE#  ,0gj Agg"pg(ynCg %K$2%Y%-+hX-_Ph,8 pk+Hei-Xj,%4^1%B$r4IiL~A鹂ivi&N|ii.섆e,#c9ndnΈ,/Ph>/,fF/n!Hh/ᢆHQ>`{ - ѬNn82cւU!8h:#o=,l UfhkZ#26E=p .y%j8@T+kAl~`}Jn^m٦n* 2/#@   a,pe620Tn 0{^lz-$ S~o[9hn2Ȉ(Gl(C lȌm-2p)H%ƘD(h7VPqسWD-d\{-FlHf (. ir36XP5OC6d7'bʈyX|M3 L!rhoۮp@o^tƚNȱ}l[`x0j2x ,uu٦Xc>Ub,toi y¾:zgs 4m?E5 Ns4# Gcwn kxPvT~w7ŀ?t U<hk _⋞4T3Z&LGdC] \y%{*&{rHyqkl]n܀ 谆/g~9Κ//txt tGtNJWQɀ+R}+fz !(u7| اyz<~,h B[GJˆ koɿ7rȐ(OHd/?-W;O8Eע 7a~H_N!D5~gE$p^ŗ3o{CS~#폹=2ѨGIcIN3k"hl!bfTӣ ˕_0 x HZ AJ8aC!2Wn 9PRaۅTB my%<e(!eX{K ~_<Ђx&!Uvh"H@]z%a9f3/d&^V9զ(|%l⹔S 5vc||RTf_xB`B~Vn_|ʗ :⩵YjJr멆a ]?Vhf^٪PZBV\:@ZxJP]_|,%.H|vЗ!ُ.Z.̎|J69D $'ˊ*3)Mb[)(1%Q8⭍jf.i bŘeX(Jx'+h?OϠYaNuzU9-ֻqv7RSތh(#pe=ZƌMD,lZri-YH EѰN'5gc0<#/yȿ<3ki|oN|$XQB{ʤqx5r5iԏ$\]:ڃu;?~͎_mFA'C=/_swy&Yi)Vʠz6M量6&b? h!(BQ_FaQ%_LĭLʗ9` 8 \RFS0 PDEďx A]NZL.a ^ ?Y  `Լ`1naD6:$!!a0<!ơaVR!! !!چq!Τ` $N"%V%^"&fb,$  "("(!*"++"?P"-֢-"..e-"|cx,#2#@363>#4Fc3ntEHT$#7v7~#8qErc91#;;*#}Ws`Om;>ㅈU: :d>$B"cLeػe=aN$EVE^$FV$5"Az@~dy4I$JJ:2T9$A ֤M$NN֤ mGmPƏe"R&R.%S6eRրZ0KzCL_QքON6DVVH"~ e?TAT*RQHVr/z%Yd^jY%PK[\Қ\~$]d^eccdT &UƤUeWug~&h eh*P1cfGj%]Аk&l @Tʐl=ZPaadbr%Pj6pN_J&1Q&[`vnXoo%(_qf0^'E&rrZcff&Vtvi'Vzqv'!}T^cUjcy ygz'G'{'ɧe'f5f~.~gFw ?hQ5y=gU~(Heju:(FvC(BTJ .n# ։g<.)6)pBTƣǍV.Wtb[P")(ex}e]'fDXV)i[x"h(hzDJ )ss)t)b)jv^ԩ&[ՕVvpR*VZg_8 j@Oi*(]aqI+:&k Dsy!Ay#uZ ̎+\)]+Z+*vN)ks@tkp aD/]rMjLG˷rħce( MhOܦ@Ox*ȩ+G8*j+̑`б `AF0IɞlG|D!zM FFʚȆ,G,⪣&r:,BlFX.M*`@lp?@A`P D9چR@8 + D5Պ0PRdmۖ $G@Ftm6 miFkJҊjBmJNպ`R:immRAFP p(PFHGm`Dw$aOQA֞c̓rl`@hC.`4w.&^.luvz~.Ů|>@Iq$]*MfNe ?0 PNiXTkepl!ϚG /F8/}oo2윚Ԫ,(JaBw-MdAU8ACSI چ-p6:AF?@-ȆmwF~rM iŬNvddF%<^6m鼭M8mI@1d RR|_exCladp1ˆ%CUxS?0 ʆi9zA0މ?vVpyΎwm0l4nxxxר *"4nGO[-}EDSGFxMF7G{gDHK~+!J:y/+;V@;φwJP<_4:Xa{h;p8?gOSK8;rnW{:2 tڼJ8;yV`|*щ_cv.8PǘEҁc6Pz'gگ=hEin.]]Mk<Ɉ|%{o/?=Cݦo>m:mv~ 9=V:7o>c+\";oR#0*8>ߗ钹oDPr$uؒ?g? 
.;ӇO=ă)G!JA( ßR1!h˃/s([ J1H3iִ9 ?~IhPCӟK6u": TPi@L Ē >*˗1Pa!L2=QߔQ5Q>(IJW/_ZQ"BK%m㫳.AwnݻyxpuWJ˙7wܹ?KhMi(X4HAu: "plş5E(ι  <AD8"|%BjC%P2 '0D2HljMtH  !B]|ip$iAF 12Yl5$h̐Jlܒ.jJ!<4\骣,ˎ A:,>(p`:2>OOěJ H%J-BeڔN=FLRMJSUJ6]}Xe45\q68Q'*sёR?&!QAߋoCi!(+v՚ӕnp6r=c]+}xwS^g,(PXD=?c,OXgs)IXK`t5֘ݎ=6Wȏ?vwM>YW{ 2C@i -Zhɢ cY&ظ[v֨ꪭ>VޚgklXuR٪A2EhH ]I$GrBD+qN>,) X%-E*=t2M?UGB:~U)isuߝn#oޕ'jv–ܡE'/wtG~Eyy]'M혖 ~^|y:' o$-HPSЏQWA ^7`f|!IXB)T YBHOyoscYHabDH&r6(NP[}h "QXE+^YE/zQN^[XF3iT#^ؾQ@ B d N^r#~ʋ#, `#Gp+2+ɼ1x2/~(I0 f %IT<KBHcALD - 2+)Es+ga)ϥl2( }(KZ-PAJ?gPK%'@g'?+<t<BAq^#9D'^\Wي'EFO !)gԕRLTB NQDziR(GYfQFCR+3E)A=PhZe%@a7Zh%PCINWs:jMK{'EBŗ2iX(bjAV *k-L*UJAAT%R4^.ڱ<:S&<%Z*WV 2ne?dYD3ѮS.Q8!&K$4'6e sIES -$*hZ(fךMw{P,tP|f^ P 0hlbt  P 0˄sp i@ΤDk EIsp{4/ eBؐkpzpd pkpİ B91qky#&+ FHW[ڨ<+ JosJQkNQyRQх0pu{ EpEfyy-DO?&wgQqzQw1=m K g cv 3R#7#a! Q K$O%Ì""cGc&cR&gf0kP$G'{'Vr]Zvf'@)R))R*0 sHgu*+C(х(CQ0,R,ǒ,A,&R*Gl-.F+AbP/dd03^ysR0S1/ג-2#Tsy sS23s1!)Q3?2)Sy,k04Oe85[3/EwH1gJ:Ӂ rS7w7{78S8887}d6(!.:. 3: 9v^;^;Ӊ:l:,[Ί۠6Ӱ:!"l?@m@ `?/ =rJ%EаkL :a?@m6Cm s? AAaz "5BBo"!®'sCCG@4Dk!L^AŽjBdGo4d ?t^ C?L HIuGEh2 ,(l6>J&(FW3G4tLl6PA| @yD?B/+N`>hnl`rJ@&J@|,4CQ}TLS5DGP4tN:*E"SPE.@!d֓vYjT `?)@?/\o!X<§mJð.T`ZHE͙DHx^1^.;",CwT\\^? FYv{>t;;8fCdq80Wk(FXkJ"jܓJ^A:_l"hOC֎-Vb5Jf)\تE\9,XB\Y\M3m*b\E\Ւ#|wKEyqՂa~b৿%DLJH'-肧gb &&c @/(F_~i&dOPë 65Y(^v,<[dↃiC"NJF)E|BUfdp3\aẈ\\FΠOؗDT\154D$BE"&f.j=+ǘB.55..6gv`A`g@ߧL`]\> z1 p>> ~y Ơ ꡧq@ ~ p^>w'/~q á ^v>  5 Au~7 `wVϞK_4? R? ̡w ??S X `?e?)S @?TfŠ C _ _֟ <0;cp CF?]ҠQ#s6:z2ȑ$K ,[lF̙4kڼ3Ν<{NÇX"@N:} j1 ^Ze֭\z C)Zĸ" E} פn ޽|PآG$UC 8} cUZ 8͜{%Zi؉IE~-\}l7oݼ{)Q9}PZ`;N6q_=vNVi'YYT|u{t]zAA֭`YVb91"ES_TufHvEYZy祗 zl  EdQL1 @Pr_SRNIe`CXa-!#@TH:(GpB)b~ՉXx.GG3  <@ 2d@LA*P7\`Ï`B8H >pdK6ߓPhjW a!`0`_9#!B|c9|JN1\؁^ࣁΉ@wvX o6 hh D:i|AK`)򞩨ʪ"$պ'q^*ƁZ;lƂ)2O񅙘09iBŨ6Ki'D/ LP3xäƩC^1B n 2jӺ_mt~ ',^610s7L*>s tnz[ߢ(($zꪯz뮿;S7ų^7||O|#Mv~[O]f<}o}>g>; eS>ǿ7Oߏ,y{4o^vo|3lp~z_. o.Rpz1`+Be{Pk&׉ʐ 1 W*?7CeVy 3؛,Qf3e2X>6ӭmAtՄKیr0 `#m<[͞fSbS}(P@+⥀y))< 0/gJi⫯$c(n8UMPx+ 32sKֲkrKc a@+ êbh&Xjq`  T2Ņ*0XMbw/KތX+#=*(Չi;npox>08[5?;;?` bĂT,4+?@lb#8 \8`n>,eAD 6B|)#Ľo\/y;+,#zֲALsΨ "Ķ.j D̶uDK3H @hY N@A(ty @H(xeF$#zrv~M%8?~%`O! 
(H .1!Dж4͙r [VB PA SǸRnQW ?0^(7|q+}Loӟ'=>qDD-;~Z ^-3nRnD Qp@ \0+A6`h~' Ћ~} o :} G;넵 : @w+<>9郧]?6}ƻMxp1Tϵ^у DnY  `-DVI/-da&ʿ:s|~u7'\ggRpevvSvb{ax6WyAabY28ZPuY~'Wgr OHepvGׇs٧snWfeppgyA~+b~"we%~V EZPGP,8%\GH}hk'}auplXw+  %Y'G UgZpx8(r(wzQngmcV9'hvEiaWʧ_$-pB~Xp%@VxX狲FU@7FxQ{jh{phy8,lPBv(k_pP'Ȅ^~VToHwUveLv&tl8pPܗ6sT^swp!VsG/X~h8xX"_sX@ZHEE0g1g\gxmUsgG)j؈aaqr 0G~`4~%ewhaWVlv|<u^hGTp'^Q(|L0p {}q>(BHvXJ%oU)kG8EXuHG8RWxv&<5Q@wP6Gxg8(hgsn<(sS)qoG~"Wu(kyV蝋PȈx I臄Gh 6EV0I|(r5 UĞUbj!yJRe *JJ=7Z h'hh.8 V:kZB < ROڍ˱Pz!+#+jycjj~,B#r-P0^:iX򷟁HٲЄ[R8&s, hpzzZKzXWk 3ٳ8zM1/k"BH^TXX@W7TI$8зI.qj>OK2s2a"hUXx8Va_'hE1lk(2 5 FU B9-G*%ݹl% HK8%S,S{[)!I@ʻH轹(rXi<+h <, #ºn/q4ipzV)1ZjcyD+^-p)r{#\R(s6%{JXuXH0Ǥ,,l:+۳Z11D苝\yZH䘷UYCQ~˶Z  myD{a+5lՉ!ŹÇ2AUZ` B`ËY񸼁x/w7[LŠy_q~@JbVYh앆;º\Ʉ`ut\%y@ "ȤٷXP{{nj\)UWŴkɔkZh,̤^LhZ˄fΗ9ʮ8kkGPrT?̿ ܳXRyxZ4=%  2͕>ଃL'з 7ǯ+$K5-dlm+M= \TEm [7< 5Rp XUmW ԣ!ݍ#=ԑ{gᘴ]]=.02/(A?ӫ͈2 k :1pm{3(wّ/7>-@=, aVۡ٢0(| ɧ.17k]23m-f.apysَٷ|1+۰*S,*7X- Oxϵmݿ/܏"^4݂2 l رިح ݱ <-a4 + ؞ۊ=!N"`ڎt(2S0) 0Fp**@ls'1, \L҉, 6G2'tB fik6%7rh[0`挋i'na,/C)3N008G/}ԁ30#3"91nz/: aC^9  p^@{z` *̾._2oJ$<9p =/?OA@za?= bP-,C[N<Epx_"Yz넾LM 48.W톯36^QXuxp_c kF<h.d_  JL`sj?bPOR!PO ,Т $|  ΩEK 6.\)^i0`.5SL5męSN=}TP=!zq!-SdѢ5`(VXe͞EVm_pSD?"Nx1 =LQ`P0.FXbƍ?Ydʕ- ;PK҆kymmPK$AOEBPS/img/propagation2.gif7GIF89a&6: *** Mlt555_eq/28sࢢ戈⎎܋ҞȥŸ*???@@@k`50000&&& |А```pppPPP+,/GLTQRUVY^LLL()*SXbZ_g-/3#&*`rrrŃy|w~999䳾...___///krC^e;?F,,,<<<---}aaa!!!666333\\\9QW]]]fffPHVz򍍍yyy888oooiBBBwwwMMM!JJJqqqOOO($ kkk"""ggg6EJfvz>>>III$$$ZZZGGG)))bbb0CHEEEUUU===777cccCCC;;;{{{RRRsssw@SXWWWQQQKOV111AAA~~~dddtttmiii]_cmmmDDD(+jjj222YYYTTTlll8!, H*\ȰÇ#JHLnbȘ3k޼0ʠù}t o'԰cFʥne,ͻo&NUȓ"ƕK;PsfhP_7۽̣0 Աk&gހzEITnEY4ӃhF!UzN]а}zw_?D \ z0P1dH2l?pFGg縆.~>P#x cLdxPh(\-"#YGJDU< Mzd$'yS$D@Jŕ*gJ2ChiK c"FrЄ"O)3h$6IG;V3d/oovN!f+iɡ!%;۹w2$$=URܧ|s55rffB)Fn6}D':~&GFP@HJWҖ+-d@Gʶ&F̩NJӲ! JԢD% ™l?=(eRuN}*עj ` XJV,0hm"xUPk},?׼O[(6)CMbW(elV4ī rLʺգäf)9 ,h)ZNkjwfY`dg䶯ieۓV l}+]ݸꕺks0 Z2=dEޮͬpr ~*v%}%ĪU|Rw<4!B 2"va#oÁ@:S(qXVȅQ̘!C T!2G*&;Mfe*[r|bD@@Tp )R` ! G8B'0nzk)"%fepzEl 4!((a Bp%ߛ^ːNȎ {>@,'O m?zjY>9m2kZRH k!T_TMaA]fKڱRmkÅ`<<#Lr0rCH)r3A|p=bS8-{)*y}1 z86&O T]rwGV9# ;HB:%<NߟBu?y%kR׎e*\^ғtWw Tw[x+S3WPiw7|AozЈ-A:<ݕv(r ]H<5~:'1u+'o| _ڼ%wt¦1Mx31!SBCz~`u]@05X)EVy35& X}F}1} }Hx!|8%'(]UhBzׂWյ1&&xBXDXFxфćPRHT|Vxv~^`ne8QUMlndHqG&2!ׅ}PV~Ch@6 BpA񷊬؊xCd(hD ȈvPͤX$ǘш1ьΘ8Q/q؍G39Yyɗ9Yyٙ39Yyj/ (9YyٛI @ Yyșʹ 19YyؙڹٝIy虞깞ٞy9Yyٟڟ0 ڠ Zzڡ ":$P*,ڢ.0:6z8:<ڣ>@B:DZFpLڤNPRZXZ\ڥ^`b:dZfzh:Хir:tZvz^j1/Ч~:ZzڨP_ਖz:*:Zzڪ "ګz zȚʺڬ:Z`*:Z 1 ڮ:Zz*` * Z{ 12;[{۱   0 "۲.02; 1! 3: >@B;D[F{HJL+kM{XZ\۵B+ 7[b;6f[+gl۶5@r;t[v{xz+Pb{ VU[{۶` ɠ :~:0 P+  L;[{ۻd:P{țʻۼ͛ 1!>0{ @O@` {@{K׀˽*0{0O 0{Իۿ<\|ӛ;0{<\|,1 6 `} `*\ $l66 4`@/@B@B=Dp 8}?ENPR=S=r- 8ZIY IՐPZ]֐`a=apZ}xz|~׀؂=YݹI x@،؎ْؐ-ٜٞٚٞ٠ڢ=ڤ]ڦ}ڨ=`*:۲=۴]۶}۞]0۾=zRZ {W}؝ڽ=](ڍe=- 1 ->^~ >^~ ">$^&~(*,.7p 706~8:D^F~HJLNPR>DWT~XZ\&b>d^f~hjlnpr>s~xz|_@>^~芾>^~阞难>^~ꨞꪾ>뤎 @븞뺾뾾00^~Ȟʾ>^uT؞ھ1 0^~>^uT .?_ ?o @0 0$_&(*,.02?4__8:<> 119D_FHLNO?1A PVXZ\^`b?d_fglnpvxz|~?_x?  
t??_Ec\r<$¯;_Aʿs/</Q+;VȡQcs&l@yc2@/I?P؏?icD $XA .dC%NXE5nQaXI)UdB YM9u0 =%ZQِ,eSQZjSYnjaŎ%[(iծUq]yK_C"\ae|\eˏgСE& ڟbȩU>*k!u~]۶ӋYݻۡcs^vnɕ/8᳟Ou:76uֵ'{q/ӫ}\ߞO߸PlϿpAa*M+Фa; 'DBC("4 ETqņH$GKEk`mGEqfH nC"_3I(T!|2J,*2K0ҳ.w2L4:&$4mV1NN9]N3ztQ}QH#tRIͲT,2tct9.=/QCRSTt:PYEXcuY5r T{QӤKT_ Xa#/CVYj[VYcoZpqbvburpj\g-[u]m=VSyՉ5Et%~݀V8;ޅ#& U~6a;֨_KH2]SLɅaM"Jv_xf/f n=g[$:碥=:iKki fhM#~^G{mlMFlvkzn; "(P>;>!b" )$;dϝr2'( & r"n7h'rǽ pܠ"zW(ɽos} ׇB `I %hw%Pq@r}%P>bۈrA cGI 4GH׫0'd <‘l[c:ewJ c$ #)v+K8) Gpp2!vjH)c"$T ƂdKXǁ ώsm^ 0BQ@d!A" Jx<[Er T ?H ~ dM!{Aɤw#(K씭;*!lOy"y sn`-nu@rN!D$'߆-*sRς$zl)AqOGep@$ GXÍY3B$(_2Qt!7>F#%Lw I@Q4'iDŽQ.UbG%GytS4ui<j kҚ*O gLsYJ=q,h5_ŋfV_DXҥ21EgZӛ-rP@ԣ+|`ԫ,2 VǚԶ0djZk!?Cp(!?S_RP"IH0 FBO@½ QJBn\[JbH  "SCl_t^H-6"laDmu\;`cn/#2IDmgοC„DJd} xSיv>v#EL'v"EǍy# ˎw8ȋu8APvExR`=#CON \y>$F~#%_H|^yuRo{#;G<)=T7I>P<$, |#<=sֽJ|ܾOIOxvK>%?IID(9M{zHe&lSD/]}p&Q~;K 6z)X=d;kk ?H cc҃#lÉ=ٳХ ׃=ÿ>w ˉ|;ӹ?#۾X= : D?d=܁ @7",,8d%$"BC ]#-TrT Hl&D @9 ~kEB cĐh)+h7*ȹpn# 'ԯ|63C )H)D=8 x;)h6PFd1]ԛBP9,;9h?2BpGƓө9L(GD/[ZDGt {@X|D9+sE8AiAhՠ%HH II,IH \IlɖIIȚI@20 IHɁ| \ I`JDJCY0+(K,K-&4]TC7UTBQiL$M0N@dUW}U]TM 0TB Ug=UHVE=\VBm %OpSiER>%UH5uEUV,W;W4MQI-T]DwqUSnyNzS{T|sRSvOm XLS3yT;}X׉QwXx΄UӅ}ֆ}~X.-l=8-X0Y- 4 ڡeZM)OZ=OڥZQd-0][m[}[e-HZ\UH۸[[,YU\5`]\%Xl1}=8ٚ%F}TYMYXz[eYUSM׈YIu\\\.KVs[=]((1DQ4Y{M^Ġ6XM uTj]E^XM}V-_Ev^ Nd} N+Jmo _JD)0ಠ_u u`YN>[ M_ZeCQ$ 4`\@&a b!bVa!~b(Vb)ܾCabP(b bQ⽥f\-b/".\} &V_,ҭNU8ٜb~-Pc< }N /dcrcc9.O}@F%FIH6OFPJ6v7~PId25]4}e-vc[cESQCSXe9ieL&^OK&c\\]d^eU~]:E.f`^-K~fTf%eNsVg\\^Kf fLdkTgp.Qqvfvgs&fXr>PMe`xdNz^SYlh9"%1h8hbi+,8&fiN龀iiP1ihih.j.^(2^VNbf♦ꍖi(jN_,Vi2afj싵t找g? B>^O6eŎbI%evRGNUk}m͎f^l5l&Įg>fd1lXmLn=nKn冮5Ӯv(Lkf>ohۦ)ݫ\fNm@oLm;[Y6s+mЮoZ_X p jkppﰆkkq 9N뵦ipGqwoj&ՎWjqqVi%k鱞R*qq#'mVFj'i&7Poنhfږ omnis670JlI,4 WTi6>7S G#QM\v 8BErvt5s;wRuL6?s !JLxhxZuYTH`F[Gd:OwN<퇖NOs=oeCL'Pv`w"P"H0" qsywo߹xx_gn7oj9t`uAu:vo %H=ģV_%H#(p4"x!%H$R%k#yWotGg@~va[x=/zgTrX,d/<+b\d$4b94c;)\zar,V5&\$.q1&*7{[,Fb0bxmk{gܵ\bG8ω"$鷁зwf/dDҥ*Dg-%Z r)ZMϢrY.ϡ0 _O "$g#{|L"őɗ/\up7ox`<u~  8Z=J!v(g?C{"!B g"ƌ{Ɛ"}d 2$(|oɓ=3Gg8#N)3T# /#DL/-:5RAB+XER/ɑ1 fQx_ %ωA"L]0ݽ^V:UYei%L!GY3(O)_ҏxjqZ#O>5,`BvmA{$Up!*LD9tO *vWؚi\ʖK׏ih??ۗ>|z5U  DLPOa=Kh!8B-l?0>PId(C"w1:u!ɨFJOk|43$=6v# =ec;;^ ȔȯKz"#Iɑ}j$3A+z#(b,L~Ĉ!FpVU2t,ICHx1T.sY]RsU#b%g/C9JjҔ_縉$W~ 4'M${yFdt7ps'| tn|g4P!}(D#*щRK**ыS (H?ԡ#5"3Kҕϣ%XCW*S^}'M{ "lcBDXSOBR!bS48ϡFup,g@zM zN.lbm Y]v˻Zz]XĂUdBUWΑl=kTI{FޕAS*_AN`.V4/-LTkzWK-/sE%3}nDkzӐr..v0R.x[Rr+; mk{5xy6^G+ٲSdo.WcnV1bVČ`Y \@];_U: W5S*+|aS/36$NbR2 4kjD5e"`9D(1sV^ ?r-\2p!f>5la ,9vn&Bf993ŕφa<137`~4Gy/g띈 QԦ>5SjTC/ªc-YϚ$5skuzm5]j^{ӐMpPj!GFnB 2;v;Lwܷ>CC/Џ7S[~̵N}#0>?z/|з~'F3_$n[:wCP&n}g^ċx#}o{ &`zp"Y &Ѧ]=#IYΌ!$šh ` L ƠiT| ` ;PK=- 77PK$AOEBPS/img/return_receipt.gif6lɓGIF89a@@@zzzwwwVV|||xxx``` 000^^^===...&&&999鶶VVV666 PPPiiiLLLYYYNNNoooIIIFFF()*fff333<<<\\\SSSlllBBBbbbsϙ ̀嬲uuuآMlt/28&6:@ppp_eqߥVY^_ 䯯濿0ǘ~~~-/3???+,.??SXbw~okr#&*GLToo;?Fwoo7NMM$Z_g<ퟟO8Gg__/rrr򮮮,4uttOO'W}QRUӁ(y|렟DCC// //}}}`,8;ߦ(+C^eɰύSouVz!, H*\Ha"JH1b3jȱǏ CIɓ(S\ɲ˗0c )A8dzϟ@ʼnSJf*]ʴӧPJJU5ʵWȌ6#uٳhӪ]˶iZ}KP˷߿3ekÈM*ǐ#K԰3k:V3ʠCMGQ0o^:(M˞Ml(0Y N8Jz+_pУK5ؕ;νTN7'ӫ_OϕX`S(hH( ߂ 6h~^ ) X?f?g (c*fԡ]AFO/W߈8cYyX =i$<؏=9T/!PcxR Ot(+tp9)syD?ٓ?yE /6)`?g AfC&xbʄXT@?3*bʠ Xd? 3*H̅bĘXPMd'ikO8j]̪_W?<ٓJ)ee36$?z?NۓİJt-W?z"3% i1AK/aoB;?_.]y%^wb.Ns$3OX2_j?c(ls T74nt==fHX/ !-4s!Sx{GgWȞ'5OjX)Ax?v{(W ;ynzq&tOQ``ac>;O>%O"`:_J'I,d(WO(O˗gև/~hgWh{ݍ`#;P篿^luAFlGb ,(R -7(XGQ D$-2HP&_ ya8̡KPs05HĕTaW:# !P<(1NpE.TFG ^LSda QWx FlR#a< i?E:, $ȧd $7iHJBEsX%9IJ/z)N(PVrM *Wڒ\ fh|a.gA2G8L82,@3̦-ep͇Al 4  0Yt dKRU7Gς8ԂA pT `Ӝ^q>}ӗAYiռO? 
l=U{yiWth~n{b=Y5uVyg4S\N0}otT͕{mv|N0HUdv%zSgwQNo#ŀ~JHX V\`0UYxVdaTq5X#XzW}arEuyc6gfXf}$хdbTWGX%gUutWPvecgttx|~0am*GX﷈iW20| U$SthLg؊O߄pIM0SB^J(E*#(ѧ]$|͗ ьϘ VZg8jH三g027TcL#Ekd,UZp$$'nU؎q0O QOg0@c%/KKKbrf GITDgR&'Ve@bpm@ǎ&ㅏ ]Vܸ]VpXh@g(bf-(7Y3UW&RIsT|7^* \Ug{vg$, nws!Qit=wuGahgI)E ]Y`r$!89Vᙤy%Ma?9JᚴyakM?FȗY)WI 9%KWNO:G6dBb0Zpt)JaH0dNaVpWV fIysln4o`*e\'(g}Y0*{d5*pGv|ValϩX y7eMhu(MCa 'W0\Tvmoh%e}k%qE6jĸՅqU2RأR: cjQ~h# آnpm:WvxJyzH|ڧp'JzBGͷZsVS;WȔjP hiZsTOzJd#xIJzM?穭VLdk*zrKٚz&KԩꬵGYipJvgn{0#ƬHVH͊6HVh TPzp;[j F ۰wCBP{۱ "Tg;(*,۲%;-2;4K&[8:/[5 GB;D[F{E  [дNPR;T[V{P ۳H`J+Gpf{hjl۶npxCG=z|۷~{GPW@Jp۸+\[`v(۹+4pۺ[{0c>ۻ˻>PW@{țʻ;tcv{I0[{؛{ ;˽d*u۵d5;1JSc+Wྒྷ]vK |.L,|GW [[# k? !")G(a,»$<&L79c Ä>@ GL0T\V|XUZ`b-Ĭ2L'ɡqÞàL5 =0\˜|§é˺ܿ,Ll˪ˬ|N̷31k͜;l|QLf,^||\Ɍt\J`Džkll-;ȁ+}͹ȒȌ|=KAL&Ї|߼\ͼ0ͼ#XI і28 .2%=')}ޜ-sӽ,ĿLĚ|:LaHJMd[[],=emP,|t]_ {׀}6 24mă͈jeM+-G]]q1<Թd~ŘaMZ:? f#J$~&qӉ-?mA]Ӣ5VvOm)s 1^aJmԺۛF܈=ca; eWa!I5jZHGQ7m7R]TqqwQ$w o_}4E&SAS ^I  4=+ LTbeMO`du5Q>TPTeTxFFT% cZФKT@*UDreomL%PM@J+XcLloTKd]dPITHVf%VeWI$^&~=."sKv6-oO9H0ՃiupOJyhVc2ja]IŔ^nv]W߱HXSvW~Mjo=Y>n%S~,).+ >-*W!ge@NE}%^DYb%^NfS$gI/fSh#H`Vg'6oR&KIT>~+xdKj]ݛl-nmbs-&oxZێ \eMmncUuV]lӅcqfajS=w(xS]?߁dFD/N YEtFTS?lMz؏~bO|C,E0}Q~MNOvZdT{FkdWM`Q0F%W~V?\e WҘ^M`+JȎT"-Aa%+aԕ`Qa &+8o]I`qVcF}I xEN!W(h=uW (vegQIs6aXokxVuINa4d%٤$+$XAr B%NXBT fϣH @'j `0 C̿]i]B S*?'${ B2L0T6:3@&ZiY :M3\ –1:-KܹNd7M6mk"!G\e̙5[Ʋ͡E&]zt猎UfՐ mcÇu.x[&>z_<]a˞lںfԟo=sŏ'oQvg@.DoQ M: >=$: TpAAu;/@r`B--B#LEYt0Ek4HqG{G yL+ H$Tr$d 8CJd .K0KILcL4TsM6,35,P6N!tPB 5?A QH#tRJ+RL3tSN;SP1} vTTSUuUV[Mu"quVZkZaSN^˛0 {MO`XdUvYfuYhvZjmȪ14׹_#ZtM3z:Bq㕗 w_~_x`Urٻ!#xb+b3X#',8ޅCl3_Pye[vecyf[dבs~,2{Xczh C8{yid<#zj8ᦙn8=6v֚g nWf!fo|pk6OnsV{+*r3]cXH-?,,0gbW'@{r!Jh*A\ ځpV oa%%Lq 0T ;1Dd`]wHxFPK dpTOPR+jX -3;AQ`)T4šβn;xōi݁ AJk*ܓ6qہw OARt vtG&)%ȅ|e`r&IPNS)ϖ˟?F YⰖ^. …8/)^UQ`V:ox_?x(ՋN[ʲ^J2,Lr,E]66Io U\Os_;Nz xHH M@S%>N"T)a(N/2m஡MxVFS8W6Sjcʕ,S)GA o‡ѩri$ff樈b+i4]M]~SIHϊT psnm^EH|£5jHQjr5Tq'oC']"2}YNݾ&},A֯%enzNW L+|׸4!o?/ypGŖv)=w{.Ѧ"Lτ@jH U߼SFuVIl4 )3ηߙ$ rx!2,BΠsWw]?{- Éj.:)2.<ϋ& .Ȃ(@1HA ^%@ b ;(ԣ;Ճ,4s?rאB 1.3x <)24 ȐP?3%D ,?%e$^TCƋ ?DeOhBDO(O$1kL3b KZټJ",c[L6#|2xڽ;+ >*=HQ=C]Iꥒ ;i"&ePe?% SFU<,a=8,4 *, T@,(AJE^ϳ=ɌnL5 B6msR@>f]5ِ]Uo}ch8je>y)ljAR| h@Ӫ+M(a O w!CDΚ+bp2d8i Mj;@^ff&+8r$Or%_r&or'r(r$Gi Y0s1s2 Ywv "^*krrǪݼa/6GA5c.7^/'֘ssze t23,@7A7drHWDja~&We=tu(u5?u6(@8Φ<)]j27@K3Vu9ߍ2Xrluo `W%lkw w$0܆): .Zoj!hbY\j7Hl 6O >^HAQM{N Th#^ۆ֟wmym_3 xv_wGs.nZmQzߎE/E=lٌBJ~k+MZ2z/g;Ħ/_`:N^?0'&Dx o!|F||8w8x ɧ4CWN$dN_2<Ş_ 8 @(z+} `2@3xx7~dBun/3NH4J&ɢAk-eȶx2}7*gWf*e 4``?4aذʛg(P 3<(*,l%TY&NV)_ΠBYsGzR? J? \jB!IKV ʀVX/*V,X(JgLg 0ʐdXAn޽}Ys @As@k~ ǻX-r,3o9O:Qyeα,4h b`vlܐnWL7Xyݐ`dU!FyVDQ_E)2ѴaBe]%WQ&Wy9uOTVa^ijt![KcKqVAeWFRKOgG+`Nc٦nf2X`n?ZUImW5a:QU,5!dY-!xBםuJMyL uh[5Kϕ,QK*C H**ބ\4bO\ +"BIVbPXP#]XmXA z,2 B%+IQ\q›#uRFZiU +=bb eO/x-qТN i&\)x*KyKj^!CT7` EY5&Plf͏?*WDOZ#k*2B3ij}WHZBu-mq` Unb 8llٮU`Ԩ[nRgd.Í63zˍS'Wgi"NշFZҿ}wa l0qo.bc>KbDmi૛ e%1poȅ&!bM,lvjDz&F B28H"]h㨉uB @J;:3n؜̅M7<mgE $,ijM_6X'yӣ\ ^"(1z)Ө5"F"E(.ô a8Х< ߊ $ LW2+d*Ɏ\rI8%*SUYҲIwba]uD_ ^qnQZeš- _ P@2XH*X`!J\qɂ L8':өuTB,yҳLQi}gN)2 CwTrZ n 6/ M_X4`=)(PLc*әҴ60yӝJȩwhӡJ8!KJ ЀP?Դ1CaZQՁpbPz"/ҵv+^פ I^+wVRc(R*NƵUEyLZǧQ=-jQO}-lKFSenk43yL A~UQt+ w9!)-U&B/ԛEԽ /s-%ʯKmeAa |"/lb_A8|Fњ,6+Xg )9i]89 Z.xEQ<^2,1{y$3Ӭ'(\^3uHfHPy(2h" (І>4BK#~4# F{' %AP75L3=@ >5SU$gc-YKF)5sjlگ"ۊK UB%qI  A q84msOp枂s;uȄ$YY|P,Q$4"ؽ"j8|9 o Es*.!ZDƠCFڴd'7ˁ ;فTQ~C h_;q~w03o*%DTP?.Q;߄ { 1 s-)Ijez5 r~QV3 W"0f}ŵ庉r!GHf#0QP|j7`!K.,{ g1iAgn=jVTkd$i @􏇼$]]xP4??Ԫt$A=ϒꁣLvvObgMݕEd\IPE @DA!1Ѩ\xq摖ˍ3<]QS {UppSU! gGBBD`KJ$fȁ uw]^=D 鋾EeȠM!OYYau!aٜ` XEG@! 
T Ǡa&&Y&~uX(F" ZX r˽`͕}rٚu00/c*Θ!b %?R@$SBϴf%^⵵8~[Y8#9κ2v6HE, {J5E]]a!S܍ X[=O!QS6PFe%F7b Xb~;rH(FNJ^(^('N ba()(((zX]EX())^&.)6>72Κ2"' o^j~)))3 \)c;.tz)) b id)L>*F()sᩖ6jn*v:fRX٨z*Υja΄/<)֪*檓d*. A+&.~irፆr(rԊ^+f',lޯ.H Zy艦+h ,fzشt'|֫xezrdʼ\jֽg羂PXN[V쨙%–~5g>gF,*ƾX,i:YB+^Ȯ.N)*1,Ui{K cѩ.Һ#v;lInfn ֆцLюkf"ڦ)N&)RY7ZʭvŬKꭴJ,ImmmrxJFn .{bZ"~!blFnt r*VZbVl.Rn+YVҮPdnZ^n`>eioN o`2Y9:MΓo^/K/L\ }enn0'-p5f)gop*\ӒOyF/Moe~/kVkBk c C cg+0A e lƄ̱t K]+k 1ps2/!zD4_@_ lNVq0yfp~LWbiD|KS>KPl1̞OlpLT\攉"{a#%_2N& yYlLA ƷLKtA 쥧|_&IWaVS,&3VCP %IPL8g5F<3=/=o,{,P G@@/AT㘳PdVgAĆq?| Gg´EYdF0kIG/hh\PxsAC0QuQ7㢱?n)0UWU{UgX!hI42۟jH@ShLA?X@[[ǴkuV; 5/+%GFu&ZhtΚ@☳ oM]#Ii! *_Io@1lgjrpJ56<6ZF?O6eKen gtLW7CXW \sA@ZJUvvmƀLS]F{'v.67LIfT3/.s/DSkgx T[IN Ic?8Fx#IżJXtN/;qPjzcDl twڷ]~rx4wj;(OwYɴH@0 m3|NztJA:=:?zrD@'4M4B˺,3ƶpyޮI gpAmrK$0;R3p%TCގv x:x^8̸Peg2{D{K&vDĈsH Uco2#׸/npݴyȇ 仃M$޺# i`Ⱞ-;s9wzj1LKk4(CP]GCp4FjC^ 5A긼KhW\ۨ<ʫ?<|(=G`PA ;CǾws{||LhA0I JlgEⴸUXiLd ڼgH97V} [ӳ tЛ6 }4^^8WxhKx 8lS&Kxa3K؃ݐ@vvtu/[X@#CUsmLPpp ?|(C7c~x\$1Nh s~P(:0?{?4t~gJؾgFĿM` 7@ nw0.$QA LRKĉMʠgyhT9iRK6ujTRr%TK BEhPWuP?ӆm\<o^{p`{X;^u<J6|;eA-a p炖)('AĜdB.vL ,dEݏ˭BIr˙7wztӧzṷ3. xc̅"H`u{qQw?!- .$ |0e+ Ԃة2g7w,lT' `M% .B$00g쌲\$jԐʄ¢,ܒ.0sL,44;D (QŒ@1. 1t-1İ A-Z^t3 8 DSO( '8c9 *RM=TO-@&, !t',i& 3|( ^T hh³b9Ÿ`uZ50H $h܂t@ P#h,Z^UJ5׼1 >^ ZUp,9,hGn }k3 2H_ik߰"U՘emU!>9}蠁͖[ᐧ,vz˼,>\6 'Mh yMh" fhnbk6:8B -(10Ëh %-᢮؜=})iS]R<+8:H@ ^cK"jg?xZ]=,0xT{| =Շ~zL :wu] uA 'qAS@>kFA e{Q"A~\7K3KQK F񆂤 WqкT X!PIT";DP шQD/ Ft0$VG* |7 ?@ ep# "(G? YHCdFHG>S3$)YIK^rwiF8o#- H]M04$ ,pR$ enĺ 0YLcT2LgsV$$YMkqi6Mo~8YNs5ɯT:YO{f_>O<5PvӠTB&@ P>YQ^T`FRP>] 򿹸DIlD b14ҖzԦJ iNCґ3YxJIgM*Q6;&p#uLpزeߔ68''oO2@V*CUtIlnyˌBՎKJ^T ŞT  Ss) ͒n㰆-f,,piE$dvt" [YH@MW޶K~m\,-( e^]RUr\>ѕti $Z.&Qz'4?+ HHLʓBӠFSKSNSS5!5 6'Q66e6oSXvB99:S:::3702P;-!ۂy?@T@yh(5SAT-uTB'Tr)C3y 0#TCCtA907EEDCDSFhEADJGsTGwG{GHTHHHAS IJTJJJKTK,T?tL4L#s6! : AN!4N4ܴaTP'ִM;tOAQ!O;!tFF=A t$]M`NUSU%AU_NO!tJaPo86!4`UWUUV!5 l5f<nE VP|RW4XU^R5U\u[QQ;!PqU]PW^aUARL#rcȚ^iTCHv$?'it .USZ <[9ScS^u]4N!dU^WVO_Y2 %,0YR 6` $lbF!T.C.jHliv,]]6U!shmQ T BǑ궞*b6^QoU`f |CڏjdO#5a5#\G0zHbŖ hdlUT:uS5n]d7(ZiJ%놏wl[7p:tv n 7FRwXG!Hu6d7ЊնwVa|W] A[" nmjyQ6 :cmUanl" N'4B#DU.D%` (fw~Nw:H$`D!bB$!,[ b#9$ͬjlOHe̷i]rGsXdH96W;xU\IVZ-XC~8C0ڀŠI8[XAPj\$"b BP(Ň3BIBtGz4WsX!h#DB`]h~DXUQ9 UvuMwyB\LFpi-zcOP(?B\ @ @<~UD ,Z=Kv"bp?Yer#jjwXv'\~ؑcW47{ꕍV#hd2#ΙKXąLJAeNAsNO![d8V?`V0V7tsy ȍc`O:? 8].IMˠE8$ne]HY.%:YnAJN9sXא6u NߌO(&~ T=>{"ه M"tCtF|Ddj5V8G宷{{)<~ ~ 㞉ANվߞ>i߮ ܠT;_ `BuK`1?O8 `\u!Ko_ay?dieߦl AeQ  VW`e?𻟣_`tu۟ <0… :|1ĉ!&M?paC)L:32ʕ,[| ؼ&<{ 4Сaر;0AD"ԩT1@έzz 6د3n1H;x 1ܹ-]x5͝c 8Bf.mjE N}Ląz 9e5rRފ ^)[=hݼ{o&4R6L$'ҞEܭ:zJq88;gWܻ{x,hۚ>B<1K| sၦ*]v:}G`F\b큆L8AT :'t_ vbfhzM8a=3>ֲOU!ɍ8%BxO1@~g8b^`AI9" 1p t<‡< 41b' ◊. ԋŸy^8#cOnoL> j-\2_z7'a@=@)%V2 lb)a9&$NitH/"`&trq,.| HڔYxiiW;Z)"fjʪQNY%f9/~ C7&J:: -;Rkm.otK[n [{hvvCRKCVWmAZk=B]Kv;M(Źf|e7Mw 7 73=^D.AC BA92\n 9 B[Λݪt{N{ߎ?nc+IK,|/|?#:N;o[x~O~柏}+ߏ1hx} ln@bj|jL~@ z7C,0|hD+ Pk?@FP|@/Z ooJQ0.ІT @ k3/,[Nq!e65b}.XqzPF|E#0HPRYP?\h @6Y~(fiYpgi9 _k wr<#w Pl#A]ܢ+ ~w `x;} za=RErxDj(_1c,Ra ^:_P@yF3Snd!xϐęxd&V<d#v("/d5{Fై;d<1zn%"-*HiJ} #faF3mXA5Jm R#1FVjT5fL5ZpDH `| ebLנ9 %ȹ/5K? ŵ5 ]H>YfqE_WY /haT.xLPK fp!  
:oWGV4b8m!Rb έ$UzIn #6b^`r%X BEDm] &(H!fY*SўHA 3!8#0ڀЬ+L p"3cQtlAP` ?GL-Y~ZRT!.jNp< s=/B |1n3r t3)M̋rz37ITҴoo-@0ϽA@`P B)`iA-ւ P0@^X3 @ݷgJ{1u!&  LZ v@~ rCk[mAiV׼7ŜBh=Q Դ Y2Gjw/oZϠ$wx=  { N[ͬp:n4 B,a(`,o_|3W nIᚱmbHx~t.P] k}\:op\7+?|퐋A ϓtZ7axq)HaF{}>엱tE P?yDq h;T8S (@P+ /#2o٣n3 tS`3 V@ʀ >D}97j{XZY?|;+%1M~\h[eVPfzF~We` v' ~avatwd0u'|G|'WYa ǂ7V 0mWǀ z a8}7[WaidqF'oG%hM:G( e`~gW[hP7`2`zh(}uc8e2zaH^8bvhdr1w0u|D2 ҂g  ~o7p7UWt}aFx}1\M`MZ`CfiфL'tE~#yww8j'g~0VVpgz8og`丅Xh{y8[_\j&m\@b@x&h|(]deȂ.8}7zh [8~eݘ[8h @(T}hM\72ibPN m5V-ia\dYRH~qfhS)Hz@}^o`X{ Y+2)apZFHMgk7mkٖ%/xs)SUPqeEtfY65yb&UmVpgPYH`VPw8#9/ zvF ؍Ǖvx7P}؇z0h&:'ǹZ@^J)7h`zݗzezI~w7Zg-`"ϧΙUНz'zw3t)4]z9)#)`7~툣,dk˙+} ws9(U`~I/>/F(~G <^*J̹RWZzh:`4y{ʧ3Z/- YOt:Gj }}ʨH}qdc3/6b).0yx^8zeP!zާ #@}z᎕*O uk=3m)5f8Y9s929C:7c:cuww}g W-,0l!\ra8C֕s֙k9nc:6_sM>י=Uߑ4m)*R* C*+4#Od\`՝ , &{-2r-3S37-:3.F] ռL0r 3TbP}/ }]2--34'63M. 4[o}գե00 0#۶B-6 , F=I2 3ŭ"};>AܲB""HmݫMmu(<2}$*ƽȝ314ݙ:.嬣@ c1LN~}!҄0Ȝ.s1.3?s|~ 0won0q.}my^ޔn7H0tX↾} 36穮NS Y<0C-I^qo뫳P{@؅>܈/25.gu4!~>’>N^N'P V`ҞSu`(nҲݾczPn7Z Y`~r>t~wnFo/JP p ]Zr0#ߗ&=t W0 pڔ P L#߹) l*cO<;PKi/;l6lPK$AOEBPS/img/timestamp.gifGGIF89a涶aaaˊw~̔̃𒒒ݐʰ䦦۾Ƶϛ򬲽ӺԻݱSXb嫳$&*RRSjr~좥ܹÂˈKKKԱʝGLTӫbbb߮ՎXXXͭߦɿӰ¥頻\\\Ɣoooqqqyyymmmiiikkkuuuwww{{{ggglllsssddd^^^rrr|||ttt~~~_eq&6:Mlts}}}/28___eeehhhzzzjjjfffnnnvvvxxxҁ`}(+9QW;?FVz0CHC^eiVY^???,-1}ߌy|[`iװǠaloۻ㻽ƈˌͳ̐ҷ!,+YIK&E ]8H%k:lrAF__X0 WZ PF(T n0wP!B`@ `AX>󗵀 :e BEE{q8nēP3Gf̨Qee+'yx˯%hpa#RrժaPri^BIJr}:c˞M۸s֭AJ+-]I 9jڔ8} 5TZ=g[=s_Wao)T ܠ9Na 'xU_Ou>>n񣖵7}KN^/텴--rs`"= s` ~x (GIJ)2B@eP1cc T2Peʝ4^26E|v<%#cȴ+j0ְ60)mr8`4nc׷?#dut@f7ІR2Cu3 ٚlH;0=/I sxH$&АG46yͤNiJ܌h԰m|?ɳ?#"]LBe( ӮzXBU O6'\.bI |g[1[j4(&MSC 6Xpq#*7m4'S}n\\_` } aWZQh% :;[zԭ<̓HyG;Ki^R'Rj67@;+v5'fTAPuܪ# }s^OwZG֎v}?I,fIU|5mY;1n֝,vhY?*5vg9E6%B;Oh[K-m\Kw.i@ |7;xV8gSi8r@. P [[w1Hk8J1_'px^- QDxɔv] OUTE)4C-1@@_(ěkֻC9@,D2Mx+͸J;CTkK|kɵG9ҝjO#M@?v|xC0yv_%nxPvw{We` }%|p]EdJ~slGWZ8Ł g{Ge*%P dWwX _WzHŃ>7U0`tvuy ~;~2H?@ FX KKpd HJ r @H`qd i^$Ȅ$ls7dJg8p(tIvxI׈ 燃ȇ `؈ ^ gr  c`bgdTC,̭˼=I%S@M0UyU2YS3?m]ul!WSttMUU$Uor$݀ ҹ] ۢ4_ ܔA#APg$BtNIsIDMm ,JSjfo=ǞGLm1Vd 'T$$BsI$D F7p6G~}C.-0}mU/$$]$9T0Z1 91bC:Y+-ܮᰌM4).=T=Yt=W*\ 35#+AnDI.߷},|O'^Zm嗃= `5en1.1Kqc%^`vN2WD5{.=a>cFެ~gȄ•hQ9SC0$CPbcfћ-ǵnw|&أ4^*n=2ncN.Ҡ=f\8I" /k+B.s)wLZ3N^:a3dJL-N^RY[9%?> H>K$>5徎E笂N0 oIKTM_Ͽq_`og ">o>O*y`EG_VFL5fPP'$Ļ޵"&BE!V !7 @Ca `l)e@2GV(؋ q#9*ؠ0cF*/ D .\)R9j0MH4WQ$ER%N>Y?ĉ/fcȑ%KnUH<]4)Y4bA#8a _}a WZH" 240V"D`Q"ÌR%KR,6& 8SТz0)TTb+XȒ8;BD KڋNpZ>YP,L3&ERPd^LKE85b6nmz󭠃Z!$"4#+餔Vj饘It)(Sʽڪ . 03l2::+,:,J;-Z{-V 7tmFj :9 9脤:#ێhZ<*C*?;sLPHtB ) 7O#UE_|tld!K'袌6裐4iHPY"%d2<&ȉ<(͛Z*VV,2-4.+0<3xO@4 e EDmQGcI+=.z䔁ԝv ;%E򀂨`}OKj`t?'AMcp İ3g=QiMDZ]lFQH\]!M]$.{U*2`LL8Na6%~b91s3Y?EdN>T6lWe#19:ՙ$N; /բYM/u!u|@A f94;K6D[w &soߏ7h‰WJJ1KsȤ rk^&')t`ǖ5dJ` n/ԍdv)if9[ק3R^۟n*$OVspj^ֶF! 
lY٠6|!|]-kG3ҵ.]~I^ W}%:dCɕ) S@]{cbF?LP @=*v룝lS# K5oUG;ޫ(@`RRD!{JTؚ'J+֚""Wn[bȷI_UTw+Z5G8鱏ۜAqP 0RK"U1ţIp[({ͨjBֆU~#,&KjIQea.gŻRo"i$b4O.PHM1˦yӍy3C !ǹrRT:N=Gsa%ySDæ"AeK# leENh\ܛQwKQ=c i:St4-Sk:Nk'N4Sf> N“wtꨠ*4ԆV lV8JJ_="BŪ=&vY _Di9̢̺XѾu+>M/S\a):fNdcJV,E9I҆2TVU~pyd)wUJV]Aѷnq֢WbrW梶_t+V=lv&v+Nμ >gzzQf k!%kZxdtyHAPܺج_ 0,[@[F0c 36p3#G`s9l#tCD9̑r\")݋<}\ /9Oв:hН.?]bks[㦙zu?=:Y1t6=Bڅ^XX'uy؟\`hS;_wûפb9)r;)G;/^nmF>wYCI<_rߞ᤟uOǪxo쯤p?༿Ձ||wx=ߝ}Vwq*7Y|os?n??|2c9L@@ @ӳ8&< L cͣ+?7>=+A8|; <۷B"C%T'$C3* BC,<>6t@7B8êCATU=>,<;hdpdpSHD#1?UHh;dZdn;>ed~ÆKXS7eEhX SpL%6r$>hǀCSHc'L?NBxy㋆j,FmG3lLJdIs[ԽI;{HZdĐGEV8<h`ɦI;CLS0i8lDDTDTw Js~ȂH~GSPKLtKADKrF;cHSI̯SI֣I.EȂ\J H`T>Ȕ̴JԽs HcJDǹL~0hNSW|JjSҤWGMaҳrE~ս\]XixyTDM'5VESpW-T[ }=։սW\-ؖϛ>W'TUYX}B[umYq-VrXI|U7]RE ֢5ZUU-dZw]ԌPڨ֙EMZ@ VZ:ؗK-\=\ĭȽm\-\Ԭ]Zӷ5Xۗ2( ]]-]]m]]5~;Ȥܼ=<]]]ز5AX=^M^]^Eh^^}ݧ\C[X^^ _E _M%}-^_$_^u:S=Sx䕆%`>Rh_ܟuQ- ` ` ` ``e?m`Na^a Ooaaaaa ~;ٍXd[ ^b&nb'a&xᎋ8M`,ba0- `-c2n:*+.c6fc.9/R0&81nc;34`c?b}8&9N@cD<=5NdH@V"6bsTљHd]dkcO.e~xcɭdKBC6Rs`&eXdTݳEZ=ydWHeΝ`^ffnf eAVeKn;agfl`FcZFа9~bqgrIAQjK%.gw~bn^Y%f[g\&fDfJ^$agD6C[&h]>gf#^ghcv9Nd~Fgڀ9Nhbh{dnh^ehܓ&n{>|1^Khϔivߛܗnc8nP:nj>,UjΝgk~ꎋShg^rg=N]dfkfhfSflgjh&c8N_aQNƆǖl<&i3Cik؜>mmnVd҆\_n8JnҮnUfk~=[׮MV vkF&ovdelfFH.&6lyineLO7^nXnpI^vp3mCmO>qUcsq7 d W^q>S>rri3r'q[@n'q(wl)'a*/> p,G8P6os7s8s9s:s;o/gF^MBoAae-擾XtFotGtHtItJt%xss[p.$C?O =/01C؁+/,p5RU/'q֮ptURb$xv ?(h3'Suh 8fv( Xv]ws_Vu9u{e$VUii\W%(svGn0H7_[Oi%(%8Xx{-xz@{'d|Wm}_$fj7ށўZ{vGw7xizoޒ%8ޔ8N^`%^&2fKsqxO$hfz]{X2^iM?|?|rƧ&xɟ.(|||||/o|D8}ZX38Z[H|؏}Dd^2XW@~xw~?ފgc'8~HfzRxe~UP/?O_OP(% .{c7t^,h` A1l @U)^5ĸcF: I$~HbL̘2gҬ9t'.%(-j(ҤJ2m闞Llְkqc],U┫2(D/o<׳o}̛;,XVJ=@pY6?KPTp R Ɛ^d xF8a)ƘcIFeiƙgFiچņ!] =ax%M??9d~-_uׁw}r]z%Hh_<) E4.GDXQ)9XabdUv|(ַbiZk"'dXBF{-#? 2_? 9))aQJewa%z+|i~'u PLKJc)Pv >V(bh5"4㞦ɩFK4ze[:09f ˏI;-_Xmacێx/*n0r[OovZ9z+Zfh+62Hcƃzlhh/FoIkƜ1͑zs<}vy>練jϰP5/pڊ"HVgO(>}8X( =e=j'n&>Q>:饛~:ꩫ:~-DŽ~#x(+8bѸ9pkK?=[=k=`o݆s{2>D+||<]&̿0?t';oWT̕51}ւ߿pV Oj7 rgdWS8; d+cس*$U?Z  xZk:\atBɰ_ 6a|- ֎2RhEUK F(.TEViqv}#~#(3JPd#ذS<|$$=ՑAMp$(C)Q DH҉"u Ν%.?vN!V/ ebcRU1|5Fd^޶L5YmGLֲjmgݚiw Άmtכjau>0jb}[sP03aYt1C e6!OX/W߆5< 8v,K-ˮ2Zl192Y恩9t b;؋$^<2{?/%U[XDɽtf\_=?ZǑ) 7#F 1 .u{eUW*+|x>k=[zVN:/iStu8e(2-+[ nO`r~LIL6V(af:Fw;C>r5-]Z8#.S微f/i *Kke.9c.Ӽ/WuN+.F?:ғ =$ɷLZV:3Qs|yTqf?;ӮfGv;I«z¼2/Y{WLmSqamW^^5CuS?=sУ+<ooӻu!W%{rnW%#{~Ƀfп\壇@Hˏ}oZ}g=} `!` _V Mi]`eVz>`M` W  `y^%* !a)þJ 2DE3$\`$ʁ!IaQ@la~̘B?*pʝY] FAI@JG )`8I8ݡ?a $]!2)DI]\@C!.k(CA,2L@b?"a*.b]!&1D%2MQ_u`@DC?x@XaAB?"A?hAxk$K@Z#ޗ#:?@46 5b@#A5*@$C=:L '&c- 6c14)05 @@ZL7NCF34X$ 20G dHF`8C?$4D,j),I>$ADDGG>YldG`cE>BI8ƤAX&|M!:*?h@*,d]&e qڣpw4YHbdw>yCP%7KB z%A(2ܣA3h/"$J@S2xYʄV?ZtV];rdD0 ('> 3R@&4#PʄrD!*h2p5&):$BhS.gxI2?:IdYYb]'vҟvb~$)`)v'BhTdpNz&)L^h^}<"M&a!*wfph胪OVd᠞#gGVd,nhA"$B:0a D&(*!LW*nJ3_CQCPn%I0iQFjk&k"%4Jf> J+Z`@4&jCr/@k"ƫ & "4Vfl&(^zf$M6jD3L)`ljl4@c2'r3.f ؙBcfin\dd֬DncОІ~j?->b"B!, V1)4lbD)h'1(ٽ1&332H2@ #p+0h*+0ojϥB>s t?ZA#B3CCD X4Fk4GG J4K KLtMgN4OOt15Q#Q3K+5S;uTK5UKU{EouWuX5YZ&5 ̵,8]0B_>sb@A'B7CGdSegFwf4I4J4K4LvMtNtO4PuR/u4SGTWogp{5XXYZ5[5\5u*8^5_5`w`aob{cwdG4ES@e_4zgzwgvviwj6~~vvS6F7W3xqqG/ 4w233?4 ,@\&<SB(B2 CѵHLDQE]p!yQ#<8 QjH莜NI((ZTjU,JY51*LMLN\1!.44AMCM5ֲmnm7}t⎳t9x5IꖼIQ,<3=no,ÔwJ?-0鴳<# ZR@DCMq_$5e]%y9^.J~ʀ.oio4M(ٌϏ?#{P =9Qm߂RԹ"qSwTOZUVqZi]rb+ޚAe7nh1Ԑl;Vm?![q]I[H4z}D`߲`a9^QWcS4bw*1?uft*]L)je:^7fWvjJڪZX!Klxż_Qj{Ÿf= :+lwekmrWE+C}b~?xv, xWUc 6Py1@ccF< P%;UVg^AB5.X%BUF՜pŪª( ^CgU$ZX=j5WpmvWMK!56P¬"B5qyQ=mլQ;PPCikO^ >{zqrB_k|`w jpíZ-6:Hvڡ 5l,D a~xW51gb}vKd r㖹 Y7`$XW*5 :\dʙᦀ*X23p)h'L)`޹iF`gB 6T&hi* g)b}'jh7 sm|ɚV_%sZ<1z WW!Y߀: i"-Ɇ($L[Diju~W޺Xy-ܧEp}讑_Y KȜpGbpF͕}i'6iHͬl0 P0LkL Eؒ8s" +w᧬ b;4 C)3$a'#&?'h}G}" `@ \`It/ W@K_ %E }K"W#gXp bc=x > @|" 1p:p3 ,PAplXw pk 
2A|!G!0_@S"EA4` 2|fqh$'؀p.>pĞ(AQ$.EB0q!:@dž1ԓ% aHPzX ޗrc؁#@D KPK?_I <5@` 9 n1e4;E6 ҋ8=#Х!Y]d6CBv@xC 8@%_2k4`Jbr pa R<CR=~ \2xwG̈f 2p? ` d"! ?)LiîVǪ#zU>T0Vԫ/Ju+&jUbW XyMb:݂ ZLz hGKҚ,rԺ ڦv2nw[EK=8} KZإ.g{4 ! xK7 ̽zHw;] @ݬl^?I;~p rk6{ k_3H(N0U58-HF^N0k 5"&@Tp>A2(0,x`e$d26dV-€xγ>kMB0K ѐ'MiHѕδ7N;aRFMRԨ&5EVհ MgM`Xɻ03eǤ,|,:>2^/V^ c9&cc36 koYƶO-aKdF\/{'a7nq;ގ뽿{7 bk(&N[%{ ykٖ(OW'wgNۼ8yy{"<7ItdDgW}&.yK=޹k;f׿.=_@}S0sw$hn&}{=^&0[~73&UћyOp+={Kֈlo'푝{O(rO{Ԟ}_#YozC_~F{37?׀NїOW۲j6jxx&GdhgefXgr agցzh"8h {g|ӗo'{7uÇ H7v)~-h}/H{;h{׀7 ǂ|Go3x 5~Ax5'?z9S(1'H|O ChQhFNDA OT^(tXvxxz pcHffVhCs>IT xhXLgO1t8]i(h M_sINWwC4NS4EFDF>IUR؃\^h13Pҗ^WP4Y6y88c9c>yb5sODXM(E EKT$UKAZ\ٕ^^))KH[h6 tJP,K4N@DJEF`߀r1Q/yݘkm 'VAR%E0WJ„@DUS`ހ8IWHkp&ɛl9y9tIPMX4AtFMqNsH^-0ٝ݉> y6ٞ eY)nRȖfl48e>O4NPEOMCXMk{fY~QNBs8P.|)L!5R~LMFRɡ钔W)jvSYŎBuDDL0TGT3$UQ A@IHh,I h&>EX @lj8Kum!zn| ~mzJ tڤ QxC H: J Lzwʩ]h~e1 JdiPə0Z䙬:Ꙓya:zºF([蹬ʞڝZIjZЩ.hZyj8ɪNƺ Ժk jEǰ Ѱo4J|Ff pjZ+&kujfwG\JC ~1[j6뮃z:^ʳ(  k%GIKiN *ni٭Z#PC8Md45d⺮ik;d{j{镶K9n;_yP4R EetN,UFDHe*L 0?Z>Uul^=Pۻ ۸ q'_R;:_B l)sJ9DMdD@dɫ3˯acH+^赪۳ntTLWB4EyJw\\uZ՚̋{KXd>K|VjM)$P,%@ĈJ`J븚6l_nZ"U_L<#HڛO+5?D&L_D{@9<۴;bb<dB,KtR4MrMn8 W< rj hCLiP`PǖijVͳ{`^ͳ; =}? Eu-֔zڸ=֪м]l n qܫܭMݠ m-=׹- =-~!(:WyqS`Sߗǖnq1z3'\ >z=rm I v0|`©}-)^0?a!~ݭ %ԮQ8޻`Vao!]:7ޘDpGguCaNdKM# \A CNEnGnɽmЊz5^~舞Ru wNܵ rDnq}GnĤnC9P. Nsr|Ub`A^cMgeƠF툌gmՓ^<[m46 ҞԞ־^3nXN1ֿP>NpM k^".ޮ>ؓh^}Vَ}١Fڦ,&Iڦ q>q|=~o )JrY~opxPj~~L{^ `l/::Yv_W`T|b(zRX&,Y m]? g\ѪYkHЏ\e/?(Z8l!Zyg_DE{Z6X'Oa9;sh ؟ڿ۟˿u}Z؆%(:Zn W dj ki li klkkiik țӮںk۳ėˣu[_s[X;H1Rf؉H-!Ï2h\(S\ɲ˗0cMM'7rtO@ JѣH薴RE:իXjʵWR}$Ns`X˶۷pʝK-nuޅ*R/xG/* 1rÉ3Z8,Ѭha5.ĹaN#lVWnMs?зFf;x5yuIƝ ?V<:339 ‹Oӫ'/9L[ŀϿB78 _Y{ 6z1y6}kWvx߁I݅L'Lu(F]vmMw&"+*⍎G2jCc'Ih?2H}fGD|T6d=dDN@ &[rٜEcf*Ji͐|7ៀ2(W76!Gc}`f: p(騤N _qj2g)u@v%Rڪ$~kiҺ_ djei詘#z+B m:m'^حuC+@m J7oXfO $ 7G,1tg[%,$lr̜lrcƣ02F,`8H zlF DmH'IPG-P-d\w`-6DPmUlpmXmx>BIL.^x} ̌7㉗BWnDnw砇y -Fo~'R Anܺg7{뵣~;TG/VV^gD髞do.UjOMoT',{Xg=OXUh̀Sǵ@NÁ`4ς xt_!2D$ F'  `8̡w@! HL&:P؃)P#\BİRp YuI_8FgdHF6bRHahE x9Mx,E 5HB&ƐXc"WHřё,A z (kXÈSL*WVn܄$'RT&?1KZv–сc.їp9Gу)D&F63D/ jFE69MnF›g8q1NY"Ҝ$΅S(~9+JЂ#;F=P6¡hDg1QI HGJҒ(MJWRb42.LgʸX/4ͩNI&O p@ Pj4R {KRT9-ߌTJժR ծz` XJֲ]-î ֶp\J׺xu,׾ `KMb:40 h H<`l.pk9<`:#@T6 j- 2, 40CmЀ|ി]mkZs(o3 6}nۈX AxY0XC r AVZ#FpB]Gd ko'| ;¼8p7m#(kn p} < mD\Y64`Ȁ1ۈ^`#8b @lcx ,mc hl1=1lC@^6Y3y_; pyCp<…3`n^0}=b G^sA0ZA7~f1{g6Dl`Gv o8m8M,j=~rcvfCLFtyZd 4hjÖ3gz e-v Wh-i șśm{ ,3[40m TCk0r݁ /0< c h<m^yŨ Z_8xmی"7uuCB]/G|9 Nw3܁4{/A tTg@@8gG[iGsv썸?zÎ,y}xW9]S?xmye.;&j f1f\wsyb3ݦ-6`,[@ӏ@uű]9{XgG㽼ـcvO@q_0 [eSk9r?rXf9op#ۏ6A%1ca[7rbleghiG[Ag} [ `V|\zW{hbqrZ 'Pv~(xkawn{mP~~F|~@:7wea^1|%u[V~Pa`ƶa| PqHrU;wg{ ]^~futesYspf痃5et_|Ng;]Fw§|3'lVr凁P[mfwbz1 Gb?R(zfavFzg~Ysw%c^H`dY:Qt'nʸt\pub 6by(zJWqc'~qfZ "~pgyoiqo\|\ɨqEW}BHgqP]EyFy6]u^=qYoheq'\'vw~j1[Vd㒴H1UXp~qZj`zhy)1&d@elƀofc&(i}zƂF  xgdiu\i_g i!Pi˵aBؕhWf\X^Hz.y.bIV fe|yapYyٚX;PKa~##PK$AOEBPS/img/scheme3.gif;=GIF89a7/28_eq@@@fff&6:s??/_Ò ???ϛПw #&*//#ߧMlt000w~ SXb__GpppkGLTPPP;?F```ooS鐐krOO;چ 9QW(+}`C^eVzooo///OOO򏏏0CHi___333|||&&&LLLILR,8;TXa`sxY_i3;=iiiZqwfvz!,7xI1xMV;;S7Ou¼PIա[PݸJXQՒ1JCOSv*$5i6]"JH\rLb4YȱǏ MIbC(S\_9+CʜIf(ɳ'!7iIѣp賩ӧիc`ׯNaٳhWiU ۷p1ݻx%b+߿ 25G0aǽnFL2½SB̹䲖CˍΨS7dװKETq`EQM8Xݼ+݄B>qBBhU`>(D$H7|'r]ԃM`!]/WBu.v0L1!Z͝d/)xЍ>׋vè|) w0t%x];8 /,$k.+X:Gua`˺/ϥowY79yc z ѝl ָ/,ζ }7Zƶ&A,C϶B l';:΀˜@ FU)z kB%5H h6HFUEl..?_φ{ba [ҹ\îNZԬWD;hH0>M;"#%F*сy%-bnHQLidR܆IEA $j yb+y:T. mMX4%T,G%,([{S yl1' _`=R/·ɩh &8c?˕gQ6eb52 ĭj"'C0|I[%r 0/ZXzң҂24jVL["M`2Eݽ=E5*{. 
գ"EdSP #8Ю6"   ZZ"%CȠWz%cUXNZ`rp(=ʒR1MINbkZd7z17d?KҺ%EiWZ6s򚢁z%]-87g|}RZg& :иd8[na";]% 骤Lڋ<9XvূB%$y T}>1&C~ù { V׫VHˏbøeSܶKԸ"RbR_ NTqb}/(!ވ:~,(J$@$cQCCqsފV<'OwaO7 `Uy =ך6cq&XNf'~3 TB,yʎCP~ nEÀEvaÀdHhhZa3y3V)ثQ |01wQƼ@ȸDF'65eLa==)GN + $At.,H)+|JZD,q$@)Zh\!ȁN$\(A8kTm5MzL|ͧNcSpw& Ըx w}L<)Qen7 T7A n+ۮ=;?\'qN[C]֕J`P1X4/d$,,#N҉AԈ!y.y p#*.}Fud{ :6 &k x& v":tZAH&,H㈾P{cîF K P&^v/"NAHa:DrO窟9'%d$ tO)d(T6Gsp7*).bc0gX]L`W`|E% Jf'$# !]ѨPSN #N\H`ЍOG 70J(ZIWZyXR\B3c*NxZEYVb z5Jx x"iAyVpJ9S`ِIE.O!)9Jp/*a(I>9QjD[?QNP0TєO0VU;Z1`LeiZ@V0}jYoI ti \!=Za h^YZ1d (I %qəp=)@9/y !y@; q z@ ap@ X i @AR A ^ީ9xyi蹞9>4'#`Iv+`#0?z+`( c J} #С$:t !ڠ%(:?42 [3qУ>@B:DZFz(HڤNOZ:pVzXZ\ڥ^`z=>azhi Ppr:t`Uz|@"Ч:]ʦڨwxꨒcf:Z쉨کj ֐ZVꩆ*@0?BpڪšJ :A ZpJGp::к** {jjڭ:PZ*@ʮ[zʯa*I ]Π{X ۱ ۱ɱ"+ !{ Ki,) +벺 6 3 5@= ?[*IJ G IʴTkQ Sj^[ ]; )he g۶Ȫɶr;o{ q{Ji|˫yk {귊 [ kK [*I; ʹv鹤+ ju+ ; j)˨ ۻiɻ› {܊+ |J`i˧ ӛ˸ k㻪B۾;[jz/ڿ쿥н`ʿ j_a|jepd{+˜h.<-a4:5b<ʿTD\F|0GLc@MRY0?'\@@àj_ʿ   `ʿu _0egõvƎs\ǫpvp{^ bLf7r'Ib6} Ǽ\ LRPTPNR`?0҆D\ҊpЧ\Ω|GAuM\"'{P{rP-ll:m6m']xP^Ղ@L1=5FxWm;=μȼ C6IDJ} L]-0P;*"ӂpL@Z-]@YNRixU`< p PѼPפ&!pGykz'_kQRh,Շ͆MW]-]Ӛ]c3D j]N҇0|BF ;љ}ѪK{ۃ&]xp ӑMx0W3F]֢֙Tm,݀ݿB)Y$=޾|ك`ߏmMTZx]_P>,%#_2rTEe =թW@No\TփN0OMRp]n=mJސ; ؉܍Ө`o}qt ,Anq爠͓:`+ iNjÀà*y.ߎ@`ꯋꎪMܥςp@Ą߾^څ}GC1Q2GJP&-w썚쎰\R+.N `Yό.^}Gұe ᲏)hAJM23_ 3ɇ>^+}1FPU _ ndE` ]V2hE2+Z>z&3?W4dBᝮ 8/)/(_'.1F]W0?3NR3mTU"-I..O$(R$׻Sr.a.&cvg#ͻMՅU`p/m-]0Fա]?5NJ/4a2"f [ۃE co( x ҝ _֮M^\_m1c+uu0u9u3uv9u!vvw xwvFx_E^ExU RYxRN^Rx ^]Yv??NUb  xI*4!@:E f1Ax aN:$R0f)ɬ&)^X邡 0bZ)8 ϦUs%E+ ʊC0'Ҧ~KKx3ÊKYWm ʽ8Hݻ[=[{W,5aXXĐ#K|/mc]le9#Π꽌/eYIcL0È3V۸sLUfpC,ަLfo\߃WsSkνSNx(_<(N_Z;ٻߟvZyuw}&=ԝe}g~r uu #Ih{b(hny!5s%v)Ƴb=5!46>6L8f<֏yې7Iˑ$Y֒l.H,!JqFFZ}uu F -*bm&(+o҃@O+]Wy}^s8hywj.AYzeoR)r9 )xwV\c쉶q Ǥ+'4n,ƒ/|wX9\ Mb,n$m2r(/vXMD15Ux5Yk1i>#eBB4-F4k'XۧpwD,ʼֽ}Y~X@Vg)D<ްH@fqY[;_Ovc>L'7G/W:u[yyycy`2,ߕַ/+}=YKZq>YO&A:0 Z̠7z GHi 5{6˟/w + pKٶ%\E4c`.S@<Dڜn`Ja6kJ(edWE@bȨ0q+EenkFox,Hyq &%alE򩑢$Xĝ%,-KY/WW+P:QP$eLɠ ͑WI=RFd *< @ 0ܤW=se kg1Cҍ^XhTF\dec AWY>2h44@HBj\ j(LBjfdȉ5s6QXYwG }RX6ϲt +8 @ P  x  D851QQ26T 4 H@ԴHuT-uV P3Hhj.ZbGU}mH]WT m}a؀^Aq-^-@)-R,ܛ*ˆ)ڬ:K.knK>&lcaڡ zQKaREb4QK HBR*Sp!<@-tA L*. p@Z{C8 8_^R S.7꿡{'JI\ѷBj>`'+o HF,@Z 3Tq 5W0t Eݚ.pvc40%["BM79G' ʅ m 761ꙫśs1`lkq(p)X"sҗof~ z5w >61QM0U߂v+Mbj(va[X=0CvvQ!W}%Cpt6ƩwRܕ#E"%`lt0fK#~=/qI疹ViH"ў6x՞5X8x_8n`o2:%/G='o$füuyu[cፏ.}`mZÊvyzצ@2_K ~Y1vA$|:g:fI}v7nv7uW VXZW\\c's$q;Em@/`o ^hZ?E ph ?pۥ{e:V a`BXvtQ{,a~o~~^g6` *]OS]|t0p5pipVt:p>5*h:@ P[5@r &8%V@tw(E%$T*PSBU@ 0pArrB55~ZzgY|wJ`(cx B}EvtP>V˜^T~xSav?/ІF%^OrkXE@%5U[n PP(gvvShVrUPkTOHGTIt3S^4%U=ָ]WhiSfiH*Hr(m a^|7kw!'p^S8TThhWd.(8ZASj(b(x鵌tp?%I[qgQp_w͕Ŵcq@`3t# ?uXUdRS2ioq$2`tV:@hZN)& xTT*PGI5px{(xh TtN(d{nysp>8]A'ItYxtpBJH{d4bvyxɹioXd )Tzۖ<`V.i%X8 TGc{jtwɅTs֋:xZq6 a/Nɒ? 
AՌYX6y4$GǘCEOIr6Por85)e|=T֙7*XIe{hRwTG R 9l* .uv^2puf5Pj^Cyk8Zea!&Wމ!=yC^ ]ԙQXdV)T)X'K 唠jblUe0ubIE[A``}]; Pd)(ܥ3rkV6Cɇ6dI5Zؙyf6`Ts9ؕ5Ϻ`)ؕ예eh)~Q_SJ>UJ*vR'{;m&ƎAN Tۖ** kVeڰ*Zqäf9OjYywLT.:Aq&6+ATu嗩鶩թ:j2!"Jv(Qz 'E9](Zf!bqFy|MY.Cit}܁S{+{6-Y{# %'{p*0Tw8p!F@PuwzиE'zw +vѓAdprkE;2Fmzd`ʛFf`!੷hk۱Qp)[g~x~k{oz8p?yvh+v@цv⸼vۻWdT 0 o`\ Y`a[i^qv0K;:1k@aPTk>bvxrʪJhۆhwieoyY[1uZ^Cf_V E5TGMf܈E-夀p.s:FaVIWb{T7ᐦuz͑>%"ܫ.MfSrȡĜor.t`GM’c63b{׫aqع!Dp"<ZTB0wV>Pꇎ ~~d2NrkH\AG3w-jTܕT$m`' 636|cu@XaD5$ ^n=" כֿ촠ב-Kp`r B?rN"}>P vH΂/lA+ D7zJfENSmviZ`%"vЀWI= w@]V0f&I16[@ gqr{:orH6_wԎN |@&ӑ6N|x eʈ]guE^zbc S%]o$X>MEmA8 b]"ApD0V|3P{@Hw"vt.t v t 6 v&x-vvƑ/:*55 .** *vA<=<BB ЯY'N;B| yi+ 6ფMl$X)T$m-'o pQB,/؞<8 0q(LdÆ!@C!6P>Fܧ<b$HE2-P0M4` 81)LpK(}qIac&I FG=.|HA2RU-#+ 3Id%3IJm2 @<)[i.U6<+cˎ2޲%|p L첗 /*aRd1I%-3lrtL@rL:v~@JЂMBІ:D'JъZͨF7юz HGJҒ(MJWJQ,+zԊbRn2c WtS8DJ:Ak ,vhJԏ˧ j I_^qĀN rRsT@(kS9z:H]mފޗU_qի}TbZך 4M -d`6ȥ2V8(dVd-a-ʓ̺ K2iq&U LФ(Vb.pP,m[d0@ekcFV[`cBpb `z wLo1>wRh/4ٵXO6-(ʀ௣Zoךh.jY=]qXK_P|nXq{<-znc_/]^6 "t\FcV?@N-b5Ht<^>^Z^[Fy'@g dp<0Xh Xu]`gC ]jW%VcU80ɦP5#vfE98sKWθZ#M7=sT4Er%)jXpD}w Eh{M8y`P֨h+Y 9tB_Ru84PVWH4N"-)&Wx7('1$蔦7?nbɐJT( f/f}H`s\iwBI4({I؄3iySx FE@HRwV,% Y|q'?]kGapxEi6=yy Hzt>~WD`r(֖숊~jKhx+m56xwӗ$=X_z|smUiQYqptن`G3fc+Cv,F5CsѨ_2 ?`u&6t (E3\\V2ˆp~GDŽi(mhhx `=I7e85Xwځsa|_Yzvّ9ѕPznjcUH5<=,xk FjvPaEJ^Yhvs3^ 9=yByjYiiXseX}Z/hticXt+1l bIu<$dʤ@v+0\}J4B9UUvByjGXc>W\Iw szŢtNA#2U^USF O)m9aIOLY?i1Ya@LA$aH$QNdq @;pY NP`HAnA̠ z0YР"#"CLBc@!v@ g(+DPQAvp@ 3H q=Ġ /PH*ZX"`A` D&xH6p(ṭ4Ɖd IB2w@cF1v#'II8F̤&7L&tRL*WOlD`JTT+s^z/0]t-Lf2IM^>|4M] 6INM^t8><%0Ib~T:c9o;ou{F[p4 g$-ly;F&c}=\ FYZ?\2#w3rn%r{.8vsx[;@g8qxWHqwPWXt~W/t[j<%0O~{;ЏO}#ޗqՃE~w/yS2u"lןA?7c `gWGtqrc gJ G8vPpytP-(z?`Zv`xGvzzXC{S`Sp{Ɨ 7vTx|8N GA0KndW}vf|T`Tv4 4\4Px4MЇƆKA#F%H `1@ vA p` T>F$`*hr#-rB-,B`Ev%JHA`J0XЇʸwGB7$pp[b[W O\\(>P(pY YPy ِYf0YE0_xr&up/+@R.y (_7P.yvZ:<ٓ>@@ɑx x.%a. +d-i˘Aȁ`gX^`9X0dYfyhjlٖn9mr9" ID&Y/rP#" Oye,xz $z(sa75zZ`Yz8F_xB8F`b{Xg`K{ d{9fPh|S{r`c|V)FAXfHi|k`j|ҩ Іo} T}9gQ@R)3YQׅ}dю0X %yh4i1yLhEpalĈo&9 )e ' N$G}w(ŸE)0x XjăX@xr,!*# "3`b [G4b6@A@_ `P&NGsJb*JI-$2Jg s@GXy>@f ֈK$8svb-`1S}js~c-QuYʙpxj}sGiD6Lz{zJG >zj]x jڦ.JuJj GnJs v Z *`J :0vFъ [VZ3檱:AѰkDb)" FA* 9֯P?q6yi6[8i:;<]w?jAC[cEw4 ˭MO:g.[d0+ 2Oۮ^7Gtff]K_tQ׶u!l973?tzapi]U;j=^:eIKKdby_`2W }i[bc[1z{6OvvޚM?Vgㆺ*^l붶nKmrK mk˷{kԶQU&Ff渪1!ԫV֋u˰[j;[ixlD庰['־fǼ&7Vdw+^#`\| o\ "\\k_+(AG'('d4\@ M6á~MAc@B F2d¨px4pNH Ȱٳa0.">@œ۰ j~v-#H!q V^x#5$BQ /qS!CKYs1oqcK*xwRfu[k7u_OU?{W?}H/?m^~F 6F(VhfQ ($h(,(&(4h8<㏯;PKfNIPK$AOEBPS/img/propagation_tree.gifCNGIF89aϙ_eq&6:/28􀀀Mlts*50&&&k`VY^@@@ 򌬴```000аPPP+,/ppp???GLTZ_g()*QRU-/3rrr#&*LLLy|333 kr SXbC^ew~(+PH($ @BG yyy;?F `666ffflll999]|xl9QWi}<<<___뵵OOO0CH}ߖ»1Iag|,8;oooȴ4Vzzdž///-mVfj8!,+ H*\ȰÇ#JHŋ-1 CID2\ɲ˗0cʜI͛8slI%Rz2ѣH*=:E9JիXjʵk̍F]JYMA鵭۷pʝK.B"YtPs߿KБ%È+^̸1UWLr"W ;̹Ϡ.1"ٲӀYװc&B~UͻwRqWX_μ9@ǓCνw:84~8bY={Eu|ODJeaÆy^,K_ڷù@[ P" o  8!4`pO∰%+߰חZa|")8qMX@@J %q4 J$(D]R[>oiRhC6ȣgY hs%EVD,ְ9ܶ!,JcM@J\kNSmq_\jǶ_&31HM`i?FX׺v-셔w.qﭣ[O t z[9tjڸѬ@(_Uh7 $޿wB}`p (s%L@d ,\ hX?{D)a/n _ ט@1*s S6zKL,U9)QRzG0hR;CoZho ba:p=xwRw=i߻A^^/ bKUF:%t06ms^x< `usz5}[W>sb8y{q ;w>#M#2gپ@O2E凯FMO^% (EaLJan|ٷX u~ 聈v~ꗂ~~w&Zam}1Zt<؃>@@69Xl7|JLHf`GxJ8Q}5`Z\eWh~a(Tj\ggr8~n88Qxzx|؇6G w(Ehh؈\*a󷆚XumxtshXa^`]_THh 8H@ĘċyWF@QXxlx׸1؍8䈀xX~؎|81H؏9YGvTFem ِ9Yyi%"9$Y&y(BM6FF- H_ڈI/G13H5G7F;)F?iG=FA)EEFCɒ9yHGFI KHM9FOFS)DWFUiQ HY)E[E_Cc Ea EeyCi)Dg9DkYCoCmC$`&6$Gtc)`EjF>wG!q)@ @ J$ GIFEG('G JTEF{sG^$BFvG^4yEwdrzyCPIH@d= PyYɜCМ9>ډٝ9AOЄ䉋iiF pIW@ 0|Uٟ 0Cz`F Рڠd@ ?   "LT *,ʢ J Т2:4ʢ`CP-z^ģ@Z `>A* /Iڤp@(ڤKN 鹣IZjA:?GJ^Ĥ^ʣXJIZg i:Ԥ\@ E >cڦejh 4| ~Zq<:JfT4Jj?n:6 :[ʢ*}ʩdZZʓʣ?  
<uZp5:z Sp Fʬ4ꬓj#J˃JJ*ꭌ*Cjګƪʭz*2 ʪOjʯʮꮤ *~Jc3Jꯡ ߳jv֣ʰ4zJ '˳#3Jk -ʱ+òKE :T%뺦ڵ^%ݪ.Eh۵ e;l;D+\bd/*q>nTMWsշ1!;˄GWw[( fDd=˦'˷YI=& >Kpi %E#zDU۠ ʣ$+ J E+ڻ*tk3= [ ह+{dK{O+::лKכۥ_ʼ{[⻼廩[ۤۨ;+ۿ󷾫I:33Lk"<ۯ,\=\ )*[=K9 - /ܢ1L=3|57<;̴NPZ@|˾\,\۬!4qۦkqp{@mU tayYcG $_rM p ЉI P83\ϗ,;Ҭ4&e3<|Ͻ 8ozk@.d_IWT@b_vj.mFq> ԩ9g<] pp"-$ƛΰ+]c*p63k$J @'c 7eDaul66?&O7zm|_ݢad:fM˓=.]n\VkF6q/GiBCFddž~D f%ڪ qnU,;R`}b Lom7AΩlamF-$D׼F}Fݪ_v}!}cMLܒ\π.=ڢ q`v_ j8}lRcO}k(ݸZߢcޜ$ҕ:idJFlN 2fmm .kKg ' E"_(^z M޿Χڟ",n[קef" IF @c?pi`f&榭hGon$H.]R PƝm勎 $TҌ*䮼^[\L靚} ⎞~LT#. ܜȞ郬Zlꍜ\ ff`~Z|>Ҿ>ʏɎֹήҘ L^Վ;,m; N׎<>NNƳ~ܢn] J>U zn  lLpuT;/+9ɻ~̥[~)﷬7̻>XXLi;\Q߳C89 t|`)};U:njpn/yr5OozJo/J*0ea6`Yi1ޤ4[pl]s/ĥM&,6%^vqڐ)%O\.ȏ-fСEKiKGfMY3jر}ܹu~-wϵY&oU _}8!ɥÎzeӵ ~ŧ];t{ϧ_}dտӞoj?M=@[D#Jn6Xo:l kJ<`?HJ4h) 9A k, Hxs$`% am13v6n NpJt u3j);j裿 sjѵ/o<|e"k# k"O < |\G@Lom Uv/ҿ1Z4@H#* }ÈاOnt[ bDmE&Cy64zq" 'B>u٤(7}rZ)nDZJw.#P$^tFVD7Jd~bq{(c3"8q"YӠGX‹ѐaHȗPEpA0\FH=Hv!^Ԝ"'5 vd dzm06bB2"$##6JqVG7\5R#Da ?:TydGب)2aL8Nuf*KXn~t9Sud{ȧGfiLH2<=XҤ/d!ØNzjً ^ Fi͗S)%4#h&gDjn9L?R)M9% 1fj>P0q4P~x#n⭊Q|IXw\m@2'R FrjLy=j ܫQ|Qƭs"K Ҥ<3 YP\ Hɧj$H*M7dTAiMh*6b IbRZ;ӆKLׂSX͖p+F:f/A&tSwZKg]l(W`Z%Ek^U!x:,:t&[QnKJ FU[ɆE˪(TK:U˨vkIymy4*%m!TPwm H}aq)D NK<@|&DŽrkv/)-U~iEUi0jKT-uZ\U%n"bN Ba?Y3`KTܞI ~OjI[9Bs$Uɭ`h>/J+jU$UN˖#;5*q틍\wBV4%ޚ)keV93/:;Jc2Z0{|t;n%h-[Ulhۉʹbd:"UzS}t3V h!gXog/۾!Il!9$W@ gC׬MH+o凷C˧+0!:J8sǼX Mgƨ-jjqXvW^}ti$.҅\6K ؾhNW" J]_%c2;'pmKD|n5r@n|Y{_* }Ѽi8H/Η]JС/ _g<둒D,:f<_s >sq|\>3 {2$>Gpx&p_qЭe P>b?혼0 4 380Qk   \ 8AԨ 4 *? A>Q'A$LBႃ06PB(<>)<B).B0C2,C3<+4lC,9xB7CC@L4 Đ9Ĕ`E F<ްQĒaDDBCTA$DCdOP<;xDaDTUҹ\S.I/]ӆSM%2-CH5Q}F-S9'J=T'Ԋ2TZ 1U0-S-DEETXTNը\5ЈTUEĠRB=S%EӰdJ7bUVIZ5V>?BnK&LvM66vfv4n7\c]䳽cX6\..fgec6Ege,Vd洍DugF\JDHaF懍DlYf橝gqFcfYgDFJNxgZF拦}Iɐv>ؓFFԕfi-6~ՙzាfc%R ^dp<,VdꍞdLlj@g]jj6^DkFiGV̯̹޻j0l^lnl$8n.l6ƉȖl.@fvmΎPmtӶl nl>ۆmٝ~mȾrmnVnF̦vnn~l|,mϦhD^ܦo>KnL m֮ĉI(%׌~pq.dfӝvbvԡdg2I /ΔIrQVwCق6ڀ-SoW!wȒ%?&7NN܉u7~m_q-ep1()gS(/~jqdu^e3m|rxu[EJ<'f=Weqe$?s!hqDVr53*G+! oq=pD-tpt6'eGFM7r V7,"'qO7ZZmA0E'qRRSo,s4~hIG6[,Lq!uf#sG?vgN4wgsVdMoXc[Ev6vldrb'rQdu/vf/E?ovH_i|uN#lwNIwVtv&et5c7v'f._w6y=vuj[zx'w@ZA/dCȯ&swvWRavKxwy[GfyqDgUSGsWzy*swwnoN zJwdg&htD{{Go|H7{Vvm|O ɇ|l|lj}ݖr$֧׷v|7W|Ư|ڏ|^}‡}|}&h|~Gl7}J@'}~~~˧~'0/zFmJ˦lj,h%^lH!%֠dq" #b4+^쨑A*W)2#%5^>ȰcL45ʤ9Ž(4eE7wZaѣ" D+XrU١˘}jtRbN*YC$I#\x4T*EJ (T*BĊ+`Ȓ'x䘆sFe̢X< Zre:wQ@aC&kήAҞ,bo]έ2Q[b/ 4tl z״+o{ʔLu8˯ɾ:7.rQFѷۀ HB3 FpMu6Xa@yᇠYzi"^~Z1W݂&c)8 H8HIˑA Tr^GۆE-w,xySRY"l깈YeWZga)d}ۍf&pn$ga`IUVw$m')riJc٨4)cMG(lϙh5[rߕ/ *=ڥ{)g_9+wj& JTॣ%V'}v8Z{eH@Qb;38f`H.g %8;stF,JVb߼"Xj{fzZëzQ"殚h q ܭ̉9W6\Eu,6rZz/f7r##|r6aQ *nىBv)P;-̓xGj黏~ 'O?(5?OI 7)Kx>YDSH.(|@B$`(aM?|HFؿE~+ Cnp}Ec!{HCD`H Bp%|g /~!Pr>O[_QhC~Xč}Pܠ',Y)R.`h+`d#XHP[bnxHV!b7'rU&HG+G#*/bZ iD]x29%ns=MuܳVga {Ϝfg] 9f>DZ >CV:eFSz$RMkәճ4Un|$0FLp]7]xY Ρ{z&1t6͍rެی |Vln&MDvg9QԨ@yQet?AAKV%v#Eg853mfvk[ OHiPfI&dRZ*VP͎T5QoYN!SH 0UgY*b 1Tu^^:^i3;"e:բq}9= WŶ42ӔM+PJTVO T 5*oZp][wZdFwmn*5RX5ROw(J![RviWbٶWln^p =DRr\s6ϕZ߫캬ʮѶ]x` 2^b']zzΝL] oR6>(iMPO*76ISpOd-}[6"|ًf3x$*9Ii(D&PB,K.Ȗy*8F/W呼1㸴$P3Fd?O$/V։O"iMi$$FJ/{TeYyiJ.KXFyL!MJIфN'ML_C[pAbISZڕk%J?`kIHOR^.i-kFql^yRն?bvu\% gB s:|뫞^Uܾ.cn"Uaؕ.V40U/ܯVL1wo[(lezsĵ4bGŒ+/jynTcdT#=J;̙['ދ$E0F}pxC%{tVj7ۦ V"~6Ua/oB=pO9a3=,?_ 7|ށR|[FG%H,}b;Z0UOh:y0!T7-=gtg q[J>)l>-}߰?l4˙Or2$^l┙DIX Hʉ؟ j qO04@ , _A zT "T` ī ( !Nٝ-  PᔨqI t! HHa aaڏd!Ha 8L !$D8"@b! Z"P("!''b(H (b!Z+R",mDŢ-_a"+X 00:_Qb102.4m_-BNU5^/66^770zБ^ա#m;I `;Vܷ=[ic1:y ָAf=;#C#q[@^~cBc]Cc7F&ɅdU@$^M#;B$l5B\$2YO$:df$;~WJ#SXNIcJ$zEL,M:S W.NX%8T^fd%dF2Q[X%SeA$SV$1J`^udReZ_He\P^e `Vz}bke@&nn&o& \Y1m&qoYG4! 
's g{F'~:u6\LgĀqP)zaR DnZ&&Z((EF|(Jh'*zԊ">Gr^hbQh荺z"#RhV$"h:(%B$)aRhni$ZDa*NBff$I&!Gj\.fgI潩icbjfRSdl$c*$NbPꡲ\Jg:"dZ,>*$` kd\*F$d2bJk%OVe#:i?in*ZUٕ*6\HN&שׁT{efN=j=jn+ꅍFiʟR+F+ff᪪.*Iʼn[ƪOumf*^Ph>%iKJ*ꤾ&&,c+Q.*^c Fv,},H"Ehcg+#-_ƦkeBJJxeҪ -K&u<6>-FZJ-fԚBX~mbRBF-zٮmvR۲NI ]m܂-jޭ픜׊tؖjm.|`)R$^ALET.~.Z.n%~. .Z"^n rn$!n&.%#<̮~H#V/\/n//zooIf~o}/.Ƌo/}dIQ/sFLxT'b(k 9p߯peoowhoXoLkXk~k@ dglHo_~eI*p #1K! #~;qѰZ1e0?0jqiq Ol?q  ;q԰qqxq2߽q[q(iĘ62aQL$Oh*R^2bfqri$B2*O2%rZ*?)Hr,Wlr-Xr.!w|*h|&H0oz>gQ(3{U4L5w1g3s3%t7o;Q2s~E:3>;WX@s</?1 KS'[* 뱮B,>ˣ>q+qrC ?"DZ sDO1A{ldbtXڱC1Gq";qEF&k4!CtD{@p״B!Ӟt4tDt۴J4"FtHGtiZ4BsM#߳V+W#1Y5[U[J~SP F5KuG"L_]k1Tuu\t]Z?u^rTk&v`Ot)2>2zVh-6Ru=62k+tglrzuzkXr/g2/..8snq#m&3:3;-tNt5u6vrqv=xsD;o3Ky<7|5R}/7 qTw=2pqxt@p}#xg7?8c_8kwFE/E8con/ycŌx,8y?9'9︒7/G? _?c99w9999c"t9׹9A 9::'/:7?:GO:W_:go:w:::::Ǻ:׺:纮::;;'/;7?;GO;W_;go;w;;;H @J;绸Wh@'Xz P '/'T@J@<| |W@ 3o<{GD|% |JɫDC|{Tc<w8>#ѓJ~%}W`Kޛ>ꓽTK@WB~J~{>};h@;|~<+}[[JK?+3닼 P '}@#?D% 4xaB 6tbD)VxQa)4T@IZXCAC@R˒'+YH F ,hPC5ziRK6uj$x $`SFx~I,J2cvդNL$ g p` 6|qbC7HbA! Q!c`s -N @9f׶}wnݻyxpÉ7~yr˙7wztөW~{v۟;PKp]CCPK$AOEBPS/img/cluster.gif7!GIF89a\@@@???h"49Eir𠠠 p```PPP+AG___4OUypppOOOV13000'*///<\dNv_ ooo $HZ$&l -!,\pH,Ȥrl:ШtJZجx&۰xL.z.@Mx&8S6M {~Hol 66pKsW M~LZIRMO Qz|KSŔHq6NQ K nmKFG6d mM X]fƐe!fEs:&rnā+D_DhdM,6@B x'XP TٻjQȓ/ ooh$ \22+"$X૦"8#^ t!M B]>yUjIBbh*K놂 Nh A_IM `@q`4;Z%mِ!ֈ{1lQDAܩn>&TLl;Y$7_ukEߐлgTl7Q` @@" Tš MuAqGDr%I m)= q )6a(t9YeBGR_AXLf;z&hvI@Z(edQN7@}@@,va"%WT݁> uDEUXXsƖbMh ;ikp*[no a)#XM#@PAu&E Q`B)@өْs` )hӃXAH#J/5a' 4=.M$Y?JxI_% .(J\\HANSMSj.D A. toYBh&+N*멥֣nJ@TZya.%&r,dz2n 1(H#+^yN~ '2[$7a)/R+Grڅtbͽʺ;DFa'6re@=pYY\ZYCOlbiW58SG,J猱=3.p>LG9gf.z&K.E=Ĵ @-7T}YLsx`콯b|K>6G l88psy3vˆkb],x9 08Q р} "Ld4HaBh BPm6^@:p.,¹@FD_` a/pZ˱"$R+sd!,9ES${ ~Ath8ʭqH}᛬ %A\K#P%ܰ#HbY O >k@ 0EJ^@$c1Kd|SԓO D %Idn|ETnE -`KtQPtf Vf&1 X|=&OQ /W\YO \#,G Z AFHL.dq1pIFZ%J#Av :*t|(L ye, Y$Z0UED<[[?$%RآH#> fP"&|f5h\|-U Mellz#Q7nlH^Cx" A6L |e'<<6ΰ7{ GLsMW8śqgL+뻱wcطCL"HNd&{ 0@L*[y.{/]~r;6yY^ߛL! dݬfvyπr%c@wB)ntC5/:+8LuK}UG՟o}]_v+}IeTG؁_E&>90{".rG>7|_d#w)OdO+0q399jcUw"P_sk'}H_q'0=/|>GW|yx>ã\W!\-g`߁)H Ph@ve&d !fv@|7d&55Xh@'d%f're` d`}&u@|g!`P%PB+ e X7!d oyЂ\ hp!pVg BgpkeWBpj6p~0kxw~WW`YoxePuejfGe)@%`W58{Ј55`Lxv(ڷȇ%p66v&e8xfZ\h TFWƇhVfh!P&jx5ex\fweX(djo~'f8d)(dPȌ(7d ~ rhݘF8dX@ȃ5t)nh%p)PXi()ȊE6~'d؆3P(dydX`&5Xeȉ=)! hLYG)d`wm@|`ׇ&e֖(n؇I9fXhsp)a~ e&PA% t8wUzƙ ItxgߨCɒXek菥H`_}'lu6u6wPBV hCǔyA7Gk`UeGِiPЉYg q9o B]Vf `w  `(ǟ7}7zo&ǡ\Um։֟S8iD\iؚ~ $3:dɣ(ڌBK-jW&6T:j=ٝRF 8 ]i >IXFșs([zeCij(F+F(vʆ;ٗ燦oQdyRX眤x׉VWx觻w|dǢ{&MpyhIis9X M*UR਎}(ShԊ}I+Pe6gmdzfLvjs IYzfaj:ey~a(揢Z٘* n,ejmIje&ȫXk$ql؈nF&^JYd;(p(>X]jt͡-He%KDs@)e?~js* ojuZe#s؍ZHKlx~UȆn0SM6Y\y ؙȈ nDh8/[FYՂ);d9d+̨BVM)ܸ4i^gh>[F*a{+2 16 Z`/2`M{G\g(Zq{?weX(Y֡KfW!ረkYf%q%\ۺeHd0ʮUQp Ak+Knֻ*.^% "aVwX9\^ ܵ"8^9&\(NDP>~BB=^[?HDFO.QSUWWM[]NU_a>cReNgiNkmoLqsunLwyN{XY>⁎I}>Z㋞NGNᓮE >^L`ꪾ^.ߴ^~.Tǻ>“ ~>ήTI.ݴB菞U!#.<~ʰ~B2^ﵐ ??p.n ־>'yoP&(*,./_ `00:<#`t03P7T@BDO J?CLqCQ_H5IMO[S_oa?cegiBWtBoq@UOkmyMGsAu/W]2}ߋ{OAoߟo_o@_O_>?Z_apT³>dP4:_?o˟6/o՟2ۿ?ߏ./(O7X4Iy,QZ^K[iVo>q|߫`P-`-/P9;b XnHpM/+ VXL v:׉giҞJJvo@["4q?"Up02M P[fԆb֠"! ǕTƕ z5W\f9d>39 Kh*v”^:kzz mAƪFk"6& ۠8ZmA.&F}k \;fo&>o_]=<;8%"cr<Ч]ҽ8U sYBkh]ܑ)\C}x^\O.vX>x^w+G#"i0a7|,&}x(G xP^?&pB{Gr@$Pݏ.ĉb2X v ?+oC$ L  AU`a,("İ1#PbD1=.,b">8pO<w)FVa vы"SH%qbAqd#D:юwxF%D| )FˬQ>mh;@.H!HF(;%Ij3A @S4)Q9U]ոFі"uI^R'4Lbrtef)Z. Ϭ3+Lj7Nq49 B_9;OyfO}?P4;. 
uCpOE_VQn)o@RD (GaJt "-.fة9R؀+ "SP(8S6$ ,S@(LIH$@ZtmZ:$` r *@Pza T0XFJ1lnv,MMN@rhRuf6RMX [6h2 l[7;,0TЂ l0BhN\h(yK&]n\e[$T7Gٵ xHjVH5}jFUM~ $Ch5]A'"+zRyRcY 2lRIvrl''ySh|3 eA<<`d<A9+9Q,Ņ71lp{m D&u!nIPlgQ \ĄdT EC%ъ/~p 1Dg(q=%[K`P͐eDyc)H(],ڂhg| W@!ɫ-))@1vPCALbbkhR[nUn޾;7m齄s{QoYcAط[2 Jn 1RA*iSFv(.1s `乾=q7޲CƅGIG#X^w*2wB]R!Ogpt6̑~g (]ל54~(OHf-WçÿzOl-"?O)V#4y9ZWYQ׫mӺ,;{_&/'_g|"_'KKg?h4TP|{QISPŽ[^SŚz@dS|\/Jz >k"*'D@V S,:PT$@%$BlΠ6$lJPp0> |$IvYl "\Ał @kp<)UI P/;PK<2Q'3;=s{}iii|||9?B]_cioz_bhdhqilqael9QWprv,8;ru{rxꍘVVV陙!,B H&! #Fᐡ 3jȱǏ C>Q(KbɲK u@Aӑ͛8syMs`ѣH 0 xtl=|x& 80-C𑃫 ³tկ_Oy5pgO=\HQ D߁u^~EՠCxF8}h /pa FEaF8|h*Ɓ aaJ{'sRE |KκI%.#,2eŬfuIh'j Ԗ'*'#ӮgpwMF2 .{D 6%YES{#hg.5bTB7D=qox\V2xnk('x+ H)ΰnn[lDjZҾ:CSMR1@jT(phtG(6* 6%$MN_**CޒFT܅ú|ZΚSyl 06ea{[W0[S0ᄾ:NM;RBr{,”j? TI;Q&,PP]",I;S k` NͲRŗSWhP`c01jcܼ)u$y0AC%9R`d$Z&srӣTa6kepi։nՠ98y83xhvà - b<.a!epΐ hF$ UYAk&+ <:,yM s:#K DRL*#uuf $IDA:$/qim{- h!nF =s!Q7yy[#wf~$ p-? $f8č[w3qYax >q*pAl`%g4 <f <Ј\!nUn ۍPm@!h TPܭ{ Q-cF| :~˲ ]Aoos.|76x"`OƻyDat#BOWֻ`- BpO` ]>"O[Ͼ? pO~#@!/ן쏿q'pp 8 81&}E؁ "ȁӗw&x(*hvGEP284X6x8(~+؃>kW}@%H&L'F҄PR8TA҂h}GE'^b(3fxhj .h_6r8tXvxRY'%!~/}3XxxfXxІCxh3[ 8Xx~؋o(dNj˜%ׇHxFY`ǘ>"zgh8)(SD~A>vŘJAgȉxQLH8EYdH؍nYiv ɐi2A(艌9z0Rؓؒ By;)x\"L؊7 (W_ZpKθHਕbYayXvWPjHTHY#Uv_s  p٘ye`N)yiLH,ٙdv`Vav!҅ٚI,QAZhx*#w겆 ʹٜҙbMÙf8,+)y,oY e'ґXav Q 蹑UA)\j"9ItAɛIѠ+":$Z&z& 툢,ڢ.j]2*zG88 91ڣ@j+PwJLڤNڤAЎO:TZVzSZ9XڥPJAb:d'@yfZ?ڟ"f:)!I#yZlt %yr-Vwڏyf3se (.Ij&6A:*ꈱZʢ.7z.:ȣ:uib]XYEUʬкzj:+z85Sdd)T#*ғVue)eZv2'+Zp`4wUAO"&{(k2 02;4[6{8[P%;&`8S檯Ա2JL۴Ne3;QaFk:DkS*ND{[:xPWPv*:eu¥TU^tX_;/jH8'v[%kZKe\ۦj9zZѐyu88ү)›W(jJO +˼{98ڻ۽W`ۧXvj۾pʺ蛩[G( R9 \{ |\k:K\8J[K:kߩ!‰@(>@B<(`%l@<ˎĭa)0̪[*AH [}"LƸAŊ 5 [R^T)ƆKAf|hls[ơ`l[tv{%,+ȒAȬqS|l~| ŋG,uKŪ,{\} ɇŢp\ʅF+, Mz Ӂ,̛ytKMl:\8Dɝ7l͡<ۼJ,L |&,Ƞ)˞J(έϜ ˞LϺjσϡ,4 CU,eͼQM$c!ϬaѳD^F~FQg}]kɖ. `̻MދOĚ-=6ٲ]K SUT@JB5NH ;`F{>F $0$z~{ނH۠Mh.}RпN*~ՒJa@C;O6BO.@)`C< nBרهuԧ <ԭ,NJ.@\<0hN&.>.ߥ+MN @1O//?{㗹$ n6H$p1pHK$áa\ ? ?->9^C0ෞl#U?)^MMwJ APq`.\~D|FH{VxjMNQ~.<0B ^ ~/O@8Ply?S_ M ?@c/B@H0\L$XР@,Z%NHPH0E$Y@ÇU!ĕ1 Z4`($*$H0RRPH0£dR;˴hI' 4%Jmc̸Q.K=Ѣ=.Ƿ~i`;ehcȑ%O~-bSbHW#gGo}6ǼpOK,|␔G>]6EkWw7Av1\޿ >NeӾ]]eS .0x2C6=yw 0Dأ'#k~GDx`HCiTpAtk+$kd 9FB0J4q$.TArc|0CI%BlDVlq1b"4H$< TI(@l 6Q(I<,r(t'TS)k@lĊ+PI3d<뎹.sDU=& ( "tRJ+RL34- `ECuTQ m&TV[5FpF!` xKMO,pWfuYhvZjZl@B?v3c="`YCLv%M B;KSM7x,6d?V4=Us4l# $X_~wguĿ^`QZ!jz k&BC 6jVٲΚ3v{nǮ{]9A|p  6|q?B |6:3/3sot6i}~-:-]2Ϙ 꾝gkr egvq'7wxOc4W{gĊ7K#-wc1R. ,Wy07@'y] dHNO~AdĀ`ʇIPb_`vסn Gpe Lw׿9Dn`[ )TR|*X9a-9ѨGmJKd;):+ ֵen[/pMriplQkdc P}Nt!]7èAo"A8H=#FxP@ A13LWm z)G)zbƅwᤰB2j)5 !x0>M]& DzYaĕ;"I@eTbkR&&eOlg YMG,C9[Me2&n6)[BіO}Zm)j-6=h0F+;-$ǻo䪾L j0b/^}e}/v̱}WY7TUq'`ԑCu[κG{ݫ`w}%,LYS`@w=V{ `X@xG^|-2`V,:U v'Aovxϐ-X}^nx!qGGp&Mp.\P@}g_~_}1 U'?f}]YMcqL :.RN~ak?m ?c5I[56Ը2@B;+9a;_@@[#A5Y˹b=݋+V;AJA[K Bx;7!$j7!Lt3=ԫ7A946B]bk:AL@1[;AB2H0?L,4C./dás1Co>><;p>+D^[ ZDFlDG|Eg{-, 2DLDM{-D8 (0S. jEW4(hC,E[E\E\a;W FaFblXAvCFflFga{FiFjl.e,E`0SFnFoSEr;4\˰ٛLBcK˭aʶ :K)T9K ?:˦ 𿞌JlʑQ<ȯ 4KH<JȔ'~:̥$Lʥёbz̾58 M}##K?ALŒ >ÕKDì>\NT?!fG+uN|ξIK.c7OOO|F؃ŻJ$H\RD"J(-ˌȀRԘQ!R,R9! RpP-Ӕ:/ 5S6P-7\P9͸S܉ܺ^m% ^@'ڒ@meGh]__0]%6UښZueecf*_b>,Ve:XRܨf]WciZsFgve ~**UbXXK~ZCcg^dni& p\n`uf-Y&~]`XKhNuYC&^e$`-vbJ K&iv% `ڛvݽ_eYuf谥ެMfXߠj&i\ c`NkF㥅c}]]䜦X [epk/^a.Mah3^vXe2VڄY`/jDžGpeUɭZdPD^IQٛg%j~\  ݎlخ\ؓ  GsNYed2݌^}-%W^%[̝^lN]8hpDN dn`[ǽ[Ua7>^[V`~[\ T^c~pp p p p p pp;PKA**PK$AOEBPS/img/config_scenario3.gif\YGIF89a윜___Ӭ ں7rx@@@&6:Mlt_eq/28s???Р000 ```pppPPP&&&#&*w~GLTkr;?FSXbrrrLLL䅋򁅎y|fff`(+333+,/VY^QRU999lllZ_g666-/3()*9QWC^e<<_OSX'),BEK΃ր٭ԣ߿ȯɁSou468Pci89Jab %Tw“ p2IjC~*P+'XcQFi{ڪr%›&A (F ${:$j~?2@RnƆB f{P  2&ͫ>ZA ®- 'm+ B@%L[B?00*ܞ:ax,p\d] @ F`0/䰭b#ňB N:,4te  X* fԥ[,dR% *8ҋ 7 . 
a |*P |Gpa4!D2@Q./ߊ>>i{ꪦp~:f V5%QxG&:v0?1,pc܌ҞHV08*DHUZQ1dX 5DD$C!T<'@j)9]`/YB\Ix܌HYW Wx$,3 v6-A (Ir #Ve E5RCv-4 L5.dw;r#=2VV a$8VN NYa AogR%Â8a A 5T|Bi>PidY e*K˚"H,,4aB %u> Pe^KPX &vjͨF3zʈJdqb^AFWڃ*g0_ GLٵZ 7<4'5bA eR'5R` 5EZ՚UXJhb`*P*:'DUNŨKSO$2IjX>H_"L>ЈNd&qEUXj. K?ۃN |G[y06Acȥeڱ <* kZq6%Vx ׮fnTA9h^y7<xW+vBz@ܽnNr7,h_6 pʷ‚e@GLU&$A ]Rx&Vf3Vty hb ?a MpszЂ,A I`p(QX+`B!.pa*_iRv䇖x9L-%4g&Ƥ,Za ( 40Z:98x`,/5!Hತ{EXN |xm!e;MPu3P dMBIT`bj  U7ly9K "Xa}Az L qO6hP^DY-:uc3$K,pV`*p]lfvzeٝ ލZ޹( ΁{|@ 3&Ngގnp5 `<l \ H:Us x .AtE74O ;7w-"4a";Bq}[7ݞ>:$ga k`dɹ36v P. c6ϧ0=,;NAAHCZo6Y4!BxHw<[抷(֝9U) 3n>CZ{-**-0ҺtP~W8UG[zEitkuGW{u>G0VeWy?@vDžpn@b`~Hߖ[.t-G|x2UEGUpCXVtSf7`|fyaQJySzFSus0e`$j{=s/x]cLP#:8Uk}'wWxvzo[n}_b}&t@X[eNtk8_8r_8_EuoxG k9Ѓpj94y/؆{xtpx&rK'֊4 8GVfF'^ֈaIpӆ;i`qu戎;A`vjߖ lmJԗfGe<@g( 8rLOaRȏUS]Hn`8ed2U~nG{v 2)0Y:S5!>`tg;A`P7GnO)<4pFU?s6 @tB uacY8UagVg8`u0mֆABp;oa9z@x:ckV}m\kE|mM`U@0ep IUySǗ"|k9Uu_RUirWwF0)yp}2} 7U% [YU]T8yꨨ7Uz8pzS6['eH.}yU4pRKerGpT5UuFzc~x %؆'z@zԈℷf&E` 8C5'ۍ4ؽq@:e`z!z]GyJ]7Xu8rgJH>r{ k.LW>Ᏹ>]UmxSCb^U^ٺKp Td'KWtTv|dT Q! VentWT{<{Lh](4&}H`cE@g h*eNLUUʫ6 wnvd>rnU Vl M0rZ%ZbY[Y_/ޅQ4Z pZSsP +Yg5Y U=@Qu Z5!R^W00b_^d.N  UU@S˱ίԯw_Ud۟ҏ?ґq?dP1q"#XA .dC XbB{E;~ YI3nLٲ\*,0&*fpEM#xMF&UԩA6e4)Õ=^םQf V!̩JKDpM.PX5[7 2~OE (-t`S1dJ  ONhE۷iK}+A.XRf-ŋd.S(p-xAHr ?MB1+d 'yx.tL|1ЧЭP1S~|ğ+K@.`PΟt{N3tpsA^H.bP𸎔+W,k8޶$ېtDP2{p@^noLBTC!}kī;8%;rZqBwJb8rtC K.#<4RӨLpMN<lO|Ol#04t;(uQ }EL몘Spm MIQs#ў URWzaG ۲ F:vXb5Xb%yPduغ:`ZjZlA k;I\tUw]vmeڕwuIY|=dP. M5Xkv`׭+#` FAcCy؀SbQec <6 ryg{ zh7"rwiHMHzj.2fe2|,{l=ע "=9 n[˳s2^/ Lp%y7uVӛD+r^DFQ|4g$?HJ OV$ij?A)DWZ "AЈ &2!Z;8)!/ 0Ӥz)WGt&7ǪG! F+%dWSnt$H=A"J*j?sON)c˨.!TU iw0 f8YrW{i8!^ߣŹH v} )=&^`|S~G+<\9WsjO:O2>_|AޙZ|ry̏V`O;9Krz\8?#s>{ ;-⛼<9<D D @$;@@*TH:MDɻ=#F;Š@v9`,>;>Wl{?3CZ@+@ՃG>^1KD3FF+EjD'Ffij4=$'@$?X\Y GC$=[˾۾)t*GCrGڃGcf OY|Ð8gll2;H(HKD\ElF|ȆHvl$<4 /zG{:>Ȣ;y<5,9<ȫ@DPD?!,K< _Sɟ쉠> K\iG@#7>@ىC8zTƚF " B  L!rL9 zY˅H\Li :`ڜK#ԟTMK9(,N< lΟA̞錂1NG)4O,, MAB<|<OaMN$7HO͝,9L9J:J QPYN,GP=o||픚 5$Ud< b١@bJ(i1)22/#CUn RBASvӀGNJ=Xe7ZZ\F(-$)/U,؀[foY˄;2ʂHȞ*c,2YP֣5P|2 W7޷fǭ$TF;CPd-|ډ7䭍gbQehU#&jC.}}b3R`+jq lll  ll'~ XEm&mn0n.&2n^nfn5 nn5mlӾS.8ogmM0:l.m`C"(p?pGpo6m nl^%H=C72nf_ߜ_-oqg 7_Vե!q r!T Quor(_#QW+'^ߡgڴX%QɛOe>WߙqcfV]Vg'FfWN߉A'd/d923_4oIs 鞉?GBsWD [guD_t{gCRK:k= q&R%EA6sSO8g?fNu=rX?uۛ`oG]/gt/wrٟ) Ĉ'J|Bω"6Ȕ#4AhFfҔ:w'Pi(ҙEOVҨIe!ztG#a#X?0ֲm퐮ˡԌy@+o=!a3nlr0WdB̖)BEIxf]\Y@d‡k;%C &QO˃D&8{`j-z-;.η}ўyzpR;)^*肟jw*v?ΠD|F {1̓kD^k;J!8^l1D+w#N+^p"g`Jm{@cQ*6'K,K5=` < /2  D~WĒ6ȼ^%[ƎƲ.]68x=XqmwgigHF}~y~_AHq I^ z<y-??e\`7`◲!~ݲK$3C\0ՖgϮs.⽟5x=cJ.^CR@;QMTqC "27~` yɠf\ 'N*o ^78oeI,IܒBj+`?O"܉()RqQ0-b -"xt Q" c \FI,4?=(;.6=cH!'Ǽ7ghh+$ t$ΰ2N$g%*%`|,čN>G6qEzC/ƹҍ#e)F,*^2x/{n3 &8)qf -fmCJ3BG4͕ŷ#ZְH,ck^V(v$glLJI~ۏ$ֶ55.ITGkBpH mg'&n3 tfAhJl1XP7 | dHya"Z8i E9SFoc= Px>G~?=%0! 
qsD/<%7ςڳiژa8y-M' mlp;R+%y3Cr!{ NG2pO [ݲ*|H?Ax@p͜zl.y(U1MfNы 4m{*zi\c1 `Q'Jz@}70`=Q=-YOr.|-]X0\< KF=L-6yyE樝;qV9)Ogq\/Ο8/%~;^8ZR#/-' Ϳ*_J1?́K_K6B-ZNH.} Qp7y_7 85?cjEJVAP9 ]_,F MD4JVZ G1MHfA|Q}aΟ]O OU>qc´PD$a,w5!ntmfpa h B!PVbyaxʅ(`)!R&b Z]b>$zaY"` ᐘa͡bXG`\EX| P)ң$ZY5Jϭb<ƣ<<@--Z.I#@@ $4 =$B 0Vb\zôq㘉]p*d9JX66RW4j$EPc Э}c`x LƤL$LG:AZԚZ>>IԚe..PPeR!fdN$Nr"YeEj%YYXP[[%\¥٠WL\A\^M_J@""dcm%b%_6&LeZ&en_:c&RES"SO胚Iq,~&ZJ N@(hDRCD^Mf)(OgqdG("J{ ܥ!Q2HK&NOaHdJwOV "S-=" [)3|m2 Xjiii*r>#a2aZmMJFL?vzBcnA)k"`FI7bjPb()46BmUià⢇⫞z֔ܤvV^j ))bdI6kx*:iaBOjƪ>Ϊ..j"F|fD*+ǯf kG!jj'g$)F랪#*JfHukFĖꗮ箚k+W׳+^,k))) Q&,Drz:-5a~k2B)'Jʧ~-g*r6'g$o&-%0 Hg Z@sbuٞ-ڞ-zmfw¬Qy-v-zcD~6~ &^\(!0m*.fsR- X d*ޫ 욚lmX2!܂'6Z~jZ VϦj^.},8Iܙ ͍fErX bꂅ,Jƭ֬:NlbX5n/ǁBɾaǶf0n9';^ʲlbi6dnb 0Z*c.FYDᄶ0q7*[2pvF|l̂oO:壖k 0Ĝu6!n+kW~nn*S'/Fbpl&2Obl ë pj q.iw ɩqm* kpB"32F U2"r GjȎ*q^$cf&&cmű ^3 #>Wβ0nvZ-73pv-9kmu-'ޮ(Xhr9Cn\A'+MtAOnr~zl4V疱gX9B \id̎2C-4_)5cj!gsڲ4. GCGjOתI*JÎJKD33oR*HtvDU+i.{FtX 3I'RS[p22wK(L[WV_X2Ѿ)Gg4Po5 ~΃Ƈ[GDR\OD]tT5K3sǴbguM[ka6DB(lقB7:E9"U6D\vD T0g@B.7s77= 4tO7uW ,Bhhb|B_6ypԶmObf\Xq7&l%3}0r;u^t_L'5{q7v }O8Sxgo8 c'e~&8$ZY:=O{ N@o@@{8+xv8_AaWt1x9~7Z##f8gWnR@Fy_ς959O (zǹϹǣ9ÕH* 87'=;ˠ,oWO9 tw_dwzzǟ[6\/lF_S:X؈rzL5/֧ :줹}mOD-v/鮃q\eqϜ;-;I;D4z :nzYujw˘ʎ4{_͹:;;¯ǽv.|4:;3#s7F1GO{K!o ˣ9:Lh|˛S0o<;v|c=:~X|cp[c|'{g|C8D@;iVoAC Z:??D~˿D̾v~)lh0xa„;ebD)VȀ?m ܁F?#I4yeʒ=sfLAOMPJ n40K6m(Ž}z9¼_zmZkI Dĵ{n H܅?dYk4tdrdɓ)W|e7w2w|$?3 Xw޾9#v\4O'׷~+LŸ?.i)n]~eYV}蠅{fH¸¼4a8ӹHꪭ묵ޚ뮩fK1_ "m^dR&8?=;i -v:qԸ^>E{_a DM?U_t3 dmu<] ?_8+ ֥E@TH+>vGOv^2s L%i$$L B[(H3[Q$.$Gwv1,2e*2OS6dj29 L$$i_glʙo֗SIu%) ^F1+`r\AzZ|YKR5Ԝ.]"tږF[.>n&*Uutci٪ (3jQ+7{[jL: `Iے2۫ŋf>.$'{3a3uanΣn*҅?H""cb%.@کШzDX o"U<}`{o"_FvS,h ,nr¸ŨV1 B-ö͜&-h)ppIPm ~Ҥc&ب:| LN(Ijb܀ Ҷ2J+ƮB *, :½ V+`i8PT2Х ǧ.*$n"*:*,LE4 r0ƭ^Ԣϸo܊H*LdBιIFНb-ц0L8I@qJ\BƫZ&Bij/)(J Ȃ %d5)p` ܐ^"ѽ Np`di~hl10ƍjc!V@-pmjKR !h׺&4q8QIp+jϧQ,0-(۴-'*q 10kTn4¢ȑqH/풊zfRH^*b"j OHN樒ߐ|iRikHbT`1ɸj-H00V ac00?E#+hĹ,hMv⏂q &+X$J"R$B"`6 `8 4] qMDHAB\ ,p),lb\Z.1ΰ0:0kDϩ*$ƫUnόL厯;Q>>8ZT*HN*(&T?a>'B?/TC07CB?TD5)CGD;DEtRCE=E_TCWTF-EkBcGFw+SinGkH3.OtG٩XzGaa֢.f0Xl`N+bPK[KJJ"Kg1D#TIh(Q ԾRPj0ϿJ[e9^`PsBNnѠT7 TNb:PQbQNTBTOc SQLP4N)bNc"tif%TlіM01M(C)PU+ԤhR+Pao)V= (촘I| >"`Ֆ4dYTR꾲NkĎTt!x|ڴLXݎXS&KV,ڔ8lk-:l:z-$A S9T`Fh˶P`6,Nɹ$Pl&PmLi^ejn2iffr -Kmb T@dM%$KRVbHc 2^7H4\)~*p3IsT@26&4cԘk4CS6655k☩찛j ԉUܢ p 5c-r 3j8 :Ǩ t0p, HHqSqV0rk*wcbUj "O p5*$Yhy%ܖ)jO;%/Wbyvx 0r,^{Hu tO , "p&nS:tWWQ w~%n:w)tVcxqH,jw# jVsSe /*7^RY}~[ՎX"‰{u4X﯋ 37szOy(1;j!Hx8xP0ʧ0zc.*cU¶n2c2愅L7C;JO:]pˉN6(Io({/ 1caMVrB~]yn-fYVeXɮÓ0_/Fx Tj,8b@8yAT` rN|,hURdbTW9kh JL𒳊J618֛6G2٘Fɡ K@,ZԞk+f32O وZL8f}99lH3[9JwI!Cڣ+0cpԍP7Zڟ Ҭ 씞hUnYwLʾJÊw⮊X^` yDzeeZ<󇸨XL[7;MFۉ:U`N)+zֲJ 2 S KT8K۸3rI{7B8SޚQ[?Ի1r MÛڼ۔ H۔[Ƚ݈{ۿH|\\‡'ŸH3K?<6\iK<gW\<ŕ7c\foT\){rdžn%ɓ\ɗɛɟʣ\ʧʫʝܕ)˻˿\ǜ˼" ,/֜\\:4]]n ]#]-#0/3]7;I{7GKOSA=[_c]gkos]w{؃]؇؋؏ٓ]ٟٗٛ]ڧګ3?]C.O^@ \VIQ JQiQiTZ-| L)1ܽ@L`i2p:Hؚ b&ХR+ >=M:>)L%ڸi"4ĕ&/⩻z)+ DH_ل( RT*&&ˠY̊Ii.k-m۸ˠ0m#JH1KIg-"n&讚H*Kƨ6oq[ޑBLH^~u* ŞnKIvn>JYTA,9 "[@p1񬫚1/,u-2XXriMA&*ݑ;8BſTEE `@? 0EB(!Y3̙4kڼ3N 1ʆ+C6oEǕ"WעA+1dhI;:" ((f0Дz"|haիAwڽ7][\|fU WBq,ŇB Ӫ?=UpP"Rb!EZքe@P]Tݼwl5gRcI+>%Dri 4 %L,n5C=Yd ?|=#^ (.=SQT  RjQ?WP `VAO Ukh f|.xFM}{ Weq_A T>7A tъuaQ ZB=a<AFQTkDf8Q“)!\fb  "]E0֜+PQf{N.J[uГӅGqy?*fxeF`$CRYH jWyd1+q$њKbk<"7+d4?mHH EN'PId呖JD*ll妅֚IYr>%{DHCe Ŗ0&OB% 1TꓦUBhuKo% 3^sU`s!X%+7nO#$i: 'Uurñm<~swsI%_h:4]uо,WDhC.wO)I@QXZ[͕@'lvA&ZyxfHI! ADea==vHE-! 64Ed&LR $ 2$͛EC)EN}LDfVDoT )gBe(OeE}[qwpTlqkT[]E?zPLK3R@@yk^Һ$qU d#@U1 VP20&%H! 
W]N:IRT)D IFB!^+ą0`#1l"DTó @8- KʨF;"J|${8<D$\w)h\!G)$Jk(B7#MYKTrl+_Sr-o\=ԥ/ ` s,1d*>;PKy \\PK$AOEBPS/img/simplerep2.gifXGIF89aps򀀀@@@Mlt???/28&6: ```_eq000￿pppPPP333  #&*w~GLT666krSXb999;?F`fff9QW<<<}(+i___-/30CH()*&&&///C^eVz+,/柟OOOooolllyyyrrr LLLy| |ȢZ_gQRUmx{prviiiru{3;=󈋓j]|QU^fkvVY^YYYm|||Ŗն⑘bbd!SouȖEFGilqw9 )j_ĩ~bGj$`r+J+qklGk zg+\VkQ-lZ;YbF>4Z.K5"'Ι,p[u _-}nK_ pU1w2f yʠuL܅P54[ )knB$P3 /@\݇ Ps tO<\So(5[c%@ QOU[mk (ԬAK'HB6ͮJIm+ք34WFuWj'[3椗>yj#R(쵲y' |ɁJ:|C wS\'-?.wC)8xc_}[ ִsE;v$Ѐ5b?DBJȟ"6>π5½'zCP? bG!(4-YjH"4%Ȟ 2i-1V(2 ř^ƒ@ HF~OHX4 0'>-13& ndbyB2&4]K"Y3'rQ@(P;rՈ  YKܘ6ְdtRY69؀ pUj'>FvcOR"lˌa&g e(t0Η|H0YbL3Af6qDB EJGas ٦-M%26@˅:jN|<UL.TN4( P!tNSkJGʸbꈠf0)f:S(4[h 4Bk|IQ82Ad`WЏgFҕATpWa0U^@,|P`tԝaKo .Д1)~$ϴ*TP:U \@*Ǽ5LUծ~ 8 >Q TK."P ={ *ZS [Ԕ\U8-tB["X{" bD@_0ag r ZwOIWAԽ\ ]Ob9K<,(@΋$#״/{LG> Zp ٕlGw؊CL肉P>O "7fl{r"4bJ@_Ik$ψ Yp.gwe6s(#OW)0qdP@bZIB.˟:v< sNoW!\Ms:DPGhяe ב vߢ%"q@l>SB}bN J]pꬑ$BK6hdҐʁTMnv7!JT˕nA b)fvG"9I0ACBMnX b"ݺ>NRhGZjל@AL[#rjP; Vj."/9Toܼ+`#oe5`Y#Rvtn`6wy7(|G[h4TKd<ї|nA 7v$mѾ=ָѨѝo 5טoɆr;H-XDh *{2<⹂Xhk[%AHk -n+z 7F#F5J7k;ז+L j]J#xpڨPLSpMJ%z#l:Tsz Lԕ4zZѧ Q;HO Zz :qT(0 *z#&*Nfic󁤟tZZzt&dPވkj:)ƬRYH4)Kz"z=]A0Uk֠|)R!UsAjE(#Y9e$L {G3,)DK=e$y*KR&֠터;xueDG`!**^?{A*U;80<&UkJ،5P ytZAwv ZO|SFA0]} ]P2{a3euՠ osp Ab ),(hprAopB $QTÈJsSh`'O O0ج?8Qwe`t0pp,8 $=^ ,HBP vj<l(AJ k ?k?pB\G mժ5(}-Պ.$+I~g&l1pM3?0 Qw] B5'`h?b:K78dQc1߽G4#iMUoi4?[6@M6i ؂Gp Q$;l;@3.7;3=ˑړb){/Mtՠ-"0}CG榡| b),03!f~peZu"` yQFN(aH5g难&0 ְߒr(FPߖ)qۄ .;/.5P뺾 P հ 1 V0 @FH^HanMA0_'[*nnQP 04P40\|. k # ?06οL;,"` z# 0_@kD.@0JLNbbL @ TZN cn:(5PInQ\  6P_zP .P]n >_O8R4~\/>4?HFojx$3+! 1.pC0V _AÏ?i!ILqz3Ir50__a TCpA .DhB %.P͢E?N(0qG!E$j)Ud25&ΤYM9V@ xT8䊋1?qbi C8(ZaIYnuՉC" ZMAiF- 2hф?Xt$J]¬&acŪ-3! .H|p(CgE.9e~->@P[AHzR &^\!걍R֐] 23iiyo5 ď'_yB8{vlO0c&$0@T@s80"L֨`Bɫ z>TK@S,4S-qr j<S 8q[LMhX렆::"Jް; 9TJ  " 6((3 G L:uҪjp>cʩN7 KKA%Q)"A 2t;8,FCIQ@63QotV-r^5Gs2` ":, MiSOYsWh!U f8A \:V"eVլ/& D@Xd]f[z[Q)L0wWo=^Wuvݙ`MKd|_XI'@2X  %Wt8Z%c_+LZ]Ng2 PUyh}, ;tĤ . RJ{nz+4l2j;J:Bn I|:ڱiݻoba%kxv ?fʸ7m \!+ Wr7k"ƚ$ɛrK7Ps?W{BGWJNW7l7-Iv/qwPw/Z]g+ yf߾4028vE>7N dJ{E`̳P|RGI| "$ Q` !ÉG }삖C`)ʡQC$b(KdbAL[^#xHE0Q^ C(qH F82QQCcG@ C yHD&R@٤).ҫgWe@e4#bHP`l[)ϣ7Q@xaҙb7djHkSew9WN)):r {d&d) i'% *c!X7XCg=IOk^BlOT%hA zP 7s̰3'\pd, GDYT+eiK]RƔh@pԆUk 'T$bySt9uKYX!u4P]H6ɢ{fU[jW<N63TGԥQ Ugиu'h#3MQM,5AkZ=I׺Q(Րiccx `=RԘ,&̎`-O6E$gVmmcˌ< c#hEW(N&WensNV7hd6jb<@UݢᲬn^W}o ]@IRPBEvaqzO󢗩.~ ' {g;FRJ lĩVKxבYIA9# h_םqVL|bҥt90c4>R\ 3 ӇBZ0 xGFL_YO./<Kc[vl^?55:d.>+T2L81K5piLgZӛ4up_ԡMAM-O{JBЍ&bO}S_u,G$X-o]c+ҷhJ[H`V;نt[z۷qMA2l51FkMȻrTmdZun[G#n/ø>7k{\&!b6[8ϵn JºzM#.R o2c_|r!8isV &N/\=Quws\ʆREJHgүڦImqr/=ϕS:h2*X)ݡLOͨ@s:HH 4jKg^=uS xQ /=A $>Oiu޳MLHvs&jA(M_5^53~ Z nw9@q $@J?Us+?~{;#/x7kt*˿ۿ-8@x3Ĺ$AS LA ',D@[@zA#p.+ڋTS8A#CA?U?/8 /: \A ܹHB,C۾5ԓ)m@Dcý0xC9D"A[A3D<;68ETsDW̴JK% O4'E.p/TAULD&tD5 BhER]2^k3x4DDc8d-CK,Bsitj4' n,oTpqBC@XǚkGF):QC;ZE|GS'dDrljH;+@9k@3;zLjHC.ÈGG ȄhB:x4IrBH@ JJ,J;S>EKTHT:ŠJ=P Jq>ذ XU 8/Չ8\M`=OK²#p3gB= 2؀U 1-HPk"bve/t]5 WpՏ%51V x} ڃX9KDEK PX k̂YaQ5 x晴= ٕ匀 VbY5wPȀ _Y@Y"*= T-BI(rD I0V3Amm4) nM*:ۈ!RasRIƽ[VB][|M=_ lP -ΕTYUӏ)\l5T$?)@Z?{A\kKQ0Wc-7ހx/Yu2NX^}#E$7xA_U9.`/d2~]k/!>M3c'|㸥bIب5@I(Q)N ԛaUw[SGFS(RJQ~d HL敽fU O esS<] i>}]I Wdcf0>e~ 0pHk?O`'e %_P6i8ui&UvH^kTU\,2ؽђQF Ef[F H=;Ch %OPU"nܡ#޽aUфf {ݬfn=i.6r,O)yyzyذ{| x<z{op˅I?A9/{?{?kFtoiyġ{Owz>?v 6LoΥ r󑴯|78|紨YšC|V#<}pu0cWW߹W%Xv' _,XL?,&h}cԏ~韧ͶpNp[I@ 0(z(*b(@p7ӟ])g, )qꡒP\gn*hC++ F`käHr J㧠j-! -~N{$f"Mi?պk`o ]^oV*{ dFpB 7gQbc%q^DY|&(`Al!/%|EjZc!?{2'Es^_Fa:ZB6FO4Vb_[R#Eu_ p F]YbFtfύEVopTm}ܠʭ~7`_yodp~ٌ4Tn3򾎰@ P'@ "KJֆFQ!!RJ=18$3\aHi!FTdq ,be"' FQ^Y/Dg"8$ڇHpAh(^Dy~tE,BBb `^HGf1jL ? 
E!34#J2@fRRH-h|ds".ԠX0hVxE7 'A-`sR'$&x@'BG2%T *$q"< 3FuQu|'<)x!0LĄ>w!BxY# d(f$ A Z4A6)Nsӝ=A֠A48*Rԥ"5}9u6Qs3KƤ2)t24CSN[{ZӪֵn}+\zk`Op eǣUQ ; =,bX"!0L},d mH:&Vie qA~^lղkϟ)4r `+=.rKmJ5.F=S+ei a , )PaI *}/|+ /~kzFqL>0}~0#aؖ7/>qZ ?gR7R^AD_z< 861se@z+!CFYFe;&8j n)SV2[kP^2ɼ)qnne9W'c.bCctV~3 25!kx4#-IKz [41Mf%40ni9# S4!=5OenfAC,iH#7D$J~9B`hK-4HfR\N$OL$\8#HT50R&R.%S6 –5%UR%I"Vn%mړEx@T%Z&eL32P:.")a4!]U^? ͩIv$-$gcoeHc^\__L]3E(1f9&gA&HH@]>"L$vĀu~pݲ$l uǠı;Fv}e>nY fbJK('&&Y&&I[y5Ȁ]z^5Qf&  *^ӈDxEc睹'Lgg,RC@$'ԅ@RIb hd|۔D+ʹ8h1 @CBDv_!QBMB Hb$+'RB ).I(:'Z5Ipiv Miʨ$WRAeZ&yh@,)R6D)*xbATqFfџVZdsǭ>wgY * #@ Lϼ &#^++&.:s| X XvVbJdWTDPPł}A舠*+%WljHX΍jon++XSUĤ2^⇷jPZ+_VlJ}L x1&J#Nb#ņRh6=fZcɂh֬,eܬGXO=O5AҖ fQ"eTQvɒ*+S c_|@`@Do8\L^dq*٠-Gm,g,7R5DQX&@VmA\RhmBXJE(k|v_Hr8QmEhn1F8nAJ./c|A d`@S0SdnAPb@@(F~oAm\Ulg$ln .RxAx-AnA`mn5 o]uv0ʾ-*/QĐ/ZnAl@{G0o|@!1utAoAP`@0 qf b\@`@kjf \q*]CIAI5^o_oAA@,bIj.J/hDR_>fj@5D5A?pAqGz|^{p'g(D#D +JpiG$D2zxp1\1xnH X3o5\5(@^CQ7_C 09o@"1 _0:G 3"Z$K2mÒ)\yF:S@K6oszxd2+r61tDo/.C2Sj0Wku྆~^2bL3nTwA&GJ*w8t(K,/M).(rA;?wV2xr\ pDo!^OH7GgAPWʨpGuFYB lq8*ZwW(nWeE")sJ.9]qG3smuA`4.85X/J5qDDzUHG3J[U޶MI-ONYm2&hO694sBBR.㠮(yB2jsjwnV$6GQ-oTي1W.g[q[ǵ9 Qs`(c7BR8jju]'_׮Zvm?0^bk'2F_#\07ƙs/8\8-ApwQ'G˖xqLdcVztkAXtj5"DEd[7Zp'wD#w,pVkc!f5\hsm!~@>MclLoy+FpQ71-cGx] yE u[oC\1x>OwxQxjyc;uSy  Wo\\ 01/Hz@dtBR{oAL;.qjkrAX;o8rfҋ'ងc(gfs& ?(JjLS*p^06gA_1+Br$'ʵƼE.#IVEGFDI#{,AXod.dzKP4S/:o1P[}n{xȱg$O,W<ڧޜw:c㿧}3-;+-yIkF&>R)O(~SݐF$IHiG##~h{nI{Ė$|Gދ~ͺt͏᛭"{T G;cu/6xgl,X&?4%aXŜD?Sx5j 4xpj\sbD)VxcF9vdH!5 YtfL3]P`MH 0ă#5ziRh6m*gTT*5BJvlX)`@`Sk. ,ybtX{zeڶ5 6|pUk 7v2c V|sf6̨ tQNx^ IdՑvE׶ݻy{lÉ875Ěk۹ګv~}zLCA|׷~pOI(p;P,<|BVz AA 5ܐ=;D.VX ^E DƖD[& PhGL z!T^1Q1,lGn(a&.x !'NA+2<()AMʝz:@!̉r6]^ 6L;7oԧM%Ur0LTӁ `d),('Jy! w1dI=碹;9Jp@LbxztgZ*Z68jVZk-[ y̶{!_6"i 7|&kEϕ]Z0ϣ`֗ cYܜ<]meg䕚5`x_Zicg>XyAs{|LQļ^Fķ*}H$}G7އFG/{?] ?.H! es@%' `@Hܠf,x= g{#TD:pH"\eJA!C2uσԐs7h  ` [¨DMۢ(c\\vj  wFQuccW1zcȠx@ VHQFc#O%- (D1d&RF!9$)78,iYK[,X_*bdeW(iJo[ILg>є4YM<K1djStif n&,IuZSg& @P }l@# 5 55y@ OKSN& W@C (Gz DBoӉ'6kT)HǢੴV(U9)Џ0U)`ժW `tKYWU:I5)T*KՇզتR ` `@E+ @!5^1P  5&W2ԇ#}52`^aV d௎e,_Yɀ@jT!r Ze-Jڂ23HIz \H>h#}B B5 &j6ltmD`v4c#;ّ 8C0P_j}H@jCf>@@CSܢ"p0+iGWװH9ST._ucDُ`Ո]/` ~W&Zd8.iR:D݅,OP" [`rR,k96+{:~(fr&tD(` >[#%p>5҇X̕@/򊺹rf;B]/3YJ7#!l#9J\1,9IAN qD[h:pw3dx.j}\wq}2]r³^ /5C njYe*u-g3cu))@L oPua_cXM ]zf"H7`\`٠/B M D(/VK_@PSDSz-^ "DH^~TYc9T64$J/"F*#b L8/4r"r/ծ "jlqʋSM<^! "KTP8Dp `j0͞~uv;,9O8\ʩo: $$ j vj,,֎$,% 箍H_p0w .ՔJlPL0,KRM<4M"4Nmͦ0!|D^ 0BʲĎ=l^ʲ+w* 6P6+Tko7oh,K|n ,Ċ" N@lqj @.LHoQ%aB,1q-ͣDϴ܍ 4/ 6 ;qO. NQ"ѯ Ba R$W I 3 +#r.#`0 <"*"("i")_P(;+3) $R!btOծ.1D`>Q*厭H!\Q" @KQ"3"jjO&}K0}pp"$z2~@/1wH,ǒ&@Lʖ G@l \jM#,'B5$[>j0I5Sվ1534$󪶭2 ҈28cbt ܫ6`̢`Obr!NB"l$-> *܎ S6}ѫ t֊3:9rsA]¦f*(j\$鮫 P!V ,ۋT\蜲CVLnJHҩ>4G%jhiΦj'yrB Iɒ#f2lI2۩ATV$4J+Ӳ/r-E'KiKS4J;B/CuxtTO(F(!TTJT)4O##D  lܴI (촛2n#+}JQ5P7R)eTgS"B,؈瞲*-ҳXp f* nS$uBGVEHTe"UGOSqĜ> 1kNz8q2,)*d9% Ęq}YZ$<#* 9g&U/No1,~o b2|*%HM« y .ь򢺅XTĂ2ULA¢D``_aBл m=s8!գLc-NB|66>`ӂeSݾ"frfG"T'RigJJ" #mj ,.gk, ( kyek3, L"ֈ,"+%(Qae+qP,\%\2%+),l]*q,"@r" .`ur%6,bQP'Ls0 v'wSz `'ep5s("(|% 5t% 4aϸ"*M5 ̞lDiL܎} ռp7w!D2ޯ8MmlG?q$aSqxx7VYBݨs=A̳,>J{08If k;>MnMXx J=*j  '˽\ R4c䓀̿\v@l6 3 WRw,խo%!r< rmo_7 Jc s}PRM{vx}=IJl" | >$6$ P 4OlR+JNP:+ f٘Yٙʲ+!4 =󘚫9(%TיBd!"x=qhYsq6ͩ gXjFZ}==̞.Rؽwmj{Cʣ~ 7pW5즘/{WeZ"b#Cb*mwJy(TO,~qQ! 
Zw=-}D#ګķ?0w$utocϲPKccoҧ7QsZfSrԦ,¦گ:##" UW)z#{"rX0M fk̸h˿3ʦʖI &.`9 ;.>1%۷ գm1Zs ziQD#<@,vBJ S+@ ++nlJFw3XK#m0Vv; +"0=pሮd!)60[uQ[3&KA,9R{" cL zU4rm*] Y#3+z."_=B`# XxZ/;<-m3-ظS` Tw=\-3VʑE;.+և/ظbsQxs¸z>ImQN"^qW-e,kHܦvYN0׮]^35u]9_ZOɹЮJS׫$cU5_3zaM@5;PKAtXXPK$A"OEBPS/img/cluster_activeswitch.gif!LGIF89ab@@@"49Eirh???V<\d4OUy+AG```Nv///p'*000 PPP____ ooopppOOO!,bpH,Ȥrl:ШtJZجvzxL.znX O(tvsJ BvMC{ B sxBu/C+zwS !/Bv(¦x(/ ɧ!+/.s w/E.w|񁔀=^tÇ| @/xCuP!(RvNnȲ˗\ oz^(;9I aG*\ʴ)u) @ꒅC[ 9mZc(+ԁID9Utm) HC)I qtFG,1`jkv2*L/@N; O'U-0F!UK+ >\7 kEtBx Sp*t,P!_E>fZlf\̅!Wȗ(So!1$ܳ%v"cX2MVeaY]&Dhbσ5(hңv+s w#9m 3YWn嘇Ewok%LEWO K6*n۾'ᅦ غSu|bG/LJŞw/䗯_R\}ڛoTФRazL UV}\@BC$xbm 豳"=d4 WlSx(N Ct(I\B{ցd$!0G~AܡT)U*7c:NhTRG%^ X]Г1(KM1OVGZe%pw nq*ͬ!PFV4# D?܆Hy]hȪh V<$.9+,#,)"kB/S5Bfeb jT#1L&@EO` v9ӛTbW| -,'ؽmr]q^7^vwf\}bv 0|7f~gkxqe Y'^6s`0Ç|ԕy`'Wl|+(l6{Fzpj#vǀdj|-0?xj ~lȁ6z2^ lG\aȅ5G x znyy7r@'Հ6zmKxjRqg{f^VvrGyWPrÕ~~~N6# ǃ5Ehh mFflHGow`XxqX^g\v XX!kw~7v[_(lpe']X˅] pΕFֵrwtNvf긍Ѝkfquj vg\@ffY -mmy5 ĥw.PzV c nprjj) |(wsƥGlM(\ 7\Xhq -`o v:s#_gteGz~БFojXyUؕaWK@m4;k]ik\ sh8ٵ9qgy\Zsf3G]v5oĥ Yv%\49WuhvkvsuHoFuY]WmI]`tA9\c8\s9\cXQYn9\aXiUn ؆m#pElpג 6VȅPkM`g,gsPÅOa@>|Џ?jŘJ }!^crZ(^yozjiI;Dt qEVn`:wo͉}wj @)s8ɹ+ɤǕ|`(61:XGŐ9iev!g%I|悑eo)^vG~mɂ +Ƨ-ijB&='q`;*o%fB*柑gbkieʫpGx]c&S_\Z\KFU-eJ*a檯J~\- `M䵚Dž(&d  9˶nЯw`_I'UBv1:z3y^d0@2X >Ჸm` ;z^.|UyUЈs2]2rfr0 ]7KXz&^p=mDs%sagw☞csG>`̆]h.}nĥW vo:Cک{\WQx*٪f@sswڭ]Ӹe;\^Xit j뵿gt m fǂ)i]ٌGXWZ| z=5CCu:R4_y\"ۿ(ktKxlycI+k\[^Xv!ǔvzuqO (צ⨠5IpZy_r!GOjbFI ^z t*WyE>gvPʢ{zy[]d+%vZnh8Yyz%j̈D76d幊L73x5= @z ͛F `o\jov|'餽ZhWjEXuDivЫnspz[x\s8ܻ A_ɨ ɐjvQfFa)Dlq(@!]wxZq7{oK]%'^;]2+ìU㕆㉀UvȪ+R:˕;5vF]n)ˍMn{ih y;=i!#h%Ɉ<ђg҄']y)+ ]i<}46h8m:gG]I KhM]OgQS'}dvp7=YԗFloL@n VMui$˦mׄc qֹLl`blصkz&a@gfd^]ќ -f]}U}~ ӂkmge-géf=ex}}fdMͫ׭ٯӳݽ߽ᝮ ]ͯ-=}im-f׽٭d.i^e  ^d nn^#%'>)+>-/135NhNed!7N9N;.=?KnM>eEQnSgA>CO[]ng_.adY~GcI>UWceݙev|~>^Ns.1g>wԱ ~>%B9*+=Nu +;.qϷ .Q7>,)CWV8O^VC1(~QH">!^XeϾHK >w:psBޮ^nn N??/ __?WW/!W%W#'/W+/W)=1ݮ359/=O;o7/COg&-VIV//rMR?T_V P\^`_9$R?{hjllpr?txzM_ ,K-K1? /?;?-wa 4Bj apKQSHUN=DY[^io3gqlB;uwz~5ٞ sŴ1ۣcWUJ_?}Pn sx &Rq\isƏ#H`H/k}ҧ|LQ}pi|C_?K} DNRŅ` NBKA { = aA!B={! TO/LK8=D":DbN6q;! x?*>`C 0 E.Z1XD# 2oi4916D!!A(юe#,8H%[DZ3HF$8IJ1,z'ϢFf!E Pɓw\RIT”3|eb Jv&$d.EuR,079KSҐDfvy B4'bք1Wm"ϙެ6W7Mď"Dz)DvVd3GMz@?P5A}sI uC!:vn0E1QnGAq!KHQRt)9R>3\6SMZt%wUŝ_|@ +XZ/v^ |` ] lUj ^{  &L%<Н* Ņ$ۈ"`jYiEѮV."$(N1x pAv6lpqga6Y R`.˂ ,w+VEkL妗L8HYi 3|#W@21gX@ǔ@.8_@2Lw!,` U$I9@.0Hw#Fx(wb ]„ [K1]fLڸ *rb>eIK2.ǽjK.٪]΂diif7WW[& 詂g8!l1Z5@z6Bw,$Z fn]ʤBFm6K`5 jeK6E#7)<}KU8:XjVPo)oD40!P: h[} BiRWl Mǀ:Dmfu]l-q Tu j-tΝV8b. ?)>VkG3qeZJ.-3˫fY Y"{XJ̽y3nxm E&M&8g23 նks {]olV?BN= 0`hm wǂonu5@pysf ;1x^{ٓUU@qJ],l~,>홢׽I}~"_};^>?7ɿD~)x 0 '`P{P 0g-w04pAPgD H UpVXP \%%`ipPnArPPފA>5i4np4@E/2!X@ 5)@P2,\p h:\L5P ,`2N: p*vLYL Ɇ̫ڰcIv~’hj rEBK-ꢄjIq,U;PK,r!!PK$AOEBPS/img/as_awt_dr.gifR$GIF89a\픔~~~|||KKKwwwyyysss*+,Ś000:::___GLTfffPPPVVV mmm@@@iii pppYYYSXbhnyEEE666```&&&kr|;?FrrrcccBBB✛h/28"49РEir򰰰Ϻޢ_eqڱԾđˏǵժ{{{̬嗗Ţώʯߊӿѝ٦ɶⷷzzzȄ؋ܮVY^y'*p4OUV<\dׁ£_+AGNvw~Ӄ#&*rx]blwy{bbdBEK-/3uuuRRTIIIӚSUZaelfimkklɴy|prv\\\LOV@BGϨ̽333###!,\ H*\ȰÇ#JHŋ3jȱǏ CIɓ(S\R -cʜI͛8sΟ@ JQ=*]ʴӧPs&JիX6ׯ`ÊuٳhӪEs۷pXvݻx󮭫߿ L)#^̸㇊KLpʘ3k~{yϠvM顣O^͚eְcz8dwC`"N6qL'n|zX =a^ >:>'!>M?yG VL4 O==G>τр0e``?n8 @FhPă6 <:@@z![|${ ?1%ڹ,wdڔwPf@ةIP5we1`xW6@!dICczRr7$*AqtC[ hrrwq!@: ix&jg詨~B~ A3i+* 8y@'*Z#5s FϴZazZdҪ?& πD\؜rw$:F{%K&0A}fy$f(1 >Aq7P1/,\ VD#ë37ks̒K†$s> G@>AѲt3wuy-:+hݒ٪p-tmx|w-݁nx܅8;GPq-P揧Ue$K^j's?jao>@@>8`H;?EL8|]OC]+A~n!.փk<$P}m߶>%wy̚rږɵ? 
h?b!lW 6@ hr" 4ɨAX%lR=}Ѐ{aENޫ_Nz#"OZ\8ɐ*}Kr;- 'ZRŸ]0I  l;X#)cxVQ;XV"21|[ø*9qzgb"$B/<SL.3(GٺK2|+kS6rLy\<ܵ53ӛ槼yκu3ɛ=[Nc ; EA}~rGKPFY9$W2/-[9fʨO=ڇi ef[0` uAVF׻}ڐ+ $f5=Hu'4or+tmz5o[`*XAPNRl, iUYV"IJԿ-D0)6=Uʄw! zr+CIw; kaP|>;wgV{@YT"~!wG~r,R@`B)^Wv4Ug⒩p"':n]ͪEy'A3$[*7k2֬ln꽿ڱ1t  UVFϾCNhH0|{G49= 7p2+"aghia||GC**0f|헀 w(Z=Q3r%IE(}&Q y!a )s2 =,+R$ sQ0BRm>3E W6'"qC>f.&*8s~4fSB>i!`|*!G.=).b?"0 !*p4@Ygrq.S4XTÒw0c.Xd׷,q'u> mX4 ~R؈Ρ%;2H.Y|BR1>@Ci$S<&41J5lX@#Zb"L " ,)'+,p=i*$npFAᏽ!jQ6!@`4FhXL5gYimyɑj֑FaVcRw,YZ: U BP ҨEf*1j ?B%[:H](\PBƓDtP \]'V[)1j?p: p[JɔN\ZIfCopՔHfeIe ?YppP 9Z: u Зu \YT 9cRi"f[EYqY` b y%P`U {i  ?%f[Ppu biZPQm1lYYZYP՞9y T)`Ґ>LA  b 8 {U P_ٔjYY y` ?p"깔չ܉`)MGU Кu"Opb ImF4Zy]._GÄ1C(,E@@B&!ء@rqxhyBldjY@ɜ5uٔIHy89[ u9\К)ʜfS֥7" Y%Py8)yɪe I !STD0!4"-ɑh8Zfک9BK]Y4|Y@*\Q9Zddʝ Y빛Y"{٫ V`V0>p2Hz.1+"HR{), Y̚Y>Y˙YY[9d AB53'>IGD|ᡴ ۩o:ЖQ:Kٞ:_ٵ뱰H fD [ ۩љ,kyJYEUd^6#J'1kQ<:uo莈)~*Z "UqY)sI`|v1Cw…'(чnpYYV{nZ` 0k9JY˟U`ӌ J<7GG~w\w ) yKH |Ifh;2|mSzOz+Kbyɤ \eY);f* Z:S 5;8u@XAIqBգUX{;Z}j!́rLaCq[}F㫢 "l즽Ù |\%ZY[ʛryk̤9LbŢ3+A0h"UdAwb[YYX:Ĭ0{h𜷊i [y +ˢ soh?|F?0҂‚pۨQC<|[ٟy  H {:p ] ZY~J H?\M: ˍ.J,FRqBء qIg) xS@`7#4>Ѝ"yb>I^ɞe%Y>fXA3&CC1?D~;echmZ Śu9ɛHa(] w#(d4o1O*MEރCmD.aByM\^juɮE<c*&]Jh M5JW,7`Cu/eJIH]}۝-odU@• BalV{Gi}f@yߝY-v5GQdvߏia-j]۩[SIK<_MtJ҄|][{yj᯵/e⮥fz$6.5V;q<~ZANaF.!^[Y][؝|֜|MХ㛕Ȣ5S\@h6Y6Z`E#El !:Me r sޝ K+ZK Q*Q,0ڔ ߙ̟:} |*pU U`:f~ӝ~̟Ϥ.Xf([{Yv>I˾M< )诵];%] iXˤЖ!Ozzv!@o/Z}8Iۚ"nn|~|k!>L]&I*M S*?[J) ̙yHU͠r\~NխڭGO KNznJNETAՖ?pubݵp:nE\Nr8NZ 򛕕v.WZ{ؐ<2lH)ZD/~?_W&E%I~\Kg:y{o[:qE∋^ۤy7} uJQOe R^WL: A .dC%>0E5n䨑D4$ӆ~BA1_~4O“,YRJ?xP>K.%S!{0zPAlb?=V,tBU6ݻd\ođs-++L„Uo,B0]OVm*$>&]Ufu{7~P*cf+p)ܻ{[xn?)ͭ{)hPz_je]vV'o[:|hܺB 4@Lb @Xmܳ>3͹`*OZtQ2 $$2K2>r䇮Z QGZ쪵~ѭ V9HI&7zҠ4 JNI(hn(oL*$8ebZҸJS'#H#<(0s2uiU O=|&o QE]L\ " R. (jMh&cIdYfm-)\tKAM#U5RZRLV P]Lt aeGBijT"P{m`] BG1 ֻKX鱕t ܊~KS3a]Kޖ&~v_cy1iZ}O/}ᡛv LJ2E֚gzkw.v(s{ɦ (׬ٶ;nq;2h60pUpp#gHqrƪsЧE *%="S]UnjQwsvʽ}u=VD,"H([)>ĢP"?cn\4㊾Fψq)cF&8ohG(μDh;R")Y9`!xI$A)MyJTRde+]JXBL7NN%ItS$f1yLd&SdEFƏ*f5yMWβ*<$IMpS$g9yNt3ۡ%@g=yO|Sg?wEE3 zP&Ԡ ˆ hE-zQfThG=:Q l[ZIBzTGM}ӟ:A@KejSTFUSjUЅjWUv CѰUk*%Ģlk]:VF tir]ש╇9(C X&nl\bRFV)-תtl *Mžհ%-g!-mkkYL^6a`84K2^zņOzd6NI; 55pŊe:bjdh8\|f3ܳgLY8ˬ=_W$w*3?Ol?׀}һ2ux lZ0(x ^6 L vRA\z9p@R*]K%5h\?O;.Snj \Kueb/;P6_g{woSٱ[iTTJnklzO͆ rq p: \p T.: Ђm$7y||Po\@ 䛿W)VW*s7`%xPrܹ̹k6σ>IW~/8S9 l@X*0&HA7~zY7# sRTt96Ѐ,u8q9}Z ts u1} ,=CAeSfMn{ޏ Q|_>/O9ԟYh#d-y 3.[A3X'$4 /S$+; ,AyBrCbB172A1?[I1␯*9{rNžE33#a2EVD P#SaWXY;[| ہ2zX/ ӯ)ss\yR3X5` ?zүI#st '(DDLK2廿[I+?.FF#H<#>#1F7|Zk2#)'Q|1Įe@R>!kKAɐFz>D @!Cqlx|:2MtDZIħ{t1I~?{;1'H1`B,$&X`;$B{"@Xt4B#k0uLpXK81x Eڴ͐Dެ'NNHD\ED3tNΆSL*y*”p2x/c'I@?L?yHǴ%X %H`@qX\ /@ Ņ X E^  P `Ņ_%@/ d_ M `%"(5QUm5%QQ !-R ER%eR'R RR+R-FO+)] Xvh%H `( JXX e ]a(GLM>؀b`Ue a ]P^DJ`RE  ] 5b0mVnVoVp WqWr-b?/ *WvmWw}Wv=W}cTo 0 VO#XmN V$Wmum"kW W# hX @ms"Xmj͢ -)+,/Eh{rYyeTƬY}Mڂ~HW1%/FzELZ-MYڗEW,]Ԧuڥí-k}"Z UZڇYMwRb۬I][E[-?|'`R(ȍ\]&Z)|ĕ,[}meڳ R&=]e%m̬}]؍ʵ;zG]]]]ߝ%EI]M^lZ][ql^^^^^^ueܕ%^[l]ڭZ !x______jY_ -["߶"nj_1]i'z].^9`$``  &^ "ΛN֞ޡ^ZaEf!b".b#>b$Nb%^b&nb'~b#aҝ,b-b.b/b0c1c2.c.NMc aUc8c96.a7~9c=㨩cceTNeU S^eW~eX.VeZe[nYe][ Te^fb1a&c.fe^9JfAvfffic%ifm>hF(`o[frs^K5bAgz޷%&g|g~.}g$h hN*h#nh"&"zhhah hh.\ A m&iniAixXȀ@ _6 -fvi .ب h a0 j(q.j y P((꫞j[j_Skґ Ȁlk `P`6k>XPkǦaȀud~ 釠_kFy & Ŗm4^P!췮X ^hx^R~m`mnnl h 쨐P~j&pn ha~"yhmKʮ]QnQn~ n"=p9$`5 8n`pƁo ?ш4pޢ- Ho#q+>F@^y?p_k_y'$h `(8 N]nf}Ձ  &h^ i^jp&44 V?9'oZ _rv=FqCpImu4;@ 2B J`L_px@ 6\%̘2gҬi&Μ:w'P-#(Қ>$M:)Ԩ7}4(jh4Pe6Pa Y*ph.@B+f( r?=OĊ3n1I\J˚y2zcM!r-F$QHBt~+av eCtPFgW$)!ҧSnPq]zT<[Bΰ=C@ dLoWe^! 
J(SvM^f6VU砆!8tEu1H~+8#>Wt*Dy#A  ɘGԢvJ:$Q2q8VYj%Q٥@LZ&eҨ i&i`&K_9uڹh0,_6:(z((|E 8)'Alt'Bgg-:*,B:Bi*).+1@ʔz,u,:,EoJjj;I;ngn D,{&#'뽊2»0Z0G+˾|~*")%k֍)k-Ęq)+l(k2Q3Ђk4҉BF=;SIK 5Z-O|O=p}6pj6܉u1ܡDt~}d$C@N# Q'n<J>@ `?t3=lx+񒗾O VC;( kB: "hCָ bc ^4 0F PiCx x@ p dXO}s1A s# 5h dD # .$ krF)P#1dfD4TϨHAa#e!wIuG]vFD"ϔMQ;hE,j^q#@aF:HhІ6ʡlT! $$"/a| %IIKb2ܡCuuv.)(Ä8[4XH]a1l'9Z j]yXtT.PJUUU1Սv HC{nR:`aվRM,-mIb xMʐ/|aFHNҚPC0{x"8`VEr׫ܢT}.STNuVhV5~"mIKե1kaޣƷm,"xi 淺Dex?n4:{ oxǛg&R0Ȉ%rR ݦN֭lvcٮ~b8s,Z/Ǩm+ !7WK9ʡr'L+Vt6NZ[.KL: Muu)nwƛǚ.P^k;n9lcǎt'J7˜r;\ "&@f;ۑ crv/ew? ހ7%{2S7]QHKSj HB[ݱ~k9*U|frx >krx$ŚbZн.2>,Y؋&vle3;R6§ t|nz髆 kξ= z^;MvEюkb|ؓ7{~g~ohGA~TڶmƑ۫޺h힗b㭟E^A `ڛyUf``+q`̉  j  B~ E`!Xb[=ؕ_Ο1_9` FLrV&`͞9 >a_k_"_x١"b   *RF\%ҟ' (^ "*b^_%1,>F--$Z!֟2a:5T"(ꕚ):#1 yY##jF7 a䍣/c0n0v";;##N߽ F>v+ޘ? #[~cAR5䖉^ UDJP6b*4!",~"6e IAv B^/fKz >$դM N"3 !~Ge;S6:6RI/6e9b":j:#URUIVV-m& OBc>r!b`@LeJecOe$!JJKj^Tҡ]d,d^Z/2c `bt&``+C (f-"abHI[eaffrbLLch ;ŀ? l( ekkZ C`eXfob&9&T{'%J`KKNg29Yt(48&=rk"yTA*n ||ftQRcrrc6ڧ&)x )C"X'(x @ ĀBA2*A8@3Ad@ 51 x84C &4A8  @NL`t@3L@3b@fORa\>\6'L\Bd?Az ?,`=A HXt\l>Æ2C 40U>x?h@#l@lj%Akn@(j3xN0v뢶cZt @bqR|CX&dZ(]7v͚t?iQy?A Ёsf76\@ ?A)LVyhvUllkC$E8ܐD|@20@^>8Q 8?$[*$򫿶!hUl@3t0#!l1rV Ś* xrA"Lkr0:\,DY_ج2@}+p&*L@0@4";lD~-k"翚V2lDT 4C]^.o&oF|?Az8|fZiRVzA)A *QH.e>妨8FIŀN6nyLmT82BL-'n˱ L*1&/@@ .fW*Vl@#WxoCD9@$Bl(@D5,"U/4Bb$/:i (h$\l1sIK?9 ?TAl/Y?l-Aʒcjp'Qld>l8-V?(Tn24 ȏN`,q/.AT5.UU.:S+orS#/:h \IkE HK)dA Z'CX@hrN>)#nI!@BT4@D@(8dY  o @zv%!g3|ӊ.|h@2`@_M\k6?\/m@v)x^cqACE1XFm.GYs pӫzn^D^t@, Q/+.9'C+=Κ>"1 X mf?EBC?4e? p3 / 5!?lEʲN_L]g֎P`6ȂR{H1Zi7tBp;ZE3]Cp@ױ51 ?C5"gT4n+*'*4@Kn??8@ Cv pbSpJr߯?D6D4±}BH2f#@Ѓv^n\ =l'+*>4{+^NL A@A &LQCL=t_^iClCe$a A dTA2hJ{U 8|B`$4C[8L@ tsN3N@2@ηp?*~G)x=0@'=tC* 8@_AK:B7BLB(dxR[WBj@r8R҇G@ 8WE- 0lx%kn%2gm%5B Yc:HKA,CjrAxjHC8Zs)]: u44%?:gBwuvx5 D4nw>:Va]jEΠ`~e!PRDLx@<@{ 􁡉tW+mt,JӧodyH'`X_C+uN_4|`ze{ ;?Vc$b lLPGQ1gx[5l/4<:Hz @dڲ=\XLSHh6;@jx2D6@&CAC=cdOj$af?N2@ܻC(R0X>_Q8hj! xA.@ xT#/CA@lg >.@ =,xq2@#4!ؗthL@b{dz/hB+p:!0S@=BG#4Ƅi(Rl,TԤ"V&^Gp@2(P iRK"=jTSVzkV[2$͢&DIHzF@b7-CN91A뱢5HȐF*^J%Iԓ)*]U*,n.2Fm$3R30 M)S3 ؔMLN>)TfEKO((mJUltuX6LG 4t1_ Bb)33d9SV ޥ՗=R[CavѺl-йȝtsLwuOjSmTz3c>_CdN2LWV7#by{rO{Fj㟽lBudE3F!5\J{9S$%yKwkN1hXoM:ץHia!sxBxټᄏn͞eeэ[ӹ;]%&U=sop6:mSzWq9|BmzܭZйEttF7]eylKY'ݺcY&k|kܱPGt\ Vq4|KI]d=0cc lv._z4= j*׾= ~ylVRFԹ|B`g39 !}Op'ƤBi K"7dհNC71[?Wن"l|e:w9bnGE+ ⌸(}a;^@3R.7کtGO~2+cшTPI ')!Cu $)rRlFYvT?D)DRє#1(7Vѕf*2KY% uE€Q!0-y,fҘD2frUj5)z}*/q/(Y;fXO^r3Wӝ@ʻisoWMDR"wyHNt;gOI=Ap l(J 9<-*4&C t/c0zFxjHU\=SєT/%Q/%V- VZhrSsb ֢µ(]9뱐f$%4O*~T >F i89VF>>,&˼b[c6Y^+HWUW;PoxB[.L%ZչvNͲlvQ㾗Umsv]UB[Ǻ#cJjE/p+j斔ᓪ~ʖbSum\C|):0Nk }0o# 3ͮ{5\?8m^]7d,k1Fwm޲Vaq }nP*❗ w#fvqK^^7ij[aW/ wx{ӏ;kpۉ0A o_<dIB]`17|e( >0? @ !x/b9*/3P7;?CPGKOSPW[_cPgkosPw{P P  P P7>   p Ő 0 אՐ P06V0nV e 6  jU2q6&Q 5QnDQ> P9Zl162 Uq+p*^xZS$5qQP+}Q+QEs0?ͱQ? @`*NA*e1qo@3HbnC !? 
Z1q*b1+|`$B$Ir*@2*N2+*x0SQ%#qS%R#w2W1$u){*`# a)B)Yp@!1*,Ӓ R*X2Z)!QC "q&WB@*qH>1*1>"S3@ W 1A"R3.A$+)G120+M5G$} ̑W%(+ 0V`?3B@93Xs%`42/1'!0C<`0m3Q*b&B2 8@"s)">b>'+u@]$<0[Qr*T4S10=1*@3 _AD+r8aE'4Dv*BDFF"7?4z@1_7 mH+8-T+ 71/H+9L]1:s 0>424Oc1D5*@eNN1a1"A2N7ÒF1KtK6 DD_1WRQ[LMS)!qOմQT;tCǴLa5Vq*TQ'Q1Fc5LQp*JC=2@>&TWQ{Oq;Um\]j\4]cLUL}u\6B5S^gB]V*"T]ؒ8*Nb!SkP$UE-v*c3DV 8v+ '_5Rq0YWT-0SLs6+p9_uLE Q.KciIÕ*`[c[dTA2'* s4ͱ9HuQU#i[tKEV65iG"??5m_ckO76*s*7}'VUp)5I}?*Rx%sEzV`>7UV'Mr~$E6%Cwo tub[@.jM n &11wv;[q`nU=#6by1-w1*9Wt2;sb|xB`*^2:r؋Xǘ،Xט؍X瘎؎X؏x 9 y{'s$/k,7c979 ;PKAiRRPK$A!OEBPS/img/cluster_standbyfail.gif(zGIF89a@@@h"49Eir???Vp'*4OU y+AG<\d``` ppp000_NvPPP///OOOooo___ߏ!,pH,Ȥrl:ШtJZجvzxL.s$ll|Nx"yT.M .|X M hK~P.M.IJJN/ Q ׹HGįO /oLBHagLdM{A†/x֠ Z` 6 ɓeM+UL4$ ꖘ"!\BOfBЀZҥM#>p4hRlA@ BX ʷpsYpH?`4dW P&@kAY!^`R}C( 1rl ذ k;"H|]AJ&$o^ݾ1L.@c <&i~ 6QbcQD.rޢ1S yJksKz)~:7ӟCSHhKcB'* oc S:)Q. JH?;J҄ hIWOsl,LgJӚ8ͩN7͞@ P:\cHMRJ{PT : XͪVծz` XʀcMZV*@k[J׺E@ ׾ `K0k\]p:u,jVDa'WzMS ҚveOSӫkg{Zcga6\ [ױqXb̍n` JM.|Ru}.w]u]{`27};EB} \m+vIik`aw®*.aj0k+^+8ఉAa#xŠ%Y c϶*qfoL82`P?ǝȱ!dwɡU,0ʆm2"ev@jiLW\E1(jm.l8q~qLua'-k^:Ő+9_ G.s-kZZ^3[k2סGmc6|nlE8Vq۞`3_n nj FnގMiW櫾%Quy+cFx[aG߿")m&@3x7W'(,qۡ\y_}T:9g#]Jo B`"K r[) { x{|N]rϬ_=~^`m>-p&LA:hӞB/@ YG,pCOX|^ yKy,~wlG)dɃX?zhsjPq#w57'+&WեƑv`w {Чl}` o Sal:jyɨtYlQit|UiXcp<כt{vkj烿uGgsYfǐk[9u扁q\rn|'_ّfGXu#Fv\% yIYZݘhZX p.Uy֜%v||ȉvhf{hG|)VY*X%Ţil3'ǔ|xWXpxue-37o[|3xo3GyZO:ʏXn\ٖU0Ĉ2h lvCFZVg悿hIŗe~"dPɕفXdrXE Wd0qz\j g%y) j^ ꆴWXd7k(*/Y pp 6vyޗ 8[d!ppj6*X)Ѹ}|ȁM)sJatVd3G*,)̦LIլ uP}~ zO"zyUXŚj99Ȇxe& ;:W[7yU'| ȈaWWW;q̉|ЂUXkyhSCyf*]zz\iY~j0WPIo'zŦęvmuk;,i=` 7u} oQuT{WY ׷䩻xR{Wc)eIqݹ71 㖠YXwp jםg ,$w7XJ翧nW+%')i}U ?]km2\i4lF0\{iXʼÑd~U{X*܎J\kL@ϒJX \7*KLMl:x UvŴ-: $ݴդإҩP 0&PFXeӒӈҬʛ1m&zvj'ZYg+MԻP=R~fg ϫ []ƒ&<\֖|ll/] XrYWKSuNxy{m 9+KVF&^MG]]Ԝ=]|yh|Ԧgת}Yuرmmv۠mn-ʴ׹m]^SʭTS]]֝Q ڽݷފ(]}C0( p= m_A>--SD1>:E SSnXp9U7Dy`uS nS".<W65ET V@5e4~5A E*u?KnJM^CQnN43%OOENN_^O:%2OCJSnKḤ-L=>x=3315StM.SR^, 0Ls4O^YN჎.7`B.S> :^.SR4^hP,~w똞:u0r^Q9 ~vSNRi~kN^RQNP`!. ߮n+5Umo]T N o_&(*!!#/+?4_6)E@B?D_FH?EINPT_VB\o}b/?=8d;o1 g _1dk.i q=.ou.s {Ow?,yo/,} 11_( c #/ oo1 oOe_//_ "/%8(_@0^ \e?ܡ/m/`‚a dBп?_o,ܞ`?Q /X4Āt>Q%^Y!~/ fe5hq^y}^K\٘"0357*? EKQ S'Y8egig2S7 0$.؉Y0/j$0`'H@Q!M!A GD0H\Pq JX E`FX H0Ffn"G6|%B!\`^8 8,@T A 9LJ٨<+ҋ-m꒕/LV|JM!8pVpA3=O_L\ ST\6|aLHlNKTuoY0U PX }P!X@XbjM@sUO{0| SPN`DX'ZdtZŢ׽F-=T ^ }FXNꍪCZ ళWyW@o@l*#77\ؘWm%BGOօS: @dfRqNsx_/r>@ 7$n `\H[ `@Ax۩?Z@a HߨYmn ;$8 <7Z;u\>a3| ph^oF8aX2HȐ#ըZO8@ (@@{"iVUa12A bϳ2@70N};*@@-2g:T !WG3 Moƀ!-w<$0D PAßq{6h$(QV$A2@x41S,;{< 375Q;b*tB'$7 AJ V4"b#;yO(37(=Ar¢pf)]d׊ ,&Mq&pDB"8GD(YBT[i &# gt|R &E8R-sL\ hJ Xּ$6un g2`*N Xe^]k[Sb7Ey{~ &xJ`p2Z`.Ix &gI { BaWnw2 jꏛ2aT:АELTz,5ljy p[ Plk]ZpY]*C m9>:}*mfQ<@K G=r7m[GO g0iQZզvJM N V`7$~|g<r{aү]snLmpvZ83"VӘۺ/}p7헢k_]ηnWg<|` 86n9|sKMu!a O8`!՜ ,^A\,3a2`G:~C-suZF8A}U[@0Wڲ8,ס3 ,`A~Zf9`5iښ؍9r`nN)T o(G$bB\yBsqK *DO7kY]P"w" ==E;5, 4B 3RBs ^DB1t. $8L3A4Dr @<\2EUt-D@-lrF}F!/Np4eGrd'J@{zTI}Tb JHJ X`r.IJݩJ4DMBV@@hMKM$OtN)L)I` @K]V P#t:`dΠBLPN@LHu )R $V@R15AEL^"QBJPJOuV2HTI&Xu)cuYYZi@QYU(5[r[AVu.>C 31'a""3y34 a,;'q4@Y(Ź1]`T+259(߹30- @yr 4Mk1 /A A /I '3;@%+GDrK:2;PK/T/((PK$AOEBPS/cluster.htm Using Oracle Clusterware to Manage Active Standby Pairs PK_qѮPK$A&OEBPS/img_text/cluster_activefail2.htm Description of the illustration cluster_activefail2.eps

The image shows a cluster in which the standby database is started on one of the extra nodes.

End of description.

Description of the illustration return_receipt.eps

The image shows operations between the application, the master database, the transaction log buffer, log files on disk, the subscriber database, a second transaction log buffer, and additional log files on disk.

The return receipt replication cycle is:

  1. Commit transaction and block thread. (Application to master database)

  2. Write update records to log. (Master to transaction log buffer)

  3. Flush batch of update records to disk. (Transaction log buffer to log files on disk)

  4. Send batch of update records to subscriber. (Transaction log buffer to subscriber database)

  5. Acknowledge receipt of batch. (Subscriber database to master database)

  6. Unblock thread. (Master to application)

  7. Write each received update record to log. (Subscriber to second transaction log buffer)

  8. In a separate thread, flush batch of updates to disk. (Transaction log buffer to second set of log files on disk)

End of description.

Description of the illustration genwkload.eps

The image shows applications sending updates to Database A and Database B. Each database replicates the updates to the other database.

End of description.

Description of the illustration scheme3.eps

The image shows that inserts into masterds are replicated to subscriberds. Each database has its own replication agent to perform the operation.

End of description.

Description of the illustration as_awt.eps

The image shows an application updating the active database, which has cache tables. The active database replicates to the standby database. The standby database sends AWT updates to the Oracle database. It also replicates the updates to two read-only subscribers, where the cache tables become non-cache tables.

End of description.

Description of the illustration cluster.eps

The image shows an active standby pair with one read-only subscriber in the same local network. The active database, the standby database and the read-only subscriber are on different nodes. There are two nodes that are not part of the active standby pair that are also running TimesTen. An application updates the active database. An application reads from the standby and the subscriber. All of the nodes are connected to shared storage.

End of description.

Description of the illustration as_awt_dr.eps

The image shows an application updating the active database, which has cache tables. The active database replicates the updates to a standby database. The standby database sends AWT updates to the Oracle database. The standby database also replicates updates to a read-only subscriber on a remote site. The read-only subscriber has cache tables. The read-only subscriber sends AWT updates to an Oracle database, also on the remote site.

End of description.

Description of the illustration propagation_tree.eps

The image shows an application sending updates to a master database. The master database replicates to three propagators. Each propagator replicates to four subscribers.

End of description.

Description of the illustration cluster_activeswitch.eps

The image shows a cluster where the former active node becomes the standby node.

End of description.

Description of the illustration cluster_activefail.eps

The image shows that the state of the standby database has changed to 'ACTIVE' and that the application is updating the new active database.

End of description.

Description of the illustration config_scenario3.eps

The image shows three situations: normal operation, failure of master, recovered master.

Normal operation: Users are connected to applications that update Database A and Database B, which are both masters. The masters replicate to each other.

Failure of master: All users now connect to the applications that update Database B. No replication occurs.

Recovered master: Some users reconnect to applications that update Database A. Some applications send updates to Database A and some applications send updates to Database B. The masters replicate to each other.

End of description.

Description of the illustration async_cycle.eps

The image shows operations between the application, the master database, the transaction log buffer, log files on disk, the subscriber database, a second transaction log buffer, and additional log files on disk.

The default TimesTen replication cycle is:

  1. Commit transaction. (Application to master database)

  2. Write update records to log. (Master database to transaction log buffer)

  3. Flush batch of update records to disk. (Transaction log buffer to log files on disk)

  4. Send batch of update records to subscriber. (Transaction log buffer to subscriber database)

  5. Acknowledge receipt of batch. (Subscriber database to master database)

  6. Write each received update record to log. (Subscriber database to second transaction log buffer)

  7. In a separate thread, flush batch of updates to disk. (Transaction log buffer to second set of log files on disk)

End of description.

Description of the illustration activestandby_sub.eps

The image shows an active database called master1. It replicates to a standby database called master2. The standby database replicates to a read-only subscriber called subscriber1.

End of description.

Description of the illustration active_standby.eps

The image shows an application that sends updates to the active database. The updates are replicated to the standby database. The updates are propagated from the standby database to several read-only subscribers.

End of description.

Description of the illustration switch.eps

The image shows Host A and Host B. Each host contains a master database, a cluster agent, and a database monitor. Host A contains the active service. Host B contains the standby service.

End of description.

Description of the illustration as_readonly.eps

The image shows an application that updates the Oracle database. The Oracle database sends autorefresh updates to the cache tables in the active database. The active database replicates the updates to the standby database. The standby database replicates to two read-only subscribers with non-cache tables.

End of description.

Description of the illustration propagation2.eps

The image shows an application sending updates to a master database. The master database replicates to a propagator database over an intranet connection. The propagator replicates to four subscribers.

End of description.

Description of the illustration cluster_standbyfail.eps

The image shows a cluster in which the standby database is started on one of the extra nodes.

End of description.

Description of the illustration split_wkload.eps

The image shows "applications for Chicago" sending updates to Database A and "applications for New York" sending updates to Database B. The Chicago updates are replicated to Database B. The New York updates are replicated to Database A.

End of description.

Description of the illustration simplerep.eps

The image shows an application sending updates to the master database. The update records are replicated to the subscriber database.

End of description.

Description of the illustration propagation1.eps

The image shows an application sending updates to the master database. The master database replicates the updates to four subscribers over an intranet connection.

End of description.

Description of the illustration delete_conflict.eps

The image shows a delete/update conflict:

Steps | On database A | On database B
Initial condition | Row 4 exists | Row 4 exists
The applications issue a conflicting update and delete on Row 4 simultaneously | Update Row 4 | Delete Row 4
The replication agent on each database sends the delete or update to the other | Replicate update to database B | Replicate delete to database A
Each database now has the delete or update from the other database | Replication says to delete Row 4 | Replication says to update Row 4

End of description.

Description of the illustration return_2safe.eps

The image shows operations between applications, the master database, the transaction log buffer, and the subscriber database.

The return twosafe cycle is:

  1. Block thread. (Application to master database)

  2. Write update records to log. (Master database to transaction log buffer)

  3. Send batch of update records to subscriber. (Transaction log buffer to subscriber database)

  4. Commit transaction on the subscriber.

  5. Acknowledge commit of transaction on the subscriber. (Subscriber to master database)

  6. Commit transaction on the master.

  7. Unblock thread. (Master database to application)

End of description.

Description of the illustration timestamp.eps

The image shows this update conflict:

Steps | On Database A | On Database B
Initial condition. | X is 1. | X is 1.
The application on each database updates X simultaneously. | Set X=2. | Set X=100.
The replication agent on each database sends its update to the other database. | Replicate X to Database B. | Replicate X to Database A.
Each database now has the other's update. | Replication says to set X=100. | Replication says to set X=2.

End of description.

Description of the illustration scheme0.eps

The image shows a master database called masterds and a subscriber database called subscriberds. An application updates the master database. The master replicates one table called tab to the subscriber.

End of description.

Description of the illustration simplerep2.eps

The image shows two cases of replicating elements to subscribers.

The case on the left shows applications sending updates to the master database. The same elements are replicated to both subscriber databases.

The case on the right shows application sending updates to the master database. The master database replicates two elements to one subscriber database and two different elements to the other subscriber database.

End of description.

Description of the illustration config_scenarios1.eps

The image shows three situations: normal operation, failure of master, recovered master.

During normal operation, users are connected to applications that update Database A, the master. The master replicates to Database B, the subscriber.

Failure of the master: When Database A fails, users are still connected to the application, but it cannot update Database A. No replication to the subscriber occurs.

Recovered master: Users are connected to the application, which sends updates to Database A. Database A replicates to Database B, the subscriber.

End of description.


8 TimesTen Configuration Attributes for Oracle Clusterware

The attributes defined in this chapter are used to set up TimesTen active standby pairs that are managed by Oracle Clusterware. These attributes are specified in the cluster.oracle.ini file. The ttCWAdmin utility creates and administers active standby pairs based on the information in the cluster.oracle.ini file.
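
A cluster.oracle.ini entry is keyed by a DSN and lists one attribute setting per line, in the same style as an odbc.ini entry. As a minimal sketch (the DSN basicDSN and the host names here are hypothetical; each attribute is described in the sections that follow), an active standby pair with one subscriber might be described as:

[basicDSN]
MasterHosts=host1,host2
SubscriberHosts=host3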

List of attributes

This section lists the TimesTen configuration attributes for Oracle Clusterware in these tables:

Table 8-1 Required attributes

Name | Description | Default

MasterHosts | Lists host names that may contain master databases in an active standby pair scheme. | None


Table 8-2 Conditionally required attributes

Name | Description | Default

AppCheckCmd | Command line for checking the status of a TimesTen application that is managed by Oracle Clusterware | None

AppName | The name of a TimesTen application that is managed by Oracle Clusterware | None

AppStartCmd | Command line for starting a TimesTen application that is managed by Oracle Clusterware | None

AppStopCmd | Command line for stopping a TimesTen application that is managed by Oracle Clusterware | None

AppType | The database to which the application should link. | None

CacheConnect | Specifies whether the active standby pair replicates cache groups. | N

GridPort | Lists the port numbers used by the cache grid agents for the active database and the standby database in an active standby pair that is a cache grid member. | None

MasterVIP | A list of two virtual IP addresses that can be associated with the master databases. | None

RemoteSubscriberHosts | A list of subscriber hosts that are not part of the cluster. | None

RepBackupDir | The directory to which the active database is backed up. | None

SubscriberHosts | List of host names that can contain subscriber databases. | None

SubscriberVIP | The list of virtual IP addresses that can be associated with subscriber databases. | None

VIPInterface | The name of the public network adapter that will be used for virtual IP addresses on each host. | None

VIPNetMask | The netmask of the virtual IP addresses. | None


Table 8-3 Optional attributes

Name | Description | Default

AppFailoverDelay | The number of seconds that the Oracle Clusterware resource that monitors the application waits after a failure is detected before performing a failover. | 0

AppFailureInterval | The interval in seconds before which Oracle Clusterware stops a TimesTen application if the application has exceeded the number of failures specified by the Oracle Clusterware FAILURE_THRESHOLD resource attribute. | 60

AppFailureThreshold | The number of consecutive Oracle Clusterware resource failures that Oracle Clusterware tolerates for the action script for an application within an interval equal to 10 * AppScriptTimeout. | 2

AppRestartAttempts | The number of times that Oracle Clusterware attempts to restart the TimesTen application on the current host before moving the application to another host. | 100

AppScriptTimeout | The number of seconds the TimesTen application container waits for the action scripts to complete for a specific application. | 60

AppUptimeThreshold | The number of seconds that a TimesTen application must be up before Oracle Clusterware considers the application to be stable. | 600

AutoRecover | Specifies whether an active database should be automatically recovered from a backup if both master databases fail. | No

DatabaseFailoverDelay | The number of seconds that Oracle Clusterware waits before migrating a database to a new host after a failure. | 60

FailureThreshold | The number of consecutive failures of resources managed by Oracle Clusterware that are tolerated within 10 seconds before the active standby pair is considered failed and a new active standby pair is created on spare hosts using the automated backup. | 2

MasterStoreAttribute | A list of all desired replication scheme STORE attributes on master databases. | None

RepBackupPeriod | The number of seconds between each backup of the active database. | 0 (disabled)

RepDDL | A SQL construct of the active standby pair scheme. | None

RepFullBackupCycle | The number of times an incremental backup occurs between full backups. | 5

ReturnServiceAttribute | The return service attribute of the active standby pair scheme. | None

SubscriberStoreAttribute | The list of all desired replication scheme STORE attributes for the subscriber database. | None

TimesTenScriptTimeout | The number of seconds that Oracle Clusterware waits for the monitor process to start before assuming a failure. | 1209600 (14 days)



Required attributes

These attributes must be present for each DSN in the cluster.oracle.ini file. They have no default values.

The required attributes are listed in Table 8-1, "Required attributes" and described in detail in this section.


MasterHosts

This attribute lists the host names that can contain master databases in the active standby pair. The first host listed has the active database when the cluster is started initially and after restarts. There are exceptions to the designated order:

  • If there are already active and standby databases on specific nodes when the cluster is stopped, then the active and standby databases remain on those hosts when the cluster is restarted.

  • If the cluster is started and the only existing database is on a host that is not listed first in MasterHosts, then that host will be configured with the active database. The first host listed for MasterHosts will be the standby database.

If the scheme contains no virtual IP addresses, only two master hosts are allowed.

Setting

Set MasterHosts as follows:

How the attribute is represented | Setting
MasterHosts | A comma-separated list of host names. The first host listed becomes the initial active database in the active standby pair.
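
For example, the following hypothetical setting names four hosts: host1 holds the initial active database, host2 the standby, and host3 and host4 remain available as spares (more than two master hosts requires virtual IP addresses):

MasterHosts=host1,host2,host3,host4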


Conditionally required attributes

These attributes may be required depending on the desired Oracle Clusterware configuration. They have no default values. The conditionally required attributes are listed in Table 8-2, "Conditionally required attributes" and described in detail in this section.


AppCheckCmd

This attribute specifies the full command line for executing a user-supplied script or program that checks the status of the TimesTen application specified by AppName. It must include the full path name of the executable. If there are spaces in the path name, enclose the path name in double quotes.

The command should be written to return 0 when the application is running and a nonzero number when the application is not running. When Oracle Clusterware detects a nonzero value, it takes action to recover the failed application.

Setting

Set AppCheckCmd as follows:

How the attribute is represented | Setting
AppCheckCmd | A string representing the command line for executing an application that checks the status of the application specified by AppName.

Examples

On UNIX:

AppCheckCmd=/mycluster/reader/app_check.sh check

On Windows:

AppCheckCmd="C:\Program Files\UserApps\UpdateApp.exe" -dsn myDSN -check

AppFailureInterval

This attribute sets the interval in seconds before which Oracle Clusterware stops a TimesTen application if the application has exceeded the number of failures specified by the Oracle Clusterware FAILURE_THRESHOLD resource attribute. If the value is zero, then failure tracking is disabled.

For more information about the Oracle Clusterware FAILURE_THRESHOLD resource attribute, see Oracle Clusterware Administration and Deployment Guide.

Setting

Set AppFailureInterval as follows:

How the attribute is represented | Setting
AppFailureInterval | The number of seconds in the interval before Oracle Clusterware stops an application. The default is 60. For example:

AppFailureInterval=120



AppName

This attribute specifies the name of a TimesTen application managed by Oracle Clusterware. Oracle Clusterware uses the application name to name the corresponding resource. Any description of an application in the cluster.oracle.ini file must begin with this attribute.

Setting

Set AppName as follows:

How the attribute is represented | Setting
AppName | A string representing the name of the application. For example, testApp.
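
Because AppName begins an application description, the attributes for one application are grouped together under it in the DSN entry. A hypothetical reader application might be described as follows (the name and script paths are illustrative):

AppName=reader
AppType=Standby
AppStartCmd=/mycluster/reader/app_start.sh start
AppStopCmd=/mycluster/reader/app_stop.sh stop
AppCheckCmd=/mycluster/reader/app_check.sh check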


AppRestartAttempts

This attribute specifies the number of times that Oracle Clusterware attempts to restart the TimesTen application on the current host before moving the application to another host.

Setting

Set AppRestartAttempts as follows:

How the attribute is represented | Setting
AppRestartAttempts | The number of restart attempts. The default is 100. For example:
AppRestartAttempts=30


AppStartCmd

This attribute specifies the command line that starts the TimesTen application specified by AppName. It must include the full path name of the executable. If there are spaces in the path name, enclose the path name in double quotes.

Setting

Set AppStartCmd as follows:

How the attribute is represented | Setting
AppStartCmd | A string that represents the command line for starting the application specified by AppName.

Examples

On UNIX:

AppStartCmd=/mycluster/reader/app_start.sh start

On Windows:

AppCheckCmd="C:\Program Files\UserApps\UpdateApp.exe" -dsn myDSN -start

AppStopCmd

This attribute specifies the command line that stops the TimesTen application specified by AppName. It must include the full path name of the executable. If there are spaces in the path name, enclose the path name in double quotes.

Setting

Set AppStopCmd as follows:

How the attribute is represented | Setting
AppStopCmd | A string that represents the command line for stopping the application specified by AppName.

Examples

On UNIX:

AppStopCmd=/mycluster/reader/app_stop.sh stop

On Windows:

AppCheckCmd="C:\Program Files\UserApps\UpdateApp.exe" -dsn myDSN -stop

AppType

This attribute determines the hosts on which the TimesTen application should start.

Setting

Set AppType as follows:

How the attribute is represented | Setting
AppType | Active - The application starts on the active database of an active standby pair.

Standby - The application starts on the standby database of an active standby pair. If the standby database fails, applications linked to it migrate to the active database until a new standby database is available.

DualMaster - The application starts on both the active host and the standby host. The failure of the application on the active host causes the active database and all other applications on the host to fail over to the standby host.

Subscriber - The application starts on all subscriber databases.

Subscriber[index] - The application starts on a subscriber database. The subscriber host used is the host occupying position index in either the SubscriberHosts attribute or the SubscriberVIP attribute, depending on whether virtual IP addresses are used. For a single subscriber, use Subscriber[1]. If no index is specified, TimesTen assumes that the application links to all subscribers.
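
For example, under the hypothetical settings below, the application attaches to the subscriber database on subhost2, the second host listed in SubscriberHosts:

SubscriberHosts=subhost1,subhost2
AppType=Subscriber[2]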



AppUptimeThreshold

This attribute specifies the value for the Oracle Clusterware UPTIME_THRESHOLD resource attribute. The value represents the number of seconds that a TimesTen application must be up before Oracle Clusterware considers the application to be stable.

For more information about UPTIME_THRESHOLD, see Oracle Clusterware Administration and Deployment Guide.

Setting

Set AppUptimeThreshold as follows:

How the attribute is represented | Setting
AppUptimeThreshold | Number of seconds. The default is 600. For example:
AppUptimeThreshold=60


CacheConnect

If the active standby pair replicates cache groups, set this attribute to Y. If you specify Y, Oracle Clusterware assumes that TimesTen is connected to an Oracle database and prompts for the Oracle password.

Setting

Set CacheConnect as follows:

How the attribute is represented | Setting
CacheConnect | A value of Y (yes) or N (no). Default is N.


GridPort

This attribute lists the port numbers used by the cache grid agents for the active database and the standby database in an active standby pair that is a cache grid member. The port numbers are separated by a comma. This is a mandatory parameter when global cache groups are present.

Setting

Set GridPort as follows:

How the attribute is represented | Setting
GridPort | Two port numbers separated by a comma. For example:

GridPort=16101,16102



MasterVIP

This attribute is a list of the two virtual IP (VIP) addresses associated with two master databases. This is used for advanced availability. This attribute is required if you intend to use virtual IP addresses.

Setting

Set MasterVIP as follows:

How the attribute is represented | Setting
MasterVIP | A comma-separated list of the two virtual IP addresses associated with the master databases.


RemoteSubscriberHosts

This attribute contains a list of subscriber hosts that are part of the active standby pair replication scheme but are not managed by Oracle Clusterware.

Setting

Set RemoteSubscriberHosts as follows:

How the attribute is represented | Setting
RemoteSubscriberHosts | A comma-separated list of subscriber hosts that are not managed by Oracle Clusterware.
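
For example, assuming a hypothetical subscriber on a remote disaster-recovery host:

RemoteSubscriberHosts=drhost1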


RepBackupDir

This attribute indicates the directory where the backup of the active database is stored. This must be a directory in a shared file system that every node in the cluster can access. This attribute is required only if RepBackupPeriod is set to a value other than 0.

The directory must be on a partition that is shared by all hosts in the cluster. On UNIX platforms, the partition must be NFS or OCFS (Oracle Cluster File System). On Windows, it must be an OCFS partition.

If you want to enable backup, install OCFS on the shared storage during the Oracle Clusterware installation process. You can then use this shared storage for backups of an active standby pair.

See "Recovering from permanent failure of both master nodes" and "Failure and recovery for active standby pair grid members" for restrictions on backups.

Setting

Set RepBackupDir as follows:

How the attribute is represented | Setting
RepBackupDir | Full path name to the replication backup directory.
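
For example, the following hypothetical settings back up the active database to a shared partition once an hour; RepBackupDir takes effect only when RepBackupPeriod is set to a nonzero value:

RepBackupDir=/shared/backups
RepBackupPeriod=3600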


SubscriberHosts

Lists the host names that can contain subscriber databases. If virtual IP addresses are used, this list can overlap with the master host list provided by the MasterHosts attribute.

If the active standby pair is configured with subscribers, this attribute is required. It has no default value.

Setting

Set SubscriberHosts as follows:

How the attribute is represented | Setting
SubscriberHosts | A comma-separated list of host names. If virtual IP addresses are used, the order of the list determines the order in which hosts are assigned to the subscriber virtual IP addresses.

If virtual IP addresses are not used, the order is used to determine which application with an AppType of Subscriber[index] is attached to the subscriber database on a specific host. Also, the number of subscriber hosts specified is the number of subscribers that are part of the active standby pair. A subscriber is brought up on every subscriber host.
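
For example, the following hypothetical setting brings up two subscribers, one on each listed host:

SubscriberHosts=subhost1,subhost2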



SubscriberVIP

This attribute is a list of the virtual IP addresses associated with the subscriber databases. This is used for advanced availability. This attribute is required if you intend to use virtual IP addresses.

Setting

Set SubscriberVIP as follows:

How the attribute is represented | Setting
SubscriberVIP | One or more virtual IP addresses. These addresses are mapped to SubscriberHosts. The number of subscriber virtual IP addresses determines the number of subscribers that are brought up as part of the active standby pair. The order of subscriber virtual IP addresses is used to determine which application with an AppType of Subscriber[index] is attached to the database for a specific subscriber.


VIPInterface

This attribute is the name of the public network adapter used for virtual IP addresses on each host. This attribute is required if you intend to use virtual IP addresses.

Setting

Set VIPInterface as follows:

How the attribute is represented | Setting
VIPInterface | A string representing a network adapter.


VIPNetMask

This attribute is the netmask of the virtual IP addresses. This attribute is required if you intend to use virtual IP addresses.

Setting

Set VIPNetMask as follows:

How the attribute is represented: VIPNetMask
Setting: An IP netmask.
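
For example, these required attributes might appear together in a hypothetical cluster.oracle.ini entry for advanced availability. All of the host names, addresses, interface names, and paths shown here are illustrative only:

[advancedDSN]
MasterHosts=host1,host2,host3
MasterVIP=192.168.1.1,192.168.1.2
RemoteSubscriberHosts=host6
RepBackupDir=/shared/backupdir
SubscriberHosts=host4,host5
SubscriberVIP=192.168.1.3,192.168.1.4
VIPInterface=eth0
VIPNetMask=255.255.255.0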


Optional attributes

These attributes are optional and have no default values. The optional attributes are listed in Table 8-3, "Optional attributes" and described in detail in this section.


AppFailoverDelay

This attribute specifies the number of seconds that the process that is monitoring the application waits after a failure is detected before performing a failover. The default is 0.

Setting

Set AppFailoverDelay as follows:

How the attribute is represented: AppFailoverDelay
Setting: An integer representing the number of seconds that the process that is monitoring the application waits after a failure is detected before performing a failover. The default is 0.


AppFailureThreshold

This attribute specifies the number of consecutive failures that Oracle Clusterware tolerates for the action script for an application within an interval equal to 10 * AppScriptTimeout. The default is 2.

Setting

Set AppFailureThreshold as follows:

How the attribute is represented: AppFailureThreshold
Setting: An integer indicating the number of consecutive failures that Oracle Clusterware tolerates for the action script for an application. The default is 2.


AppScriptTimeout

This attribute indicates the number of seconds that the TimesTen application monitor process waits for the start action script and the stop action script to complete for a specific application. The check action script has a nonconfigurable timeout of five seconds and is not affected by this attribute.

Setting

Set AppScriptTimeout as follows:

How the attribute is represented: AppScriptTimeout
Setting: An integer representing the number of seconds the TimesTen application container waits for start and stop action scripts to complete for a specific application. The default is 60.
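
A hypothetical cluster.oracle.ini fragment that tunes these application monitoring attributes might look like the following (the values are illustrative only):

AppFailoverDelay=10
AppFailureThreshold=3
AppScriptTimeout=120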


AutoRecover

Specifies whether Oracle Clusterware automatically recovers the active database from the backup in the case of a failure of both masters.

If recovery is not automated (AutoRecover=N), the database can be recovered using the ttCWAdmin -restore command.

You cannot use AutoRecover if you are using cache groups in your configuration or if a cache grid is configured.

Setting

Set AutoRecover as follows:

How the attribute is represented: AutoRecover
Setting: Y - Oracle Clusterware automatically recovers the active database from the backup if both masters fail.

N - In the case of the failure of both masters, you must recover manually. This is the default.



DatabaseFailoverDelay

This attribute specifies the number of seconds that Oracle Clusterware waits before migrating a database to a new host after a failure. Oracle Clusterware does not relocate a database if the database comes up during the delay period. This is applicable when advanced availability is configured. The default is 60 seconds.

Setting

Set DatabaseFailoverDelay as follows:

How the attribute is represented: DatabaseFailoverDelay
Setting: An integer representing the number of seconds that Oracle Clusterware waits before migrating a database to a new host after a failure. The default is 60.


FailureThreshold

This attribute specifies the number of consecutive failures of resources managed by Oracle Clusterware that are tolerated within 10 seconds before the active standby pair is considered failed and a new active standby pair is created on spare hosts using the automated backup. A spare node is only an option when using virtual IP addresses.

Oracle Clusterware tries to perform a duplicate for the active standby pair when a single failure occurs; it tries to perform a restoration if more than a single failure occurs.

This value is ignored for basic availability, because a spare node can be configured only when at least one virtual IP address is configured. It is also ignored for advanced availability (which does include the configuration of at least one virtual IP address) when RepBackupPeriod is set to 0.


Note:

TimesTen tolerates only one failure of a backup resource, regardless of the setting for this attribute.

Setting

Set FailureThreshold as follows:

How the attribute is represented: FailureThreshold
Setting: An integer representing the number of consecutive failures of resources managed by Oracle Clusterware that are tolerated within 10 seconds before the active standby pair is considered failed and a new active standby pair is created on spare hosts using the automated backup. The default is 2.


MasterStoreAttribute

This attribute indicates the desired replication scheme STORE attributes for the master databases. The STORE attributes apply to both the active and standby databases. The STORE clause for replication schemes is defined in Oracle TimesTen In-Memory Database SQL Reference.

This attribute is not required when RepDDL is configured.

If this attribute is not set, the STORE attributes take their default values. See "Setting STORE attributes".

Setting

Set MasterStoreAttribute as follows:

How the attribute is represented: MasterStoreAttribute
Setting: The desired replication scheme STORE attributes for the master databases. For example: PORT 20000 TIMEOUT 60.


RepBackupPeriod

This attribute indicates the number of seconds between each backup of the active database. If this attribute is set to a value greater than 0, you must also specify a backup directory by setting RepBackupDir.

See "Recovering from permanent failure of both master nodes" and "Failure and recovery for active standby pair grid members" for restrictions on backups.

Setting

Set RepBackupPeriod as follows:

How the attribute is represented: RepBackupPeriod
Setting: An integer indicating the number of seconds between each backup of the active database. A value of 0 disables the backup process. The default is 0.
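
For example, to back up the active database once an hour and allow automatic recovery from that backup, a hypothetical cluster.oracle.ini fragment might include the following (the values are illustrative only; recall that AutoRecover cannot be used with cache groups or a cache grid):

AutoRecover=Y
RepBackupDir=/shared/backupdir
RepBackupPeriod=3600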


RepDDL

This attribute represents the SQL statement that creates the active standby pair. Use this attribute only in special circumstances. For example, you must specify RepDDL if you need to exclude tables and sequences from the active standby pair.

If RepDDL is set, do not set these attributes:

  • MasterStoreAttribute

  • SubscriberStoreAttribute

  • ReturnServiceAttribute

Replace the database file name prefix in the SQL statement with the <DSN> macro. Use the <MASTERHOST[1]>, <MASTERHOST[2]> and <SUBSCRIBERHOST[n]> macros instead of the host names.

There is no default value for RepDDL.

This example sets RepDDL for two master databases:

RepDDL=CREATE ACTIVE STANDBY PAIR <DSN> ON <MASTERHOST[1]>, <DSN> ON <MASTERHOST[2]>

See "Using the RepDDL attribute" for additional examples.

You do not usually need to set the ROUTE clause in RepDDL because the transmitter of the replication agent automatically obtains the private and public network interfaces that Oracle Clusterware uses. However, if hosts have network connectivity for replication schemes that are not managed by Oracle Clusterware, then RepDDL needs to include the ROUTE clause.

If this attribute is used, each STORE clause must be followed by one of the following pseudo host names (see the sketch after this list):

  • ActiveHost

  • ActiveVIP

  • StandbyHost

  • StandbyVIP

  • SubscriberHost

  • SubscriberVIP
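
For illustration only, a RepDDL value whose STORE clauses use the pseudo host names might be sketched as follows; the PORT and TIMEOUT values are hypothetical, and "Using the RepDDL attribute" contains the authoritative examples:

RepDDL=CREATE ACTIVE STANDBY PAIR <DSN> ON <MASTERHOST[1]>, <DSN> ON <MASTERHOST[2]> STORE <DSN> ON ActiveHost PORT 21000 TIMEOUT 30 STORE <DSN> ON StandbyHost PORT 20000 TIMEOUT 30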

Setting

Set RepDDL as follows:

How the attribute is represented: RepDDL
Setting: Creates an active standby pair by issuing a CREATE ACTIVE STANDBY PAIR statement. There is no default value.


RepFullBackupCycle

This attribute specifies the number of incremental backups between full backups. The number of incremental backups depends on the capacity of the shared storage.

Setting this attribute can impact performance. There is a trade-off between the storage capacity and the time consumption for backup. An incremental backup can be performed much faster than a full backup. However, storage consumption increases until a full backup is performed.

See "Recovering from permanent failure of both master nodes" and "Failure and recovery for active standby pair grid members" for restrictions on backups.

Setting

Set RepFullBackupCycle as follows:

How the attribute is represented: RepFullBackupCycle
Setting: An integer value representing the number of incremental backups to perform between full backups. The default is 5.


ReturnServiceAttribute

This attribute specifies the return service for the active standby replication scheme. See "Using a return service".

If no value is specified for this attribute, the active standby pair is configured with no return service.

Setting

Set ReturnServiceAttribute as follows:

How the attribute is represented: ReturnServiceAttribute
Setting: The type of return service. For example: RETURN RECEIPT. There is no default value.


SubscriberStoreAttribute

This attribute indicates the replication scheme STORE attributes of subscriber databases. The STORE attributes apply to all subscribers. The STORE clause for replication schemes is defined in Oracle TimesTen In-Memory Database SQL Reference.

This attribute is not required when RepDDL is present.

If this attribute is not set, the STORE attributes take their default values. See "Setting STORE attributes".

Setting

Set SubscriberStoreAttribute as follows:

How the attribute is represented: SubscriberStoreAttribute
Setting: The list of STORE attributes and their values for the subscriber databases. For example: PORT 20000 TIMEOUT 60.
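
Taken together, the store and return service attributes might appear in a hypothetical cluster.oracle.ini fragment as follows (the values are illustrative only):

MasterStoreAttribute=PORT 21000 TIMEOUT 30
SubscriberStoreAttribute=PORT 21000 TIMEOUT 30
ReturnServiceAttribute=RETURN RECEIPT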



TimesTenScriptTimeout

This attribute specifies the number of seconds that Oracle Clusterware waits for the monitor process to start before assuming a failure.

Oracle TimesTen recommends setting a value of several hours because the action script may take a long time to duplicate the active database. The default is 1209600 seconds (14 days).

Setting

Set TimesTenScriptTimeout as follows:

How the attribute is represented: TimesTenScriptTimeout
Setting: An integer representing the number of seconds that Oracle Clusterware waits for the monitor process to start before assuming a failure. The default is 1209600 seconds (14 days).
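
For example, to wait four hours instead of the default (an illustrative value consistent with the recommendation above):

TimesTenScriptTimeout=14400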


5 Administering an Active Standby Pair with Cache Groups

You can replicate tables within cache groups as long as they are configured within an active standby pair.


Note:

For information about managing failover and recovery automatically, see Chapter 7, "Using Oracle Clusterware to Manage Active Standby Pairs".

The following sections describe how to administer an active standby pair that replicates cache groups:

Active standby pairs with cache groups

An active standby pair that replicates a read-only cache group or an asynchronous writethrough (AWT) cache group can change the role of the cache group automatically as part of failover and recovery. This helps ensure high availability of cache instances with minimal data loss. See "Replicating an AWT cache group" and "Replicating a read-only cache group".


Note:

TimesTen does not support replication of a user managed cache group if it is defined with either the AUTOREFRESH or PROPAGATE cache table attributes.

You can also create a special disaster recovery read-only subscriber when you set up active standby replication of an AWT cache group. This special subscriber, located at a remote disaster recovery site, can propagate updates to a second Oracle database, also located at the disaster recovery site. See "Using a disaster recovery subscriber in an active standby pair".

You cannot use an active standby pair to replicate synchronous writethrough (SWT) cache groups. If you are using an active standby pair to replicate a database with SWT cache groups, you must either drop or exclude the SWT cache groups.

Setting up an active standby pair with a read-only cache group

This section describes how to set up an active standby pair that replicates cache tables in a read-only cache group. The active standby pair used as an example in this section is not a cache grid member.

Before you create a database, see the information in these sections:

To set up an active standby pair that replicates a local read-only cache group, complete the following tasks:

  1. Create a cache administration user in the Oracle database. See "Create users in the Oracle database" in Oracle In-Memory Database Cache User's Guide.

  2. Create a database. See "Create a DSN for the TimesTen database" in Oracle In-Memory Database Cache User's Guide.

  3. Set the cache administration user ID and password by calling the ttCacheUidPwdSet built-in procedure. See "Set the cache administration user name and password in the TimesTen database" in Oracle In-Memory Database Cache User's Guide. For example:

    Command> call ttCacheUidPwdSet('orauser','orapwd');
    
  4. Start the cache agent on the database. Use the ttCacheStart built-in procedure or the ttAdmin -cachestart utility.

    Command> call ttCacheStart;
    
  5. Use the CREATE CACHE GROUP statement to create the read-only cache group. For example:

    Command> CREATE READONLY CACHE GROUP readcache
           > AUTOREFRESH INTERVAL 5 SECONDS
           > FROM oratt.readtab
           > (keyval NUMBER NOT NULL PRIMARY KEY, str VARCHAR2(32));
    
  6. Ensure that the autorefresh state is set to PAUSED. The autorefresh state is PAUSED by default after cache group creation. You can verify the autorefresh state by executing the ttIsql cachegroups command:

    Command> cachegroups;
    
  7. Create the replication scheme using the CREATE ACTIVE STANDBY PAIR statement.

    For example, suppose master1 and master2 are defined as the master databases. sub1 and sub2 are defined as the subscriber databases. The databases reside on node1, node2, node3, and node4. The return service is RETURN RECEIPT. The replication scheme can be specified as follows:

    Command> CREATE ACTIVE STANDBY PAIR master1 ON "node1", master2 ON "node2"
           > RETURN RECEIPT
           > SUBSCRIBER sub1 ON "node3", sub2 ON "node4"
           > STORE master1 ON "node1" PORT 21000 TIMEOUT 30
           > STORE master2 ON "node2" PORT 20000 TIMEOUT 30;
    
  8. Set the replication state to ACTIVE by calling the ttRepStateSet built-in procedure on the active database (master1). For example:

    Command> call ttRepStateSet('ACTIVE');
    
  9. Set up the replication agent policy for master1 and start the replication agent. See "Starting and stopping the replication agents".

  10. Load the cache group by using the LOAD CACHE GROUP statement. This starts the autorefresh process. For example:

    Command> LOAD CACHE GROUP readcache COMMIT EVERY 256 ROWS;
    
  11. As the instance administrator, duplicate the active database (master1) to the standby database (master2). Use the ttRepAdmin -duplicate utility with the -keepCG option to preserve the cache group. Alternatively, you can use the ttRepDuplicateEx C function to duplicate the database. See "Duplicating a database". ttRepAdmin prompts for the values of -uid, -pwd, -cacheuid and -cachepwd.

    ttRepAdmin -duplicate -from master1 -host node1 -keepCG "DSN=master2;UID=;PWD="
    
  12. Set up the replication agent policy on master2 and start the replication agent. See "Starting and stopping the replication agents".

  13. The standby database enters the STANDBY state automatically. Wait for master2 to enter the STANDBY state. Call the ttRepStateGet built-in procedure to check the state of master2. For example:

    Command> call ttRepStateGet;
    
  14. Start the cache agent for master2 using the ttCacheStart built-in procedure or the ttAdmin -cacheStart utility. For example:

    Command> call ttCacheStart;
    
  15. As the instance administrator, duplicate the subscribers (sub1 and sub2) from the standby database (master2). Use the -noKeepCG command line option with ttRepAdmin -duplicate to convert the cache tables to normal TimesTen tables on the subscribers. ttRepAdmin prompts for the values of -uid and -pwd. See "Duplicating a database". For example:

    ttRepAdmin -duplicate -from master2 -host node2 -nokeepCG "DSN=sub1;UID=;PWD="
    
  16. Set up the replication agent policy on the subscribers and start the replication agent on each of the subscriber databases. See "Starting and stopping the replication agents".

Setting up an active standby pair with an AWT cache group

For detailed instructions for setting up an active standby pair with a global AWT cache group, see "Replicating cache tables" in Oracle In-Memory Database Cache User's Guide. The active standby pair in that section is a cache grid member.

Recovering from a failure of the active database

This section includes the following topics:

Recovering when the standby database is ready

This section describes how to recover the active database when the standby database is available and synchronized with the active database. It includes the following topics:

When replication is return receipt or asynchronous

Complete the following tasks:

  1. Stop the replication agent on the failed database if it has not already been stopped.

  2. On the standby database, execute ttRepStateSet('ACTIVE'). This changes the role of the database from STANDBY to ACTIVE. If you are replicating a read-only cache group, this action automatically causes the autorefresh state to change from PAUSED to ON for this database.

  3. On the new active database, execute ttRepStateSave('FAILED', 'failed_database','host_name'), where failed_database is the former active database that failed. This step is necessary for the new active database to replicate directly to the subscriber databases. During normal operation, only the standby database replicates to the subscribers.

  4. Stop the cache agent on the failed database if it is not already stopped.

  5. Destroy the failed database.

  6. Duplicate the new active database to the new standby database. You can use either the ttRepAdmin -duplicate utility or the ttRepDuplicateEx C function to duplicate a database. Use the -keepCG -recoveringNode command line options with ttRepAdmin to preserve the cache group. See "Duplicating a database".

  7. Set up the replication agent policy on the new standby database and start the replication agent. See "Starting and stopping the replication agents".

  8. Start the cache agent on the new standby database.

The standby database contacts the active database. The active database stops sending updates to the subscribers. When the standby database is fully synchronized with the active database, it enters the STANDBY state and starts sending updates to the subscribers. If you are replicating an AWT cache group, the new standby database takes over processing of the cache group automatically when it enters the STANDBY state.


Note:

You can verify that the standby database has entered the STANDBY state by using the ttRepStateGet built-in procedure.
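
For example, on the standby database (the output shown is representative of a database in the STANDBY state):

Command> call ttRepStateGet;
< STANDBY, NO GRID >
1 row found.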

When replication is return twosafe

Complete the following tasks:

  1. On the standby database, execute ttRepStateSet('ACTIVE'). This changes the role of the database from STANDBY to ACTIVE. If you are replicating a read-only cache group, this action automatically causes the autorefresh state to change from PAUSED to ON for this database.

  2. On the new active database, execute ttRepStateSave('FAILED', 'failed_database','host_name'), where failed_database is the former active database that failed. This step is necessary for the new active database to replicate directly to the subscriber databases. During normal operation, only the standby database replicates to the subscribers.

  3. Connect to the failed database. This triggers recovery from the local transaction logs. If database recovery fails, you must continue from Step 5 of the procedure for recovering when replication is return receipt or asynchronous. See "When replication is return receipt or asynchronous". If you are replicating a read-only cache group, the autorefresh state is automatically set to PAUSED.

  4. Verify that the replication agent for the failed database has restarted. If it has not restarted, then start the replication agent. See "Starting and stopping the replication agents".

  5. Verify that the cache agent for the failed database has restarted. If it has not restarted, then start the cache agent.

When the active database determines that it is fully synchronized with the standby database, the standby database enters the STANDBY state and starts sending updates to the subscribers. If you are replicating an AWT cache group, the new standby database takes over processing of the cache group automatically when it enters the STANDBY state.


Note:

You can verify that the standby database has entered the STANDBY state by using the ttRepStateGet built-in procedure.

Recovering when the standby database is not ready

Consider the following scenarios:

  • The standby database fails. The active database fails before the standby comes back up or before the standby has been synchronized with the active database.

  • The active database fails. The standby database becomes ACTIVE, and the rest of the recovery process begins. (See "Recovering from a failure of the active database".) The new active database fails before the new standby database is fully synchronized with it.

In both scenarios, the subscribers may have had more changes applied than the standby database.

When the active database fails and the standby database has not applied all of the changes that were last sent from the active database, there are two choices for recovery:

  • Recover the active master database from the local transaction logs.

  • Recover the standby master database from the local transaction logs.

The choice depends on which database is available and which is more up to date.

Recover the active database

  1. Connect to the failed active database. This triggers recovery from the local transaction logs. If you are replicating a read-only cache group, the autorefresh state is automatically set to PAUSED.

  2. Verify that the replication agent for the failed active database has restarted. If it has not restarted, then start the replication agent. See "Starting and stopping the replication agents".

  3. Execute ttRepStateSet('ACTIVE') on the newly recovered database. If you are replicating a read-only cache group, this action automatically causes the autorefresh state to change from PAUSED to ON for this database.

  4. Verify that the cache agent for the failed database has restarted. If it has not restarted, then start the cache agent.

  5. Duplicate the active database to the standby database. You can use either the ttRepAdmin -duplicate utility or the ttRepDuplicateEx C function to duplicate a database. Use the -keepCG command line option with ttRepAdmin to preserve the cache group. See "Duplicating a database".

  6. Set up the replication agent policy on the standby database and start the replication agent. See "Starting and stopping the replication agents".

  7. Wait for the standby database to enter the STANDBY state. Use the ttRepStateGet procedure to check the state.

  8. Start the cache agent on the standby database using the ttCacheStart procedure or the ttAdmin -cacheStart utility.

  9. Duplicate all of the subscribers from the standby database. See "Duplicating a master database to a subscriber". Use the -noKeepCG command line option with ttRepAdmin in order to convert the cache group to regular TimesTen tables on the subscribers.

  10. Set up the replication agent policy on the subscribers and start the agent on each of the subscriber databases. See "Starting and stopping the replication agents".

Recover the standby database

  1. Connect to the failed standby database. This triggers recovery from the local transaction logs. If you are replicating a read-only cache group, the autorefresh state is automatically set to PAUSED.

  2. If the replication agent for the standby database has automatically restarted, you must stop the replication agent. See "Starting and stopping the replication agents".

  3. If the cache agent has automatically restarted, stop the cache agent.

  4. Drop the replication configuration using the DROP ACTIVE STANDBY PAIR statement.

  5. Drop and re-create all cache groups using the DROP CACHE GROUP and CREATE CACHE GROUP statements.

  6. Re-create the replication configuration using the CREATE ACTIVE STANDBY PAIR statement.

  7. Execute ttRepStateSet('ACTIVE') on the master database, giving it the ACTIVE role. If you are replicating a read-only cache group, this action automatically causes the autorefresh state to change from PAUSED to ON for this database.

  8. Set up the replication agent policy and start the replication agent on the new active database. See "Starting and stopping the replication agents".

  9. Start the cache agent on the new active database.

  10. Duplicate the active database to the standby database. You can use either the ttRepAdmin -duplicate utility or the ttRepDuplicateEx C function to duplicate a database. Use the -keepCG command line option with ttRepAdmin to preserve the cache group. See "Duplicating a database".

  11. Set up the replication agent policy on the standby database and start the replication agent on the new standby database. See "Starting and stopping the replication agents".

  12. Wait for the standby database to enter the STANDBY state. Use the ttRepStateGet procedure to check the state.

  13. Start the cache agent for the standby database using the ttCacheStart procedure or the ttAdmin -cacheStart utility.

  14. Duplicate all of the subscribers from the standby database. See "Duplicating a master database to a subscriber". Use the -noKeepCG command line option with ttRepAdmin in order to convert the cache group to regular TimesTen tables on the subscribers.

  15. Set up the replication agent policy on the subscribers and start the agent on each of the subscriber databases. See "Starting and stopping the replication agents".

Failing back to the original nodes

After a successful failover, you may wish to fail back so that the active database and the standby database are on their original nodes. See "Reversing the roles of the active and standby databases" for instructions.

Recovering from a failure of the standby database

To recover from a failure of the standby database, complete the following tasks:

  1. Detect the standby database failure.

  2. If return twosafe service is enabled, the failure of the standby database may prevent a transaction in progress from being committed on the active database, resulting in error 8170, "Receipt or commit acknowledgement not returned in the specified timeout interval". If so, then call the ttRepSyncSet procedure with a localAction parameter of 2 (COMMIT) and commit the transaction again. For example:

    call ttRepSyncSet( null, null, 2);
    commit;
    
  3. Execute ttRepStateSave('FAILED','standby_database','host_name') on the active database. After this, as long as the standby database is unavailable, updates to the active database are replicated directly to the subscriber databases. Subscriber databases may also be duplicated directly from the active.

  4. If the replication agent for the standby database has automatically restarted, stop the replication agent. See "Starting and stopping the replication agents".

  5. If the cache agent has automatically restarted, stop the cache agent.

  6. Recover the standby database in one of the following ways:

    • Connect to the standby database. This triggers recovery from the local transaction logs.

    • Duplicate the standby database from the active database. You can use either the ttRepAdmin -duplicate utility or the ttRepDuplicateEx C function to duplicate a database. Use the -keepCG -recoveringNode command line options with ttRepAdmin to preserve the cache group. See "Duplicating a database".

    The amount of time that the standby database has been down and the amount of transaction logs that need to be applied from the active database determine the method of recovery that you should use.

  7. Set up the replication agent policy and start the replication agent on the standby database. See "Starting and stopping the replication agents".

  8. Start the cache agent.

The standby database enters the STANDBY state and starts sending updates to the subscribers after the active database determines that the two master databases have been synchronized and stops sending updates to the subscribers.


Note:

You can verify that the standby database has entered the STANDBY state by using the ttRepStateGet procedure.

Recovering from the failure of a subscriber database

If a subscriber database fails, then you can recover it by one of the following methods:

  • Connect to the failed subscriber. This triggers recovery from the local transaction logs. Start the replication agent and let the subscriber catch up.

  • Duplicate the subscriber from the standby database. You can use either the ttRepAdmin -duplicate utility or the ttRepDuplicateEx C function to duplicate a database. Use the -noKeepCG command line option with ttRepAdmin in order to convert the cache group to normal TimesTen tables on the subscriber. (See the example at the end of this section.)

If the standby database is down or in recovery, then duplicate the subscriber from the active database.

After the subscriber database has been recovered, then set up the replication agent policy and start the replication agent. See "Starting and stopping the replication agents".
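
For example, to duplicate the subscriber sub1 from the standby database master2 on host node2 (reusing the names from the setup example earlier in this chapter), the command might be:

ttRepAdmin -duplicate -from master2 -host node2 -nokeepCG "DSN=sub1;UID=;PWD="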

Reversing the roles of the active and standby databases

To change the role of the active database to standby and vice versa:

  1. Pause any applications that are generating updates on the current active database.

  2. Execute ttRepSubscriberWait on the active database, with the DSN and host of the current standby database as input parameters. It must return success (<00>). This ensures that all updates have been transmitted to the current standby database. (See the example after this procedure.)

  3. Stop the replication agent on the current active database. See "Starting and stopping the replication agents".

  4. If global cache groups are not present, stop the cache agent on the current active database. When global cache groups are present, set the autorefresh state to PAUSED.

  5. Execute ttRepDeactivate on the current active database. This puts the database in the IDLE state. If you are replicating a read-only cache group, this action automatically causes the autorefresh state to change from ON to PAUSED for this database.

  6. Execute ttRepStateSet('ACTIVE') on the current standby database. This database now acts as the active database in the active standby pair. If you are replicating a read-only cache group, this automatically causes the autorefresh state to change from PAUSED to ON for this database.

  7. Configure the replication agent policy as needed and start the replication agent on the former active database. Use the ttRepStateGet procedure to determine when the database's state has changed from IDLE to STANDBY. The database now acts as the standby database in the active standby pair.

  8. Start the cache agent on the former active database if it is not already running.

  9. Resume any applications that were paused in Step 1.
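
For example, if the current standby database is master2 on host node2 (hypothetical names), Step 2 might look like this, using a 120-second wait:

Command> call ttRepSubscriberWait(null, null, 'master2', 'node2', 120);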

Detection of dual active databases

See "Detection of dual active databases". There is no difference for active standby pairs that replicate cache groups.

Using a disaster recovery subscriber in an active standby pair

TimesTen active standby pair replication provides high availability by allowing for fast switching between databases within a data center. This includes the ability to automatically change which database propagates changes to an Oracle database using AWT cache groups. However, for additional high availability across data centers, you may require the ability to recover from a failure of an entire site, which can include a failure of both TimesTen master databases in the active standby pair as well as the Oracle database used for the cache groups.

You can recover from a complete site failure by creating a special disaster recovery read-only subscriber as part of the active standby pair replication scheme. The standby database sends updates to cache group tables on the read-only subscriber. This special subscriber is located at a remote disaster recovery site and can propagate updates to a second Oracle database, also located at the disaster recovery site. The disaster recovery subscriber can take over as the active in a new active standby pair at the disaster recovery site if the primary site suffers a complete failure. Any applications may then connect to the disaster recovery site and continue operating, with minimal interruption of service.

Requirements for using a disaster recovery subscriber with an active standby pair

To use a disaster recovery subscriber, you must:

  • Use an active standby pair configuration with AWT cache groups at the primary site. The active standby pair can also include read-only cache groups in the replication scheme. The read-only cache groups are converted to regular tables on the disaster recovery subscriber. The AWT cache group tables remain AWT cache group tables on the disaster recovery subscriber.

  • Have a continuous WAN connection from the primary site to the disaster recovery site. This connection should have at least enough bandwidth to guarantee that the normal volume of transactions can be replicated to the disaster recovery subscriber at a reasonable pace.

  • Configure an Oracle database at the disaster recovery site to include tables with the same schema as the database at the primary site. Note that this database is intended only for capturing the replicated updates from the primary site, and if any data exists in tables written to by the cache groups when the disaster recovery subscriber is created, that data is deleted.

  • Have the same cache group administrator user ID and password at both the primary and the disaster recovery site.

Though it is not absolutely required, you should have a second TimesTen database configured at the disaster recovery site. This database can take on the role of a standby database, in the event that the disaster recovery subscriber is promoted to an active database after the primary site fails.

Rolling out a disaster recovery subscriber

To create a disaster recovery subscriber, follow these steps:

  1. Create an active standby pair with AWT cache groups at the primary site. The active standby pair can also include read-only cache groups. The read-only cache groups are converted to regular tables when the disaster recovery subscriber is rolled out.

  2. Create the disaster recovery subscriber at the disaster recovery site using the ttRepAdmin utility with the -duplicate and -initCacheDR options. You must also specify the cache group administrator and password for the Oracle database at the disaster recovery site using the -cacheUid and -cachePwd options.

    If your database includes multiple cache groups, you may improve the efficiency of the duplicate operation by using the -nThreads option to specify the number of threads that are spawned to flush the cache groups in parallel. Each thread flushes an entire cache group to Oracle and then moves on to the next cache group, if any remain to be flushed. If a value is not specified for -nThreads, only one flushing thread is spawned.

    For example, duplicate the standby database mast2, on the system with the host name primary and the cache user ID system and password manager, to the disaster recovery subscriber drsub, and using two cache group flushing threads. ttRepAdmin prompts for the values of -uid, -pwd, -cacheUid and -cachePwd.

    ttRepAdmin -duplicate -from mast2 -host primary -initCacheDR -nThreads 2 
    "DSN=drsub;UID=;PWD=;"
    

    If you use the ttRepDuplicateEx function in C, you must set the TT_REPDUP_INITCACHEDR flag in ttRepDuplicateExArg.flags and may optionally specify a value for ttRepDuplicateExArg.nThreads4InitDR:

    int                 rc;
    ttUtilHandle        utilHandle;
    ttRepDuplicateExArg arg;
    memset( &arg, 0, sizeof( arg ) );
    arg.size = sizeof( ttRepDuplicateExArg );
    arg.flags = TT_REPDUP_INITCACHEDR;
    arg.nThreads4InitDR = 2;
    arg.uid="ttuser"
    arg.pwd="ttuser"
    arg.cacheuid = "system";
    arg.cachepwd = "manager";
    arg.localHost = "disaster";
    rc = ttRepDuplicateEx( utilHandle, "DSN=drsub",
                           "mast2", "primary", &arg );
    

    After the subscriber is duplicated, TimesTen automatically configures the replication scheme that propagates updates from the AWT cache groups to the Oracle database, truncates the tables in the Oracle database that correspond to the cache groups in TimesTen, and then flushes all of the data in the cache groups to the Oracle database.

  3. If you wish to set the failure threshold for the disaster recovery subscriber, call the ttCacheAWTThresholdSet built-in procedure and specify the number of transaction log files that can accumulate before the disaster recovery subscriber is considered either dead or too far behind to catch up. (See the example after this procedure.)

    If one or both master databases had a failure threshold configured before the disaster recovery subscriber was created, then the disaster recovery subscriber inherits the failure threshold value when it is created with the ttRepAdmin -duplicate -initCacheDR command. If the master databases have different failure thresholds, then the higher value is used for the disaster recovery subscriber.

    For more information about the failure threshold, see "Setting the log failure threshold".

  4. Start the replication agent for the disaster recovery subscriber using the ttRepStart procedure or the ttAdmin utility with the -repstart option. For example:

    ttAdmin -repstart drsub
    

    Updates are now replicated from the standby database to the disaster recovery subscriber, which then propagates the updates to the Oracle database at the disaster recovery site.
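
For example, the failure threshold call in Step 3 might specify that 10 transaction log files can accumulate (the threshold value shown is hypothetical):

Command> call ttCacheAWTThresholdSet(10);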

Switching over to the disaster recovery site

When the primary site has failed, you can switch over to the disaster recovery site in one of two ways. If your goal is to minimize risk of data loss at the disaster recovery site, you may roll out a new active standby pair using the disaster recovery subscriber as the active database. If the goal is to absolutely minimize the downtime of your applications, at the risk of data loss if the disaster recovery database later fails, you may instead choose to drop the replication scheme from the disaster recovery subscriber and use it as a single non-replicating database. You may deploy an active standby pair at the disaster recovery site later.

Creating a new active standby pair after switching to the disaster recovery site

  1. Any read-only applications may be redirected to the disaster recovery subscriber immediately. You must wait until Step 7 to redirect applications that update the database.

  2. Ensure that all of the recent updates to the cache groups have been propagated to the Oracle database using the ttRepSubscriberWait procedure or the ttRepAdmin command with the -wait option.

    call ttRepSubscriberWait( null, null, '_ORACLE', null, 600 );
    

    It must return success (<00>). If ttRepSubscriberWait returns 0x01, indicating a timeout, investigate to determine why the cache groups are not finished propagating before continuing to Step 3.

  3. Stop the replication agent on the disaster recovery subscriber using the ttRepStop procedure or the ttAdmin command with the -repstop option. For example, to stop the replication agent for the subscriber drsub, use:

    call ttRepStop;
    
  4. Drop the active standby pair replication scheme on the subscriber using the DROP ACTIVE STANDBY PAIR statement. For example:

    DROP ACTIVE STANDBY PAIR;
    
  5. If there are tables on the disaster recovery subscriber that were converted from read-only cache group tables on the active database, drop the tables on the disaster recovery subscriber.

  6. Create the read-only cache groups on the disaster recovery subscriber. Ensure that the autorefresh state is set to PAUSED.

  7. Create a new active standby pair replication scheme using the CREATE ACTIVE STANDBY PAIR statement, specifying the disaster recovery subscriber as the active database. For example, to create a new active standby pair with the former subscriber drsub as the active and the new database drstandby as the standby, and using the return twosafe return service, use:

    CREATE ACTIVE STANDBY PAIR drsub, drstandby RETURN TWOSAFE;
    
  8. Set the new active standby database to the ACTIVE state using the ttRepStateSet procedure. For example, on the database drsub in this example, execute:

    call ttRepStateSet( 'ACTIVE' );
    
  9. Any applications which must write to the TimesTen database may now be redirected to the new active database.

  10. If you are replicating a read-only cache group, load the cache group using the LOAD CACHE GROUP statement to begin the autorefresh process. You may also load the cache group if you are replicating an AWT cache group, although it is not required.

  11. Duplicate the active database to the standby database. You can use either the ttRepAdmin -duplicate utility or the ttRepDuplicateEx C function to duplicate a database. Use the -keepCG command line option with ttRepAdmin to preserve the cache group. See "Duplicating a database".

  12. Set up the replication agent policy on the standby database and start the replication agent. See "Starting and stopping the replication agents".

  13. Wait for the standby database to enter the STANDBY state. Use the ttRepStateGet procedure to check the state.

  14. Start the cache agent for the standby database using the ttCacheStart procedure or the ttAdmin -cacheStart utility.

  15. Duplicate all of the subscribers from the standby database. See "Duplicating a master database to a subscriber". Use the -noKeepCG command line option with ttRepAdmin in order to convert the cache group to regular TimesTen tables on the subscribers.

  16. Set up the replication agent policy on the subscribers and start the agent on each of the subscriber databases. See "Starting and stopping the replication agents".

Switching over to a single database

  1. Any read-only applications may be redirected to the disaster recovery subscriber immediately. You must wait until Step 5 to redirect applications that update the database.

  2. Stop the replication agent on the disaster recovery subscriber using the ttRepStop procedure or the ttAdmin command with the -repstop option. For example, to stop the replication agent for the subscriber drsub, use:

    call ttRepStop;
    
  3. Drop the active standby pair replication scheme on the subscriber using the DROP ACTIVE STANDBY PAIR statement. For example:

    DROP ACTIVE STANDBY PAIR;
    
  4. If there are tables on the disaster recovery subscriber that were converted from read-only cache group tables on the active database, drop the tables on the disaster recovery subscriber.

  5. Create the read-only cache groups on the disaster recovery subscriber.

  6. Although there is no longer an active standby pair configured, AWT cache groups require the replication agent to be started. Start the replication agent on the database using the ttRepStart procedure or the ttAdmin command with the -repstart option. For example, to start the replication agent for the database drsub, use:

    call ttRepStart;
    
  7. Any applications which must write to a TimesTen database may now be redirected to this database.


    Note:

    You may choose to roll out an active standby pair at the disaster recovery site at a later time. You may do this by following the steps in "Creating a new active standby pair after switching to the disaster recovery site", starting at Step 2 and skipping Step 4.

Returning to the original configuration at the primary site

When the primary site is usable again, you may wish to move the working active standby pair from the disaster recovery site back to the primary site. You can do this with a minimal interruption of service by reversing the process that was used to create and switch over to the original disaster recovery site. Follow these steps:

  1. Destroy the original active database at the primary site, if necessary, using the ttDestroy utility. For example, to destroy a database called mast1, use:

    ttDestroy mast1
    
  2. Create a disaster recovery subscriber at the primary site, following the steps detailed in "Rolling out a disaster recovery subscriber". Use the original active database for the new disaster recovery subscriber.

  3. Switch over to the new disaster recovery subscriber at the primary site, as detailed in "Switching over to the disaster recovery site". Roll out the standby database as well.

  4. Roll out a new disaster recovery subscriber at the disaster recovery site, as detailed in "Rolling out a disaster recovery subscriber".


9 Defining Replication Schemes

This chapter describes how to define replication schemes that are not active standby pairs. For information about defining active standby pair replication schemes, see Chapter 3, "Defining an Active Standby Pair Replication Scheme". If you want to replicate a database that has cache groups, see Chapter 5, "Administering an Active Standby Pair with Cache Groups".

To reduce the amount of bandwidth required for replication, see "Compressing replicated traffic".

To replicate tables with columns in a different order or with a different number of partitions, see "Replicating tables with different definitions".

This chapter includes these topics:

Designing a highly available system

These are the primary objectives of any replication scheme:

  • Provide one or more backup databases to ensure that the data is always available to applications

  • Provide a means to recover failed databases from their backup databases

  • Distribute workloads efficiently to provide applications with the quickest possible access to the data

  • Enable software upgrades and maintenance without disrupting service to users

In a highly available system, a subscriber database must be able to survive failures that may affect the master. At a minimum, the master and subscriber need to be on separate hosts. For some applications, you may want to place the subscriber in an environment that has a separate power supply. In certain cases, you may need to place a subscriber at an entirely separate site.

In this chapter, we consider the replication schemes described in "Types of replication schemes":

  • Unidirectional

  • Bidirectional split workload

  • Bidirectional distributed workload

  • Propagation

In addition, consider whether you want to replicate a whole database or selected elements of the database. Also consider the number of subscribers in the replication scheme. Unidirectional and propagation replication schemes allow you to choose the number of subscribers.

The rest of this section includes these topics:

For more information about using replication to facilitate online upgrades, see "Performing an online upgrade with replication" and "Performing an online upgrade with active standby pair replication" in Oracle TimesTen In-Memory Database Installation Guide.

Considering failover and recovery scenarios

As you plan a replication scheme, consider every failover and recovery scenario. For example, subscriber failures generally have no impact on the applications connected to the master databases. Their recovery does not disrupt user service. If a failure occurs on a master database, you should have a means to redirect the application load to a subscriber and continue service with no or minimal interruption. This process is typically handled by a cluster manager or custom software designed to detect failures, redirect users or applications from the failed database to one of its subscribers, and manage recovery of the failed database. See Chapter 11, "Managing Database Failover and Recovery".

When planning failover strategies, consider which subscribers will take on the role of the master and for which users or applications. Also consider recovery factors. For example, a failed master must be able to recover its database from its most up-to-date subscriber, and any subscriber must be able to recover from its master. A bidirectional scheme that replicates the entire database can take advantage of automatic restoration of a failed master. See "Automatic catch-up of a failed master database".

Consider the failure scenario for the unidirectionally replicated database shown in Figure 9-1. In the case of a master failure, the application cannot access the database until it is recovered from the subscriber. You cannot switch the application connection or user load to the subscriber unless you use an ALTER REPLICATION statement to redefine the subscriber database as the master. See "Replacing a master database".

Figure 9-1 Recovering a master in a unidirectional scheme

Description of Figure 9-1 follows
Description of "Figure 9-1 Recovering a master in a unidirectional scheme"

Figure 9-2 shows a bidirectional distributed workload scheme in which the entire database is replicated. Failover in this type of replication scheme involves shifting the users of the application on the failed database to the application on the surviving database. Upon recovery, the workload can be redistributed to the application on the recovered database.

Figure 9-2 Recovering a master in a distributed workload scheme

Description of Figure 9-2 follows
Description of "Figure 9-2 Recovering a master in a distributed workload scheme"

Similarly, the users in a split workload scheme must be shifted from the failed database to the surviving database. Because replication in a split workload scheme is not at the database level, you must use an ALTER REPLICATION statement to set a new master database. See "Replacing a master database". Upon recovery, the users can be moved back to the recovered master database.

Propagation replication schemes also require the use of the ALTER REPLICATION statement to set a new master or a new propagator if the master or propagator fails. Higher availability is achieved if two propagators are defined in the replication scheme. See Figure 1-11 for an example of a propagation replication scheme with two propagators.

Making decisions about performance and recovery tradeoffs

When you design a replication scheme, weigh operational efficiencies against the complexities of failover and recovery. Factors that may complicate failover and recovery include the network topology that connects a master with its subscribers and the complexity of the replication scheme. For example, it is easier to recover a master that has been fully replicated to a single subscriber than recover a master that has selected elements replicated to different subscribers.

You can configure replication to work asynchronously (the default), "semi-synchronously" with return receipt service, or fully synchronously with return twosafe service. Selecting a return service provides greater confidence that your data is consistent on the master and subscriber databases. Your decision to use default asynchronous replication or to configure return receipt or return twosafe mode depends on the degree of confidence you require and the performance tradeoff you are willing to make in exchange.

Table 9-1 summarizes the performance and recovery tradeoffs of asynchronous replication, return receipt service and return twosafe service.

Table 9-1 Performance and recovery tradeoffs

Commit sequence

  • Asynchronous replication (default): Each transaction is committed first on the master database.

  • Return receipt: Each transaction is committed first on the master database.

  • Return twosafe: Each transaction is committed first on the subscriber database.

Performance on master

  • Asynchronous replication (default): Shortest response time and best throughput because there is no log wait between transactions or before the commit on the master.

  • Return receipt: Longer response time and less throughput than asynchronous. The application is blocked for the duration of the network round-trip after commit. Replicated transactions are more serialized than with asynchronous replication, which results in less throughput.

  • Return twosafe: Longest response time and least throughput. The application is blocked for the duration of the network round-trip and remote commit on the subscriber before the commit on the master. Transactions are fully serialized, which results in the least throughput.

Effect of a runtime error

  • Asynchronous replication (default): Because the transaction is first committed on the master database, errors that occur when committing on a subscriber require the subscriber to be either manually corrected or destroyed and then recovered from the master database.

  • Return receipt: Because the transaction is first committed on the master database, errors that occur when committing on a subscriber require the subscriber to be either manually corrected or destroyed and then recovered from the master database.

  • Return twosafe: Because the transaction is first committed on the subscriber database, errors that occur when committing on the master require the master to be either manually corrected or destroyed and then recovered from the subscriber database.

Failover after failure of master

  • Asynchronous replication (default): If the master fails and the subscriber takes over, the subscriber may be behind the master and must reprocess data feeds and be able to remove duplicates.

  • Return receipt: If the master fails and the subscriber takes over, the subscriber may be behind the master and must reprocess data feeds and be able to remove duplicates.

  • Return twosafe: If the master fails and the subscriber takes over, the subscriber is at least up to date with the master. It is also possible for the subscriber to be ahead of the master if the master fails before committing a transaction it had replicated to the subscriber.


In addition to the performance and recovery tradeoffs between the two return services, you should also consider the following:

  • Return receipt can be used in more configurations, whereas return twosafe can only be used in a bidirectional configuration or an active standby pair.

  • Return twosafe allows you to specify a "local action" to be taken on the master database in the event of a timeout or other error encountered when replicating a transaction to the subscriber database. (See the example at the end of this section.)

A transaction is classified as return receipt or return twosafe when the application updates a table that is configured for either return receipt or return twosafe. Once a transaction is classified as either return receipt or return twosafe, it remains so, even if the replication scheme is altered before the transaction completes.

For more information about return services, see "Using a return service".
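
As a sketch of the "local action" option noted above, an application can direct the master to commit locally when a return twosafe timeout occurs and then reissue the commit, mirroring the ttRepSyncSet example earlier in this guide (a localAction value of 2 requests COMMIT):

Command> call ttRepSyncSet(null, null, 2);
Command> commit;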

Distributing workloads

Consider configuring the databases to distribute application workloads and make the best use of a limited number of servers. For example, it may be efficient and economical to configure the databases in a bidirectional distributed workload replication scheme so that each serves as both master and subscriber, rather than as separate master and subscriber databases. However, a distributed workload scheme works best with applications that primarily read from the databases. Implementing a distributed workload scheme for applications that frequently write to the same elements in a database may diminish performance and require that you implement a solution to prevent or manage update conflicts, as described in Chapter 14, "Resolving Replication Conflicts".

Defining a replication scheme

After you have designed your replication scheme, use the CREATE REPLICATION SQL statement to apply the scheme to your databases. You must have the ADMIN privilege to use the CREATE REPLICATION statement.

Table 9-2 shows the components of a replication scheme and identifies the clauses associated with the topics in this chapter. The complete syntax for the CREATE REPLICATION statement is provided in Oracle TimesTen In-Memory Database SQL Reference.

Table 9-2 Components of a replication scheme

Component | See...

CREATE REPLICATION Owner.SchemeName

"Owner of the replication scheme and replicated objects"


ELEMENT ElementName ElementType

"Defining replication elements"


[CheckConflicts]

"Checking for replication conflicts on table elements"


{MASTER|PROPAGATOR} DatabaseName ON "HostName"

"Database names"


[TRANSMIT {NONDURABLE|DURABLE}]

"Setting transmit durability on data store elements"


SUBSCRIBER DatabaseName ON "HostName"

"Database names"


[ReturnServiceAttribute]

"Using a return service"


INCLUDE|EXCLUDE

"Defining the DATASTORE element"


STORE DatabaseName DataStoreAttributes

"Setting STORE attributes"


[NetworkOperation]

"Configuring network operations"




Note:

Naming errors in your CREATE REPLICATION statement are often hard to troubleshoot, so take the time to check and double-check the element, database, and host names for mistakes.

The replication scheme used by a database persists across system reboots. Modify a replication scheme by using the ALTER REPLICATION statement. See Chapter 13, "Altering Replication".

Owner of the replication scheme and replicated objects

The replication scheme and the replicated objects must be owned by the same user on every database in a replication scheme. To ensure that there is a common owner across all databases, you should explicitly specify the user and replication scheme in the CREATE REPLICATION statement.

For example, create a replication scheme named repscheme owned by user repl. The first line of the CREATE REPLICATION statement for repscheme is:

CREATE REPLICATION repl.repscheme
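
As a sketch of the idea end to end (table, database, and host names here are illustrative), a complete statement in which the user repl owns both the scheme and the replicated table might look like the following:

CREATE REPLICATION repl.repscheme
ELEMENT e TABLE repl.tab
    MASTER masterds ON "system1"
    SUBSCRIBER subscriberds ON "system2";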

Database names

These are the roles of the databases in a replication scheme:

  • Master: Applications update the master database. The master sends the updates to the propagator or to the subscribers directly.

  • Propagator: The propagator database receives updates from the master database and sends them to subscriber databases.

  • Subscriber: Subscribers receive updates from the propagator or the master.

Before you define the replication scheme, you need to define the data source names (DSNs) for the databases in the replication scheme. On UNIX platforms, create an odbc.ini file. On Windows, use the ODBC Administrator to name the databases and set connection attributes. See "Step 1: Create the DSNs for the master and the subscriber databases" for an example.

Each database "name" specified in a replication scheme must match the prefix of the database file name, without the path, given by the DataStore data store attribute in the DSN definition. Use the same name for both the DataStore and Data Source Name data store attributes in each DSN definition. If the database path is directory/subdirectory/foo.ds0, then foo is the database name that you should use. For example, this entry in an odbc.ini file shows a Data Source Name (DSN) of masterds, while the DataStore value shows the path for masterds:

[masterds]
DataStore=/tmp/masterds
DatabaseCharacterSet=AL32UTF8
ConnectionCharacterSet=AL32UTF8

Table requirements and restrictions for replication schemes

The name and owner of replicated tables participating in the replication scheme must be identical on the master and subscriber databases. The column definitions of replicated tables participating in the replication scheme must be identical on the master and subscriber databases unless you specify a TABLE DEFINITION CHECKING value of RELAXED in the CREATE REPLICATION statement. If you specify RELAXED, then the tables must have the same key definition, number of columns and column data types. See "Replicating tables with different definitions".

Replicated tables must have one of the following:

  • A primary key

  • A unique index over non-nullable columns

Replication uses the primary key or unique index to uniquely identify each row in the replicated table. Replication always selects the first usable index that turns up in a sequential check of the table's index array. If there is no primary key, replication selects the first unique index without NULL columns it encounters. The selected index on the replicated table in the master database must also exist on its counterpart table in the subscriber.
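
For example, a table with no primary key can still qualify for replication if it has a unique index over non-nullable columns. The following sketch is hypothetical (table, column, and index names are illustrative):

CREATE TABLE ttuser.account (acct_id NUMBER NOT NULL, balance NUMBER);
CREATE UNIQUE INDEX account_ix ON ttuser.account (acct_id);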


Note:

The keys on replicated tables are transmitted in each update record to the subscribers. Smaller keys are transmitted more efficiently.

Replicated tables have these data type restrictions:

  • VARCHAR2, NVARCHAR2, VARBINARY and TT_VARCHAR columns in replicated tables are limited to a size of 4 megabytes. For a VARCHAR2 column, the maximum length when using character length semantics depends on the number of bytes each character occupies when using a particular database character set. For example, if the character set requires four bytes for each character, the maximum possible length is one million characters. For an NVARCHAR2 column, which requires two bytes for each character, the maximum length when using character length semantics is two million characters.

  • Columns with the BLOB data type in replicated tables are limited to a size of 16 megabytes. Columns with the CLOB or NCLOB data type in replicated tables are limited to a size of 4 megabytes.

  • A primary key column cannot have a LOB data type.

You cannot replicate tables with compressed columns.

If these requirements and restrictions present difficulties, you may want to consider using the Transaction Log API (XLA) as a replication mechanism. See "Using XLA as a replication mechanism" in Oracle TimesTen In-Memory Database C Developer's Guide.

Defining replication elements

A replication scheme consists of one or more ELEMENT descriptions that contain the name of the element, its type (DATASTORE, TABLE or SEQUENCE), the master database on which it is updated, and the subscriber databases to which the updates are replicated.

If you want to replicate a database with cache groups, see Chapter 5, "Administering an Active Standby Pair with Cache Groups".

These are restrictions on elements:

  • Do not include a specific object (table, sequence or database) in more than one element description.

  • Do not define the same element in the role of both master and propagator.

  • An element must include the database on the current host as either the master, subscriber or propagator.

  • Element names must be unique within a replication scheme.

The correct way to define elements in a multiple subscriber scheme is described in "Multiple subscriber schemes with return services and a log failure threshold". The correct way to propagate elements is described in "Propagation scheme".

The name of each element in a scheme can be used to identify the element if you decide later to drop or modify the element by using the ALTER REPLICATION statement.

You can add tables, sequences and databases to an existing replication scheme. See "Altering a replication scheme". You can drop a table or sequence from a database that is part of a replication scheme after you exclude the table or sequence from the replication scheme. See "Dropping a table or sequence from a replication scheme".

The rest of this section includes the following topics:

  • Defining the DATASTORE element

  • Defining table elements

  • Replicating tables with foreign key relationships

  • Replicating sequences

  • Views and materialized views in a replicated database

Defining the DATASTORE element

To replicate the entire contents of the master database (masterds) to the subscriber database (subscriberds), the ELEMENT description (named ds1) might look like the following:

ELEMENT ds1 DATASTORE
  MASTER masterds ON "system1"
  SUBSCRIBER subscriberds ON "system2"

Identify a database host using the host name returned by the hostname operating system command. It is good practice to surround a host name with double quotes.

You cannot replicate a temporary database.

You can choose to exclude certain tables and sequences from the DATASTORE element by using the EXCLUDE TABLE and EXCLUDE SEQUENCE clauses of the CREATE REPLICATION statement. When you use the EXCLUDE clauses, the entire database is replicated to all subscribers in the element except for the objects that are specified in the EXCLUDE clauses. Use only one EXCLUDE TABLE and one EXCLUDE SEQUENCE clause in an element description. For example, this element description excludes two tables and one sequence:

ELEMENT ds1 DATASTORE
  MASTER masterds ON "system1"
  SUBSCRIBER subscriberds ON "system2"
  EXCLUDE TABLE ttuser.tab1, ttuser.tab2
  EXCLUDE SEQUENCE ttuser.seq1

You can choose to include only certain tables and sequences in the database by using the INCLUDE TABLE and INCLUDE SEQUENCE clauses of the CREATE REPLICATION statement. When you use the INCLUDE clauses, only the objects that are specified in the INCLUDE clauses are replicated to each subscriber in the element. Use only one INCLUDE TABLE and one INCLUDE SEQUENCE clause in an element description. For example, this element description includes one table and two sequences:

ELEMENT ds1 DATASTORE
  MASTER masterds ON "system1"
  SUBSCRIBER subscriberds ON "system2"
  INCLUDE TABLE ttuser.tab3
  INCLUDE SEQUENCE ttuser.seq2, ttuser.seq3

Defining table elements

To replicate the ttuser.tab1 and ttuser.tab2 tables from a master database (named masterds and located on a host named system1) to a subscriber database (named subscriberds on a host named system2), the ELEMENT descriptions (named a and b) might look like the following:

ELEMENT a TABLE ttuser.tab1
  MASTER masterds ON "system1"
  SUBSCRIBER subscriberds ON "system2"
ELEMENT b TABLE ttuser.tab2
  MASTER masterds ON "system1"
  SUBSCRIBER subscriberds ON "system2"

For requirements for tables in replication schemes, see "Table requirements and restrictions for replication schemes".

Replicating tables with foreign key relationships

You may choose to replicate all or a subset of tables that have foreign key relationships with one another. However, if the foreign key relationships have been configured with ON DELETE CASCADE, then you must configure replication to replicate all of the tables, either by configuring the replication scheme with a DATASTORE element that does not exclude any of the tables, or by configuring the scheme with a TABLE element for every table that is involved in the relationship.

It is not possible to add a table with a foreign key relationship configured with ON DELETE CASCADE to a pre-existing replication scheme using ALTER REPLICATION. Instead, you must drop the replication scheme, create the new table with the foreign key relationship, and then create a new replication scheme replicating all of the related tables.

Replicating sequences

Sequences are replicated unless you exclude them from the replication scheme or unless they have the CYCLE attribute. Replication of sequences is optimized by reserving a range of sequence numbers on the standby database each time a sequence is updated on the active database. Reserving a range of sequence numbers reduces the number of updates to the transaction log. The range of sequence numbers is called a cache. Sequence updates on the active database are replicated only when they are followed by or used in replicated transactions.

Consider a sequence my.seq with a MINVALUE of 1, an INCREMENT of 1 and the default Cache of 20. The very first time that you use my.seq.NEXTVAL, the current value of the sequence on the master database is changed to 2, and a new current value of 21 (20+1) is replicated to the subscriber. The next 19 references to my.seq.NEXTVAL on the master database result in no new current value being replicated, because the current value of 21 on the subscriber database is still ahead of the current value on the master. On the twenty-first reference to my.seq.NEXTVAL, a new current value of 41 (21+20) is transmitted to the subscriber database because the subscriber's previous current value of 21 is now behind the value of 22 on the master.
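
The sequence assumed in this walkthrough could be created as follows; the CACHE value of 20 is the default and is shown only for clarity:

CREATE SEQUENCE my.seq MINVALUE 1 INCREMENT BY 1 CACHE 20;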

Sequence replication has these restrictions:

  • Sequences with the CYCLE attribute cannot be replicated.

  • The definition of the replicated sequence on each peer database must be identical.

  • No conflict checking is performed on sequences. If you make updates to sequences in both databases in a bidirectional replication configuration without using the RETURN TWOSAFE service, it is possible for both sequences to return the identical NEXTVAL.

If you need to use sequences in a bidirectional replication scheme where updates may occur on either peer, you may instead use a nonreplicated sequence with different MINVALUE and MAXVALUE attributes on each database to avoid conflicts. For example, you may create sequence my.seq on database DS1 with a MINVALUE of 1 and a MAXVALUE of 100, and the same sequence on DS2 with a MINVALUE of 101 and a MAXVALUE of 200. Then, if you configure DS1 and DS2 with a bidirectional replication scheme, you can make updates to either database using the sequence my.seq with the guarantee that the sequence values never conflict. Be aware that if you are planning to use ttRepAdmin -duplicate to recover from a failure in this configuration, you must drop and then re-create the sequence with a new MINVALUE and MAXVALUE after you have performed the duplicate operation.
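
For example, the non-overlapping ranges described above might be created as follows:

-- On database DS1
CREATE SEQUENCE my.seq MINVALUE 1 MAXVALUE 100;

-- On database DS2
CREATE SEQUENCE my.seq MINVALUE 101 MAXVALUE 200;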

Operations on sequences such as SELECT my.seq.NEXTVAL FROM sys.dual, while incrementing the sequence value, are not replicated until they are followed by transactions on replicated tables. A side effect of this behavior is that these sequence updates are not purged from the log until followed by transactions on replicated tables. This causes ttRepSubscriberWait and ttRepAdmin -wait to fail when only these sequence updates are present at the end of the log.
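
For example, in the following sketch (which assumes autocommit is off and a replicated table ttuser.tab with columns a and b, both hypothetical), the sequence update is transmitted only when the transaction that updates the replicated table commits:

SELECT my.seq.NEXTVAL FROM sys.dual; -- increments the sequence; not yet replicated
UPDATE ttuser.tab SET b = 'x' WHERE a = 1;
COMMIT;                              -- the sequence update is sent with this transaction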

To replicate the ttuser.seq sequence from a master database (named masterds and located on a host named system1) to a subscriber database (named subscriberds on a host named system2), the element description (named a) might look like the following:

ELEMENT a SEQUENCE ttuser.seq
  MASTER masterds ON "system1"
  SUBSCRIBER subscriberds ON "system2"

Views and materialized views in a replicated database

A materialized view is a summary of data selected from one or more TimesTen tables, called detail tables. Although you cannot replicate materialized views directly, you can replicate their underlying detail tables in the same manner as you would replicate regular TimesTen tables.

The detail tables on the master and subscriber databases can be referenced by materialized views. However, TimesTen replication verifies only that the replicated detail tables have the same structure on both the master and subscriber. It does not enforce that the materialized views are the same on each database.

If you replicate an entire database containing a materialized or nonmaterialized view as a DATASTORE element, only the detail tables associated with the view are replicated. The view itself is not replicated. A matching view can be defined on the subscriber database, but is not required. If detail tables are replicated, TimesTen automatically updates the corresponding view.

Materialized views defined on replicated tables may result in replication failures or inconsistencies if the materialized view is specified so that overflow or underflow conditions occur when the materialized view is updated.

Checking for replication conflicts on table elements

When databases are configured for bidirectional replication, there is a potential for replication conflicts to occur if the same table row in two or more databases is independently updated at the same time.

Such conflicts can be detected and resolved on a table-by-table basis by including timestamps in the replicated tables and configuring the replication scheme with the optional CHECK CONFLICTS clause in each table's element description.

See Chapter 14, "Resolving Replication Conflicts" for a complete discussion on replication conflicts and how to configure the CHECK CONFLICTS clause in the CREATE REPLICATION statement.

Setting transmit durability on data store elements

A master database configured for asynchronous or return receipt replication is durable by default. This means that log records are committed to disk when transactions are committed. The master database can be set to nondurable by including the TRANSMIT NONDURABLE clause in the element description.

Transaction records in the master database log buffer are, by default, flushed to disk before they are forwarded to subscribers. If the entire master database is replicated (ELEMENT is of type DATASTORE), you can improve replication performance by eliminating the master's flush-log-to-disk operation from the replication cycle. This is done by including a TRANSMIT NONDURABLE clause in the element description. The TRANSMIT setting has no effect on the subscriber. The transaction records on the subscriber database are always flushed to disk.

Master databases configured for return twosafe replication are nondurable by default and cannot be made durable. Setting TRANSMIT DURABLE on a database that is configured for return twosafe replication has no effect on return twosafe transactions.

Example 9-1 Replicating the entire master database with TRANSMIT NONDURABLE

To replicate the entire contents of the master database (masterds) to the subscriber database (subscriberds) and to eliminate the flush-log-to-disk operation, your element description (named a) might look like the following:

ELEMENT a DATASTORE
  MASTER masterds ON "system1"
  TRANSMIT NONDURABLE
  SUBSCRIBER subscriberds ON "system2"

In general, if a master database fails, you have to initiate the ttRepAdmin -duplicate operation described in "Recovering a failed database" to recover the failed master from the subscriber database. This is always true for a master database configured with TRANSMIT DURABLE.

A database configured as TRANSMIT NONDURABLE is recovered automatically by the subscriber replication agent if it is configured in the specific type of bidirectional scheme described in "Automatic catch-up of a failed master database". Otherwise, you must follow the procedures described in "Recovering nondurable databases" to recover a failed nondurable database.

Using a return service

You can configure your replication scheme with a return service to ensure a higher level of confidence that replicated data is consistent on both the master and subscriber databases. This section describes how to configure and manage the return receipt and return twosafe services.

You can specify a return service for table elements and database elements for any subscriber defined in a CREATE REPLICATION or ALTER REPLICATION statement.

Example 9-2 shows separate SUBSCRIBER clauses that can define different return service attributes for SubDatabase1 and SubDatabase2.

Example 9-2 Different return services for each subscriber

CREATE REPLICATION Owner.SchemeName
  ELEMENT ElementName ElementType
    MASTER DatabaseName ON "HostName"
    SUBSCRIBER SubDatabase1 ON "HostName" ReturnServiceAttribute1
    SUBSCRIBER SubDatabase2 ON "HostName" ReturnServiceAttribute2;

Alternatively, you can specify the same return service attribute for all of the subscribers defined in an element. Example 9-3 shows the use of a single SUBSCRIBER clause that defines the same return service attributes for both SubDatabase1 and SubDatabase2.

Example 9-3 Same return service for all subscribers

CREATE REPLICATION Owner.SchemeName
  ELEMENT ElementName ElementType
    MASTER DatabaseName ON "HostName"
    SUBSCRIBER SubDatabase1 ON "HostName",
               SubDatabase2 ON "HostName"
               ReturnServiceAttribute;

These sections describe the return service attributes:

  • RETURN RECEIPT

  • RETURN RECEIPT BY REQUEST

  • RETURN TWOSAFE BY REQUEST

  • RETURN TWOSAFE

  • NO RETURN

RETURN RECEIPT

TimesTen provides an optional return receipt service to loosely couple or synchronize your application with the replication mechanism.

Specify the RETURN RECEIPT attribute to enable the return receipt service for the subscribers listed in the SUBSCRIBER clause of an element description. With return receipt enabled, when the application commits a transaction for an element on the master database, the application remains blocked until the subscriber acknowledges receipt of the transaction update. If the master is replicating the element to multiple subscribers, the application remains blocked until all of the subscribers have acknowledged receipt of the transaction update.

For example replication schemes that use return receipt services, see Example 9-24 and Example 9-25.

Example 9-4 RETURN RECEIPT

To confirm that all transactions committed on the tab table in the master database (masterds) are received by the subscriber (subscriberds), the element description (e) might look like the following:

ELEMENT e TABLE tab
    MASTER masterds ON "system1"
    SUBSCRIBER subscriberds ON "system2"
      RETURN RECEIPT

If any of the subscribers are unable to acknowledge receipt of the transaction within a configurable timeout period, the application receives a tt_ErrRepReturnFailed (8170) warning on its commit request. You can use the ttRepXactStatus procedure to check on the status of a return receipt transaction; see "Checking the status of return service transactions". For more information on the return service timeout period, see "Setting the return service timeout period".

You can also configure the replication agent to disable the return receipt service after a specific number of timeouts. See "Managing return service timeout errors and replication state changes" for details.

The return receipt service is disabled by default if replication is stopped. See "RETURN SERVICES {ON | OFF} WHEN REPLICATION STOPPED" for details.

RETURN RECEIPT BY REQUEST

RETURN RECEIPT enables notification of receipt for all transactions. You can use RETURN RECEIPT BY REQUEST to enable receipt notification only for specific transactions identified by your application.

If you specify RETURN RECEIPT BY REQUEST for a subscriber, you must use the ttRepSyncSet built-in procedure to enable the return receipt service for a transaction. The call to enable the return receipt service must be part of the transaction (autocommit must be off).

Example 9-5 RETURN RECEIPT BY REQUEST

To enable confirmation that specific transactions committed on the tab table in the master database (masterds) are received by the subscriber (subscriberds), the element description (e) might look like:

ELEMENT e TABLE tab
    MASTER masterds ON "system1"
    SUBSCRIBER subscriberds ON "system2"
      RETURN RECEIPT BY REQUEST

Before committing a transaction that requires receipt notification, call the ttRepSyncSet built-in procedure to request the return services and to set the timeout period to 45 seconds:

Command> CALL ttRepSyncSet(0x01, 45, NULL);

If any of the subscribers are unable to acknowledge receipt of the transaction update within a configurable timeout period, the application receives a tt_ErrRepReturnFailed (8170) warning on its commit request. See "Setting the return service timeout period".

You can use the ttRepSyncGet built-in procedure to check if a return service is enabled and obtain the timeout value. For example:

Command> CALL ttRepSyncGet();
< 01, 45, 1>
1 row found.
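
Putting the pieces together, a minimal ttIsql sketch of a RETURN RECEIPT BY REQUEST transaction might look like the following (the table name and values are illustrative); the ttRepSyncSet call and the update must be part of the same transaction:

Command> autocommit 0;
Command> CALL ttRepSyncSet(0x01, 45, NULL);
Command> INSERT INTO tab VALUES (100, 'sample');
Command> COMMIT;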

RETURN TWOSAFE BY REQUEST

RETURN TWOSAFE enables notification of commit on the subscriber for all transactions. You can use RETURN TWOSAFE BY REQUEST to enable notification of subscriber commit only for specific transactions identified by the application.

If you specify RETURN TWOSAFE BY REQUEST for a subscriber, you must use the ttRepSyncSet procedure to enable the return twosafe service for a transaction. The call to enable the return twosafe service must be part of the transaction (autocommit must be off).

The ALTER TABLE statement cannot be used to alter a replicated table that is part of a RETURN TWOSAFE BY REQUEST transaction. If DDLCommitBehavior=0 (the default), the ALTER TABLE operation succeeds because a commit is performed before the ALTER TABLE operation, resulting in the ALTER TABLE operation executing in a new transaction which is not part of the RETURN TWOSAFE BY REQUEST transaction. If DDLCommitBehavior=1, the ALTER TABLE operation results in error 8051.

Example 9-6 RETURN TWOSAFE BY REQUEST

To enable confirmation that specific transactions committed on the master database (databaseA) are also committed by the subscriber (databaseB), the element description (a) might look like:

ELEMENT a DATASTORE
    MASTER databaseA ON "system1"
    SUBSCRIBER databaseB ON "system2"
      RETURN TWOSAFE BY REQUEST;

Before calling commit for a transaction that requires confirmation of commit on the subscriber, call the ttRepSyncSet built-in procedure to request the return service, set the timeout period to 45 seconds, and specify no action (1) in the event of a timeout error:

Command> CALL ttRepSyncSet(0x01, 45, 1);

In this example, if the subscriber is unable to acknowledge commit of the transaction within the timeout period, the application receives a tt_ErrRepReturnFailed (8170) warning on its commit request. The application can then choose how to handle the timeout. See "Setting the return service timeout period".

You can use the ttRepSyncGet built-in procedure to check if a return service is enabled and obtain the timeout value. For example:

Command> CALL ttRepSyncGet();
< 01, 45, 1>
1 row found.

RETURN TWOSAFE

The return twosafe service ensures that each replicated transaction is committed on the subscriber database before it is committed on the master database. If replication is unable to verify the transaction has been committed on the subscriber, it returns notification of the error. Upon receiving an error, the application can either take a unique action or fall back on preconfigured actions, depending on the type of failure.

The return twosafe service is intended to be used in replication schemes where two databases must stay synchronized. One database has an active role, while the other database has a standby role but must be ready to assume an active role at any moment. Use return twosafe with a bidirectional replication scheme with exactly two databases.

To enable the return twosafe service for the subscriber, specify the RETURN TWOSAFE attribute in the SUBSCRIBER clause in the CREATE REPLICATION or ALTER REPLICATION statement.

Example 9-7 RETURN TWOSAFE

To confirm all transactions committed on the master database (databaseA) are also committed by the subscriber (databaseB), the element description (a) might look like the following:

ELEMENT a DATASTORE
    MASTER databaseA ON "system1"
    SUBSCRIBER databaseB ON "system2"
      RETURN TWOSAFE

The entire CREATE REPLICATION statement that specifies both databaseA and databaseB in a bidirectional configuration with RETURN TWOSAFE might look like the following:

CREATE REPLICATION bidirect
ELEMENT a DATASTORE
    MASTER databaseA ON "system1"
    SUBSCRIBER databaseB ON "system2"
      RETURN TWOSAFE
ELEMENT b DATASTORE
    MASTER databaseB ON "system2"
    SUBSCRIBER databaseA ON "system1"
      RETURN TWOSAFE;

When replication is configured with RETURN TWOSAFE, you must disable autocommit mode.

When the application commits a transaction on the master database, the application remains blocked until the subscriber acknowledges it has successfully committed the transaction. Initiating identical updates or deletes on both databases can lead to deadlocks in commits that can be resolved only by stopping the processes.

If the subscriber is unable to acknowledge commit of the transaction update within a configurable timeout period, your application receives a tt_ErrRepReturnFailed (8170) warning on its commit request. See "Setting the return service timeout period".

NO RETURN

Use the NO RETURN attribute to explicitly disable the return receipt or return twosafe service. NO RETURN is the default condition. This attribute is typically set in ALTER REPLICATION statements. See Example 13-13.

Setting STORE attributes

Table 9-3 lists the optional STORE parameters for the CREATE REPLICATION and ALTER REPLICATION statements.

Table 9-3 STORE attribute descriptions

STORE attribute | Description

DISABLE RETURN {SUBSCRIBER|ALL} NumFailures

Set the return service policy so that return service blocking is disabled after the number of timeouts specified by NumFailures.

See "Establishing return service failure/recovery policies".

RETURN SERVICES {ON|OFF} WHEN REPLICATION STOPPED

Set return services on or off when replication is stopped.

See "Establishing return service failure/recovery policies".

RESUME RETURN Milliseconds

If DISABLE RETURN has disabled return service blocking, this attribute sets the policy for re-enabling the return service.

See "Establishing return service failure/recovery policies".

RETURN WAIT TIME Seconds

Specifies the number of seconds to wait for return service acknowledgement. A value of 0 means that there is no waiting. The default value is 10 seconds.

The application can override this timeout setting by using the returnWait parameter in the ttRepSyncSet built-in procedure.

See "Setting the return service timeout period".

DURABLE COMMIT {ON|OFF}

Overrides the DurableCommits general connection attribute setting. DURABLE COMMIT ON enables durable commits regardless of whether the replication agent is running or stopped.

See "DURABLE COMMIT".

LOCAL COMMIT ACTION {NO ACTION|COMMIT}

Specify the default action to be taken for a return service transaction in the event of a timeout. The options are:

NO ACTION - On timeout, the commit function returns to the application, leaving the transaction in the same state it was in when it entered the commit call, with the exception that the application is not able to update any replicated tables. The application can reissue the commit. This is the default.

COMMIT - On timeout, the commit function attempts to perform a commit to end the transaction locally. No more operations are possible on the same transaction.

This default setting can be overridden for specific transactions by using the localAction parameter in the ttRepSyncSet procedure.

See "LOCAL COMMIT ACTION".

COMPRESS TRAFFIC {ON|OFF}

Compress replicated traffic to reduce the amount of network bandwidth used.

See "Compressing replicated traffic".

PORT PortNumber

Set the port number used by subscriber databases to listen for updates from a master.

If no PORT attribute is specified, the TimesTen daemon dynamically selects the port. While dynamic port allocation is allowed by TimesTen, static port assignment is recommended.

See "Port assignments".

TIMEOUT Seconds

Set the maximum number of seconds that the replication agent waits for a response from its database.

FAILTHRESHOLD

Set the log failure threshold.

See "Setting the log failure threshold".

CONFLICT REPORTING {SUSPEND|RESUME} AT Value

Specify the number of replication conflicts per second at which conflict reporting is suspended, and the number of conflicts per second at which conflict reporting resumes.

See Chapter 14, "Resolving Replication Conflicts".

TABLE DEFINITION CHECKING {EXACT|RELAXED}

Specify the type of table definition checking:

  • EXACT - The tables must be identical on master and subscriber. This is the default.

  • RELAXED - The tables must have the same key definition, number of columns and column data types.

See "Replicating tables with different definitions".


The FAILTHRESHOLD and TIMEOUT attributes can be unique to a specific replication scheme definition. This means these attribute settings can vary if you have applied different replication scheme definitions to your replicated databases. This is not true for any of the other attributes, which must be the same across all replication scheme definitions. For example, setting the PORT attribute for one scheme sets it for all schemes.

For an example replication scheme that uses a STORE clause to set the FAILTHRESHOLD attribute, see Example 9-24.

Setting the return service timeout period

If your replication scheme is configured with one of the return services described in "Using a return service", a timeout occurs if any of the subscribers are unable to send an acknowledgement back to the master within the time period specified by RETURN WAIT TIME.

The default return service timeout period is 10 seconds. You can specify a different return service timeout period by:

  • Configuring RETURN WAIT TIME in the CREATE REPLICATION or ALTER REPLICATION statement. A RETURN WAIT TIME of 0 indicates no waiting.

  • Calling the ttRepSyncSet procedure with a new returnWait parameter

Once set, the timeout period applies to all subsequent return service transactions until you either reset the timeout period or terminate the application session. The timeout setting applies to all return services for all subscribers.

A return service may time out because of a replication failure or because replication is so far behind that the return service transaction times out before it is replicated. However, unless there is a simultaneous replication failure, failure to obtain a return service confirmation from the subscriber does not mean the transaction has not been or will not be replicated.

You can set other STORE attributes to establish policies that automatically disable return service blocking in the event of excessive timeouts and re-enable return service blocking when conditions improve. See "Managing return service timeout errors and replication state changes".

Example 9-8 Setting the timeout period for both databases in bidirectional replication scheme

To set the timeout period to 30 seconds for both bidirectionally replicated databases, databaseA and databaseB, in the bidirect replication scheme, the CREATE REPLICATION statement might look like the following:

CREATE REPLICATION bidirect
ELEMENT a DATASTORE
    MASTER databaseA ON "system1"
    SUBSCRIBER databaseB ON "system2"
      RETURN TWOSAFE
ELEMENT b DATASTORE
    MASTER databaseB ON "system2"
    SUBSCRIBER databaseA ON "system1"
      RETURN TWOSAFE
STORE databaseA RETURN WAIT TIME 30
STORE databaseB RETURN WAIT TIME 30;

Example 9-9 Resetting the timeout period

Use the ttRepSyncSet built-in procedure to reset the timeout period to 45 seconds. To avoid resetting the requestReturn and localAction values, specify NULL:

Command> CALL ttRepSyncSet(NULL, 45, NULL);

Managing return service timeout errors and replication state changes

The replication state can be reset to stop by a user or by the master replication agent in the event of a subscriber failure. A subscriber may be unable to acknowledge a transaction that makes use of a return service and may time out with respect to the master. If any of the subscribers are unable to acknowledge the transaction update within the timeout period, the application receives a tt_ErrRepReturnFailed (8170) warning on its commit request.

The default return service timeout period is 10 seconds. You can specify a different return service timeout period by:

  • Configuring the RETURN WAIT TIME attribute in the STORE clause of the CREATE REPLICATION or ALTER REPLICATION statement

  • Calling ttRepSyncSet procedure with a new returnWait parameter

A return service may time out or fail because of a replication failure or because replication is so far behind that the return service transaction times out before it is replicated. However, unless there is a simultaneous replication failure, failure to obtain a return service confirmation from the subscriber does not necessarily mean the transaction has not been or will not be replicated.

This section describes how to detect and respond to timeouts on return service transactions. The main topics are:

  • When to manually disable return service blocking

  • Establishing return service failure/recovery policies

When to manually disable return service blocking

You may want to respond in some manner if replication is stopped or if return service timeout failures begin to adversely impact the performance of the replicated system. Your "tolerance threshold" for return service timeouts may depend on the historical frequency of timeouts and the performance/availability equation for your particular application, both of which should be factored into your response to the problem.

When using the return receipt service, you can manually respond by:

  • Using ALTER REPLICATION to make changes to the replication scheme to disable return receipt blocking for a particular subscriber. If you decide to disable return receipt blocking, your decision to re-enable it depends on your confidence level that the return receipt transaction is no longer likely to time out.

  • Calling the ttDurableCommit procedure to durably commit transactions on the master that you can no longer verify as being received by the subscriber (see the sketch following this list).
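
For the second option, a minimal ttIsql sketch (assuming autocommit is off) might look like the following; ttDurableCommit forces the current transaction to be durably committed when it commits:

Command> CALL ttDurableCommit;
Command> COMMIT;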

An alternative is to establish return service failure and recovery policies in your replication scheme that respond to these situations automatically, as described in "Establishing return service failure/recovery policies".

Establishing return service failure/recovery policies

An alternative to manually responding to return service timeout failures is to establish return service failure and recovery policies in your replication scheme. These policies direct the replication agents to detect changes to the replication state and to keep track of return service timeouts and then automatically respond in some predefined manner.

The following attributes in the CREATE REPLICATION or ALTER REPLICATION statement set the failure/recovery policies when using a RETURN RECEIPT or RETURN TWOSAFE service:

  • RETURN SERVICES {ON | OFF} WHEN REPLICATION STOPPED

  • DISABLE RETURN

  • RESUME RETURN

  • DURABLE COMMIT

  • LOCAL COMMIT ACTION

The policies set by these attributes are applicable for the life of the database or until changed. However, the replication agent must be running to enforce these policies.

RETURN SERVICES {ON | OFF} WHEN REPLICATION STOPPED

The RETURN SERVICES {ON|OFF} WHEN REPLICATION STOPPED attribute determines whether a return receipt or return twosafe service continues to be enabled or is disabled when replication is stopped. "Stopped" in this context means that either the master replication agent is stopped (for example, by ttAdmin -repStop master) or the replication state of the subscriber database is set to stop or pause with respect to the master database (for example, by ttRepAdmin -state stop subscriber). A failed subscriber that has exceeded the specified FAILTHRESHOLD value is set to the failed state, but is eventually set to the stop state by the master replication agent.


Note:

A subscriber may become unavailable for a period of time that exceeds the timeout period specified by RETURN WAIT TIME but still be considered by the master replication agent to be in the start state. Failure policies related to timeouts are set by the DISABLE RETURN attribute.

RETURN SERVICES OFF WHEN REPLICATION STOPPED disables the return service when replication is stopped and is the default when using the RETURN RECEIPT service. RETURN SERVICES ON WHEN REPLICATION STOPPED allows the return service to continue to be enabled when replication is stopped and is the default when using the RETURN TWOSAFE service.

Example 9-10 RETURN SERVICES ON WHEN REPLICATION STOPPED

Configure the CREATE REPLICATION statement to replicate updates from the masterds database to the subscriber1 database. The CREATE REPLICATION statement specifies the use of RETURN RECEIPT and RETURN SERVICES ON WHEN REPLICATION STOPPED.

CREATE REPLICATION myscheme
ELEMENT e TABLE tab
  MASTER masterds ON "server1"
  SUBSCRIBER subscriber1 ON "server2"
  RETURN RECEIPT
  STORE masterds ON "server1"
    RETURN SERVICES ON WHEN REPLICATION STOPPED;

While the application is committing updates to the master, ttRepAdmin is used to set subscriber1 to the stop state:

ttRepAdmin -dsn masterds -receiver -name subscriber1 -state stop

The application continues to wait for return receipt acknowledgements from subscriber1 until the replication state is reset to start and it receives the acknowledgment:

ttRepAdmin -dsn masterds -receiver -name subscriber1 -state start

DISABLE RETURN

When a DISABLE RETURN value is set, the database keeps track of the number of return receipt or return twosafe transactions that have exceeded the timeout period set by RETURN WAIT TIME. If the number of timeouts exceeds the maximum value set by DISABLE RETURN, the applications revert to a default replication cycle in which they no longer wait for subscribers to acknowledge the replicated updates.

You can set DISABLE RETURN SUBSCRIBER to establish a failure policy to disable return service blocking for only those subscribers that have timed out, or DISABLE RETURN ALL to establish a policy to disable return service blocking for all subscribers. You can use the ttRepSyncSubscriberStatus built-in procedure or the ttRepReturnTransitionTrap SNMP trap to determine whether a particular subscriber has been disabled by the DISABLE RETURN failure policy.

The DISABLE RETURN failure policy is enabled only when the replication agent is running. If DISABLE RETURN is specified but RESUME RETURN is not specified, the return services remain off until the replication agent for the database has been restarted. You can cancel this failure policy by stopping the replication agent and specifying either DISABLE RETURN SUBSCRIBER or DISABLE RETURN ALL with a zero value for NumFailures. The count of timeouts to trigger the failure policy is reset either when you restart the replication agent, when you set the DISABLE RETURN value to 0, or when return service blocking is re-enabled by RESUME RETURN.

DISABLE RETURN maintains a cumulative timeout count for each subscriber. If there are multiple subscribers and you set DISABLE RETURN SUBSCRIBER, the replication agent disables return service blocking for the first subscriber that reaches the timeout threshold. If one of the other subscribers later reaches the timeout threshold, the replication agent disables return service blocking for that subscriber also.

Example 9-11 DISABLE RETURN SUBSCRIBER

Configure the CREATE REPLICATION statement to replicate updates from the masterds database to two subscriber databases, subscriber1 and subscriber2. The CREATE REPLICATION statement specifies the use of RETURN RECEIPT and DISABLE RETURN SUBSCRIBER with a NumFailures value of 5. The RETURN WAIT TIME is set to 30 seconds.

CREATE REPLICATION myscheme
ELEMENT e TABLE tab
  MASTER masterds ON "server1"
  SUBSCRIBER subscriber1 ON "server2",
             subscriber2 ON "server3"
RETURN RECEIPT
STORE masterds ON "server1"
  DISABLE RETURN SUBSCRIBER 5
  RETURN WAIT TIME 30;

While the application is committing updates to the master, subscriber1 experiences problems and fails to acknowledge a replicated transaction update. The application is blocked for 30 seconds, after which it commits its next update to the master. Over the course of the application session, this commit/timeout cycle repeats four more times until DISABLE RETURN disables return receipt blocking for subscriber1. The application continues to wait for return receipt acknowledgements from subscriber2 but not from subscriber1.

RETURN SERVICES OFF WHEN REPLICATION STOPPED is the default setting for the return receipt service. Therefore, return receipt is disabled under either one of the following conditions:

  • Replication is stopped, because RETURN SERVICES OFF WHEN REPLICATION STOPPED is in effect.

  • The number of return receipt timeouts reaches the NumFailures value set by DISABLE RETURN.

For another example that sets the DISABLE RETURN attribute, see Example 9-12.

RESUME RETURN

When we say return service blocking is "disabled," we mean that the applications on the master database no longer block execution while waiting to receive acknowledgements from the subscribers that they received or committed the replicated updates. Note, however, that the master still listens for an acknowledgement of each batch of replicated updates from the subscribers.

You can establish a return service recovery policy by setting the RESUME RETURN attribute and specifying a resume latency value. When this attribute is set and return service blocking has been disabled for a subscriber, the return receipt or return twosafe service is re-enabled when the commit-to-acknowledge time for a transaction falls below the value set by RESUME RETURN. The commit-to-acknowledge time is the latency between when the application issues a commit and when the master receives acknowledgement of the update from the subscriber.

Example 9-12 RESUME RETURN

If return receipt blocking has been disabled for subscriber1 and if RESUME RETURN is set to 8 milliseconds, then return receipt blocking is re-enabled for subscriber1 the instant it acknowledges an update in less than 8 milliseconds from when it was committed by the application on the master.

CREATE REPLICATION myscheme
ELEMENT e TABLE ttuser.tab
  MASTER masterds ON "server1"
  SUBSCRIBER subscriber1 ON "server2",
             subscriber2 ON "server3"
RETURN RECEIPT
STORE masterds ON "server1"
  DISABLE RETURN SUBSCRIBER 5
  RESUME RETURN 8;

The RESUME RETURN policy is enabled only when the replication agent is running. You can cancel a return receipt resume policy by stopping the replication agent and then using ALTER REPLICATION to set RESUME RETURN to zero.

DURABLE COMMIT

Set the DURABLE COMMIT attribute to specify the durable commit policy for applications that have return service blocking disabled by DISABLE RETURN. When DURABLE COMMIT is set to ON, it overrides the DurableCommits general connection attribute on the master database and forces durable commits regardless of whether the replication agent is running or stopped.

DURABLE COMMIT is useful if you have only one subscriber. However, if you are replicating the same data to two subscribers and you disable return service blocking to one subscriber, then you achieve better performance if you rely on the other subscriber than you would if you enable durable commits.

Example 9-13 DURABLE COMMIT ON

Set DURABLE COMMIT ON when establishing a DISABLE RETURN ALL policy to disable return receipt blocking for all subscribers. If return receipt blocking is disabled, transactions are durably committed to disk to provide redundancy.

CREATE REPLICATION myscheme
ELEMENT e TABLE tab
  MASTER masterds ON "server1"
  SUBSCRIBER subscriber1 ON "server2",
             subscriber2 ON "server3"
RETURN RECEIPT
STORE masterds ON "server1"
  DISABLE RETURN ALL 5
  DURABLE COMMIT ON
  RESUME RETURN 8;

LOCAL COMMIT ACTION

When using the return twosafe service, you can specify how the master replication agent responds to timeout errors by:

  • Setting the LOCAL COMMIT ACTION attribute in the STORE clause of the CREATE REPLICATION statement (see the sketch at the end of this section)

  • Calling the ttRepSyncSet procedure with the localAction parameter

The possible actions upon receiving a timeout during replication of a twosafe transaction are:

  • COMMIT - On timeout, the commit function attempts to perform a commit to end the transaction locally. No more operations are possible on the same transaction.

  • NO ACTION - On timeout, the commit function returns to the application, leaving the transaction in the same state it was in when it entered the commit call, with the exception that the application is not able to update any replicated tables. The application can reissue the commit. This is the default.

If the call returns with an error, you can use the ttRepXactStatus procedure described in "Checking the status of return service transactions" to check the status of the transaction. Depending on the error, your application can choose to:

  • Reissue the commit call - This repeats the entire return twosafe replication cycle, so that the commit call returns when the success or failure of the replicated commit on the subscriber is known or if the timeout period expires.

  • Roll back the transaction - If the call returns with an error related to applying the transaction on the subscriber, such as primary key lookup failure, you can roll back the transaction on the master.
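
To set the default local action in the scheme itself rather than per transaction, a sketch (the scheme name is illustrative) that extends the bidirectional return twosafe example shown earlier in this chapter might look like the following:

CREATE REPLICATION bidirect
ELEMENT a DATASTORE
    MASTER databaseA ON "system1"
    SUBSCRIBER databaseB ON "system2"
      RETURN TWOSAFE
ELEMENT b DATASTORE
    MASTER databaseB ON "system2"
    SUBSCRIBER databaseA ON "system1"
      RETURN TWOSAFE
STORE databaseA LOCAL COMMIT ACTION COMMIT
STORE databaseB LOCAL COMMIT ACTION COMMIT;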

Compressing replicated traffic

If you are replicating over a low-bandwidth network, or if you are replicating massive amounts of data, you can set the COMPRESS TRAFFIC attribute to reduce the amount of bandwidth required for replication. The COMPRESS TRAFFIC attribute compresses the replicated data from the database specified by the STORE parameter in your CREATE REPLICATION or ALTER REPLICATION statement. TimesTen does not compress traffic from other databases.

Although the compression algorithm is optimized for speed, enabling the COMPRESS TRAFFIC attribute affects replication throughput and latency.

Example 9-14 Compressing traffic from one database

To compress replicated traffic from database dsn1 and leave the replicated traffic from dsn2 uncompressed, the CREATE REPLICATION statement looks like:

CREATE REPLICATION repscheme
ELEMENT d1 DATASTORE
    MASTER dsn1 ON host1
    SUBSCRIBER dsn2 ON host2
ELEMENT d2 DATASTORE
    MASTER dsn2 ON host2
    SUBSCRIBER dsn1 ON host1
STORE dsn1 ON host1 COMPRESS TRAFFIC ON;

Example 9-15 Compressing traffic between both databases

To compress the replicated traffic between both the dsn1 and dsn2 databases, use:

CREATE REPLICATION scheme
ELEMENT d1 DATASTORE
    MASTER dsn1 ON host1
    SUBSCRIBER dsn2 ON host2
ELEMENT d2 DATASTORE
    MASTER dsn2 ON host2
    SUBSCRIBER dsn1 ON host1
STORE dsn1 ON host1 COMPRESS TRAFFIC ON
STORE dsn2 ON host2 COMPRESS TRAFFIC ON;

Port assignments

Static port assignments are recommended. If you do not assign a PORT attribute, the TimesTen daemon dynamically selects the port. When ports are assigned dynamically for the replication agents, then the ports of the TimesTen daemons have to match as well. Setting the PORT attribute for one replication scheme sets it for all replication schemes.

You must assign static ports if you want to do online upgrades.

When statically assigning ports, it is important to specify the full host name, DSN and port in the STORE clause of the CREATE REPLICATION statement.

Example 9-16 Assigning static ports

CREATE REPLICATION repscheme
ELEMENT el1 TABLE ttuser.tab
    MASTER dsn1 ON host1
    SUBSCRIBER dsn2 ON host2
ELEMENT el2 TABLE ttuser.tab
    MASTER dsn2 ON host2
    SUBSCRIBER dsn1 ON host1
STORE dsn1 ON host1 PORT 16080
STORE dsn2 ON host2 PORT 16083;

Setting the log failure threshold

You can establish a threshold value that, when exceeded, sets an unavailable subscriber to the failed state before the available log space is exhausted. Use the FAILTHRESHOLD attribute to set the log failure threshold. See Example 9-24.

The default threshold value is 0, which means "no limit." See "Setting connection attributes for logging" for details about log failure threshold values.
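
For instance, a sketch of a scheme in which an unavailable subscriber is set to the failed state once 10 transaction log files have accumulated might look like the following (names are illustrative; Example 9-24 shows a complete scheme):

CREATE REPLICATION repscheme
ELEMENT e TABLE ttuser.tab
    MASTER masterds ON "server1"
    SUBSCRIBER subscriberds ON "server2"
STORE masterds ON "server1" FAILTHRESHOLD 10;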

If a master sets a subscriber database to the failed state, it drops all of the data for the failed subscriber from its log and transmits a message to the failed subscriber database. If the master replication agent can communicate with the subscriber replication agent, then the message is transmitted immediately. Otherwise, the message is transmitted when the connection is reestablished. After receiving the message from the master, if the subscriber is configured for bidirectional replication or to propagate updates to other subscribers, it does not transmit any further updates, because its replication state has been compromised.

Any application that connects to the failed subscriber receives a tt_ErrReplicationInvalid (8025) warning indicating that the database has been marked failed by a replication peer. Once the subscriber database has been informed of its failed status, its state on the master database is changed from failed to stop.

Applications can use the ODBC SQLGetInfo function to check whether the database they are connected to has been set to the failed state, as described in "Subscriber failures".

Replicating tables with different definitions

Use the TABLE DEFINITION CHECKING RELAXED attribute to enable replication of tables that are not identical. For example, if tables have columns in a different order or have a different number of partitions, you can replicate them using this clause. A table has multiple partitions if columns have been added after its initial creation.


Note:

See Example 9-18 and "Check partition counts for the tables" in Oracle TimesTen In-Memory Database Troubleshooting Guide for more information.

Setting the TABLE DEFINITION CHECKING attribute to RELAXED requires that replicated tables have the same key definition, number of columns and column data types. Table definition checking occurs on the subscriber side. Setting this attribute to RELAXED for both master and subscriber has the same effect as setting it for only the subscriber.

The RELAXED setting can result in slightly slower performance. The change in performance depends on the workload and the number of partitions and columns in the tables. You can set table definition checking to RELAXED temporarily while consolidating tables with multiple partitions and then reset it to EXACT. There is no performance loss for tables with identical structures.

Example 9-17 Replicating tables with columns in different positions

Create table t1 in dsn1 database:

CREATE TABLE ttuser.t1 (a INT PRIMARY KEY, b INT, c INT);

Create table ttuser.t1 in dsn2 database with the columns in a different order than the columns in ttuser.t1 in dsn1 database. Note that the column names and data types are the same in both tables and a is the primary key in both tables.

CREATE TABLE ttuser.t1 (c INT, a INT PRIMARY KEY, b INT);

Create replication scheme ttuser.rep1. Set TABLE DEFINITION CHECKING to RELAXED for the subscriber, dsn2.

CREATE REPLICATION ttuser.rep1
       ELEMENT e1 TABLE ttuser.t1
       MASTER dsn1
       SUBSCRIBER dsn2
       STORE dsn2 TABLE DEFINITION CHECKING RELAXED;

Start the replication agent for both databases. Insert a row into ttuser.t1 on dsn1.

Command> INSERT INTO ttuser.t1 VALUES (4,5,6);
1 row inserted.

Verify the results on ttuser.t1 on dsn2.

Command> SELECT * FROM ttuser.t1;
< 6, 4, 5 >
1 row found.

Example 9-18 Replicating tables with a different number of partitions

When you alter a table to add columns, the number of partitions in the table increases, even if you subsequently drop the new columns. You can use the RELAXED setting for TABLE DEFINITION CHECKING to replicate tables that have a different number of partitions.

Create table ttuser.t3 on dsn1 with two columns.

CREATE TABLE ttuser.t3 (a INT PRIMARY KEY, b INT);

Create table ttuser.t3 on dsn2 with one column that is the primary key.

CREATE TABLE ttuser.t3 (a INT PRIMARY KEY);

Add a column to the table on dsn2. This increases the number of partitions to two, while the table on dsn1 has one partition.

ALTER TABLE ttuser.t3 ADD COLUMN b INT;

Create the replication scheme on both databases.

CREATE REPLICATION reppart
       ELEMENT e2 TABLE ttuser.t3
       MASTER dsn1
       SUBSCRIBER dsn2
       STORE dsn2 TABLE DEFINITION CHECKING RELAXED;

Start the replication agent for both databases. Insert a row into ttuser.t3 on dsn1.

Command> INSERT INTO ttuser.t3 VALUES (1,2);
1 row inserted.

Verify the results in ttuser.t3 on dsn2.

Command> SELECT * FROM ttuser.t3;
< 1, 2 >
1 row found.

Configuring network operations

If your replication host has more than one network interface, you may wish to configure replication to use an interface other than the default interface. Although you must specify the host name returned by the operating system's hostname command when you define a replication element, you may configure replication to send or receive traffic over a different interface using the ROUTE clause.

The syntax of the ROUTE clause is:

ROUTE MASTER FullDatabaseName SUBSCRIBER FullDatabaseName
  {{MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost}
    PRIORITY Priority} [...]

In dual master replication schemes, each master database is a subscriber of the other master database. This means that the CREATE REPLICATION statement should include ROUTE clauses in multiples of two to specify a route in both directions.

Example 9-19 Configuring multiple network interfaces

If host host1 is configured with a second interface accessible by the host name host1fast, and host2 is configured with a second interface at IP address 192.168.1.100, you may specify that the secondary interfaces are used with the replication scheme.

CREATE REPLICATION repscheme
ELEMENT e1 TABLE ttuser.tab
    MASTER dsn1 ON host1
    SUBSCRIBER dsn2 ON host2
ELEMENT e2 TABLE ttuser.tab
    MASTER dsn2 ON host2
    SUBSCRIBER dsn1 ON host1
ROUTE MASTER dsn1 ON host1 SUBSCRIBER dsn2 ON host2
    MASTERIP host1fast PRIORITY 1
    SUBSCRIBERIP "192.168.1.100" PRIORITY 1
ROUTE MASTER dsn2 ON host2 SUBSCRIBER dsn1 ON host1
    MASTERIP "192.168.1.100" PRIORITY 1
    SUBSCRIBERIP host1fast PRIORITY 1;

Alternately, on a replication host with more than one interface, you may wish to configure replication to use one or more interfaces as backups, in case the primary interface fails or the connection from it to the receiving host is broken. You may use the ROUTE clause to specify two or more interfaces for each master or subscriber that are used by replication in order of priority.

Example 9-20 Configuring network priority

If host host1 is configured with two network interfaces at IP addresses 192.168.1.100 and 192.168.1.101, and host host2 is configured with two interfaces at IP addresses 192.168.1.200 and 192.168.1.201, you can specify that replication use IP addresses 192.168.1.100 and 192.168.1.200 to transmit and receive traffic first, and try IP addresses 192.168.1.101 and 192.168.1.201 if the first connection fails.

CREATE REPLICATION repscheme
ELEMENT e TABLE ttuser.tab
  MASTER dsn1 ON host1
  SUBSCRIBER dsn2 ON host2
ROUTE MASTER dsn1 ON host1 SUBSCRIBER dsn2 ON host2
  MASTERIP "192.168.1.100" PRIORITY 1
  MASTERIP "192.168.1.101" PRIORITY 2
  SUBSCRIBERIP "192.168.1.200" PRIORITY 1
  SUBSCRIBERIP "192.168.1.201" PRIORITY 2;

If replication on the master host cannot bind to the MASTERIP address with the highest priority, it immediately tries subsequent MASTERIP addresses in order of priority. However, if the connection to the subscriber fails for any other reason, replication tries each SUBSCRIBERIP address in order of priority before it tries the MASTERIP address with the next highest priority.

Replication scheme syntax examples

The examples in this section illustrate how to configure a variety of replication schemes. The replication schemes include:

Single subscriber schemes

The scheme shown in Example 9-21 is a single master and subscriber unidirectional replication scheme. The two databases are located on separate hosts, system1 and system2. We use the RETURN RECEIPT service to confirm that all transactions committed on the ttuser.tab table in the master database are received by the subscriber.

Example 9-21 Replicating one table

CREATE REPLICATION repscheme
ELEMENT e TABLE ttuser.tab
    MASTER masterds ON "system1"
    SUBSCRIBER subscriberds ON "system2"
      RETURN RECEIPT;

The scheme shown in Example 9-22 is a single master and subscriber unidirectional replication scheme. The two databases are located on separate hosts, server1 and server2. The master database, named masterds, replicates its entire contents to the subscriber database, named subscriberds.

Example 9-22 Replicating entire database

CREATE REPLICATION repscheme
ELEMENT e DATASTORE
    MASTER masterds ON "server1"
    SUBSCRIBER subscriberds ON "server2";

Multiple subscriber schemes with return services and a log failure threshold

You can create a replication scheme that includes up to 128 subscriber databases. If you are configuring propagator databases, you can configure up to 128 propagators. Each propagator can have up to 128 subscriber databases. See "Propagation scheme" for an example of a replication scheme with propagator databases.

Example 9-23 Replicating to two subscribers

This example establishes a master database, named masterds, that replicates the ttuser.tab table to two subscriber databases, subscriber1ds and subscriber2ds, located on server2 and server3, respectively. The name of the replication scheme is twosubscribers. The name of the replication element is e.

CREATE REPLICATION twosubscribers
ELEMENT e TABLE ttuser.tab
    MASTER masterds ON "server1"
    SUBSCRIBER subscriber1ds ON "server2",
               subscriber2ds ON "server3";

Example 9-24 Replicating to two subscribers with RETURN RECEIPT

This example takes the basic scheme from Example 9-23 and adds a RETURN RECEIPT attribute and a STORE parameter. RETURN RECEIPT enables the return receipt service for both subscribers. The STORE parameter sets a FAILTHRESHOLD value of 10, which is the maximum number of transaction log files that can accumulate on masterds for a subscriber before the master assumes that subscriber has failed.

CREATE REPLICATION twosubscribers
ELEMENT e TABLE ttuser.tab
  MASTER masterds ON "server1"
  SUBSCRIBER subscriber1ds ON "server2",
             subscriber2ds ON "server3"
  RETURN RECEIPT
STORE masterds FAILTHRESHOLD 10;

Example 9-25 Enabling RETURN RECEIPT for only one subscriber

This example shows how to enable RETURN RECEIPT for only subscriber2ds. Note that there is no comma after the subscriber1ds definition.

CREATE REPLICATION twosubscribers
ELEMENT e TABLE ttuser.tab
    MASTER masterds ON "server1"
    SUBSCRIBER subscriber1ds ON "server2"
    SUBSCRIBER subscriber2ds ON "server3" RETURN RECEIPT
STORE masterds FAILTHRESHOLD 10;

Example 9-26 Enabling different return services for subscribers

This example shows how to apply RETURN RECEIPT BY REQUEST to subscriber1ds and RETURN RECEIPT to subscriber2ds. In this scheme, applications accessing subscriber1ds must use the ttRepSyncSet procedure to enable the return services for a transaction, while subscriber2ds unconditionally provides return services for all transactions.

CREATE REPLICATION twosubscribers
ELEMENT e TABLE ttuser.tab
    MASTER masterds ON "server1"
    SUBSCRIBER subscriber1ds ON "server2" RETURN RECEIPT BY REQUEST
    SUBSCRIBER subscriber2ds ON "server3" RETURN RECEIPT
STORE masterds FAILTHRESHOLD 10;
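
For applications that use RETURN RECEIPT BY REQUEST, the return service must be requested for each transaction. As a minimal sketch, an application connected to masterds could call the ttRepSyncSet built-in procedure before committing; the parameter values shown are illustrative and follow the order (requestReturn, returnWait, localAction):

Command> CALL ttRepSyncSet(0x01, 45, 1);

This call requests the return service for subsequent transactions on the connection, waits up to 45 seconds for the acknowledgment, and commits locally if the timeout expires.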

Replicating tables to different subscribers

The replication scheme shown in Example 9-27 establishes a master database, named centralds, that replicates four tables. ttuser.tab1 and ttuser.tab2 are replicated to the subscriber backup1ds. ttuser.tab3 and ttuser.tab4 are replicated to backup2ds. The master database is located on the finance server. Both subscribers are located on the backupsystem server.

Example 9-27 Replicating tables to different subscribers

CREATE REPLICATION twobackups
ELEMENT a TABLE ttuser.tab1
  MASTER centralds ON "finance"
  SUBSCRIBER backup1ds ON "backupsystem"
ELEMENT b TABLE ttuser.tab2
  MASTER centralds ON "finance"
  SUBSCRIBER backup1ds ON "backupsystem"
ELEMENT c TABLE ttuser.tab3
  MASTER centralds ON "finance"
  SUBSCRIBER backup2ds ON "backupsystem"
ELEMENT d TABLE ttuser.tab4
  MASTER centralds ON "finance"
  SUBSCRIBER backup2ds ON "backupsystem";

Propagation scheme

In Example 9-28, the master database sends updates on a table to a propagator that forwards the changes to two subscribers. The master database is centralds on the finance host. The propagator database is propds on the nethandler host. The subscribers are backup1ds on backupsystem1 and backup2ds on backupsystem2.

The replication scheme has two elements. For element a, the changes to the tab table on centralds are replicated to the propds propagator database. For element b, the changes to the tab table received by propds are replicated to the two subscribers, backup1ds and backup2ds.

Example 9-28 Propagation

CREATE REPLICATION propagator
ELEMENT a TABLE ttuser.tab
  MASTER centralds ON "finance"
  SUBSCRIBER propds ON "nethandler"
ELEMENT b TABLE ttuser.tab
  PROPAGATOR propds ON "nethandler"
  SUBSCRIBER backup1ds ON "backupsystem1",
             backup2ds ON "backupsystem2";

Bidirectional split workload schemes

In Example 9-29, there are two databases: westds on the westcoast host and eastds on the eastcoast host. Customers are represented in two tables: waccounts contains data for customers in the Western region, and eaccounts contains data for customers in the Eastern region. The westds database updates the waccounts table and replicates it to the eastds database. The eaccounts table is owned by the eastds database and is replicated to the westds database. The RETURN RECEIPT attribute enables the return receipt service to guarantee that transactions on either master table are received by their subscriber.

Example 9-29 Bidirectional split workload

CREATE REPLICATION r1
ELEMENT elem_waccounts TABLE ttuser.waccounts
  MASTER westds ON "westcoast"
  SUBSCRIBER eastds ON "eastcoast" RETURN RECEIPT
ELEMENT elem_eaccounts TABLE ttuser.eaccounts
  MASTER eastds ON "eastcoast"
  SUBSCRIBER westds ON "westcoast" RETURN RECEIPT;

Bidirectional distributed workload scheme

Example 9-30 shows a bidirectional general workload replication scheme in which the ttuser.accounts table can be updated on either the eastds or westds database. Each database is both a master and a subscriber for the accounts table.


Note:

Do not use a bidirectional distributed workload replication scheme with the return twosafe service.

Example 9-30 Bidirectional distributed workload scheme

CREATE REPLICATION r1
ELEMENT elem_accounts_1 TABLE ttuser.accounts
  MASTER westds ON "westcoast"
  SUBSCRIBER eastds ON "eastcoast"
ELEMENT elem_accounts_2 TABLE ttuser.accounts
  MASTER eastds ON "eastcoast"
  SUBSCRIBER westds ON "westcoast";

When elements are replicated in this manner, the applications should write to each database in a coordinated manner to avoid simultaneous updates on the same data. To manage update conflicts, include a timestamp column of type BINARY(8) in the replicated table and enable timestamp comparison by including the CHECK CONFLICTS clause in the CREATE REPLICATION statement. See Chapter 14, "Resolving Replication Conflicts" for a complete discussion on how to manage update conflicts.

Example 9-31 shows that the tstamp timestamp column is included in the ttuser.accounts table. The CREATE REPLICATION statement has been modified to include the CHECK CONFLICTS clause.

Example 9-31 Managing update conflicts

CREATE TABLE ttuser.accounts (custname VARCHAR2(30) NOT NULL,
                       address VARCHAR2(80),
                       curbalance DEC(15,2),
                       tstamp BINARY(8),
                       PRIMARY KEY (custname));

CREATE REPLICATION r1
ELEMENT elem_accounts_1 TABLE ttuser.accounts
  CHECK CONFLICTS BY ROW TIMESTAMP
    COLUMN tstamp
    UPDATE BY SYSTEM
    ON EXCEPTION ROLLBACK WORK
  MASTER westds ON "westcoast"
  SUBSCRIBER eastds ON "eastcoast"
ELEMENT elem_accounts_2 TABLE ttuser.accounts
  CHECK CONFLICTS BY ROW TIMESTAMP
    COLUMN tstamp
    UPDATE BY SYSTEM
    ON EXCEPTION ROLLBACK WORK
  MASTER eastds ON "eastcoast"
  SUBSCRIBER westds ON "westcoast";

Creating replication schemes with scripts

Creating your replication schemes with scripts can save you time and help you avoid mistakes. This section provides some suggestions for automating the creation of replication schemes using Perl.

Consider the general workload bidirectional scheme shown in Example 9-32. Entering the element description for the five tables, ttuser.accounts, ttuser.sales, ttuser.orders, ttuser.inventory, and ttuser.customers, would be tedious and error-prone if done manually.

Example 9-32 General workload bidirectional replication scheme

CREATE REPLICATION bigscheme
ELEMENT elem_accounts_1 TABLE ttuser.accounts
  MASTER westds ON "westcoast"
  SUBSCRIBER eastds ON "eastcoast"
ELEMENT elem_accounts_2 TABLE ttuser.accounts
  MASTER eastds ON "eastcoast"
  SUBSCRIBER westds ON "westcoast"
ELEMENT elem_sales_1 TABLE ttuser.sales
  MASTER westds ON "westcoast"
  SUBSCRIBER eastds ON "eastcoast"
ELEMENT elem_sales_2 TABLE ttuser.sales
  MASTER eastds ON "eastcoast"
  SUBSCRIBER westds ON "westcoast"
ELEMENT elem_orders_1 TABLE ttuser.orders
  MASTER westds ON "westcoast"
  SUBSCRIBER eastds ON "eastcoast"
ELEMENT elem_orders_2 TABLE ttuser.orders
  MASTER eastds ON "eastcoast"
  SUBSCRIBER westds ON "westcoast"
ELEMENT elem_inventory_1 TABLE ttuser.inventory
  MASTER westds ON "westcoast"
  SUBSCRIBER eastds ON "eastcoast"
ELEMENT elem_inventory_2 TABLE ttuser.inventory
  MASTER eastds ON "eastcoast"
  SUBSCRIBER westds ON "westcoast"
ELEMENT elem_customers_1 TABLE ttuser.customers
  MASTER westds ON "westcoast"
  SUBSCRIBER eastds ON "eastcoast"
ELEMENT elem_customers_2 TABLE ttuser.customers
  MASTER eastds ON "eastcoast"
  SUBSCRIBER westds ON "westcoast";

It is often more convenient to automate the process of writing a replication scheme with a script. For example, the Perl script shown in Example 9-33 can be used to build the scheme shown in Example 9-32.

Example 9-33 Using a Perl script to create a replication scheme

@tables = qw(
  ttuser.accounts
  ttuser.sales
  ttuser.orders
  ttuser.inventory
  ttuser.customers
);

print "CREATE REPLICATION bigscheme";

foreach $table (@tables) {
  $element = $table;
  $element =~ s/ttuser\./elem_/;  # ttuser.accounts -> elem_accounts

  print "\n";
  print " ELEMENT $element\_1 TABLE $table\n";
  print " MASTER westds ON \"westcoast\"\n";
  print " SUBSCRIBER eastds ON \"eastcoast\"\n";
  print " ELEMENT $element\_2 TABLE $table\n";
  print " MASTER eastds ON \"eastcoast\"\n";
  print " SUBSCRIBER westds ON \"westcoast\"";
 }
print ";\n";

The @tables array shown in Example 9-33 can be obtained from some other source, such as a database. For example, you can use ttIsql and grep in a Perl statement to generate a @tables array for all of the tables in the WestDSN database with the owner name repl:

@tables = `ttIsql -e "tables; quit" WestDSN
           | grep " REPL\."`;

Example 9-34 shows a modified version of the script in Example 9-33 that creates a replication scheme for all of the repl tables in the WestDSN database. (Note that some substitution may be necessary to remove extra spaces and line feeds from the grep output.)

Example 9-34 Perl script to create a replication scheme for all tables in WestDSN

@tables = `ttIsql -e "tables; quit" WestDSN
           | grep " REPL\."`;

print "CREATE REPLICATION bigscheme";

foreach $table (@tables) {
  $table =~ s/^\s*//; # Remove extra spaces
  $table =~ s/\n//; # Remove line feeds
  $element = $table;
  $element =~ s/repl\./elem_/i;  # case-insensitive: ttIsql lists tables as REPL.NAME

  print "\n";
  print " ELEMENT $element\_1 TABLE $table\n";
  print " MASTER westds ON \"westcoast\"\n";
  print " SUBSCRIBER eastds ON \"eastcoast\"\n";
  print " ELEMENT $element\_2 TABLE $table\n";
  print " MASTER eastds ON \"eastcoast\"\n";
  print " SUBSCRIBER westds ON \"westcoast\"";
 }
print ";\n";

11 Managing Database Failover and Recovery

This chapter applies to all replication schemes, including active standby pairs. However, TimesTen integration with Oracle Clusterware is the best way to monitor active standby pairs. See Chapter 7, "Using Oracle Clusterware to Manage Active Standby Pairs".

This chapter includes these topics:

Overview of database failover and recovery

A fundamental element in the design of a highly available system is the ability to recover quickly from a failure. Failures may be related to hardware problems, such as system or network failures. Software failures include operating system failure, application failure, database failure, and operator error.

Your replicated system must employ a cluster manager or custom software to detect such failures and, in the event of a failure involving a master database, redirect the user load to one of its subscribers. The focus of this discussion is on the TimesTen mechanisms that an application or cluster manager can use to recover from failures.

Unless the replication scheme is configured to use the return twosafe service, TimesTen replicates updates only after the original transaction commits to the master database. If a subscriber database is inoperable or communication to a subscriber database fails, updates at the master are not impeded. During outages at subscriber systems, updates intended for the subscriber are saved in the TimesTen transaction log.


Note:

The procedures described in this chapter require the ADMIN privilege.

General failover and recovery procedures

The procedures for managing failover and recovery depend primarily on:

  • The replication scheme

  • Whether the failure occurred on a master or subscriber database

  • Whether the threshold for the transaction log on the master is exhausted before the problem is resolved and the databases reconnected

Subscriber failures

In a default asynchronous replication scheme, if a subscriber database becomes inoperable or communication to a subscriber database fails, updates at the master are not impeded and the cluster manager does not have to take any immediate action.


Note:

If the failed subscriber is configured to use a return service, you must first disable return service blocking, as described in "Managing return service timeout errors and replication state changes".

During outages at subscriber systems, updates intended for the subscriber are saved in the transaction log on the master. If the subscriber agent reestablishes communication with its master before the master reaches its FAILTHRESHOLD, the updates held in the log are automatically transferred to the subscriber and no further action is required. See "Setting the log failure threshold" for details on how to establish the FAILTHRESHOLD value for the master database.

If the FAILTHRESHOLD is exceeded, the master sets the subscriber to the failed state and it must be recovered, as described in "Recovering a failed database". Any application that connects to the failed subscriber receives a tt_ErrReplicationInvalid (8025) warning indicating that the database has been marked failed by a replication peer.
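
You can also check the state of a replication peer from the master. As a minimal sketch, assuming a subscriber database named subscriberds, the ttReplicationStatus built-in procedure returns status information for the peer, including its state, which reports failed once the threshold has been exceeded:

Command> CALL ttReplicationStatus('subscriberds');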

An application can use the ODBC SQLGetInfo function to check if the subscriber database it is connected to has been set to the failed state. The SQLGetInfo function includes a TimesTen-specific infotype, TT_REPLICATION_INVALID, that returns a 32-bit integer value of '1' if the database is failed, or '0' if not failed. Since the infotype TT_REPLICATION_INVALID is specific to TimesTen, all applications using it need to include the timesten.h file in addition to the other ODBC include files.

Example 11-1 Checking whether a database has been set to the failed state

Check if the database identified by the hdbc handle has been set to the failed state.

#include <timesten.h>  /* defines the TimesTen-specific TT_REPLICATION_INVALID infotype */

SQLINTEGER retStatus;

/* retStatus is set to 1 if the database has been marked failed
   by a replication peer, or 0 otherwise */
SQLGetInfo(hdbc, TT_REPLICATION_INVALID,
          (PTR)&retStatus, NULL, NULL);

Master failures

The cluster manager plays a more central role if a failure involves the master database. If a master database fails, the cluster manager must detect this event and redirect the user load to one of its surviving databases. This surviving subscriber then becomes the master, which continues to accept transactions and replicates them to the other surviving subscriber databases. If the failed master and surviving subscriber are configured in a bidirectional manner, transferring the user load from a failed master to a subscriber does not require that you make any changes to your replication scheme. However, when using unidirectional replication or complex schemes, such as those involving propagators, you may have to issue one or more ALTER REPLICATION statements to reconfigure the surviving subscriber as the "new master" in your scheme. See "Replacing a master database" for an example.

When the problem is resolved, if you are not using the bidirectional configuration or the active standby pair described in "Automatic catch-up of a failed master database", you must recover the master database as described in "Recovering a failed database".

After the database is back online, the cluster manager can either transfer the user load back to the original master or reestablish it as a subscriber for the "acting master."

Automatic catch-up of a failed master database

The master catch-up feature automatically restores a failed master database from a subscriber database without the need to invoke the ttRepAdmin -duplicate operation described in "Recovering a failed database".

The master catch-up feature needs no configuration, but it can be used only in the following types of configurations:

  • A single master replicated in a bidirectional manner to a single subscriber

  • An active standby pair that is configured with RETURN TWOSAFE

For replication schemes that are not active standby pairs, the following must be true:

  • The ELEMENT type is DATASTORE.

  • TRANSMIT NONDURABLE or RETURN TWOSAFE must be enabled.

  • All replicated transactions must be committed nondurably. They must be transmitted to the remote database before they are committed on the local database. For example, if the replication scheme is configured with RETURN TWOSAFE BY REQUEST and any transaction is committed without first enabling RETURN TWOSAFE, master catch-up may not occur after a failure of the master.
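
As a minimal sketch, the following bidirectional scheme satisfies these requirements; the database names ds1 and ds2 and the hosts host1 and host2 are illustrative:

CREATE REPLICATION catchupscheme
ELEMENT e1 DATASTORE
  MASTER ds1 ON "host1"
  SUBSCRIBER ds2 ON "host2" RETURN TWOSAFE
ELEMENT e2 DATASTORE
  MASTER ds2 ON "host2"
  SUBSCRIBER ds1 ON "host1" RETURN TWOSAFE;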

When the master replication agent is restarted after a crash or invalidation, any lost transactions that originated on the master are automatically reapplied from the subscriber to the master (or from the standby to the active in an active standby pair). No connections are allowed to the master database until it has completely caught up with the subscriber. Applications attempting to connect to a database during the catch-up phase receive an error that indicates a catch-up is in progress. The only exception is connecting to a database with the ForceConnect first connection attribute set in the DSN.
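
For example, an application that must connect while catch-up is in progress could set the attribute in its connection string; the DSN name here is illustrative:

connect "DSN=recoveringds;ForceConnect=1";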

When the catch-up phase is complete, the application can connect to the database. An SNMP trap and message to the system log indicate the completion of the catch-up phase.

If one of the databases is invalidated or crashes during the catch-up process, the catch-up phase is resumed when the database comes back up.

Master catch-up can fail under these circumstances:

  • The failed database is offline long enough for the failure threshold to be exceeded on the subscriber database (the standby database in an active standby pair).

  • Dynamic load operations are taking place on the active database in an active standby pair when the failure occurs. RETURN TWOSAFE is not enabled for dynamic load operations even though it is enabled for the active database. The database failure causes the dynamic load transactions to be trapped and RETURN TWOSAFE to fail.

When master catch-up is required for an active standby pair

TimesTen error 8110 (Connection not permitted. This store requires Master Catchup.) indicates that the standby database is ahead of the active database and that master catch-up must occur before replication can resume.

When using master catch-up with an active standby pair, the standby database must be failed over to become the new active database. If the old active database can recover, it becomes the new standby database. If it cannot recover, the old active database must be destroyed and the new standby database must be created by duplicating the new active database. See "When replication is return twosafe" for more information about recovering from a failure of the active database when RETURN TWOSAFE is configured (required for master catch-up).

In an active standby pair with RETURN TWOSAFE configured, it is possible to have a trapped transaction. A trapped transaction occurs when the new standby database has a transaction present that is not present on the new active database after failover. Error 16227 (Standby store has replicated transactions not present on the active) is one indication of trapped transactions. You can verify the number of trapped transactions by checking the number of records in replicated tables on each database during the manual recovery process. For example, enter a statement similar to the following:

SELECT COUNT(*) FROM reptable;

When there are trapped transactions, perform these tasks for recovery:

  1. Use the ttRepStateSet built-in procedure to change the state on the standby database to 'ACTIVE'.

  2. Destroy the old active database.

  3. Use ttRepAdmin -duplicate to create a new standby database from the new active database, which has all of the transactions. See "Duplicating a database".
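As an illustration, the three tasks map to commands like the following sketch, which assumes an active standby pair master1/master2 in which master2 is the surviving standby and master1 resides in /tmp on its host.

On master2 (the new active):

Command> CALL ttRepStateSet('ACTIVE');

From the operating system on the host of the failed database:

> ttDestroy /tmp/master1
> ttRepAdmin -dsn master1 -duplicate -from master2 -host "host2" -uid ttuser

You are prompted for the password of ttuser.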

Failures in bidirectional distributed workload schemes

You can distribute the workload over multiple bidirectionally replicated databases, each of which serves as both master and subscriber. When recovering a master/subscriber database, the log on the failed database may present problems when you restart replication. See "Bidirectional distributed workload scheme".

If a database in a distributed workload scheme fails and work is shifted to a surviving database, the information in the surviving database becomes more current than that in the failed database. If replication is restarted at the failed system before the log failure threshold has been reached on the surviving database, then both databases attempt to update one another with the contents of their transaction logs. In this case, the older updates in the transaction log on the failed database may overwrite more recent data on the surviving system.

There are two ways to recover in such a situation:

  • If the timestamp conflict resolution rules described in Chapter 14, "Resolving Replication Conflicts" are sufficient to guarantee consistency for your application, then you can restart the failed system and allow the updates from the failed database to propagate to the surviving database. The conflict resolution rules prevent more recent updates from being overwritten.

  • Re-create the failed database, as described in "Recovering a failed database". If the database must be re-created, the updates in the log on the failed database that were not received by the surviving database cannot be identified or restored. In the case of several surviving databases, you must select which of the surviving databases is to be used to re-create the failed database. It is possible that at the time the failed database is re-created, the selected surviving database may not have received all updates from the other surviving databases. This results in diverging databases. The only way to prevent this situation is to re-create the other surviving databases from the selected surviving database.

Network failures

In the event of a temporary network failure, you do not need to perform any specific action to continue replication. The replication agents that were in communication attempt to reconnect every few seconds. If the agents reconnect before the master database runs out of log space, the replication protocol makes sure they do not miss or repeat any replication updates. If the network is unavailable for a longer period and the log failure threshold has been exceeded for the master log, you need to recover the subscriber as described in "Recovering a failed database".

Failures involving sequences

After a network link failure, if replication is allowed to recover by replaying queued logs, you do not need to take any action.

However, if the failed host was down for a significant amount of time, you must use the ttRepAdmin -duplicate command to repopulate the database on the failed host with transactions from the surviving host, as sequences are not rolled back during failure recovery. In this case, the ttRepAdmin -duplicate command copies the sequence definitions from one database to the other.

Recovering a failed database

If the databases are configured in a bidirectional replication scheme, a failed master database is automatically brought up to date from the subscriber. See "Automatic catch-up of a failed master database". Automatic catch-up also applies to recovery of master databases in active standby pairs.

If a restarted database cannot be recovered from its master's transaction log so that it is consistent with the other databases in the replicated system, you must re-create the database from one of its replication peers. Use command line utilities or the TimesTen Utility C functions. See "Recovering a failed database from the command line" and "Recovering a failed database from a C program".


Note:

It is not necessary to re-create the DSN for the failed database.

In the event of a subscriber failure, if any tables are configured with a return service, commits on those tables in the master database are blocked until the return service timeout period expires. To avoid this, you can establish a return service failure and recovery policy in your replication scheme, as described in "Managing return service timeout errors and replication state changes". If you are using the RETURN RECEIPT service, an alternative is to use ALTER REPLICATION and set the NO RETURN attribute to disable return receipt until the subscriber is restored and caught up. Then you can submit another ALTER REPLICATION statement to re-establish RETURN RECEIPT.
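For illustration, assuming a scheme named repscheme whose element e replicates to subscriberds, the two ALTER REPLICATION steps might look like this sketch (see "ALTER REPLICATION" in Oracle TimesTen In-Memory Database SQL Reference for the exact syntax):

ALTER REPLICATION repscheme
  ALTER ELEMENT e SUBSCRIBER subscriberds SET NO RETURN;

-- after the subscriber is restored and caught up:
ALTER REPLICATION repscheme
  ALTER ELEMENT e SUBSCRIBER subscriberds SET RETURN RECEIPT;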

Recovering a failed database from the command line

If the databases are fully replicated, you can use the ttDestroy utility to remove the failed database from memory and ttRepAdmin -duplicate to re-create it from a surviving database. If the database contains any cache groups, you must also use the -keepCG option of ttRepAdmin. See "Duplicating a database".

Example 11-2 Recovering a failed database

To recover a failed database, subscriberds, from a master named masterds on host system1, enter:

> ttdestroy /tmp/subscriberds

> ttrepadmin -dsn subscriberds -duplicate -from masterds -host "system1" -uid ttuser

You will be prompted for the password of ttuser.


Note:

ttRepAdmin -duplicate is supported only between TimesTen releases that are identical or that differ only in their patch level. The major and minor release numbers must be the same.

After re-creating the database with ttRepAdmin -duplicate, the first connection to the database reloads it into memory. To improve performance when duplicating large databases, you can avoid the reload step by using the ttRepAdmin -ramload option to keep the database in memory after the duplicate operation.

Example 11-3 Keeping a database in memory when recovering it

To recover a failed database, subscriberds, from a master named masterds on host system1, and to keep the database in memory and restart replication after the duplicate operation, enter:

> ttdestroy /tmp/subscriberds

> ttrepadmin -dsn subscriberds -duplicate -ramload -from masterds -host "system1"
-uid ttuser -setmasterrepstart

You will be prompted for the password of ttuser.


Note:

After duplicating a database with the ttRepAdmin -duplicate -ramLoad options, the RAM policy for the database is manual until explicitly reset by ttAdmin -ramPolicy or the ttRamPolicy function.

Recovering a failed database from a C program

You can use the C functions provided in the TimesTen utility library to recover a failed database programmatically.

If the databases are fully replicated, you can use the ttDestroyDataStore function to remove the failed database and the ttRepDuplicateEx function to re-create it from a surviving database.

Example 11-4 Recovering and starting a failed database

To recover and start a failed database, named subscriberds on host system2, from a master, named masterds on host system1, enter:

int                 rc;
ttUtilHandle        utilHandle;   /* utility library handle, assumed already
                                     allocated with ttUtilAllocEnv */
ttRepDuplicateExArg arg;

memset( &arg, 0, sizeof( arg ) );
arg.size = sizeof( ttRepDuplicateExArg );
arg.flags = TT_REPDUP_REPSTART | TT_REPDUP_RAMLOAD;
arg.uid = "ttuser";
arg.pwd = "ttuser";
arg.localHost = "system2";
rc = ttDestroyDataStore( utilHandle, "subscriberds", 30 );
rc = ttRepDuplicateEx( utilHandle, "DSN=subscriberds",
                       "masterds", "system1", &arg );

In this example, the timeout for the ttDestroyDataStore operation is 30 seconds. The last parameter of the ttRepDuplicateEx function is an argument structure containing two flags:

  • TT_REPDUP_REPSTART to set the subscriberds database to the start state after the duplicate operation is completed

  • TT_REPDUP_RAMLOAD to set the RAM policy to manual and keep the database in memory


Note:

When the TT_REPDUP_RAMLOAD flag is used with ttRepDuplicateEx, the RAM policy for the duplicate database is manual until explicitly reset by the ttRamPolicy function or ttAdmin -ramPolicy.

See "TimesTen Utility API" in Oracle TimesTen In-Memory Database C Developer's Guide for the complete list of the functions provided in the TimesTen C language utility library.

Recovering nondurable databases

If your database is configured with the TRANSMIT NONDURABLE option in a bidirectional configuration, you do not need to take any action to recover a failed master database. See "Automatic catch-up of a failed master database".

For other types of configurations, if the master database configured with the TRANSMIT NONDURABLE option fails, you must use ttRepAdmin -duplicate or ttRepDuplicateEx to re-create the master database from the most current subscriber database. If the application attempts to reconnect to the master database without first performing the duplicate operation, the replication agent recovers the database, but any attempt to connect results in an error that advises you to perform the duplicate operation. To avoid this error, the application must reconnect with the ForceConnect first connection attribute set to 1.
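For example, a connection string along these lines (the DSN name is hypothetical) lets an application connect before the duplicate operation has been performed:

> ttIsql -connStr "DSN=masterDSN;ForceConnect=1"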

Writing a failure recovery script

Upon detecting a failure, the cluster manager should invoke a script that effectively executes the procedure shown by the pseudocode in Example 11-5.

Example 11-5 Failure recovery pseudocode

Detect problem {
       if (Master == unavailable) {
          FailedDatabase = Master
          FailedDSN = Master_DSN
          SurvivorDatabase = Subscriber
          switch users to SurvivorDatabase
       }
       else {
          FailedDatabase = Subscriber
          FailedDSN = Subscriber_DSN
          SurvivorDatabase = Master
       }
}
Fix problem....
If (Problem resolved) {
       Get state for FailedDatabase
       if (state == "failed") {
          ttDestroy FailedDatabase
          ttRepAdmin -dsn FailedDSN -duplicate
                  -from SurvivorDatabase -host SurvivorHost
                  -setMasterRepStart
                  -uid ttuser
                  -pwd ttuser
       }
       else {
          ttAdmin -repStart FailedDSN
       }
       while (backlog != 0) {
          wait
       }
}

Switch users back to Master.

This procedure applies to a failure of either the master or the subscriber database. If the master fails, you may lose some transactions.


Contents

Title and Copyright Information

Preface

What's New

1 Overview of TimesTen Replication

2 Getting Started

3 Defining an Active Standby Pair Replication Scheme

4 Administering an Active Standby Pair Without Cache Groups

5 Administering an Active Standby Pair with Cache Groups

6 Altering an Active Standby Pair

7 Using Oracle Clusterware to Manage Active Standby Pairs

8 TimesTen Configuration Attributes for Oracle Clusterware

9 Defining Replication Schemes

10 Setting Up a Replicated System

11 Managing Database Failover and Recovery

12 Monitoring Replication

13 Altering Replication

14 Resolving Replication Conflicts

Index


10 Setting Up a Replicated System

This chapter describes how to set up and start replication. All of the topics in this chapter apply to replication schemes that are not active standby pairs. Some of the topics in this chapter also apply to active standby pairs.

To set up an active standby pair, see Chapter 3, "Defining an Active Standby Pair Replication Scheme".

This chapter includes the following topics:

Configuring the network

This section applies to both active standby pairs and other replication schemes. It describes some of the issues to consider when replicating TimesTen data over a network. The topics include:

Network bandwidth requirements

The network bandwidth required for TimesTen replication depends on the bulk and frequency of the data being replicated. This discussion explores the types of transactions that characterize the high and low ends of the data range and the network bandwidth required to replicate the data between TimesTen databases.

Table 10-1 provides guidelines for calculating the size of replicated records.

Table 10-1 Replicated record sizes

Record Type          Size
Begin transaction    48 bytes
Update               116 bytes
                     + 18 bytes per column updated
                     + size of old column values
                     + size of new column values
                     + size of the primary key or unique key
Delete               104 bytes
                     + size of the primary key or unique key
Insert               104 bytes
                     + size of the primary key or unique key
                     + size of inserted row
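For example, a transaction that updates a single 100-byte column in one row with an 8-byte primary key replicates roughly 48 bytes for the begin-transaction record plus 116 + 18 + 100 + 100 + 8 = 342 bytes for the update record, or about 390 bytes in total.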


Transactions are sent between replicated databases in batches. A batch is created whenever there is no more data in the transaction log buffer in the master database, or when the current batch is roughly 256K bytes. See "Copying updates between databases" for more information.

Replication in a WAN environment

TimesTen replication uses the TCP/IP protocol, which is not optimized for a WAN environment. You can improve replication performance over a WAN by installing a third-party "TCP stack" product. If replacing the TCP stack is not a feasible solution, you can reduce the amount of network traffic that the TCP/IP protocol has to deal with by setting the COMPRESS TRAFFIC attribute in the CREATE ACTIVE STANDBY PAIR or CREATE REPLICATION statement. See "Compressing replicated traffic" for details.

See installation information for your platform in Oracle TimesTen In-Memory Database Installation Guide for information about changing TCP/IP kernel parameters for better performance.

Configuring host IP addresses

In a replication scheme, you need to identify the name of the host on which your database resides. The operating system translates this host name to one or more IP addresses. This section describes how to configure replication so that it uses the correct host names and IP addresses for each host.

This section includes these topics:

Identifying database hosts and network interfaces using the ROUTE clause

When specifying the host for a database in a replication element, always use the name returned by the hostname command; replication uses this host name to verify that the current host is involved in the replication scheme. You cannot create a replication scheme that does not include the current host.

If a host contains multiple network interfaces (with different IP addresses), you should specify which interfaces are to be used by replication using the ROUTE clause. You must specify a priority for each interface. Replication tries to first connect using the address with the highest priority, and if a connection cannot be established, it tries the remaining addresses in order of priority until a connection is established. If a connection to a host fails while using one IP address, replication attempts to re-connect (or fall back) to another IP address, if more than one address has been specified in the ROUTE clause.
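For example, the following sketch (database names, host names, and addresses are hypothetical) directs replication to a private interconnect first and falls back to a second network:

CREATE REPLICATION repscheme
  ELEMENT e DATASTORE
    MASTER repdb1 ON "host1"
    SUBSCRIBER repdb2 ON "host2"
  ROUTE MASTER repdb1 ON "host1" SUBSCRIBER repdb2 ON "host2"
    MASTERIP "192.168.1.100" PRIORITY 1
    SUBSCRIBERIP "192.168.1.200" PRIORITY 1
    MASTERIP "10.10.98.100" PRIORITY 2
    SUBSCRIBERIP "10.10.98.200" PRIORITY 2;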


Note:

Addresses for the ROUTE clause may be specified as either host names or IP addresses. However, if your host has more than one IP address configured for a given host name, you should only configure the ROUTE clause using the IP addresses, in order to ensure that replication uses only the IP addresses that you intend.

See "Configuring network operations" for more information.

Identifying database hosts on UNIX without using the ROUTE clause

When possible, you should use the ROUTE clause of a replication scheme to identify database hosts and the network interfaces to use for replication. However, if you have a legacy replication configuration that does not use the ROUTE clause, this section explains how to configure operating system and DNS files for a replication host with multiple network interfaces.

If a host contains multiple network interfaces (with different IP addresses) and replication is not configured with a ROUTE clause, TimesTen replication tries to connect to the IP addresses in the same order as returned by the gethostbyname call. It will try to connect using the first address; if a connection cannot be established, it tries the remaining addresses in order until a connection is established. TimesTen replication uses this same sequence each time it establishes a new connection to a host. If a connection to a host fails on one IP address, TimesTen replication attempts to re-connect (or fall back) to another IP address for the host in the same manner described above.

There are two basic ways you can configure a host to use multiple IP addresses on UNIX platforms: DNS or the /etc/hosts file.


Note:

If you have multiple network interface cards (NICs), be sure that "multi on" is specified in the /etc/host.conf file. Otherwise, gethostbyname will not return multiple addresses.

For example, if your machine has two NICs, use the following syntax for your /etc/hosts file:

127.0.0.1  localhost
IP_address_for_NIC_1  official_hostname optional_alias
IP_address_for_NIC_2  official_hostname optional_alias

The host name official_hostname is the name returned by the hostname command.

When editing the /etc/hosts file, keep in mind that:

  • You must log in as root to change the /etc/hosts file.

  • There should only be one line per IP address.

  • There can be multiple alias names on each line.

  • When there are multiple IP addresses for the same host name, they must be on consecutive lines.

  • The host name can be up to 30 characters long.

For example, the following entry in the /etc/hosts file on a UNIX platform describes a server named Host1 with two IP addresses:

127.0.0.1        localhost
10.10.98.102     Host1
192.168.1.102    Host1

To specify the same configuration for DNS, your entry in the domain zone file would look like:

Host1     IN     A     10.10.98.102
          IN     A     192.168.1.102

In either case, you only need to specify Host1 as the host name in your replication scheme and replication will use the first available IP address when establishing a connection.

In an environment in which multiple IP addresses are used, you can also assign multiple host names to a single IP address in order to restrict a replication connection to a specific IP address. For example, you might have an entry in your /etc/hosts file that looks like:

127.0.0.1        localhost
10.10.98.102     Host1
192.168.1.102    Host1 RepHost1

or a DNS zone file that looks like:

Host1     IN     A     10.10.98.102
          IN     A     192.168.1.102
RepHost1  IN     A     192.168.1.102

If you want to restrict replication connections to IP address 192.168.1.102 for this host, you can specify RepHost1 as the host name in your replication scheme. Another option is to simply specify the IP address as the host name in the CREATE REPLICATION statement used to configure your replication scheme.

Host name resolution on Windows

If a replication configuration is specified using host names rather than IP addresses, replication must be able to translate host names of peers into IP addresses. For this to happen efficiently on Windows, make sure each Windows machine is set up to query either a valid WINS server or a valid DNS server that has correct information about the hosts on the network. In the absence of such servers, static host-to-IP entries can be entered in either:

%windir%\system32\drivers\etc\hosts

or

%windir%\system32\drivers\etc\lmhosts

Without any of these options, a Windows machine resorts to broadcasting to detect peer nodes, which is extremely slow.

You may also encounter extremely slow host name resolution if the Windows machine cannot communicate with the defined WINS servers or DNS servers, or if the host name resolution set up is incorrect on those servers. Use the ping command to test whether a host can be efficiently located. The ping command responds immediately if host name resolution is set up properly.


Note:

You must be consistent in identifying a database host in a replication scheme. Do not identify a host using its IP address for one database and then use its host name for the same or another database.

User-specified addresses for TimesTen daemons and subdaemons

By default, the TimesTen main daemon, all subdaemons and all agents use any available address to listen on a socket for requests. You can modify the ttendaemon.options file to specify an address for communication among the agents and daemons by including a -listenaddr option. See "Managing TimesTen daemon options" in Oracle TimesTen In-Memory Database Operations Guide for details.

Suppose that your machine has two NICs whose addresses are 10.10.10.100 and 10.10.11.200. The loopback address is 127.0.0.1. Then keep in mind the following as it applies to the replication agent:

  • If you do not set the -listenaddr option in the ttendaemon.options file, then any process can talk to the daemons and agents.

  • If you set -listenaddr to 10.10.10.100, then any process on the local host or the 10.10.10 net can talk to daemons and agents on 10.10.10.100. No processes on the 10.10.11 net can talk to the daemons and agents on 10.10.10.100.

  • If you set -listenaddr to 127.0.0.1, then only processes on the local host can talk to the daemons and agents. No processes on other hosts can talk to the daemons and agents.
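For example, the second case above corresponds to a line like the following in the ttendaemon.options file (a sketch using the address from the scenario above):

-listenaddr 10.10.10.100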

Identifying the local host of a replicated database

Ordinarily, TimesTen replication is able to identify the hosts involved in a replication configuration using normal operating system host name resolution methods. However, in some rare instances, if the host has an unusual host name configuration, TimesTen is unable to determine that the local host matches the host name as specified in the replication scheme. When this occurs, you receive error 8191, "This store is not involved in a replication scheme," when attempting to start replication using ttRepStart or ttAdmin -repStart. The ttHostNameSet built-in procedure may be used in this instance to explicitly indicate to TimesTen that the current database is in fact the database specified in the replication scheme. See "ttHostNameSet" in Oracle TimesTen In-Memory Database Reference for more information.
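For example, if the replication scheme identifies the host as Host1 but local name resolution reports a different name, a call like the following sketch identifies the current database's host before you start replication:

Command> CALL ttHostNameSet('Host1');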

Setting up the replication environment

The topics related to setting up your replication environment include:

Establishing the databases

You can replicate one or more tables on any existing database. If the database you want to replicate does not yet exist, you must first create one, as described in "Managing TimesTen Databases" in Oracle TimesTen In-Memory Database Operations Guide.

After you have identified or created the master database, create a DSN definition for the subscriber database on the target host. Set the connection attributes for the master and subscriber databases as described in "Connection attributes for replicated databases".

After you have defined the DSN for the subscriber, you can populate the subscriber database with the tables to be replicated from the master in one of two ways:

  • Connect to the database and use SQL statements to create new tables in the subscriber database that match those to be replicated from the master.

  • Use the ttRepAdmin -duplicate utility to copy the entire contents of the master database to the subscriber. See "Duplicating a master database to a subscriber".

Connection attributes for replicated databases

Databases that replicate to each other must have the same DatabaseCharacterSet data store attribute. TimesTen does not perform any character set conversion between replicated databases.

If you wish to configure parallel replication, see "Configuring parallel replication" for information about setting the ReplicationParallelism and ReplicationApplyOrdering data store attributes.

See "Setting connection attributes for logging" for recommendations for managing the replication log files.

It is possible to replicate between databases with different settings for the TypeMode data store attribute. However, you must make sure that the underlying data type for each replicated column is the same on each node. See "TypeMode" in Oracle TimesTen In-Memory Database Reference for more information.

In an active standby pair, use the ReceiverThreads first connection attribute to increase the number of threads that apply changes from the active database to the standby database from 1 to 2. If you set ReceiverThreads to 2 on the standby, you should also set it to 2 on the active to maintain increased throughput if there is a failover.

You can also set ReceiverThreads to 2 on one or more read-only subscribers in an active standby pair to increase replication throughput from the standby database.

Databases must be hosted on systems that have two or more CPUs to take advantage of setting this attribute to 2.
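For example, because ReceiverThreads is a first connection attribute, it can be supplied in the connection string of the first connection to each database (the DSN name is hypothetical):

> ttIsql -connStr "DSN=standbyDSN;ReceiverThreads=2"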

Configuring parallel replication

By default, replication is performed with a single thread where the nodes in a replication scheme have one log reader, or transmitter thread, on the source database, and one applier, or receiving thread, on the target database. You can increase your performance by configuring parallel replication, which configures multiple threads for sending updates from the source database to the target database and for applying the updates on the target database.


Note:

If you enable parallel replication, you cannot execute both DDL and DML statements in the same transaction.

There are two types of parallel replication, each of which is configured with the ReplicationApplyOrdering and ReplicationParallelism data store creation attributes. Because these are data store attributes, they must be set when the database is created and cannot be modified afterward.


Note:

All databases within the replication scheme that use parallel replication must be configured identically with the same type of parallel replication and the same number of threads or tracks.

The only time you can have different values for parallel replication attributes is during an upgrade. For details, see "Performing an upgrade on databases that use parallel replication" in the Oracle TimesTen In-Memory Database Installation Guide.


The following sections describe both options for parallel replication:

Configuring automatic parallel replication

Automatic parallel replication enables you to configure multiple threads that act in parallel to replicate and apply transactional changes to nodes in a replication scheme. Automatic parallel replication enforces transactional dependencies and applies changes in commit order.

Automatic parallel replication is enabled by default with ReplicationApplyOrdering=0. To configure parallel replication, set ReplicationParallelism to a number from 2 to 32. The number cannot exceed half the value of LogBufParallelism. This number indicates the number of transmitter threads on the source database and the number of receiver threads on the target database. The default for ReplicationParallelism is 1, which indicates single-threaded replication.


Note:

If ReplicationParallelism is greater than 1, the LogBufParallelism first connection attribute must be an integral multiple of ReplicationParallelism.

If the replication scheme is an active standby pair that replicates AWT cache groups, the settings for ReplicationApplyOrdering, ReplicationParallelism and the CacheAwtParallelism data store attributes determine how many threads are used to apply changes in the TimesTen cache tables to the corresponding Oracle tables. See "Configuring parallel propagation to Oracle tables" in Oracle In-Memory Database Cache User's Guide for more information.

For more information on these data store attributes, see "ReplicationParallelism", "ReplicationApplyOrdering", and "LogBufParallelism" in the Oracle TimesTen In-Memory Database Reference.
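As an illustration, automatic parallel replication with four replication threads might be configured at database creation time with a connection string like the following sketch (the DSN and values are hypothetical; LogBufParallelism must be an integral multiple of ReplicationParallelism):

> ttIsql -connStr "DSN=sourceDSN;ReplicationApplyOrdering=0;ReplicationParallelism=4;LogBufParallelism=8"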

Configuring user-defined parallel replication for other replication schemes

If your application has predictable transactional dependencies and does not require that the commit order on the target database be the same as the order on the source database, you can increase replication throughput by using user-defined parallel replication, which enables you to manually divide work across different tracks.

User-defined parallel replication configures multiple threads for sending updates from the source database to the target database and for applying the updates on the target database. The application assigns transactions to tracks. The application specifies which track a transaction belongs to when the transaction starts on the source database. The transactions in each track are applied in the order in which they are received on the target database, but commit order is not maintained for transactions across the different tracks.


Note:

Use caution in assigning tracks to transactions that affect tables with foreign key relationships. If transactions on related tables are assigned to different tracks, one of the transactions can be lost because the transactions may be applied out of commit order.

In general, transactions that modify the same table should be assigned to the same replication track. In addition, updates that should be applied in order on the receiving side should use the same track. However, if all transactions insert to a particular table, they can be assigned to different tracks to increase replication throughput. You can split the workload for a table across multiple tracks with a key that ties a row to the same track.

Enable user-defined parallel replication by setting these data store attributes at database creation time:

  • Set ReplicationApplyOrdering to 1.

  • Set ReplicationParallelism to a number from 2 to 64. This number indicates the number of transmitter threads on the source database and the number of receiver threads on the target database. (The default, 1, indicates single-threaded replication; user-defined parallel replication requires at least two threads.)

In addition, the application needs to assign transactions to tracks by one of these methods:

  • Set the ReplicationTrack general connection attribute to a non-zero number. All transactions issued by the connection are assigned to this track. The value can be any number. TimesTen maps the ReplicationTrack number for this connection to one of the available parallel replication threads. Thus, the application can use any number to group transactions that should be applied in order. See "ReplicationTrack" in Oracle TimesTen In-Memory Database Reference.

  • Use the ALTER SESSION SQL statement to set the replication track number for the current connection. See "ALTER SESSION" in Oracle TimesTen In-Memory Database SQL Reference.

  • Use the TT_REPLICATION_TRACK ODBC connection option for the SQLSetConnectOption ODBC function. See "Features for use with replication" in Oracle TimesTen In-Memory Database C Developer's Guide.

  • Use the setReplicationTrack() method of the TimesTenConnection JDBC class. See "Features for use with replication" in Oracle TimesTen In-Memory Database Java Developer's Guide.

Use the ttConfiguration built-in procedure to return the replication track number for the current connection. Use the ttLogHolds built-in procedure to verify that multiple tracks are being used.
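For example, from ttIsql a connection can assign its transactions to a track and then confirm the setting (a sketch; the output shown is illustrative):

Command> ALTER SESSION SET replication_track = 2;
Session altered.
Command> CALL ttConfiguration('ReplicationTrack');
< ReplicationTrack, 2 >
1 row found.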

Restrictions on user-defined parallel replication

  • Do not configure user-defined parallel replication for tables that have an aging policy defined.

  • Databases configured for user-defined parallel replication cannot contain cache groups.

  • A database cannot be defined as a propagator when user-defined parallel replication is configured.

  • User-defined parallel replication is not supported for synchronous replication, including databases with the RETURN RECEIPT and RETURN TWOSAFE attributes.

  • Cross-release replication and migration from a database that does not have user-defined parallel replication enabled to a database that does have it enabled is not supported for releases from 11.2.1.6.0 up to, but not including, 11.2.1.8.0. It is supported for releases earlier than 11.2.1.6.0 and for release 11.2.1.8.0 and later. Users of releases from 11.2.1.6.0 to 11.2.1.8.0 can perform an upgrade by first applying an in-place patch release upgrade to 11.2.1.8.0. For details, see "Performing an upgrade when using parallel replication" in the Oracle TimesTen In-Memory Database Installation Guide.

Managing the transaction log on a replicated database

This section includes these topics:

About log buffer flushing

A dedicated subdaemon thread writes the contents of the log buffer to disk periodically. These writes may be synchronous or buffered. The subdaemon thread ensures that the system I/O buffer never fills up with more transaction log data than the value of the LogFileSize first connection attribute without being synchronized to the log buffer.

If the database is configured with LogFlushMethod=2, then all writes to the disk are synchronous writes and the data is durably written to disk before the write call returns. If the database is configured with LogFlushMethod=1, then the writes are buffered unless there is a specific request from an application for synchronous writes.

In addition to the periodic writes, an application can also trigger the subdaemon thread to write the buffer contents to disk. The following are cases where the application triggers a synchronous write to the disk:

  • When a transaction that requested a durable commit is committed. A transaction can request a durable commit by calling the ttDurableCommit built-in procedure or by having the DurableCommits connection attribute set to 1.

  • When the replication agent sends a batch of transactions to a subscriber and the master has been configured for replication with the TRANSMIT DURABLE attribute (the default).

  • When the replication agent periodically executes a durable commit, whether the master database is configured with TRANSMIT DURABLE or not.

Transactions are also written to disk durably when durable commits are configured as part of the return service failure policies and a failure has occurred.

The size of the log buffer has no influence on the ability of TimesTen to write data to disk under any of the circumstances listed above.

About transaction log growth on a master database

In databases that do not use replication, Transaction Log API (XLA), cache groups or incremental backup, unneeded records in the log buffer and unneeded transaction log files are purged each time a checkpoint is initiated, either by the automatic background checkpointing thread or by an application's call to the ttCkpt or ttCkptBlocking built-in procedures. In a replicated database, transactions remain in the log buffer and transaction log files until the master replication agent confirms they have been fully processed by the subscriber. Only then can the master consider purging them from the log buffer and transaction log files.

A master database transaction log can grow much larger than it would on an unreplicated database if there are changes to its subscriber state. When the subscriber is in the start state, the master can purge logged data after it receives confirmation that the information has been received by the subscriber. However, if a subscriber becomes unavailable or is in the pause state, the log on the master database cannot be flushed and the space used for logging can be exhausted. When the log space is exhausted, subsequent updates on the master database are aborted. Use the ttLogHolds built-in procedure to get information about replication log holds.

For more information about transaction log growth, see "Monitoring accumulation of transaction log files" in Oracle TimesTen In-Memory Database Operations Guide.

Setting connection attributes for logging

LogBufMB specifies the maximum size of the in-memory log buffer in megabytes. This buffer is flushed to a transaction log file on the disk when it becomes full. The minimum size for LogBufMB is 8 times the value of LogBufParallelism.

You need to establish enough disk space for the transaction log files. There are two settings that control the amount of disk space used by the log:

  • The LogFileSize setting in the DSN specifies the maximum size of a transaction log file. If logging requirements exceed this value, additional transaction log files with the same maximum size are created. If you set the LogFileSize to a smaller value than LogBufMB, TimesTen automatically increases the LogFileSize to match LogBufMB. For best performance, set LogBufMB and LogFileSize to their maximum values.

  • The log failure threshold setting specifies the maximum number of transaction log files allowed to accumulate before the master assumes a subscriber has failed. The threshold value is the number of transaction log files between the most recently written to transaction log file and the earliest transaction log file being held for the subscriber. For example, if the last record successfully received by all subscribers was in Log File 1 and the last log record written to disk is at the beginning of Log File 4, then replication is at least 2 transaction log files behind (the contents of Log Files 2 and 3). If the threshold value is 2, then the master sets the subscriber to the failed state after detecting the threshold value had been exceeded. This may take up to 10 seconds. See "Setting the log failure threshold" for more information.
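The log failure threshold is set with the FAILTHRESHOLD attribute in the STORE clause of the replication scheme. For example (a sketch with hypothetical database and host names):

CREATE REPLICATION repscheme
  ELEMENT e DATASTORE
    MASTER masterds ON "host1"
    SUBSCRIBER subscriberds ON "host2"
  STORE masterds ON "host1" FAILTHRESHOLD 10;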

Because transactions are logged to disk, you can use bookmarks to detect the log record identifiers of the update records that have been replicated to subscribers and those that have been written to disk. To view the location of the bookmarks for the subscribers associated with masterDSN, use the ttBookmark built-in procedure, as described in "Show replicated log records".
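For example (a sketch; the values shown are illustrative), ttBookmark returns three log file number/offset pairs, corresponding to the most recently written log record, the most recent log record forced to disk, and the replication hold position:

Command> CALL ttBookmark;
< 10, 928908, 10, 280540, 10, 927692 >
1 row found.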

If a subscriber goes down and then comes back up before the threshold is reached, then replication automatically "catches up" as the committed transactions in the transaction log files following the bookmark are automatically transmitted. However, if the threshold is exceeded, the master sets the subscriber to the failed state. A failed subscriber must use ttRepAdmin -duplicate to copy the master database and start over, as described in Chapter 11, "Managing Database Failover and Recovery".

See Oracle TimesTen In-Memory Database Reference for more information about TimesTen connection attributes, built-in procedures and utilities.

Applying a replication scheme to a database

Define the replication scheme as described in Chapter 9, "Defining Replication Schemes". Save the CREATE REPLICATION statement in a SQL file.

After you have described the replication scheme in a SQL file, you can execute the SQL on the database using the -f option to the ttIsql utility. The syntax is:

ttIsql -f schemefile.sql -connstr "dsn=DSN"

Example 10-1 Creating a replication scheme by executing a SQL file

If your replication scheme is described in a file called repscheme.sql, you can execute the file on a DSN, called masterDSN, by entering:

> ttIsql -f repscheme.sql -connstr "dsn=masterDSN"

Under most circumstances, you should apply the same scheme to all of the replicated databases. You must invoke a separate ttIsql command on each host to apply the replication scheme.

Example 10-2 Executing a SQL file on each host

If your scheme includes the databases masterDSN on host S1, subscriber1DSN on host S2, and subscriber2DSN on host S3, do the following:

On host S1, enter:

> ttIsql -f repscheme.sql -connstr "dsn=masterDSN"

On host S2, enter:

> ttIsql -f repscheme.sql -connstr "dsn=subscriber1DSN"

On host S3, enter:

> ttIsql -f repscheme.sql -connstr "dsn=subscriber2DSN"

You can also execute the SQL file containing your replication scheme from the ttIsql command line after connecting to a database. For example:

Command> run repscheme.sql;

Duplicating a master database to a subscriber

The simplest method for populating a subscriber database is to duplicate the contents of the master database. Duplicating a database in this manner is also essential when recovering a failed database, as described in Chapter 11, "Managing Database Failover and Recovery". You can use the ttRepAdmin -duplicate utility or the ttRepDuplicateEx C function to duplicate a database.

To duplicate a database, these conditions must be fulfilled:

  • The instance administrator performs the duplicate operation.

  • The instance administrator user name must be the same on both instances involved in the duplication.

  • You must provide the user name and password for a user with the ADMIN privilege on the source database.

  • The target DSN cannot include client/server attributes.

To duplicate the contents of a master database to a subscriber database, complete these tasks:

  1. Create or alter a replication scheme to include the new subscriber database and its host. See "Defining a replication scheme" or "Creating and adding a subscriber database".

  2. Apply the replication scheme to the master database. See "Applying a replication scheme to a database".

  3. Start the replication agent for the master database. See "Starting and stopping the replication agents".

  4. On the source database (the master), create a user and grant the ADMIN privilege to the user:

    CREATE USER ttuser IDENTIFIED BY ttuser;
    User created.
    
    GRANT admin TO ttuser;
    
  5. Assume the user name of the instance administrator is timesten. Logged in as timesten on the target host (the subscriber), duplicate database masterDSN on host1 to subscriber1DSN:

    ttRepAdmin -duplicate -from masterDSN -host host1 subscriber1DSN
    
    Enter internal UID at the remote datastore with ADMIN privileges: ttuser 
    Enter password of the internal Uid at the remote datastore:
    

    Enter ttuser when prompted for the password of the internal user at the remote database.


    Note:

    The host entry can be identified with either the name of the remote host or its TCP/IP address. If you identify hosts using TCP/IP addresses, you must identify the address of the local host (host1 in this example) by using the -localhost option.

    You can specify the local and remote network interfaces for the source and target hosts by using the -localIP and -remoteIP options of ttRepAdmin -duplicate. If you do not specify one or both network interfaces, TimesTen chooses them.

    For details, see "ttRepAdmin" in Oracle TimesTen In-Memory Database Reference.


  6. Start the replication agent on the subscriber database.

Configuring a large number of subscribers

A replication scheme can include up to 128 subscribers. A replication scheme with propagator databases can have up to 128 propagators, and each propagator can have up to 128 subscribers. An active standby pair replication scheme can include up to 127 read-only subscribers. If you are planning a replication scheme that includes a large number of subscribers, then ensure the following:

  • The log buffer size should result in the value of LOG_FS_READS in the SYS.MONITOR table being 0 or close to 0. This ensures that the replication agent does not have to read any log records from disk. If the value of LOG_FS_READS is increasing, then increase the log buffer size.

  • CPU resources are adequate. The replication agent on the master database spawns a thread for every subscriber database. Each thread reads and processes the log independently and needs adequate CPU resources to transmit to the subscriber database.
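For example, you can check the LOG_FS_READS counter described in the first item from ttIsql (the output shown is the ideal case):

Command> SELECT log_fs_reads FROM sys.monitor;
< 0 >
1 row found.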

Replicating databases across releases

Replication functions across releases only if the database of the more recent version of TimesTen was upgraded using ttMigrate from a database of the older version of TimesTen. A database created in the current version of TimesTen is not guaranteed to replicate correctly with the older version.

For example, replication between a database created in TimesTen release 6.0 and a database created in TimesTen release 11.2.1 is not supported. However, if one database was created in TimesTen release 6.0, and the peer database was created in TimesTen release 6.0 and then upgraded to TimesTen release 11.2.1, replication between them is supported.

See "Database Upgrades" in Oracle TimesTen In-Memory Database Installation Guide.

Starting and stopping the replication agents

After you have defined a replication scheme, you can start the replication agents for each database involved in the replication scheme. You must have the ADMIN privilege to start or stop a replication agent.

You can start and stop a replication agent by using the ttAdmin utility with the -repStart or -repStop option. You can also use the ttRepStart and ttRepStop built-in procedures to start and stop a replication agent from the ttIsql command line.

Example 10-3 Starting and stopping the replication agent with ttAdmin

To start the replication agents for the DSNs named masterDSN and subscriberDSN, enter:

ttAdmin -repStart masterDSN
ttAdmin -repStart subscriberDSN

To stop the replication agents, enter:

ttAdmin -repStop masterDSN
ttAdmin -repStop subscriberDSN

Example 10-4 Starting and stopping the replication agent from ttIsql

To start and stop the replication agent for the DSN named masterDSN, enter:

> ttIsql masterDSN
Command> call ttRepStart;
Command> call ttRepStop;

You can also use the ttAdmin utility to set the replication restart policy. By default the policy is manual, which enables you to start and stop the replication agents as described above. Alternatively, you can set the replication restart policy for a database to always or norestart.

Restart Policy   Start replication agent when the TimesTen daemon starts   Restart replication agent on errors or invalidation
always           Yes                                                       Yes
manual           No                                                        Yes
norestart        No                                                        No


Note:

The TimesTen daemon manages the replication agents. It must be running to start or stop the replication agents.

When the restart policy is always, the replication agent is automatically started when the database is loaded into memory. See "Specifying a RAM policy" in Oracle TimesTen In-Memory Database Operations Guide to determine when a database is loaded into memory.

Example 10-5 Using ttAdmin to set the restart policy

To use ttAdmin to set the replication restart policy to always, enter:

ttAdmin -repPolicy always DSN

To reset the policy back to manual, enter:

ttAdmin -repPolicy manual DSN

Following a database invalidation, both the manual and always policies cause the replication agent to be automatically restarted. When the agent restarts automatically, it is often the first connection to the database. This happens after a fatal error that, for example, requires all applications to disconnect. The first connection to a database usually has to load the most recent checkpoint file and often needs to do recovery. For a very large database, this process may take several minutes. During this period, all activity on the database is blocked so that new connections cannot take place and any old connections cannot finish disconnecting. This may also result in two copies of the database existing at the same time because the old one stays around until all applications have disconnected.

For very large databases for which the first-connect time may be significant, you may want to wait for the old database to become inactive before starting up the new one. You can do this by setting the restart policy to norestart so that the replication agent is not restarted automatically. For more information on setting policies that prevent the database from being reloaded, see "Specifying a RAM policy" in Oracle TimesTen In-Memory Database Operations Guide.

Setting the replication state of subscribers

The state of a subscriber replication agent is described by its master database. When recovering a failed subscriber database, you must reset the replication state of the subscriber database with respect to the master database it communicates with in a replication scheme. You can reset the state of a subscriber database from either the command line or your program:

  • From the command line, use ttRepAdmin -state to direct a master database to reset the replication state of one of its subscriber databases.

  • From ttIsql, call the ttRepSubscriberStateSet built-in procedure to direct a master database to reset the replication state of one or all of its subscriber databases.

See "Monitoring Replication" for information about querying the state of a database.

A master database can set a subscriber database to either the start, pause, or stop states. The database state appears as an integer value in the STATE column in the TTREP.REPPEERS table, as shown in Table 10-2.

Table 10-2 Database states

State     Description

start     STATE value: 0. Replication updates are collected and transmitted to the subscriber database as soon as possible. If replication for the subscriber database is not operational, the updates are saved in the transaction log files until they can be sent.

pause     STATE value: 1. Replication updates are retained in the log with no attempt to transmit them. Transmission begins when the state is changed to start.

stop      STATE value: 2. Replication updates are discarded without being sent to the subscriber database. Placing a subscriber database in the stop state discards any pending updates from the master's transaction log.

failed    STATE value: 4. Replication to a subscriber is considered failed because the threshold limit (log data) has been exceeded. This state is set by the system and is a transitional state before the system sets the state to stop. Applications that connect to a failed database receive a warning. See "General failover and recovery procedures" for more information.


When a master database sets one of its subscribers to the start state, updates for the subscriber are retained in the master's log. When a subscriber is in the stop state, updates intended for it are discarded.

When a subscriber is in the pause state, updates for it are retained in the master's log, but are not transmitted to the subscriber database. When a master transitions a subscriber from pause to start, the backlog of updates stored in the master's log is transmitted to the subscriber. (There is an exception to this, which is described in Chapter 11, "Managing Database Failover and Recovery".) If a master database is unable to establish a connection to a subscriber in the start state, the master periodically attempts to establish a connection until successful.

Example 10-6 Using ttRepAdmin to set the subscriber state

To use ttRepAdmin from the command line to direct the masterds master database to set the state of the subscriberds subscriber database to stop:

ttRepAdmin -dsn masterds -receiver -name subscriberds -state stop

Note:

If you have multiple subscribers with the same name on different hosts, use the -host option of the ttRepAdmin utility to identify the host for the subscriber that you want to modify.

Example 10-7 Using ttRepSubscriberStateSet to set the subscriber state

On the master database, call the ttRepSubscriberStateSet built-in procedure to set the state of the subscriber database (subscriberds ON system1) in the repscheme replication scheme to stop:

Command> CALL ttRepSubscriberStateSet('repscheme', 'repl',
          'subscriberds', 'system1', 2);

Only ttRepSubscriberStateSet can be used to set all of the subscribers of a master to a particular state. The ttRepAdmin utility does not have any equivalent functionality.

PK5"uPK$AOEBPS/alterpair.htmB Altering an Active Standby Pair

6 Altering an Active Standby Pair

This chapter includes the following sections:

Making DDL changes in an active standby pair

You can perform the following tasks in an active standby pair without stopping the replication agent:

  • Create, alter, or drop a user. These statements are replicated.

  • Grant or revoke privileges from a user. These statements are replicated.

  • Create or drop a view, a materialized view, a PL/SQL function, PL/SQL procedure, PL/SQL package, or PL/SQL package body. These objects are not replicated. See "Creating a new PL/SQL object in an existing active standby pair" for more information.

  • Add a column to a replicated table or drop a column from a replicated table. The change is replicated to the table in the standby database.

  • Create or drop a table, including global temporary tables. The CREATE TABLE and DROP TABLE statements can be replicated to the standby database. The new table can also be included in the active standby pair.

  • Create or drop a synonym. The CREATE SYNONYM and DROP SYNONYM statements can be replicated to the standby database.

  • Create or drop an index. The CREATE INDEX and DROP INDEX statements can be replicated to the standby database.

Use the DDLReplicationLevel and DDLReplicationAction connection attributes to control what happens when you want to perform these tasks.

DDLReplicationLevel can be set as follows:

  • DDLReplicationLevel=1. CREATE or DROP statements for tables, indexes, or synonyms are not replicated to the standby database. However, you can add columns to or drop columns from a replicated table, and those actions are replicated to the standby database.

  • DDLReplicationLevel=2 is the default, which enables replication of creating and dropping of tables, indexes, and synonyms.

You can set the DDLReplicationLevel attribute by using the ALTER SESSION statement:

ALTER SESSION SET ddl_replication_level=1;

If you want to include a table in the active standby pair when the table is created, set the DDLReplicationAction connection attribute to 'INCLUDE'. If you do not want to include a table in the active standby pair when the table is created, set DDLReplicationAction='EXCLUDE'. The default is 'INCLUDE'.

You can set the DDLReplicationAction attribute by using the ALTER SESSION statement:

ALTER SESSION SET ddl_replication_action='EXCLUDE';

To add an existing table to an active standby pair, set DDLReplicationLevel=2 and use the ALTER ACTIVE STANDBY PAIR INCLUDE TABLE statement. The table must be empty.

When DDLCommitBehavior=0 (the default), DDL operations are automatically committed. When RETURN TWOSAFE has been specified, errors and timeouts may occur as described in "RETURN TWOSAFE". If a RETURN TWOSAFE timeout occurs, the DDL transaction is committed locally regardless of the LOCAL COMMIT ACTION that has been specified.

Creating a new PL/SQL object in an existing active standby pair

To add a new PL/SQL procedure, package, package body or function to an existing active standby pair, complete these tasks:

  1. Create the PL/SQL object on the active database. The CREATE statement is not replicated to the standby database.

  2. Create the PL/SQL object on the standby database.

  3. Grant privileges to the new PL/SQL object on the active database. The GRANT statement is replicated to the standby database.
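For example, the following sketch (the procedure name and grantee are hypothetical) adds a trivial procedure. Run the CREATE statement on the active database, run the identical statement on the standby database, and then grant privileges on the active database only:

Command> CREATE OR REPLACE PROCEDURE proc1 AS
       > BEGIN
       >   NULL;
       > END;
       > /

Procedure created.

Command> GRANT EXECUTE ON proc1 TO terry;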

Restrictions on making DDL changes in an active standby pair

  • CREATE TABLE AS SELECT is not replicated.

  • The CREATE INDEX statement is replicated only when the index is created on an empty table.

  • These statements cannot be executed on the standby database when DDLReplicationLevel=2:

    • CREATE USER, ALTER USER, DROP USER

    • GRANT, REVOKE

    • CREATE SYNONYM, DROP SYNONYM

Examples: Making DDL changes in an active standby pair

Example 6-1 Create a table and include it in the active standby pair

On the active database, set DDLReplicationLevel to 2 and DDLReplicationAction to 'INCLUDE'.

Command > ALTER SESSION SET ddl_replication_level=2;
Session altered.
Command > ALTER SESSION SET ddl_replication_action='INCLUDE';
Session altered.

Create a table. The table must have a primary key or index.

Command > CREATE TABLE tabinclude (col1 NUMBER NOT NULL PRIMARY KEY);
Table created.

Insert a row into tabinclude.

Command > INSERT INTO tabinclude VALUES (55);
1 row inserted.

On the standby database, verify that the INSERT statement has been replicated. This indicates that the tabinclude table has been included in the active standby pair.

Command > SELECT * FROM tabinclude;
< 55 >
1 row found.

Alternatively, use the ttIsql repschemes command to see what tables are included in the active standby pair.

Example 6-2 Create a table and add it to the active standby pair later

On the active database, set DDLReplicationLevel to 2 and DDLReplicationAction to 'EXCLUDE'.

Command> ALTER SESSION SET ddl_replication_level=2;
Session altered.
Command> ALTER SESSION SET ddl_replication_action='exclude';
Session altered.

Create a table that does not have a primary key or index. Try to include it in the active standby pair.

Command> CREATE TABLE newtab (a NUMBER NOT NULL);
Command> ALTER ACTIVE STANDBY PAIR INCLUDE TABLE newtab;
 8000: No primary or unique index on non-nullable column found for replicated 
 table TERRY.NEWTAB
The command failed.

Create an index on the table. Include the table in the active standby pair.

Command> CREATE UNIQUE INDEX ixnewtab ON newtab(a);
Command> ALTER ACTIVE STANDBY PAIR INCLUDE TABLE newtab;

Insert a row into the table.

Command> INSERT INTO newtab VALUES (5);
1 row inserted.

On the standby database, verify that the row was inserted.

Command> SELECT * FROM newtab;
< 5 >
1 row found.

This example illustrates that a table does not need a primary key to be part of an active standby pair; a unique index over non-nullable columns is sufficient.

Example 6-3 CREATE INDEX is replicated

On the active database, set DDLReplicationLevel=2 and DDLReplicationAction='INCLUDE'.

Command> ALTER SESSION SET ddl_replication_level=2;
Session altered.
Command> ALTER SESSION SET ddl_replication_action='include';
Session altered.

Create a table with a primary key. The table is automatically included in the active standby pair.

Command> CREATE TABLE tab2 (a NUMBER NOT NULL, b NUMBER NOT NULL, 
       > PRIMARY KEY (a));

Create an index on the table.

Command> CREATE UNIQUE INDEX ixtab2 ON tab2 (b);

On the standby database, verify that the CREATE INDEX statement has been replicated.

Command> indexes;
 
Indexes on table TERRY.TAB2:
  IXTAB2: unique T-tree index on columns:
    B
  TAB2: unique T-tree index on columns:
    A
  2 indexes found.
 
Indexes on table TERRY.NEWTAB:
  NEWTAB: unique T-tree index on columns:
    A
  1 index found.
 
Indexes on table TERRY.TABINCLUDE:
  TABINCLUDE: unique T-tree index on columns:
    A
  1 index found.
4 indexes found on 3 tables.

Example 6-4 CREATE SYNONYM is replicated

On the active database, set DDLReplicationLevel to 2 and DDLReplicationAction to 'INCLUDE'.

Command > ALTER SESSION SET ddl_replication_level=2;
Session altered.
Command > ALTER SESSION SET ddl_replication_action='INCLUDE';
Session altered.

Create a synonym for tabinclude.

Command> CREATE SYNONYM syntabinclude FOR tabinclude;
Synonym created.

On the standby database, use the ttIsql synonyms command to verify that the CREATE SYNONYM statement has been replicated.

Command> synonyms;
TERRY.SYNTABINCLUDE
1 synonym found.

Making other changes to an active standby pair

You must stop the replication agent to make these changes to an active standby pair:

  • Include or exclude a sequence

  • Include or exclude a cache group

  • Add or drop a subscriber

  • Change values in the STORE clause

  • Change network operations (ADD ROUTE or DROP ROUTE clause)

To alter an active standby pair according to the preceding list, complete the following tasks:

  1. Stop the replication agent on the active database. See "Starting and stopping the replication agents".

  2. If the active standby pair includes cache groups, stop the cache agent on the active database.

  3. Use the ALTER ACTIVE STANDBY PAIR statement to make changes to the replication scheme. See "Examples: Altering an active standby pair".

  4. Start the replication agent on the active database. See "Starting and stopping the replication agents".

  5. If the active standby pair includes cache groups, start the cache agent on the active database.

  6. Destroy the standby database and the subscribers.

  7. Duplicate the active database to the standby database. You can use either the ttRepAdmin -duplicate utility or the ttRepDuplicateEx C function to duplicate a database. If the active standby pair includes cache groups, use the -keepCG command line option with ttRepAdmin to preserve the cache group. See "Duplicating a database".

  8. Set up the replication agent policy on the standby database and start the replication agent. See "Starting and stopping the replication agents".

  9. Wait for the standby database to enter the STANDBY state. Use the ttRepStateGet procedure to check the state.

  10. If the active standby pair includes cache groups, start the cache agent for the standby database using the ttCacheStart procedure or the ttAdmin -cacheStart utility.

  11. Duplicate all of the subscribers from the standby database. See "Duplicating a master database to a subscriber". If the active standby pair includes cache groups, use the -noKeepCG command line option with ttRepAdmin in order to convert the cache group to regular TimesTen tables on the subscribers. See "Duplicating a database".

  12. Set up the replication agent policy on the subscribers and start the agent on each of the subscriber databases. See "Starting and stopping the replication agents".
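
The following command-line sketch summarizes the procedure for a pair with no cache groups. The DSNs pairactive and pairstandby, the host hostA and the user adminuser are hypothetical; the referenced sections describe the exact options.

# Steps 1, 3 and 4: stop the replication agent on the active database,
# alter the scheme, then restart the agent.
ttAdmin -repStop pairactive
ttIsql -e "ALTER ACTIVE STANDBY PAIR ADD SUBSCRIBER sub1;" pairactive
ttAdmin -repStart pairactive

# Steps 6 and 7: destroy the standby database, then duplicate the active
# database to it (run on the standby host).
ttDestroy pairstandby
ttRepAdmin -duplicate -from pairactive -host hostA -uid adminuser -pwd adminpwd pairstandby

# Step 8: set the replication agent policy and start the agent on the standby.
ttAdmin -repPolicy manual pairstandby
ttAdmin -repStart pairstandby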

Examples: Altering an active standby pair

Example 6-5 Adding a subscriber to an active standby pair

Add a subscriber database to the active standby pair.

ALTER ACTIVE STANDBY PAIR
  ADD SUBSCRIBER sub1;

Example 6-6 Dropping subscribers from an active standby pair

Drop subscriber databases from the active standby pair.

ALTER ACTIVE STANDBY PAIR
  DROP SUBSCRIBER sub1
  DROP SUBSCRIBER sub2;

Example 6-7 Changing the PORT and TIMEOUT settings for subscribers

Alter the PORT and TIMEOUT settings for subscribers sub1 and sub2.

ALTER ACTIVE STANDBY PAIR
  ALTER STORE sub1 SET PORT 23000 TIMEOUT 180
  ALTER STORE sub2 SET PORT 23000 TIMEOUT 180;

Example 6-8 Adding a cache group to an active standby pair

Add a cache group to the active standby pair.

ALTER ACTIVE STANDBY PAIR
  INCLUDE CACHE GROUP cg0;

13 Altering Replication

This chapter describes how to alter an existing replication system. Table 13-1 lists the tasks often performed on an existing replicated system.

Table 13-1 Tasks performed on an existing replicated system

  • Alter or drop a replication scheme: see "Altering a replication scheme" and "Dropping a replication scheme".

  • Alter a table used in a replication scheme: see "Altering a replicated table".

  • Truncate a table used in a replication scheme: see "Truncating a replicated table".

  • Change the replication state of a subscriber database: see "Setting the replication state of subscribers".

  • Resolve update conflicts: see Chapter 14, "Resolving Replication Conflicts".

  • Recover from failures: see Chapter 11, "Managing Database Failover and Recovery".

  • Upgrade a database: use the ttMigrate and ttRepAdmin utilities, as described in "Database Upgrades" in Oracle TimesTen In-Memory Database Installation Guide.


Altering a replication scheme

You can perform the following tasks without stopping the replication agent:

Use ALTER REPLICATION to alter the replication scheme on the master and subscriber databases. Any alterations on the master database must also be made on its subscribers.


Note:

You must have the ADMIN privilege to use the ALTER REPLICATION statement.

Most ALTER REPLICATION operations are supported only when the replication agent is stopped (ttAdmin -repStop). The procedure for ALTER REPLICATION operations that require the replication agents to be stopped is:

  1. Use the ttRepStop procedure or ttAdmin -repStop to stop the replication agent for the master and subscriber databases. While the replication agents are stopped, changes to the master database are stored in the log.

  2. Issue the same ALTER REPLICATION statement on both master and subscriber databases.

  3. Use the ttRepStart procedure or ttAdmin -repStart to restart the replication agent for the master and subscriber databases. The changes stored in the master database log are sent to the subscriber database.
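
A minimal ttIsql sketch of this sequence, assuming the r1 scheme used throughout this chapter (issue the same statements on the master and on each subscriber):

Command> CALL ttRepStop;
Command> ALTER REPLICATION r1
       >   ALTER STORE eastds ON "eastcoast" SET PORT 22251;
Command> CALL ttRepStart;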

If you use ALTER REPLICATION to change a replication scheme that specifies a DATASTORE element, then:

  • You cannot use SET NAME to change the name of the DATASTORE element

  • You cannot use SET CHECK CONFLICTS to enable conflict resolution

This section includes the following topics:

Adding a table or sequence to an existing replication scheme

There are two ways to add a table or sequence to an existing replication scheme:

  • When the element level of the replication scheme is TABLE or SEQUENCE, use the ALTER REPLICATION statement with the ADD ELEMENT clause to add a table or sequence. See Example 13-1.

  • When the element level of the replication scheme is DATASTORE, use the ALTER REPLICATION statement with the ALTER ELEMENT clause to include a table or sequence. See Example 13-2.

Example 13-1 Adding a sequence and a table to a replication scheme

This example uses the replication scheme r1, which was defined in Example 9-29. It alters replication scheme r1 to add sequence seq and table westleads, which will be updated on database westds and replicated to database eastds.

ALTER REPLICATION r1
  ADD ELEMENT elem_seq SEQUENCE seq
    MASTER westds ON "westcoast"
    SUBSCRIBER eastds ON "eastcoast"
  ADD ELEMENT elem_westleads TABLE westleads
    MASTER westds ON "westcoast"
    SUBSCRIBER eastds ON "eastcoast";

Example 13-2 Adding a sequence and a table to a DATASTORE element

Add the sequence my.seq and the table my.tab1 to the ds1 DATASTORE element in my.rep1 replication scheme.

ALTER REPLICATION my.rep1
  ALTER ELEMENT ds1 DATASTORE
    INCLUDE SEQUENCE my.seq
  ALTER ELEMENT ds1 DATASTORE
    INCLUDE TABLE my.tab1;

Adding a PL/SQL object to an existing replication scheme

To add a new PL/SQL procedure, package, package body or function to an existing replication scheme, complete these tasks:

  1. Create the PL/SQL object on a master database. The CREATE statement is not replicated to subscribers.

  2. Create the PL/SQL object on the subscribers

  3. Grant privileges to the new PL/SQL object on the master database. The GRANT statement is replicated to the subscribers.

Adding a DATASTORE element to an existing replication scheme

You can add a DATASTORE element to an existing replication scheme by using the ALTER REPLICATION statement with the ADD ELEMENT clause. All tables except temporary tables, materialized views, and nonmaterialized views are included in the replication scheme if you do not use the INCLUDE or EXCLUDE clauses. See "Including tables or sequences when you add a DATASTORE element" and "Excluding a table or sequence when you add a DATASTORE element".

Example 13-3 Adding a DATASTORE element to a replication scheme

Add a DATASTORE element to an existing replication scheme.

ALTER REPLICATION my.rep1
  ADD ELEMENT ds1 DATASTORE
       MASTER rep2
       SUBSCRIBER rep1, rep3;

Including tables or sequences when you add a DATASTORE element

You can restrict replication to specific tables or sequences when you add a database to an existing replication scheme. Use the ALTER REPLICATION statement with the ADD ELEMENT clause and the INCLUDE TABLE clause or INCLUDE SEQUENCE clause. You can have one INCLUDE clause for each table or sequence in the same ALTER REPLICATION statement.

Example 13-4 Including a table and sequence in a DATASTORE element

Add the ds1 DATASTORE element to my.rep1 replication scheme. Include the table my.tab2 and the sequence my.seq in the DATASTORE element.

ALTER REPLICATION my.rep1
ADD ELEMENT ds1 DATASTORE
MASTER rep2
SUBSCRIBER rep1, rep3
INCLUDE TABLE my.tab2
INCLUDE SEQUENCE my.seq;

Excluding a table or sequence when you add a DATASTORE element

You can exclude tables or sequences when you add a DATASTORE element to an existing replication scheme. Use the ALTER REPLICATION statement with the ADD ELEMENT clause and the EXCLUDE TABLE clause or EXCLUDE SEQUENCE clause. You can have one EXCLUDE clause for each table or sequence in the same ALTER REPLICATION statement.

Example 13-5 Excluding a table or sequence from a DATASTORE element

Add the ds2 DATASTORE element to a replication scheme, but exclude the table my.tab1 and the sequence my.seq.

ALTER REPLICATION my.rep1
ADD ELEMENT ds2 DATASTORE
MASTER rep2
SUBSCRIBER rep1
EXCLUDE TABLE my.tab1
EXCLUDE SEQUENCE my.seq;

Dropping a table or sequence from a replication scheme

This section includes the following topics:

Dropping a table or sequence that is replicated as part of a DATASTORE element

To drop a table or sequence that is part of a replication scheme at the DATASTORE level, complete the following tasks:

  1. Stop the replication agent.

  2. Exclude the table or sequence from the DATASTORE element in the replication scheme.

  3. Drop the table or sequence.

If you have more than one DATASTORE element that contains the table or sequence, then you must exclude the table or sequence from each element before you drop it.

Example 13-6 Excluding a table from a DATASTORE element and then dropping the table

Exclude the table my.tab1 from the ds1 DATASTORE element in the my.rep1 replication scheme. Then drop the table.

ALTER REPLICATION my.rep1
  ALTER ELEMENT ds1 DATASTORE
    EXCLUDE TABLE my.tab1;
DROP TABLE my.tab1;

Dropping a table or sequence that is replicated as a TABLE or SEQUENCE element

To drop a table that is part of a replication scheme at the TABLE or SEQUENCE level, complete the following tasks:

  1. Stop the replication agent.

  2. Drop the element from the replication scheme.

  3. Drop the table or sequence.

Example 13-7 Dropping an element from a replication scheme and then dropping the sequence

Drop the SEQUENCE element elem_seq from the replication scheme r1. Then drop the sequence seq.

ALTER REPLICATION r1
  DROP ELEMENT elem_seq;
DROP SEQUENCE seq;

Creating and adding a subscriber database

You can add a new subscriber database while the replication agents are running. To add a database to a replication scheme, do the following:

  1. Make sure the new subscriber database does not exist.

  2. Apply the appropriate statements to all participating databases:

    ALTER REPLICATION ...
      ALTER ELEMENT ...
        ADD SUBSCRIBER ...
    
  3. On the source database (the master), create a user and grant the ADMIN privilege to the user:

    CREATE USER ttuser IDENTIFIED BY ttuser;
    User created.
    
    GRANT admin TO ttuser;
    
  4. Logged in as the instance administrator, run the ttRepAdmin -duplicate command to copy the contents of the master database to the newly created subscriber. You can use the -setMasterRepStart option to ensure that any updates made to the master after the duplicate operation has started are also copied to the subscriber.

  5. Start the replication agent on the newly created database (ttAdmin -repStart).
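
For example, a minimal sketch of steps 4 and 5 with hypothetical DSN and host names (run the duplicate command on the subscriber host as the instance administrator):

ttRepAdmin -duplicate -from masterDSN -host masterhost -uid ttuser -pwd ttuser -setMasterRepStart subscriberDSN
ttAdmin -repStart subscriberDSN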

Example 13-8 Adding a subscriber to a replicated table

This example alters the r1 replication scheme to add a subscriber (backup3) to the westleads table (step 2 above):

ALTER REPLICATION r1
  ALTER ELEMENT elem_westleads
    ADD SUBSCRIBER backup3 ON "backupserver";

Dropping a subscriber database

Stop the replication agent before you drop a subscriber database.

This example alters the r1 replication scheme to drop the backup3 subscriber for the westleads table:

Example 13-9 Dropping a subscriber for a replicated table

ALTER REPLICATION r1
  ALTER ELEMENT elem_westleads
    DROP SUBSCRIBER backup3 ON "backupserver";

Changing a TABLE or SEQUENCE element name

Stop the replication agent before you change a TABLE or SEQUENCE element name.

Change the element name of the westleads table from elem_westleads to newelname:

Example 13-10 Changing a table name

ALTER REPLICATION r1
  ALTER ELEMENT elem_westleads
    SET NAME newelname;

Note:

You cannot use the SET NAME clause to change the name of a DATASTORE element.

Replacing a master database

Stop the replication agent before you replace a master database.

In this example, newwestds is made the new master for all elements currently configured for the master, westds:

Example 13-11 Replacing a master database

ALTER REPLICATION r1
  ALTER ELEMENT * IN westds
    SET MASTER newwestds;

Eliminating conflict detection

In this example, conflict detection configured by the CHECK CONFLICTS clause in the scheme shown in Example 14-2 is eliminated for the elem_accounts_1 table:

Example 13-12 Eliminating conflict detection for a table

ALTER REPLICATION r1
  ALTER ELEMENT elem_accounts_1
    SET NO CHECK;

See Chapter 14, "Resolving Replication Conflicts" for a detailed discussion on conflict checking.

Eliminating the return receipt service

In this example, the return receipt service is eliminated for the first subscriber in the scheme shown in Example 9-29:

Example 13-13 Eliminating return receipt service for a subscriber

ALTER REPLICATION r1
  ALTER ELEMENT elem_waccounts
    ALTER SUBSCRIBER eastds ON "eastcoast"
      SET NO RETURN;

Changing the port number

The port number is the TCP/IP port number on which the replication agent of a subscriber database accepts connection requests from the master replication agent. See "Port assignments" for details on how to assign ports to the replication agents.

In this example, the r1 replication scheme is altered to change the port number of the eastds to 22251:

Example 13-14 Changing a port number for a database

ALTER REPLICATION r1
  ALTER STORE eastds ON "eastcoast"
    SET PORT 22251;

Changing the replication route

If a replication host has multiple network interfaces, you can specify which interfaces are used for replication traffic with the ROUTE clause. If you need to change which interfaces are used by replication, you can do so by dropping IP addresses from and adding IP addresses to the ROUTE clause.

Example 13-15 Changing the replication route

In this example, the r1 replication scheme is altered to change the priority 2 IP address for the master database from 192.168.1.100 to 192.168.1.101:

ALTER REPLICATION r1
  DROP ROUTE MASTER eastds ON "eastcoast"
             SUBSCRIBER westds ON "westcoast"
             MASTERIP "192.168.1.100"
  ADD ROUTE MASTER eastds ON "eastcoast"
            SUBSCRIBER westds ON "westcoast"
            MASTERIP "192.168.1.101" PRIORITY 2;

Changing the log failure threshold

Use the FAILTHRESHOLD attribute of the STORE parameter to reset the log failure threshold. Stop the replication agents before using ALTER REPLICATION or ALTER ACTIVE STANDBY PAIR to define a new threshold value, and then restart the replication agents.

See "Setting the log failure threshold" and "Setting the log failure threshold" for more information about the log failure threshold.

Altering a replicated table

You can use ALTER TABLE to add or drop columns on the master database. The ALTER TABLE operation is replicated to alter the subscriber databases.
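
For example, a minimal sketch that adds a hypothetical column to the westleads table used earlier in this chapter:

Command> ALTER TABLE westleads ADD (notes VARCHAR2(100));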

If you use ALTER TABLE on a database configured for bidirectional replication, first stop updates to the table on all of the replicated databases and confirm all replicated updates to the table have been received by the databases before issuing the ALTER TABLE statement. Do not resume updates until the ALTER TABLE operation has been replicated to all databases. This is necessary to ensure that there are no write operations until after the table is altered on all databases.


Note:

You can use the ttRepSubscriberWait procedure or monitoring tools described in Chapter 12, "Monitoring Replication" to confirm the updates have been received and committed on the databases.

Also, if you are executing a number of successive ALTER TABLE operations on a database, you should only proceed with the next ALTER TABLE after you have confirmed the previous ALTER TABLE has reached all of the subscribers.


Note:

You can use the ALTER TABLE statement to change default column values, but the ALTER TABLE statement is not replicated. Thus default column values need not be identical on all nodes.

Truncating a replicated table

You can use TRUNCATE TABLE to delete all of the rows of a table without dropping the table itself. Truncating a table is faster than using a DELETE FROM table statement.

Truncate operations on replicated tables are replicated and result in truncating the table on the subscriber database. Unlike delete operations, however, the individual rows are not deleted. Even if the contents of the tables do not match at the time of the truncate operation, the rows on the subscriber database are deleted anyway.

The TRUNCATE statement replicates to the subscriber, even when no rows are operated upon.

When tables are being replicated with timestamp conflict checking enabled, conflicts are not reported.
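
For example, assuming the westleads table is part of the replication scheme:

Command> TRUNCATE TABLE westleads;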

Dropping a replication scheme

You can use the DROP REPLICATION statement to remove a replication scheme from a database. You cannot drop a replication scheme when master catchup is required unless it is the only replication scheme in the database.


Note:

You must have the ADMIN privilege to use the DROP REPLICATION statement.

You must stop the replication agent before you drop a replication scheme.

Example 13-16 Dropping a replication scheme

To remove the repscheme replication scheme from a database, enter the following:

DROP REPLICATION repscheme;

If you are dropping replicated tables, you must drop the replication scheme before dropping the replicated tables. Otherwise, you receive an error indicating that you have attempted to drop a replicated table or index.

Example 13-17 Removing a table and a replication from a database

To remove the tab table and repscheme replication scheme from a database, enter the following:

DROP REPLICATION repscheme;
DROP TABLE tab;

7 Using Oracle Clusterware to Manage Active Standby Pairs

Oracle Clusterware monitors and controls applications to provide high availability. This chapter describes how to use Oracle Clusterware to manage availability for a TimesTen active standby pair.


Note:

For more information about Oracle Clusterware, see Oracle Clusterware Administration and Deployment Guide in the Oracle Database documentation.

This chapter includes the following topics:

Overview

Figure 7-1 shows an active standby pair with one read-only subscriber in the same local network. The active database, the standby database and the read-only subscriber are on different nodes. Two additional nodes that are not part of the active standby pair also run TimesTen. An application updates the active database. An application reads from the standby and the subscriber. All of the nodes are connected to shared storage.

Figure 7-1 Active standby pair with one subscriber

Description of Figure 7-1 follows
Description of "Figure 7-1 Active standby pair with one subscriber"

You can use Oracle Clusterware to start, monitor and automatically fail over TimesTen databases and applications in response to node failures and other events. See "Planned maintenance" and "Recovering from failures" for details.

Oracle Clusterware can be implemented at two levels of availability for TimesTen. The basic level of availability manages two master nodes and up to 127 read-only subscriber nodes in the cluster. The active standby pair is defined with local host names or IP addresses. If both master nodes fail, user intervention is necessary to migrate the active standby scheme to new hosts. When both master nodes fail, Oracle Clusterware notifies the user.

The advanced level of availability uses virtual IP addresses for the active, standby and read-only subscriber databases. Extra nodes can be included in the cluster that are not part of the initial active standby pair. If a failure occurs, the use of virtual IP addresses allows one of the extra nodes to take on the role of a failed node automatically.

If your applications connect to TimesTen in a client/server configuration, automatic client failover enables the client to reconnect automatically to the master database with the active role after a failure. See "Using automatic client failover for an active standby pair" and "TTC_FailoverPortRange" in the Oracle TimesTen In-Memory Database Reference.

The ttCWAdmin utility is used to administer TimesTen active standby pairs in a cluster that is managed by Oracle Clusterware. The configuration for each active standby pair is specified manually in an initialization file, called cluster.oracle.ini by default. The information in this file is used to create Oracle Clusterware resources. Resources are used to manage the TimesTen daemon, the databases, TimesTen processes, user applications and virtual IP addresses. For more information about the ttCWAdmin utility, see "ttCWAdmin" in Oracle TimesTen In-Memory Database Reference. For more information about the cluster.oracle.ini file, see "The cluster.oracle.ini file".

Active standby configurations

Use Oracle Clusterware to manage only these configurations:

  • Active standby pair with or without read-only subscribers

  • Active standby pair (with or without read-only subscribers) with AWT cache groups, read-only cache groups and global cache groups

Required privileges

See "ttCWAdmin" in Oracle TimesTen In-Memory Database Reference for information about the privileges required to execute ttCWAdmin commands.

Hardware and software requirements

Oracle Clusterware release 11.2.0.2.x is supported with TimesTen active standby pair replication, beginning with release 11.2.0.2.0. See Oracle Clusterware Administration and Deployment Guide for network and storage requirements and information about Oracle Clusterware configuration files.

Oracle Clusterware and TimesTen should be installed in the same location on all nodes.

The TimesTen instance administrator must belong to the same UNIX primary group as the Oracle Clusterware installation owner.

Note that the /tmp directory contains essential TimesTen Oracle Clusterware directories. Their names have the prefix crsTT. Do not delete them.

All hosts should use Network Time Protocol (NTP) or a similar system so that clocks on the hosts remain within 250 milliseconds of each other.

Restricted commands and SQL statements

When you use Oracle Clusterware with TimesTen, you cannot use these commands and SQL statements:

  • CREATE ACTIVE STANDBY PAIR, ALTER ACTIVE STANDBY PAIR and DROP ACTIVE STANDBY PAIR SQL statements

  • The -repStart and -repStop options of the ttAdmin utility

  • The -cacheStart and -cacheStop options of the ttAdmin utility after the active standby pair has been created

  • The -duplicate option of the ttRepAdmin utility

  • The ttRepStart and ttRepStop built-in procedures

  • Built-in procedures for managing a cache grid when the active standby pair in a cluster is a member of a grid

In addition, do not call ttDaemonAdmin -stop before calling ttCWAdmin -shutdown.

The TimesTen integration with Oracle Clusterware accomplishes these operations with the ttCWAdmin utility and the attributes in the cluster.oracle.ini file.

For more information about the built-ins and utilities, see Oracle TimesTen In-Memory Database Reference. For more information about the SQL statements, see Oracle TimesTen In-Memory Database SQL Reference.

The cluster.oracle.ini file

Create an initialization file called cluster.oracle.ini as a text file. The information in this file is used to create Oracle Clusterware resources that manage TimesTen databases, TimesTen processes, user applications and virtual IP addresses.


Note:

All of the attributes that can be used in the cluster.oracle.ini file are described in Chapter 8, "TimesTen Configuration Attributes for Oracle Clusterware".

The ttCWAdmin -create command reads this file for configuration information, so the location of the text file must be reachable by ttCWAdmin. It is recommended that you place this file in the daemon home directory on the host for the active database. However, you can place this file in any directory or shared drive on the same host as where you will execute the ttCWAdmin -create command.

The default location for this file is in one of the following directories:

  • The install_dir/info directory on UNIX platforms

  • The c:\TimesTen\install_dir\srv\info directory on Windows platforms

If you place this file in another location, identify the path of the location with the -ttclusterini option.
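
For example, a hypothetical invocation that reads the file from a shared location:

ttCWAdmin -create -dsn advancedDSN -ttclusterini /shared/config/cluster.oracle.ini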

The entry name in the cluster.oracle.ini file must be the same as an existing DSN:

  • In the sys.odbc.ini file on UNIX platforms

  • In a system DSN on Windows platforms

For example, [basicDSN] is the entry name in the cluster.oracle.ini file described in "Configuring basic availability". [basicDSN] must also be the DataStore and Data Source Name data store attributes in the sys.odbc.ini files on each host. For example, the sys.odbc.ini file for the basicDSN DSN on host1 might be:

[basicDSN]
DataStore=/path1/basicDSN
LogDir=/path1/log
DatabaseCharacterSet=AL32UTF8
ConnectionCharacterSet=AL32UTF8

The sys.odbc.ini file for basicDSN on host2 can have a different path, but all other attributes should be the same:

[basicDSN]
DataStore=/path2/basicDSN
LogDir=/path2/log
DatabaseCharacterSet=AL32UTF8
ConnectionCharacterSet=AL32UTF8

This section includes sample cluster.oracle.ini files for these configurations:

Configuring basic availability

This example shows an active standby pair with no subscribers. The hosts for the active database and the standby database are host1 and host2. The list of hosts is delimited by commas. You can include spaces for readability if desired.

[basicDSN]
MasterHosts=host1,host2

The following is an example of a cluster.oracle.ini file for an active standby pair with one subscriber on host3:

[basicSubscriberDSN]
MasterHosts=host1,host2
SubscriberHosts=host3

Configuring advanced availability

In this example, the hosts for the active database and the standby database are host1 and host2. The specified host3 and host4 are extra nodes that can be used for failover. There are no subscriber nodes. MasterVIP specifies the virtual IP addresses defined for the master databases. VIPInterface is the name of the public network adaptor. VIPNetMask defines the netmask of the virtual IP addresses.

[advancedDSN]
MasterHosts=host1,host2,host3,host4
MasterVIP=192.168.1.1, 192.168.1.2
VIPInterface=eth0
VIPNetMask=255.255.255.0

This example has one subscriber on host4. There is one extra node that can be used for failing over the master databases and one extra node that can be used for the subscriber database. MasterVIP and SubscriberVIP specify the virtual IP addresses defined for the master and subscriber databases. VIPInterface is the name of the public network adaptor. VIPNetMask defines the netmask of the virtual IP addresses.

[advancedSubscriberDSN]
MasterHosts=host1,host2,host3
SubscriberHosts=host4,host5
MasterVIP=192.168.1.1, 192.168.1.2
SubscriberVIP=192.168.1.3
VIPInterface=eth0
VIPNetMask=255.255.255.0

Ensure that the extra nodes:

Including cache groups in the active standby pair

If the active standby pair replicates one or more AWT or read-only cache groups, set the CacheConnect attribute to y.

This example specifies an active standby pair with one subscriber in an advanced availability configuration. The active standby pair replicates one or more cache groups.

[advancedCacheDSN]
MasterHosts=host1,host2,host3
SubscriberHosts=host4, host5
MasterVIP=192.168.1.1, 192.168.1.2
SubscriberVIP=192.168.1.3
VIPInterface=eth0
VIPNetMask=255.255.255.0
CacheConnect=y

Including the active standby pair in a cache grid

If the active standby pair is a member of a cache grid, assign port numbers for the active and standby databases by setting the GridPort attribute.

This example specifies an active standby pair with no subscribers in an advanced availability configuration. The active standby pair is a member of a cache grid.

[advancedGridDSN]
MasterHosts=host1,host2,host3
MasterVIP=192.168.1.1, 192.168.1.2
VIPInterface=eth0
VIPNetMask=255.255.255.0
CacheConnect=y
GridPort=16101, 16102

For more information about using Oracle Clusterware with a cache grid, see "Using Oracle Clusterware with a TimesTen cache grid".

Implementing application failover

TimesTen integration with Oracle Clusterware can facilitate the failover of a TimesTen application that is linked to any of the databases in the active standby pair. Both direct-linked and client/server applications that are on the same host as Oracle Clusterware and TimesTen can be managed.

The required attributes in the cluster.oracle.ini file for failing over a TimesTen application are:

  • AppName - Name of the application to be managed by Oracle Clusterware

  • AppStartCmd - Command line for starting the application

  • AppStopCmd - Command line for stopping the application

  • AppCheckCmd - Command line for executing an application that checks the status of the application specified by AppName

  • AppType - Determines the database to which the application is linked. The possible values are Active, Standby, DualMaster, Subscriber (all) and Subscriber[index].

Optionally, you can also set AppFailureThreshold, DatabaseFailoverDelay, and AppScriptTimeout. These attributes have default values.

The TimesTen application monitor process uses the user-supplied script or program specified by AppCheckCmd to monitor the application. The script that checks the status of the application must be written to return 0 for success and a nonzero number for failure. When Oracle Clusterware detects a nonzero value, it takes action to recover the failed application.
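
The following is a minimal sketch of such a check script, assuming a hypothetical reader process; the only contract is the exit code:

#!/bin/sh
# app_check.sh: exit 0 if the application is running, nonzero otherwise.
if pgrep -f /mycluster/reader/reader > /dev/null 2>&1
then
  exit 0
else
  exit 1
fi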

This example shows advanced availability configured for an active standby pair with no subscribers. The reader application is an application that queries the data in the standby database. AppStartCmd, AppStopCmd and AppCheckCmd can include arguments such as start, stop and check commands. On UNIX, do not use quotes in the values for AppStartCmd, AppStopCmd and AppCheckCmd.

[appDSN]
MasterHosts=host1,host2,host3,host4
MasterVIP=192.168.1.1, 192.168.1.2
VIPInterface=eth0
VIPNetMask=255.255.255.0
AppName=reader
AppType=Standby
AppStartCmd=/mycluster/reader/app_start.sh start
AppStopCmd=/mycluster/reader/app_stop.sh stop
AppCheckCmd=/mycluster/reader/app_check.sh check

AppStartCmd, AppStopCmd and AppCheckCmd can include arguments. For example, the following is a valid cluster.oracle.ini file on Windows that demonstrates configuration for an application that is directly linked to the active database. The script for starting, stopping, and checking the application takes arguments for the DSN and the action to take (-start, -stop and -check).

Note the double quotes for the specified paths in AppStartCmd, AppStopCmd and AppCheckCmd. The quotes are needed because there are spaces in the path. Enclose only the path in quotes. Do not enclose the DSN or the action in quotes.

[appWinDSN]
MasterHosts=host1,host2,host3,host4
MasterVIP=192.168.1.1, 192.168.1.2
VIPInterface=Local Area Connection
VIPNetMask=255.255.255.0
AppName=UpdateApp
AppType=Active
AppStartCmd="C:\Program Files\UserApps\UpdateApp.exe" -dsn myDSN -start
AppStopCmd= "C:\Program Files\UserApps\UpdateApp.exe" -dsn myDSN -stop
AppCheckCmd="C:\Program Files\UserApps\UpdateApp.exe" -dsn myDSN -check

You can configure failover for more than one application. Use AppName to name the application and provide values for AppType, AppStartCmd, AppStopCmd and AppCheckCmd immediately following the AppName attribute. You can include blank lines for readability. For example:

[app2DSN]
MasterHosts=host1,host2,host3,host4
MasterVIP=192.168.1.1, 192.168.1.2
VIPInterface=eth0
VIPNetMask=255.255.255.0

AppName=reader
AppType=Standby
AppStartCmd=/mycluster/reader/app_start.sh
AppStopCmd=/mycluster/reader/app_stop.sh
AppCheckCmd=/mycluster/reader/app_check.sh

AppName=update
AppType=Active
AppStartCmd=/mycluster/update/app2_start.sh
AppStopCmd=/mycluster/update/app2_stop.sh
AppCheckCmd=/mycluster/update/app2_check.sh

The application is considered available if it has been running for 15 times the value of the AppScriptTimeout attribute. The default value of AppScriptTimeout is 60 seconds, so the application's "uptime threshold" is 15 minutes by default. If the application fails after running for more than 15 minutes, it is restarted on the same host. If the application fails within 15 minutes of being started, the failure is considered a failure to start properly, and the application is restarted on another host. If you want to modify the application's uptime threshold after the application has started, use the crs_register -update command. See Oracle Clusterware Administration and Deployment Guide for information about the crs_register -update command.

If you set AppType to DualMaster, the application starts on both the active host and the standby host. The failure of the application on the active host causes the active database and all other applications on the host to fail over to the standby host. You can configure the failure interval, the number of restart attempts and the uptime threshold by setting the AppFailureInterval, AppRestartAttempts and AppUptimeThreshold attributes. These attributes have default values. For example:

[appDualDSN]
MasterHosts=host1,host2,host3,host4
MasterVIP=192.168.1.1, 192.168.1.2
VIPInterface=eth0
VIPNetMask=255.255.255.0
AppName=update
AppType=DualMaster
AppStartCmd=/mycluster/update/app2_start.sh
AppStopCmd=/mycluster/update/app2_stop.sh
AppCheckCmd=/mycluster/update/app2_check.sh
AppRestartAttempts=5
AppUptimeThreshold=300
AppFailureInterval=30

Recovering from permanent failure of both master nodes

If both master nodes fail and then come back up, Oracle Clusterware can automatically recover the master databases. Automatic recovery from a temporary dual failure requires:

  • RETURN TWOSAFE is not specified for the active standby pair.

  • AutoRecover is set to y.

  • RepBackupDir specifies a directory on shared storage.

  • RepBackupPeriod is set to a value greater than 0.

If both master nodes fail permanently, Oracle Clusterware can automatically recover the master databases to two new nodes if:

  • Advanced availability is configured (virtual IP addresses and at least four hosts).

  • The active standby pair does not replicate cache groups.

  • A cache grid is not configured.

  • RETURN TWOSAFE is not specified.

  • AutoRecover is set to y.

  • RepBackupDir specifies a directory on shared storage.

  • RepBackupPeriod must be set to a value greater than 0.

TimesTen first performs a full backup of the active database and then performs incremental backups. You can specify the optional attribute RepFullBackupCycle to manage when TimesTen performs subsequent full backup. By default, TimesTen performs a full backup after every five incremental backups.

If RepBackupDir and RepBackupPeriod are configured for backups, TimesTen performs backups for any master database that becomes active. It does not delete backups that were performed for a database that used to be active and has become the standby unless the database becomes active again. Ensure that the shared storage has enough space for two complete database backups. ttCWAdmin -restore automatically chooses the correct backup files.

Incremental backups increase the number of log records in the transaction log files. Ensure that the values of RepBackupPeriod and RepFullBackupCycle are small enough to prevent a large accumulation of log records in the transaction log files.

This example shows attribute settings for automatic recovery.

[autorecoveryDSN]
MasterHosts=host1,host2,host3,host4
MasterVIP=192.168.1.1, 192.168.1.2
VIPInterface=eth0
VIPNetMask=255.255.255.0
AutoRecover=y
RepBackupDir=/shared_drive/dsbackup
RepBackupPeriod=3600

If you have cache groups in the active standby pair or prefer to recover manually from failure of both master hosts, ensure that AutoRecover is set to n (the default). Manual recovery requires:

  • RepBackupDir specifies a directory on shared storage

  • RepBackupPeriod must be set to a value greater than 0

This example shows attribute settings for manual recovery. The default value for AutoRecover is n, so it is not included in the file.

[manrecoveryDSN]
MasterHosts=host1,host2,host3
MasterVIP=192.168.1.1, 192.168.1.2
VIPInterface=eth0
VIPNetMask=255.255.255.0
RepBackupDir=/shared_drive/dsbackup
RepBackupPeriod=3600

Using the RepDDL attribute

The RepDDL attribute represents the SQL statement that creates the active standby pair. The RepDDL attribute is optional. You can use it to exclude tables, cache groups and sequences from the active standby pair.

If you include RepDDL in the cluster.oracle.ini file, do not specify ReturnServiceAttribute, MasterStoreAttribute or SubscriberStoreAttribute in the cluster.oracle.ini file. Include those replication settings in the RepDDL attribute.

When you specify a value for RepDDL, use the <DSN> macro for the database file name prefix. Use the <MASTERHOST[1]> and <MASTERHOST[2]> macros to specify the master host names. TimesTen substitutes the correct values from the MasterHosts or MasterVIP attributes, depending on whether your configuration uses virtual IP addresses. Similarly, use the <SUBSCRIBERHOST[n]> macro to specify subscriber host names, where n is a number from 1 to the total number of SubscriberHosts attribute values or 1 to the total number of SubscriberVIP attribute values if virtual IP addresses are used.

Use the RepDDL attribute to exclude tables, cache groups and sequences from the active standby pair:

[excludeDSN]
MasterHosts=host1,host2,host3,host4
SubscriberHosts=host5,host6
MasterVIP=192.168.1.1, 192.168.1.2
SubscriberVIP=192.168.1.3
VIPInterface=eth0
VIPNetMask=255.255.255.0
RepDDL=CREATE ACTIVE STANDBY PAIR \
<DSN> ON <MASTERHOST[1]>, <DSN> ON <MASTERHOST[2]>
SUBSCRIBER <DSN> ON <SUBSCRIBERHOST[1]>\
EXCLUDE TABLE pat.salaries, \
EXCLUDE CACHE GROUP terry.salupdate, \
EXCLUDE SEQUENCE ttuser.empcount

The replication agent transmitter obtains route information as follows, in order of priority:

  1. From the ROUTE clause in the RepDDL setting, if a ROUTE clause is specified. Do not specify a ROUTE clause if you are configuring advanced availability.

  2. From Oracle Clusterware, which provides the private host names and public host names of the local and remote hosts as well as the remote daemon port number. The private host name is preferred over the public host name. If the replication agent transmitter cannot connect to the IPC socket, it attempts to connect to the remote daemon, using information that Oracle Clusterware maintains about the replication scheme.

  3. From the active and standby hosts. If they fail, then the replication agent chooses the connection method based on host name.

This is an example of specifying the ROUTE clause in RepDDL:

[routeDSN]
MasterHosts=host1,host2,host3,host4
RepDDL=CREATE ACTIVE STANDBY PAIR \
<DSN> ON <MASTERHOST[1]>, <DSN> ON <MASTERHOST[2]>\
ROUTE MASTER <DSN> ON <MASTERHOST[1]>  SUBSCRIBER <DSN> ON <MASTERHOST[2]>\
MASTERIP "192.168.1.2" PRIORITY 1\
SUBSCRIBERIP "192.168.1.3" PRIORITY 1\ 
MASTERIP "10.0.0.1" PRIORITY 2\
SUBSCRIBERIP "10.0.0.2" PRIORITY 2\
MASTERIP "140.87.11.203" PRIORITY 3\
SUBSCRIBERIP "140.87.11.204" PRIORITY 3\
ROUTE MASTER <DSN> ON <MASTERHOST[2]>  SUBSCRIBER <DSN> ON <MASTERHOST[1]>\
MASTERIP "192.168.1.3" PRIORITY 1\
SUBSCRIBERIP "192.168.1.2" PRIORITY 1\ 
MASTERIP "10.0.0.2" PRIORITY 2\
SUBSCRIBERIP "10.0.0.1" PRIORITY 2\
MASTERIP "140.87.11.204" PRIORITY 3\
SUBSCRIBERIP "140.87.11.203" PRIORITY 3\

Creating and initializing a cluster

To create and initialize a cluster, perform these tasks:

If you plan to have more than one active standby pair in the cluster, see "Including more than one active standby pair in a cluster".

If you want to configure an Oracle database as a remote disaster recovery subscriber, see "Configuring an Oracle database as a disaster recovery subscriber".

If you want to set up a read-only subscriber that is not managed by Oracle Clusterware, see "Configuring a read-only subscriber that is not managed by Oracle Clusterware".

Install Oracle Clusterware

Install Oracle Clusterware. By default, the installation occurs on all hosts concurrently. See Oracle Clusterware installation documentation for your platform.

Oracle Clusterware starts automatically after successful installation.

Install TimesTen on each host

Install TimesTen in the same location on each host in the cluster, including extra hosts. The instance name must be the same on each host. The user name of the instance administrator must be the same on all hosts. The TimesTen instance administrator must belong to the same UNIX primary group as the Oracle Clusterware installation owner.

On UNIX platforms, the installer prompts you for values for:

  • The TCP/IP port number associated with the TimesTen cluster agent. The port number can be different on each host. If you do not provide a port number, TimesTen uses the default TimesTen port.

  • The Oracle Clusterware location. The location must be the same on each host.

  • The hosts included in the cluster, including spare hosts, with host names separated by commas. This list must be the same on each host.

The installer uses these values to create the ttcrsagent.options file on UNIX platforms. See "TimesTen Installation" in Oracle TimesTen In-Memory Database Installation Guide for details. You can also use ttmodinstall -crs to create the file after installation. Use the -record and -batch options for setup.sh to perform identical installations on additional hosts if desired.

On Windows, execute ttmodinstall -crs on each node after installation to create the ttcrsagent.options file.

For more information about ttmodinstall, see "ttmodinstall" in Oracle TimesTen In-Memory Database Reference.

Register the TimesTen cluster information

TimesTen cluster information is stored in the Oracle Cluster Registry (OCR). As the root user on UNIX platforms, or as the instance administrator on Windows, enter this command:

ttCWAdmin -ocrConfig

As long as Oracle Clusterware and TimesTen are installed on the hosts, this step never needs to be repeated.

Start the TimesTen cluster agent

Start the TimesTen cluster agent by executing the ttCWAdmin -init command on one of the hosts. For example:

ttCWAdmin -init

This command starts the TimesTen cluster agent (ttCRSAgent) and the TimesTen daemon monitor (ttCRSDaemon). There is one TimesTen cluster agent and one TimesTen daemon monitor for the TimesTen installation. When the TimesTen cluster agent has started, Oracle Clusterware begins monitoring the TimesTen daemon and will restart it if it fails.


Note:

You must stop the TimesTen cluster agent on the local host before you execute a ttDaemonAdmin -stop command. Otherwise the cluster agent will restart the daemon.

Create and populate a TimesTen database on one host

Create a database on the host where you intend the active database to reside. The DSN must be the same as the database file name.

Create schema objects such as tables, AWT cache groups and read-only cache groups. Do not load the cache groups.
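
A minimal ttIsql sketch, assuming the basicDSN entry shown earlier and a hypothetical table (connecting to the DSN creates the database):

% ttIsql basicDSN
Command> CREATE TABLE readtab (a NUMBER NOT NULL PRIMARY KEY, b CHAR(18));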

Create sys.odbc.ini files on other hosts

On all hosts that will be in the cluster, create sys.odbc.ini files. The DataStore attribute and the Data Source Name must be the same as the entry name for the cluster.oracle.ini file. See "The cluster.oracle.ini file" for information about the contents of the sys.odbc.ini files.

Create a cluster.oracle.ini file

Create a cluster.oracle.ini file as a text file. See "The cluster.oracle.ini file" for details about its contents and acceptable locations for the file.

Create the virtual IP addresses (optional)

For advanced availability, execute the ttCWAdmin -createVIPs command on any host in the cluster. On UNIX, you must execute this command as the root user. For example:

ttCWAdmin -createVIPs -dsn myDSN

Create an active standby pair replication scheme

Create an active standby pair replication scheme by executing the ttCWAdmin -create command on any host.


Note:

The cluster.oracle.ini file contains the configuration needed to perform the ttCWAdmin -create command and so must be reachable by the ttCWAdmin executable. See "The cluster.oracle.ini file" for details about acceptable locations for the cluster.oracle.ini file.

For example:

ttCWAdmin -create -dsn myDSN

This command prompts for an encryption pass phrase that the user will not need again. The command also prompts for the user ID and password for an internal user with the ADMIN privilege if it does not find this information in the sys.odbc.ini file. This internal user will be used to create the active standby pair.

If the CacheConnect Clusterware attribute is enabled, the command prompts for the user password for the Oracle database. The Oracle password is used to set the autorefresh states for cache groups. See "CacheConnect" for more details on this attribute.

Start the active standby pair

Start the active standby pair replication scheme by executing the ttCWAdmin -start command on any host. For example:

ttCWAdmin -start -dsn myDSN

This command starts the following processes for the active standby pair:

  • ttCRSMaster

  • ttCRSActiveService

  • ttCRSsubservice

  • Monitor for application AppName

Load cache groups

If the active standby pair includes cache groups, use the LOAD CACHE GROUP statement to load the cache group tables from the Oracle tables.

Including more than one active standby pair in a cluster

If you want to use Oracle Clusterware to manage more than one active standby pair in a cluster, include additional configuration in the cluster.oracle.ini file. Oracle Clusterware can only manage more than one active standby pair in a cluster if all TimesTen databases are a part of the same TimesTen instance on a single host.

For example, the following cluster.oracle.ini file contains configuration information for two active standby pair replication schemes on the same host:


Note:

For details on configuration attributes in the cluster.oracle.ini file, see Chapter 8, "TimesTen Configuration Attributes for Oracle Clusterware".

[advancedSubscriberDSN]
MasterHosts=host1,host2,host3
SubscriberHosts=host4, host5
MasterVIP=192.168.1.1, 192.168.1.2
SubscriberVIP=192.168.1.3
VIPInterface=eth0
VIPNetMask=255.255.255.0

[advSub2DSN]
MasterHosts=host1,host2,host3
SubscriberHosts=host4, host5
MasterVIP=192.168.1.4, 192.168.1.5
SubscriberVIP=192.168.1.6
VIPInterface=eth0
VIPNetMask=255.255.255.0

Perform these tasks for additional replication schemes:

  1. Create and populate the databases.

  2. Create the virtual IP addresses. Use the ttCWAdmin -createVIPs command.

  3. Create the active standby pair replication scheme. Use the ttCWAdmin -create command.

  4. Start the active standby pair. Use the ttCWAdmin -start command.

Configuring an Oracle database as a disaster recovery subscriber

You can create an active standby pair on the primary site with an Oracle database as a remote disaster recovery subscriber. See "Using a disaster recovery subscriber in an active standby pair". Oracle Clusterware manages the active standby pair but does not manage the disaster recovery subscriber. The user must perform a switchover if the primary site fails.

To use Oracle Clusterware to manage an active standby pair that has a remote disaster recovery subscriber, perform these tasks:

  1. Use the RepDDL or RemoteSubscriberHosts Clusterware attribute to provide information about the remote disaster recovery subscriber. For example:

    [advancedDRsubDSN]
    MasterHosts=host1,host2,host3
    SubscriberHosts=host4, host5
    RemoteSubscriberHosts=host6
    MasterVIP=192.168.1.1, 192.168.1.2
    SubscriberVIP=192.168.1.3
    VIPInterface=eth0
    VIPNetMask=255.255.255.0
    CacheConnect=y
    
  2. Use ttCWAdmin -create to create the active standby pair replication scheme on the primary site. This does not create the disaster recovery subscriber.

  3. Use ttCWAdmin -start to start the active standby pair replication scheme.

  4. Load the cache groups that are replicated by the active standby pair.

  5. Set up the disaster recovery subscriber using the procedure in "Rolling out a disaster recovery subscriber".

Configuring a read-only subscriber that is not managed by Oracle Clusterware

You can include a read-only TimesTen subscriber database that is not managed by Oracle Clusterware. Perform these tasks:

  1. Include the RemoteSubscriberHosts Clusterware attribute in the cluster.oracle.ini file. For example:

    [advancedROsubDSN]
    MasterHosts=host1,host2,host3
    RemoteSubscriberHosts=host6
    MasterVIP=192.168.1.1, 192.168.1.2
    SubscriberVIP=192.168.1.3
    VIPInterface=eth0
    VIPNetMask=255.255.255.0
    
  2. Use ttCWAdmin -create to create the active standby pair replication scheme on the primary site.

  3. Use ttCWAdmin -start to start the active standby pair replication scheme. This does not create the read-only subscriber.

  4. Use the ttRepStateGet procedure to verify that the state of the standby database is STANDBY.

  5. On the subscriber host, use the ttRepAdmin -duplicate option to duplicate the standby database to the read-only subscriber. See "Duplicating a database".

  6. Start the replication agent on the subscriber host.

To add a read-only subscriber to an existing configuration, see "Adding a read-only subscriber not managed by Oracle Clusterware".

To rebuild a read-only subscriber, see "Rebuilding a read-only subscriber not managed by Oracle Clusterware".

Using Oracle Clusterware with a TimesTen cache grid

You can use the TimesTen implementation of Oracle Clusterware to manage a cache grid when each grid member is an active standby pair. TimesTen does not support using Oracle Clusterware to manage standalone grid members.

This section includes:

Creating and initializing a cluster of cache grid members

See "Install TimesTen on each host" for installation requirements. In addition, each grid member must have a DSN that is unique within the cache grid.

Perform the tasks described in "Creating and initializing a cluster" for each grid member. Include the GridPort Clusterware attribute in the cluster.oracle.ini file as described in "Including the active standby pair in a cache grid". Ensure that the specified port numbers are not in use.
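
For illustration, a cluster.oracle.ini entry for one grid member might look like the following; the hosts, virtual IP addresses, and GridPort values are placeholders, and the two port numbers are assumed to be unused:

[advancedGridDSN]
MasterHosts=host1,host2,host3
MasterVIP=192.168.1.1, 192.168.1.2
VIPInterface=eth0
VIPNetMask=255.255.255.0
CacheConnect=y
GridPort=16101, 16102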

The ttCWAdmin -start command automatically attaches a grid member to the cache grid. The ttCWAdmin -stop command automatically detaches a grid member from the cache grid.

Failure and recovery for active standby pair grid members

If both nodes of an active standby pair grid member fail, then the grid member fails. Oracle Clusterware evicts the failed grid member from the grid automatically. However, when a cache grid is configured, any further automatic recovery after a dual failure, whether temporary or permanent, is not possible. In this case, you can only recover manually. For details, see "Manual recovery of both nodes of an active standby pair grid member".

Making schema changes to active standby pairs in a grid

You can add, drop or change a cache group while the active database is attached to the grid.

Use the ttCWAdmin -beginAlterSchema command to make these schema changes. This command stops replication but allows the active database to remain attached to the grid. The ttCWAdmin -endAlterSchema command duplicates the changes to the standby database, registers the altered replication scheme and starts replication.

To add a table and include it in the active standby pair, see "Making DDL changes in an active standby pair". See the same section for information about dropping a replicated table.

Add a cache group

Perform these steps on the active database of each active standby pair grid member.

  1. Enable the addition of the cache group to the active standby pair.

    ttCWAdmin -beginAlterSchema advancedGridDSN
    
  2. Create the cache group.
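
    For example, a read-only cache group might be created with a statement like the following; the Oracle schema, table, and columns are illustrative:

    CREATE READONLY CACHE GROUP samplecachegroup
    AUTOREFRESH INTERVAL 10 MINUTES
    FROM oratt.customer
      (cust_num NUMBER(6) NOT NULL,
       name VARCHAR2(50),
       PRIMARY KEY(cust_num));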

  3. If the cache group is a read-only cache group, alter the active standby pair to include the cache group.

    ALTER ACTIVE STANDBY PAIR INCLUDE CACHE GROUP samplecachegroup;
    
  4. Duplicate the change to the standby database.

    ttCWAdmin -endAlterSchema advancedGridDSN
    

You can load the cache group at any time after you create the cache group.
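
For example, the following statement loads the cache group created above; the batch size is illustrative:

LOAD CACHE GROUP samplecachegroup COMMIT EVERY 256 ROWS;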

Drop a cache group

Perform these steps to drop a cache group.

  1. Unload the cache group in all members of the cache grid.

    CALL ttOptSetFlag('GlobalProcessing', 1);
    UNLOAD CACHE GROUP samplecachegroup;
    
  2. On the active database of an active standby pair grid member, enable dropping the cache group.

    ttCWAdmin -beginAlterSchema advancedGridDSN
    
  3. If the cache group is a read-only cache group, alter the active standby pair to exclude the cache group.

    ALTER ACTIVE STANDBY PAIR EXCLUDE CACHE GROUP samplecachegroup;
    
  4. If the cache group is a read-only cache group, set the autorefresh state to PAUSED.

    ALTER CACHE GROUP samplecachegroup SET AUTOREFRESH STATE PAUSED;
    
  5. Drop the cache group.

    DROP CACHE GROUP samplecachegroup;
    
  6. If the cache group was a read-only cache group, run the TimesTen_install_dir/oraclescripts/cacheCleanUp.sql SQL*Plus script as the cache administration user on the Oracle database to drop the Oracle objects used to implement autorefresh operations.

  7. Duplicate the change to the standby database.

    ttCWAdmin -endAlterSchema advancedGridDSN
    
  8. Repeat steps 2 through 7 on the active database of each active standby pair grid member.

Change an existing cache group

To change an existing cache group, first drop the existing cache group as described in "Drop a cache group". Then add the cache group with the desired changes as described in "Add a cache group".

Recovering from failures

Oracle Clusterware can recover automatically from many kinds of failures. The following sections describe several failure scenarios and how Oracle Clusterware manages the failures.

How TimesTen performs recovery when Oracle Clusterware is configured

The TimesTen database monitor (the ttCRSmaster process) performs recovery. It attempts to connect to the failed database without using the forceconnect option. If the connection fails with error 994 (Data store connection terminated), the database monitor tries to connect 10 times. If the connection fails with error 707 (Attempt to connect to a data store that has been manually unloaded from RAM), the database monitor changes the RAM policy and tries to connect again. If the database monitor still cannot connect, it reports a connection failure.

If the database monitor can connect to the database, then it performs these tasks:

  • It queries the CHECKSUM column in the TTREP.REPLICATIONS replication table.

  • If the value in the CHECKSUM column matches the checksum stored in the Oracle Cluster Registry, then the database monitor verifies the role of the database. If the role is 'ACTIVE', then recovery is complete.

    If the role is not 'ACTIVE', then the database monitor queries the replication Commit Ticket Number (CTN) in the local database and the CTN in the active database to find out whether there are transactions that have not been replicated. If all transactions have been replicated, then recovery is complete.

  • If the checksum does not match or if some transactions have not been replicated, then the database monitor performs a duplicate operation from the remote database to re-create the local database.

If the database monitor fails to connect with the database because of error 8110 or 8111 (master catchup required or in progress), then it uses the forceconnect=1 option to connect and starts master catchup. Recovery is complete when master catchup has been completed. If master catchup fails with error 8112 (Operation not permitted), then the database monitor performs a duplicate operation from the remote database. For more information about master catchup, see "Automatic catch-up of a failed master database".

If the connection fails because of other errors, then the database monitor tries to perform a duplicate operation from the remote database.

The duplicate operation verifies that:

  • The remote database is available.

  • The replication agent is running.

  • The remote database has the correct role. The role must be 'ACTIVE' when the duplicate operation is attempted for creation of a standby database. The role must be 'STANDBY' or 'ACTIVE' when the duplicate operation is attempted for creation of a read-only subscriber.

When the conditions for the duplicate operation are satisfied, the existing failed database is destroyed and the duplicate operation starts.

When an active database or its host fails

If there is a failure on the node where the active database resides, Oracle Clusterware automatically changes the state of the standby database to 'ACTIVE'. If application failover is configured, then the application begins updating the new active database.

Figure 7-2 shows that the state of the standby database has changed to 'ACTIVE' and that the application is updating the new active database.

Figure 7-2 Standby database becomes active

Description of Figure 7-2 follows
Description of "Figure 7-2 Standby database becomes active"

Oracle Clusterware tries to restart the database or host where the failure occurred. If it is successful, then that database becomes the standby database.

Figure 7-3 shows a cluster where the former active node becomes the standby node.

Figure 7-3 Standby database starts on former active host

Description of Figure 7-3 follows
Description of "Figure 7-3 Standby database starts on former active host"

If the failure of the former active node is permanent and advanced availability is configured, Oracle Clusterware starts a standby database on one of the extra nodes.

Figure 7-4 shows a cluster in which the standby database is started on one of the extra nodes.

Figure 7-4 Standby database starts on extra host

Description of Figure 7-4 follows
Description of "Figure 7-4 Standby database starts on extra host"

If you do not want to wait for these automatic actions to occur, see "Performing a forced switchover after failure of the active database or host".

When a standby database or its host fails

If there is a failure on the standby node, Oracle Clusterware first tries to restart the database or host. If it cannot restart the standby database on the same host and advanced availability is configured, Oracle Clusterware starts the standby database on an extra node.

Figure 7-5 shows a cluster in which the standby database is started on one of the extra nodes.

Figure 7-5 Standby database on new host

Description of Figure 7-5 follows
Description of "Figure 7-5 Standby database on new host"

When read-only subscribers or their hosts fail

If there is a failure on a subscriber node, Oracle Clusterware first tries to restart the database or host. If it cannot restart the database on the same host and advanced availability is configured, Oracle Clusterware starts the subscriber database on an extra node.

When failures occur on both master nodes

This section includes these topics:

Automatic recovery when not attached to a grid

Oracle Clusterware can achieve automatic recovery from temporary failure on both master nodes after the nodes come back up if:

  • RETURN TWOSAFE is not specified for the active standby pair.

  • AutoRecover is set to y.

  • RepBackupDir specifies a directory on shared storage.

  • RepBackupPeriod is set to a value greater than 0.

Oracle Clusterware can achieve automatic recovery from permanent failure on both master nodes if:

  • Advanced availability is configured (virtual IP addresses and at least four hosts).

  • The active standby pair does not replicate cache groups.

  • A cache grid is not configured.

  • RETURN TWOSAFE is not specified for the active standby pair.

  • AutoRecover is set to y.

  • RepBackupDir specifies a directory on shared storage.

  • RepBackupPeriod is set to a value greater than 0.
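
For illustration only, a hypothetical cluster.oracle.ini entry that satisfies the backup-related conditions in both lists might look like the following; the DSN, hosts, addresses, and backup directory are placeholders:

[autoRecoverDSN]
MasterHosts=host1,host2,host3
MasterVIP=192.168.1.1, 192.168.1.2
VIPInterface=eth0
VIPNetMask=255.255.255.0
AutoRecover=y
RepBackupDir=/shared_drive/dsbackup
RepBackupPeriod=3600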

See "Recovering from permanent failure of both master nodes" for examples of cluster.oracle.ini files.

Manual recovery of both nodes of an active standby pair grid member

If both nodes of an active standby pair grid member fail, then the grid member fails, and Oracle Clusterware automatically evicts the failed grid member from the grid. When a cache grid is configured, no further automatic recovery after such a dual failure, whether temporary or permanent, is possible; after the failed grid member has been removed from the grid, you must continue the recovery manually.

If the active standby pair grid member is in an asynchronous replication scheme, the grid member is recovered automatically and reattached to the grid. If the active standby pair grid member is in a replication scheme with RETURN TWOSAFE configured, perform these steps to recover the grid member and reattach it to the grid:

  1. Stop the replication agent and the cache agent and disconnect the application from both databases. This step detaches the grid member from the grid.

    ttCWAdmin -stop advancedGridDSN
    
  2. Drop the active standby pair.

    ttCWAdmin -drop advancedGridDSN
    
  3. Create the active standby pair replication scheme.

    ttCWAdmin -create advancedGridDSN
    
  4. Start the active standby pair replication scheme. This step attaches the grid member to the grid.

    ttCWAdmin -start advancedGridDSN
    

Manual recovery for advanced availability

This section assumes that the failed master nodes will be recovered to new hosts on which TimesTen and Oracle Clusterware have been installed. These steps use the manrecoveryDSN database and cluster.oracle.ini file for examples.

To perform manual recovery in an advanced availability configuration, perform these tasks:

  1. Ensure that the TimesTen cluster agent is running on the local host.

    ttCWAdmin -init -hosts localhost
    
  2. Restore the backup database. Ensure that there is not already a database on the host with the same DSN as the database you want to restore.

    ttCWAdmin -restore -dsn manrecoveryDSN
    
  3. If there are cache groups in the database, drop and re-create the cache groups.

  4. If the new hosts are not already specified by MasterHosts and SubscriberHosts in the cluster.oracle.ini file, then modify the file to include the new hosts.

    This step is not necessary for manrecoveryDSN because extra hosts are already specified in the cluster.oracle.ini file.

  5. Re-create the active standby pair replication scheme.

    ttCWAdmin -create -dsn manrecoveryDSN
    
  6. Start the active standby pair replication scheme.

    ttCWAdmin -start -dsn manrecoveryDSN
    

Manual recovery for basic availability

This section assumes that the failed master nodes will be recovered to new hosts on which TimesTen and Oracle Clusterware have been installed. These steps use the basicDSN database and cluster.oracle.ini file for examples.

To perform manual recovery in a basic availability configuration, perform these steps:

  1. Acquire new hosts for the databases in the active standby pair.

  2. Ensure that the TimesTen cluster agent is running on the local host.

    ttCWAdmin -init -hosts localhost
    
  3. Restore the backup database. Ensure that there is not already a database on the host with the same DSN as the database you want to restore.

    ttCWAdmin -restore -dsn basicDSN
    
  4. If there are cache groups in the database, drop and re-create the cache groups.

  5. Update the MasterHosts entry in the cluster.oracle.ini file. This example uses the basicDSN database. The MasterHosts entry changes from host1,host2 to host10,host20.

    [basicDSN]
    MasterHosts=host10,host20
    
  6. Re-create the active standby pair replication scheme.

    ttCWAdmin -create -dsn basicDSN
    
  7. Start the active standby pair replication scheme.

    ttCWAdmin -start -dsn basicDSN
    

Manual recovery to the same master nodes when databases are corrupt

Failures on both master nodes can leave both databases corrupt. If you want to recover to the same master nodes, perform the following steps:

  1. Ensure that the replication agent and the cache agent are stopped and that applications are disconnected from both databases. This example uses the basicDSN database.

    ttCWAdmin -stop -dsn basicDSN
    
  2. On the node where you want the new active database to reside, destroy the database by using the ttDestroy utility.

    ttDestroy basicDSN
    
  3. Restore the backup database.

    ttCWAdmin -restore -dsn basicDSN
    
  4. If there are cache groups in the database, drop and re-create the cache groups.

  5. Re-create the active standby pair replication scheme.

    ttCWAdmin -create -dsn basicDSN
    
  6. Start the active standby pair replication scheme.

    ttCWAdmin -start -dsn basicDSN
    

Manual recovery when RETURN TWOSAFE is configured

You can configure an active standby pair to have a return service of RETURN TWOSAFE by using the ReturnServiceAttribute Clusterware attribute in the cluster.oracle.ini file. When RETURN TWOSAFE is configured, the database logs may be available on one or both nodes after both nodes fail.

This cluster.oracle.ini example includes backup configuration in case the database logs are not available:

[basicTwosafeDSN]
MasterHosts=host1,host2
ReturnServiceAttribute=RETURN TWOSAFE
RepBackupDir=/shared_drive/dsbackup
RepBackupPeriod=3600

Perform these recovery tasks:

  1. Ensure that the replication agent and the cache agent are stopped and that applications are disconnected from both databases.

    ttCWAdmin -stop -dsn basicTwosafeDSN
    
  2. Drop the active standby pair.

    ttCWAdmin -drop -dsn basicTwosafeDSN
    
  3. Decide whether the former active or standby database is more up to date and re-create the active standby pair using the chosen database. The command prompts you to choose the host on which the active database will reside.

    ttCWAdmin -create -dsn basicTwosafeDSN
    

    If neither database is usable, restore the database from backups.

    ttCWAdmin -restore -dsn basicTwosafeDSN
    
  4. Start the active standby pair replication scheme.

    ttCWAdmin -start -dsn basicTwosafeDSN
    

When more than two master hosts fail

Approach a failure of more than two master hosts as a more extreme case of dual host failure. Use these guidelines:

Performing a forced switchover after failure of the active database or host

If you want to force a switchover to the standby database without waiting for automatic recovery to be performed by TimesTen and Oracle Clusterware, you can write an application that uses Oracle Clusterware commands. These are the tasks to perform:

  1. Use the crs_stop command to stop the ttCRSmaster resource on the active database. This causes the role of the standby database to change to active.

  2. Use the crs_start command to restart the ttCRSmaster resource on the former active database. This causes the database to recover and become the standby database.
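
For example, a sketch using the Oracle Clusterware command-line interface, where ttCRSmaster_resource_name is a placeholder for the actual name of the ttCRSmaster resource in your cluster:

crs_stop ttCRSmaster_resource_name
crs_start ttCRSmaster_resource_name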

See Oracle Clusterware Administration and Deployment Guide for more information about the crs_stop and crs_start commands.

Planned maintenance

This section includes the following topics:

Changing the schema

To include or exclude a table, see "Making DDL changes in an active standby pair".

To include or exclude a cache group, see "Making schema changes to active standby pairs in a grid".

To create PL/SQL procedures, sequences, materialized views and indexes on tables with data, perform these tasks:

  1. Enable the addition of the object to the active standby pair.

    ttCWAdmin -beginAlterSchema advancedDSN
    
  2. Create the object.
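
    For example, a sequence might be created as follows (the name matches the sequence used in step 3; the range is illustrative):

    CREATE SEQUENCE samplesequence MINVALUE 1 MAXVALUE 1000000;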

  3. If the object is a sequence and you want to include it in the active standby pair replication scheme, alter the active standby pair.

    ALTER ACTIVE STANDBY PAIR INCLUDE SEQUENCE samplesequence;
    
  4. Duplicate the change to the standby database.

    ttCWAdmin -endAlterSchema advancedDSN
    

To add or drop a subscriber database or alter database attributes, perform the following tasks:

  1. Stop the replication agents on the databases in the active standby pair. These commands use the advancedCacheDSN as an example.

    ttCWAdmin -stop -dsn advancedCacheDSN
    
  2. Drop the active standby pair.

    ttCWAdmin -drop -dsn advancedCacheDSN
    
  3. Modify the schema as desired.

  4. Re-create the active standby pair replication scheme.

    ttCWAdmin -create -dsn advancedCacheDSN
    
  5. Start the active standby pair replication scheme.

    ttCWAdmin -start -dsn advancedCacheDSN
    

Performing a rolling upgrade of Oracle Clusterware software

See Oracle Clusterware Administration and Deployment Guide.

Upgrading TimesTen

See "Upgrading TimesTen when using Oracle Clusterware" in Oracle TimesTen In-Memory Database Installation Guide.

Adding a read-only subscriber to an active standby pair

To add a read-only subscriber to an active standby pair replication scheme managed by Oracle Clusterware, perform these steps:

  1. Stop the replication agents on all databases. This example uses the advancedSubscriberDSN, which already has a subscriber and is configured for advanced availability.

    ttCWAdmin -stop -dsn advancedSubscriberDSN
    
  2. Drop the active standby pair.

    ttCWAdmin -drop -dsn advancedSubscriberDSN
    
  3. Modify the cluster.oracle.ini file.

    • Add the subscriber to the SubscriberHosts attribute.

    • If the cluster is configured for advanced availability, add a virtual IP address to the SubscriberVIP attribute.

    See "Configuring advanced availability" for an example using these attributes.

  4. Create the active standby pair replication scheme.

    ttCWAdmin -create -dsn advancedSubscriberDSN
    
  5. Start the active standby pair replication scheme.

    ttCWAdmin -start -dsn advancedSubscriberDSN
    

Removing a read-only subscriber from an active standby pair

To remove a read-only subscriber from an active standby pair, perform these steps:

  1. Stop the replication agents on all databases. This example uses the advancedSubscriberDSN, which has a subscriber and is configured for advanced availability.

    ttCWAdmin -stop -dsn advancedSubscriberDSN
    
  2. Drop the active standby pair.

    ttCWAdmin -drop -dsn advancedSubscriberDSN
    
  3. Modify the cluster.oracle.ini file.

    • Remove the subscriber from the SubscriberHosts attribute or remove the attribute altogether if there are no subscribers left in the active standby pair.

    • Remove a virtual IP from the SubscriberVIP attribute or remove the attribute altogether if there are no subscribers left in the active standby pair.

  4. Create the active standby pair replication scheme.

    ttCWAdmin -create -dsn advancedSubscriberDSN
    
  5. Start the active standby pair replication scheme.

    ttCWAdmin -start -dsn advancedSubscriberDSN
    

Adding an active standby pair to a cluster

To add an active standby pair (with or without subscribers) to a cluster that is already managing an active standby pair, perform these tasks:

  1. Create and populate a database on the host where you intend the active database to reside initially. See "Create and populate a TimesTen database on one host".

  2. Modify the cluster.oracle.ini file. This example adds advSub2DSN to the cluster.oracle.ini file that already contains the configuration for advancedSubscriberDSN. The new active standby pair is on different hosts from the original active standby pair.

    [advancedSubscriberDSN]
    MasterHosts=host1,host2,host3
    SubscriberHosts=host4, host5
    MasterVIP=192.168.1.1, 192.168.1.2
    SubscriberVIP=192.168.1.3
    VIPInterface=eth0
    VIPNetMask=255.255.255.0
    
    [advSub2DSN]
    MasterHosts=host6,host7,host8
    SubscriberHosts=host9, host10
    MasterVIP=192.168.1.4, 192.168.1.5
    SubscriberVIP=192.168.1.6
    VIPInterface=eth0
    VIPNetMask=255.255.255.0
    
  3. Create new virtual IP addresses. On UNIX, the user must be root to do this.

    ttCWAdmin -createVIPs -dsn advSub2DSN
    
  4. Create the new active standby pair replication scheme.

    ttCWAdmin -create -dsn advSub2DSN
    
  5. Start the new active standby pair replication scheme.

    ttCWAdmin -start -dsn advSub2DSN
    

Adding a read-only subscriber not managed by Oracle Clusterware

You can add a read-only subscriber that is not managed by Oracle Clusterware to an existing active standby pair replication scheme that is managed by Oracle Clusterware. Using the ttCWAdmin -beginAlterSchema command enables you to add the subscriber without dropping and re-creating the replication scheme. Oracle Clusterware does not manage the subscriber because it is not part of the configuration that was set up for Oracle Clusterware management.

Perform these steps:

  1. Enter the ttCWAdmin -beginAlterSchema command to stop the replication agent on the active and standby databases.
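
    For example, if this configuration uses the advancedSubscriberDSN:

    ttCWAdmin -beginAlterSchema advancedSubscriberDSN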

  2. Using ttIsql to connect to the active database, add the subscriber to the replication scheme by using an ALTER ACTIVE STANDBY PAIR statement.

    ALTER ACTIVE STANDBY PAIR ADD SUBSCRIBER ROsubDSN ON host6;
    
  3. Enter the ttCWAdmin -endAlterSchema command to duplicate the standby database, register the altered replication scheme and start replication.
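
    For example:

    ttCWAdmin -endAlterSchema advancedSubscriberDSN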

  4. Enter the ttIsql repschemes command to verify that the read-only subscriber has been added to the replication scheme.

  5. Use the ttRepStateGet procedure to verify that the state of the standby database is STANDBY.

  6. On the subscriber host, use ttRepAdmin -duplicate to duplicate the standby database to the read-only subscriber. See "Duplicating a database".

  7. Start the replication agent on the subscriber host.

Rebuilding a read-only subscriber not managed by Oracle Clusterware

You can destroy and rebuild a read-only subscriber that is not managed by Oracle Clusterware. Perform these tasks:

  1. Stop the replication agent on the subscriber host.

  2. Use the ttDestroy utility to destroy the subscriber database.
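
    For example, assuming the subscriber DSN is ROsubDSN:

    ttDestroy ROsubDSN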

  3. On the subscriber host, use ttRepAdmin -duplicate to duplicate the standby database to the read-only subscriber. See "Duplicating a database".

Removing an active standby pair from a cluster

To remove an active standby pair (with or without subscribers) from a cluster, perform these tasks:

  1. Stop the replication agents on all databases in the active standby pair. This example uses advSub2DSN, which was added in "Adding an active standby pair to a cluster".

    ttCWAdmin -stop -dsn advSub2DSN
    
  2. Drop the active standby replication scheme.

    ttCWAdmin -drop -dsn advSub2DSN
    
  3. Drop the virtual IP addresses for the active standby pair.

    ttCWAdmin -dropVIPs -dsn advSub2DSN
    
  4. Modify the cluster.oracle.ini file (optional). Remove the entries for advSub2DSN.

  5. If you want to destroy the databases, log onto each host that was included in the configuration for this active standby pair and use the ttDestroy utility.

    ttDestroy advSub2DSN
    

    For more information about ttDestroy, see "ttDestroy" in Oracle TimesTen In-Memory Database Reference.

Adding a host to the cluster

Adding a host requires that the cluster be configured for advanced availability. The examples in this section use the advancedSubscriberDSN.

To add two spare master hosts to a cluster, enter a command similar to the following:

ttCWAdmin -addMasterHosts -hosts "host8,host9" -dsn advancedSubscriberDSN

To add a spare subscriber host to a cluster, enter a command similar to the following:

ttCWAdmin -addSubscriberHosts -hosts "subhost1" -dsn advancedSubscriberDSN

Removing a host from the cluster

Removing a host from the cluster requires that the cluster be configured for advanced availability. MasterHosts must list more than two hosts if one of the master hosts is to be removed. SubscriberHosts must list at least one more host than the number of subscriber databases if one of the subscriber hosts is to be removed.

The examples in this section use the advancedSubscriberDSN.

To remove two spare master hosts from the cluster, enter a command similar to the following:

ttCWAdmin -delMasterHosts "host8,host9" -dsn advancedSubscriberDSN

To remove a spare subscriber host from the cluster, enter a command similar to the following:

ttCWAdmin -delSubscriberHosts "subhost1" -dsn advancedSubscriberDSN

Reversing the roles of the master databases

After a failover, the active and standby databases are on different hosts than they were before the failover. You can use the -switch option of the ttCWAdmin utility to restore the original configuration.

For example:

ttCWAdmin -switch -dsn basicDSN

Ensure that there are no open transactions before using the -switch option. If there are open transactions, the command fails.

Figure 7-6 shows the hosts for an active standby pair. The active database resides on host A, and the standby database resides on host B.

Figure 7-6 Hosts for an active standby pair

Description of Figure 7-6 follows
Description of "Figure 7-6 Hosts for an active standby pair"

The ttCWAdmin -switch command performs these tasks:

  • Deactivates the TimesTen cluster agent (ttCRSAgent) on host A (the active node)

  • Disables the database monitor (ttCRSmaster) on host A

  • Calls the ttRepSubscriberWait, ttRepStop and ttRepDeactivate built-in procedures on host A

  • Stops the active service (ttCRSActiveService) on host A and reports a failure event to the Oracle Clusterware CRSD process

  • Enables monitoring on host A and moves the active service to host B

  • Starts the replication agent on host A, stops the standby service (ttCRSsubservice) on host B and reports a failure event to the Oracle Clusterware CRSD process on host B

  • Starts the standby service (ttCRSsubservice) on host A

Moving a database to a different host

When a cluster is configured for advanced availability, you can use the -relocate option of the ttCWAdmin utility to move a database from the local host to the next available spare host specified in the MasterHosts attribute in the cluster.oracle.ini file. If the database on the local host has the active role, the -relocate option first reverses the roles, so that the database being relocated becomes the standby and the former standby becomes the active.

The -relocate option is useful for relocating a database if you decide to take the host offline. Ensure that there are no open transactions before you use the command.

For example:

ttCWAdmin -relocate -dsn advancedDSN

Performing host or network maintenance

If you decide to upgrade the operating system or hardware for a host or perform network maintenance, shut down Oracle Clusterware and disable automatic startup. Execute these Oracle Clusterware commands as root or OS administrator:

# crsctl stop crs

# crsctl disable crs

Shut down TimesTen. See "Shutting down a TimesTen application" in Oracle TimesTen In-Memory Database Operations Guide.
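
For example, a minimal sketch, assuming that the entire TimesTen instance on the host can be stopped as the instance administrator:

ttDaemonAdmin -stop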

Perform the host maintenance. Then enable automatic startup and start Oracle Clusterware:

# crsctl enable crs

# crsctl start crs

See Oracle Clusterware Administration and Deployment Guide for more information about these commands.

Performing maintenance on the entire cluster

When all of the hosts in the cluster need to be brought down, stop Oracle Clusterware on each host individually. Execute these Oracle Clusterware commands as root or OS administrator:

# crsctl stop crs

# crsctl disable crs

Shut down TimesTen. See "Shutting down a TimesTen application" in Oracle TimesTen In-Memory Database Operations Guide.

Perform the maintenance. Then enable automatic startup and start Oracle Clusterware:

# crsctl enable crs

# crsctl start crs

See Oracle Clusterware Administration and Deployment Guide for more information about these commands.

Changing user names or passwords

When you create the active standby pair replication scheme with the ttCWAdmin -create command, Oracle Clusterware prompts for the user name and password of the internal user. If there are cache groups in the active standby pair, Oracle Clusterware also stores the cache administration user name and password. To change the user name or password for the internal user or the cache administration user, you must re-create the cluster.

To change the user name or password of the internal user that created the active standby pair replication scheme, or to change the cache administration user name or password, perform these tasks:

  1. Stop the replication agents on the databases in the active standby pair. These commands use the advancedCacheDSN as an example.

    ttCWAdmin -stop -dsn advancedCacheDSN
    
  2. Drop the active standby pair.

    ttCWAdmin -drop -dsn advancedCacheDSN
    
  3. Change the appropriate user name or password:
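
    For example, a hedged sketch in ttIsql, assuming terry is the internal user and cacheadmin is the cache administration user (all names and passwords here are placeholders): ALTER USER changes the internal user's password, and the ttCacheUidPwdSet built-in procedure changes the cache administration credentials.

    Command> ALTER USER terry IDENTIFIED BY newpwd;
    Command> CALL ttCacheUidPwdSet('cacheadmin', 'newpwd');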

  4. Re-create the active standby pair replication scheme.

    ttCWAdmin -create -dsn advancedCacheDSN
    
  5. Start the active standby pair replication scheme.

    ttCWAdmin -start -dsn advancedCacheDSN
    

Monitoring cluster status

This section includes:

Obtaining cluster status

Use the -status option of the ttCWAdmin utility to report information about all of the active standby pairs in a TimesTen instance that are managed by the same instance administrator. If you specify a DSN, the utility reports information only for the active standby pair with that DSN.
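
For example, to report only on the active standby pair with a specific DSN (the DSN name here is illustrative):

ttCWAdmin -status -dsn advancedSubscriberDSN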

Example 7-1 Status after creating an active standby pair

After you have created an active standby pair replication scheme but have not yet started replication, ttCWAdmin -status returns information like this. Note that these grid states will be displayed before replication is started regardless of whether there is a cache grid.

$ ttCWAdmin -status
TimesTen Cluster status report as of Thu Nov 11 13:54:35 2010
 
====================================================================
TimesTen daemon monitors:
Host:HOST1 Status: online
Host:HOST2 Status: online
 
====================================================================
====================================================================
TimesTen Cluster agents
Host:HOST1 Status: online
Host:HOST2 Status: online
 
====================================================================
 
Status of Cluster related to DSN MYDSN:
====================================================================
1. Status of Cluster monitoring components:
Monitor Process for Active datastore:NOT RUNNING
Monitor Process for Standby datastore:NOT RUNNING
Monitor Process for Master Datastore 1 on Host host1: NOT RUNNING
Monitor Process for Master Datastore 2 on Host host2: NOT RUNNING
 
2.Status of  Datastores comprising the cluster
Master Datastore 1:
Host:host1
Status:AVAILABLE
State:ACTIVE
Grid:NO GRID
Master Datastore 2:
Host:host2
Status:UNAVAILABLE
State:UNKNOWN
Grid:UNKNOWN
====================================================================
The cluster containing the replicated DSN is offline

Example 7-2 Status when the active database is running

After you have started the replication scheme and the active database is running but the standby database is not yet running, ttCWAdmin -status returns information like this when a cache grid is not configured.

$ ttcwadmin -status
TimesTen Cluster status report as of Thu Nov 11 13:58:25 2010
 
====================================================================
TimesTen daemon monitors:
Host:HOST1 Status: online
Host:HOST2 Status: online
 
====================================================================
====================================================================
TimesTen Cluster agents
Host:HOST1 Status: online
Host:HOST2 Status: online
 
====================================================================
 
Status of Cluster related to DSN MYDSN:
====================================================================
1. Status of Cluster monitoring components:
Monitor Process for Active datastore:RUNNING on Host host1
Monitor Process for Standby datastore:RUNNING on Host host1
Monitor Process for Master Datastore 1 on Host host1: RUNNING
Monitor Process for Master Datastore 2 on Host host2: RUNNING
 
2.Status of  Datastores comprising the cluster
Master Datastore 1:
Host:host1
Status:AVAILABLE
State:ACTIVE
Grid:NO GRID
Master Datastore 2:
Host:host2
Status:AVAILABLE
State:IDLE
Grid:NO GRID
====================================================================
The cluster containing the replicated DSN is online

If a cache grid is configured, then the last section appears as follows:

2.Status of  Datastores comprising the cluster
Master Datastore 1:
Host:host1
Status:AVAILABLE
State:ACTIVE
Grid:AVAILABLE
Master Datastore 2:
Host:host2
Status:AVAILABLE
State:IDLE
Grid:NO GRID

Example 7-3 Status when the active and the standby databases are running

After you have started the replication scheme and the active database and the standby database are both running, ttCWAdmin -status returns information like this when a cache grid is not configured.

$ ttcwadmin -status
TimesTen Cluster status report as of Thu Nov 11 13:59:20 2010
 
====================================================================
TimesTen daemon monitors:
Host:HOST1 Status: online
Host:HOST2 Status: online
 
====================================================================
====================================================================
TimesTen Cluster agents
Host:HOST1 Status: online
Host:HOST2 Status: online
 
====================================================================
 
Status of Cluster related to DSN MYDSN:
====================================================================
1. Status of Cluster monitoring components:
Monitor Process for Active datastore:RUNNING on Host host1
Monitor Process for Standby datastore:RUNNING on Host host2
Monitor Process for Master Datastore 1 on Host host1: RUNNING
Monitor Process for Master Datastore 2 on Host host2: RUNNING
 
2.Status of  Datastores comprising the cluster
Master Datastore 1:
Host:host1
Status:AVAILABLE
State:ACTIVE
Grid:NO GRID
Master Datastore 2:
Host:host2
Status:AVAILABLE
State:STANDBY
Grid:NO GRID
====================================================================
The cluster containing the replicated DSN is online

If a cache grid is configured, then the last section appears as follows:

2.Status of  Datastores comprising the cluster
Master Datastore 1:
Host:host1
Status:AVAILABLE
State:ACTIVE
Grid:AVAILABLE
Master Datastore 2:
Host:host2
Status:AVAILABLE
State:STANDBY
Grid:AVAILABLE

Message log files

The monitor processes report events and errors to the ttcwerrors.log and ttcwmsg.log files. The files are located in the daemon_home/info directory. The default size of these files is the same as the default maximum size of the user log. The maximum number of log files is the same as the default number of files for the user log. When the maximum number of files has been written, additional errors and messages overwrite the files, beginning with the oldest file.

For the default values for number of log files and log file size, see "Modifying informational messages" in Oracle TimesTen In-Memory Database Operations Guide.