Oracle® In-Memory Database Cache User's Guide 11g Release 2 (11.2.2) Part Number E21634-05
This chapter describes the tasks for creating a second standalone TimesTen database and an active standby pair, and attaching these members to the cache grid that was created in Chapter 3, "Setting Up a Caching Infrastructure".
Note:
If you are planning to use Oracle Clusterware to manage active standby pairs in a cache grid, see "Using Oracle Clusterware with a TimesTen cache grid" in Oracle TimesTen In-Memory Database Replication Guide. Also see "Restricted commands and SQL statements" in Oracle TimesTen In-Memory Database Replication Guide. Use the ttCWAdmin utility to manage the active standby pair grid members instead of the built-in procedures discussed in this chapter.
The following is the definition of the cachealone2 DSN for the second standalone TimesTen database that will become a member of the ttGrid cache grid:
[cachealone2]
DataStore=/users/OracleCache/alone2
PermSize=64
OracleNetServiceName=orcl
DatabaseCharacterSet=WE8ISO8859P1
Start the ttIsql utility and connect to the cachealone2 DSN as the instance administrator to create the database. Then create the cache manager user cacheuser, whose name in this example is the same as the Oracle cache administration user, and a cache table user oratt, whose name is the same as the Oracle schema user who will own the Oracle tables to be cached in the TimesTen database.
% ttIsql cachealone2
Command> CREATE USER cacheuser IDENTIFIED BY timesten;
Command> CREATE USER oratt IDENTIFIED BY timesten;
As the instance administrator, use the ttIsql utility to grant the cache manager user cacheuser the privileges required to perform the operations listed in Example 3-8:
Command> GRANT CREATE SESSION, CACHE_MANAGER, CREATE ANY TABLE TO cacheuser;
Command> exit
Start the ttIsql utility and connect to the cachealone2 DSN as the cache manager user. Set the cache administration user name and password by calling the ttCacheUidPwdSet built-in procedure.
% ttIsql "DSN=cachealone2;UID=cacheuser;PWD=timesten;OraclePWD=oracle"
Command> CALL ttCacheUidPwdSet('cacheuser','oracle');
Associate the second standalone database with the ttGrid cache grid by calling the ttGridNameSet built-in procedure as the cache manager user:
Command> CALL ttGridNameSet('ttGrid');
The ttGrid cache grid was created from the first standalone TimesTen database. Since the grid already exists, it does not need to be created again.
If desired, you can test the connectivity between the second standalone TimesTen database and the Oracle database using the instructions stated in "Testing the connectivity between the TimesTen and Oracle databases".
Start the cache agent on the second standalone database by calling the ttCacheStart built-in procedure as the cache manager user:
Command> CALL ttCacheStart;
Then create cache groups in the database as the cache manager user. For example, the following statement creates a dynamic AWT global cache group subscriber_accounts that caches the oratt.subscriber table:
CREATE DYNAMIC ASYNCHRONOUS WRITETHROUGH GLOBAL CACHE GROUP subscriber_accounts
  FROM oratt.subscriber
   (subscriberid       NUMBER(10) NOT NULL PRIMARY KEY,
    name               VARCHAR2(100) NOT NULL,
    minutes_balance    NUMBER(5) NOT NULL,
    last_call_duration NUMBER(4) NOT NULL);
The definition of the oratt.subscriber cached Oracle table is shown in "Global cache groups".
If any AWT cache groups were created, start the replication agent on the TimesTen database by calling the ttRepStart built-in procedure as the cache manager user:
Command> CALL ttRepStart;
If any global cache groups were created, the database must attach to the cache grid that it is associated with in order to update the cache tables of the global cache groups. Attaching the database to the grid allows the database to become a member of the grid so that cache instances in the cache tables of the global cache groups can maintain consistency among the databases within the grid.
As the cache manager user, attach the second standalone database to the ttGrid cache grid that it is associated with by calling the ttGridAttach built-in procedure. The node number for a standalone TimesTen database is 1.
In the following example, alone2 is a name that is used to uniquely identify the grid member, sys2 is the host name of the TimesTen system where the second standalone database resides, and 5002 is the TCP/IP port for the second standalone database's cache agent process:
Command> CALL ttGridAttach(1,'alone2','sys2',5002);
Command> exit
To achieve high availability, configure an active standby pair replication scheme for cache tables in a read-only cache group or an AWT cache group.
An active standby pair that replicates cache tables from one of these cache group types can automatically change the role of a TimesTen database as part of failover and recovery with minimal chance of data loss. Cache groups themselves provide resilience from Oracle database outages, further strengthening system availability. See "Administering an Active Standby Pair with Cache Groups" in Oracle TimesTen In-Memory Database Replication Guide for more information.
An active standby pair replication scheme provides for high availability of a TimesTen database. Multiple grid members provide for high availability of a TimesTen cache grid. Oracle Real Application Clusters (Oracle RAC) provides for high availability of an Oracle database. For more information about using Oracle In-Memory Database Cache in an Oracle RAC environment, see "Using Oracle In-Memory Database Cache in an Oracle RAC Environment".
Perform the following tasks to configure an active standby pair for TimesTen databases that cache Oracle tables:
The following is the definition of the cacheactive DSN for the active database of the active standby pair that will become a member of the ttGrid cache grid:
[cacheactive]
DataStore=/users/OracleCache/cacheact
PermSize=64
OracleNetServiceName=orcl
DatabaseCharacterSet=WE8ISO8859P1
Start the ttIsql utility and connect to the cacheactive DSN as the instance administrator to create the database. Then create the cache manager user cacheuser, whose name in this example is the same as the Oracle cache administration user, and a cache table user oratt, whose name is the same as the Oracle schema user who will own the Oracle tables to be cached in the TimesTen database.
% ttIsql cacheactive
Command> CREATE USER cacheuser IDENTIFIED BY timesten;
Command> CREATE USER oratt IDENTIFIED BY timesten;
As the instance administrator, use the ttIsql utility to grant the cache manager user cacheuser the privileges required to perform the operations listed in Example 3-8, as well as to create an active standby pair replication scheme, which requires the ADMIN privilege:
Command> GRANT CREATE SESSION, CACHE_MANAGER,
       > CREATE ANY TABLE, ADMIN TO cacheuser;
Command> exit
Start the ttIsql utility and connect to the cacheactive DSN as the cache manager user. Set the cache administration user name and password by calling the ttCacheUidPwdSet built-in procedure.
% ttIsql "DSN=cacheactive;UID=cacheuser;PWD=timesten;OraclePWD=oracle"
Command> CALL ttCacheUidPwdSet('cacheuser','oracle');
Associate the active database with the ttGrid cache grid by calling the ttGridNameSet built-in procedure as the cache manager user:
Command> CALL ttGridNameSet('ttGrid');
The ttGrid cache grid was created from the first standalone TimesTen database. Since the grid already exists, it does not need to be created again.
If desired, you can test the connectivity between the active database and the Oracle database using the instructions stated in "Testing the connectivity between the TimesTen and Oracle databases".
Start the cache agent on the active database by calling the ttCacheStart built-in procedure as the cache manager user:
Command> CALL ttCacheStart;
Then create cache groups in the database as the cache manager user. For example, the following statement creates a dynamic AWT global cache group subscriber_accounts that caches the oratt.subscriber table:
CREATE DYNAMIC ASYNCHRONOUS WRITETHROUGH GLOBAL CACHE GROUP subscriber_accounts
  FROM oratt.subscriber
   (subscriberid       NUMBER(10) NOT NULL PRIMARY KEY,
    name               VARCHAR2(100) NOT NULL,
    minutes_balance    NUMBER(5) NOT NULL,
    last_call_duration NUMBER(4) NOT NULL);
The definition of the oratt.subscriber cached Oracle table is shown in "Global cache groups".
As the cache manager user, create an active standby pair replication scheme in the active database using a CREATE ACTIVE STANDBY PAIR statement.
In the following example, cacheact, cachestand and subscr are the file name prefixes of the checkpoint and transaction log files of the active database, standby database and read-only subscriber database, respectively. sys3, sys4 and sys5 are the host names of the TimesTen systems where the active database, standby database and read-only subscriber database reside, respectively.
Command> CREATE ACTIVE STANDBY PAIR cacheact ON "sys3", cachestand ON "sys4"
       > SUBSCRIBER subscr ON "sys5";
As the cache manager user, start the replication agent on the active database by calling the ttRepStart built-in procedure. Then declare the database as the active database by calling the ttRepStateSet built-in procedure.
Command> CALL ttRepStart;
Command> CALL ttRepStateSet('active');
If any global cache groups were created, the database must attach to the cache grid that it is associated with in order to update the cache tables of the global cache groups. Attaching the database to the grid allows the database to become a member of the grid so that cache instances in the cache tables of the global cache groups can maintain consistency among the databases within the grid.
As the cache manager user, attach the active database to the ttGrid cache grid that it is associated with by calling the ttGridAttach built-in procedure. The node number for an active database is 1.
In the following example:
cacheact is a name that is used to uniquely identify the active database grid node
cachestand is a name that is used to uniquely identify the standby database grid node
sys3 is the host name of the TimesTen system where the active database resides
sys4 is the host name of the TimesTen system where the standby database resides
5003 is the TCP/IP port for the active database's cache agent process
5004 is the TCP/IP port for the standby database's cache agent process
Command> CALL ttGridAttach(1,'cacheact','sys3',5003,'cachestand','sys4',5004);
Command> exit
The following is the definition of the cachestandby DSN for the standby database of the active standby pair that will become a member of the ttGrid cache grid:
[cachestandby]
DataStore=/users/OracleCache/cachestand
PermSize=64
OracleNetServiceName=orcl
DatabaseCharacterSet=WE8ISO8859P1
As the instance administrator, create the standby database as a duplicate of the active database by running a ttRepAdmin -duplicate utility command from the standby database system. The instance administrator user name of the active database's and standby database's instances must be identical.
Use the -keepCG option so that cache tables in the active database are duplicated as cache tables in the standby database, because the standby database will have connectivity with the Oracle database.
In the following example:
The -from option specifies the file name prefix of the active database's checkpoint and transaction log files
The -host option specifies the host name of the TimesTen system where the active database resides
The -uid and -pwd options specify a user name and password of a TimesTen internal user defined in the active database that has been granted the ADMIN privilege
The -cacheuid and -cachepwd options specify the Oracle cache administration user name and password
cachestandby is the DSN of the standby database
% ttRepAdmin -duplicate -from cacheact -host "sys3" -uid cacheuser -pwd timesten \
  -cacheuid cacheuser -cachepwd oracle -keepCG cachestandby
Start the ttIsql utility and connect to the cachestandby DSN as the cache manager user. Set the cache administration user name and password by calling the ttCacheUidPwdSet built-in procedure.
% ttIsql "DSN=cachestandby;UID=cacheuser;PWD=timesten;OraclePWD=oracle"
Command> CALL ttCacheUidPwdSet('cacheuser','oracle');
The ttGrid cache grid was created from the first standalone TimesTen database. Since the grid already exists, it does not need to be created again. The ttRepAdmin -duplicate -keepCG utility command associated the standby database with the ttGrid cache grid, so this association does not need to be done explicitly.
If desired, you can test the connectivity between the standby database and the Oracle database using the instructions stated in "Testing the connectivity between the TimesTen and Oracle databases".
Start the cache agent on the standby database by calling the ttCacheStart built-in procedure as the cache manager user:
Command> CALL ttCacheStart;
As the cache manager user, start the replication agent on the standby database by calling the ttRepStart built-in procedure.
Command> CALL ttRepStart;
If any global cache groups were created, the database must attach to the cache grid that it is associated with in order to update the cache tables of the global cache groups. Attaching the database to the grid allows the database to become a member of the grid so that cache instances in the cache tables of the global cache groups can maintain consistency among the databases within the grid.
As the cache manager user, attach the standby database to the ttGrid cache grid that it is associated with by calling the ttGridAttach built-in procedure. The node number for a standby database is 2. Use the same TCP/IP ports for the cache agents of the active and standby databases that were specified when configuring the active database.
In the following example:
cacheact is a name that is used to uniquely identify the active database grid node
cachestand is a name that is used to uniquely identify the standby database grid node
sys3 is the host name of the TimesTen system where the active database resides
sys4 is the host name of the TimesTen system where the standby database resides
5003 is the TCP/IP port for the active database's cache agent process
5004 is the TCP/IP port for the standby database's cache agent process
Command> CALL ttGridAttach(2,'cacheact','sys3',5003,'cachestand','sys4',5004);
Command> exit
The following is the definition of the rosubscriber DSN for the read-only subscriber database of the active standby pair:
[rosubscriber]
DataStore=/users/OracleCache/subscr
PermSize=64
DatabaseCharacterSet=WE8ISO8859P1
As the instance administrator, create the read-only subscriber database as a duplicate of the standby database by running a ttRepAdmin -duplicate utility command from the read-only subscriber database system. The instance administrator user name of the standby database instance and read-only subscriber database instance must be identical.
Use the -noKeepCG option so that cache tables in the standby database are duplicated as regular tables in the read-only subscriber database, because the read-only subscriber database will have no connectivity with the Oracle database. As a result, the read-only subscriber database will not be associated with a cache grid.
In the following example:
The -from option specifies the file name prefix of the standby database's checkpoint and transaction log files
The -host option specifies the host name of the TimesTen system where the standby database resides
The -uid and -pwd options specify a user name and password of a TimesTen internal user defined in the standby database that has been granted the ADMIN privilege
rosubscriber is the DSN of the read-only subscriber database
% ttRepAdmin -duplicate -from cachestand -host "sys4" -uid cacheuser -pwd timesten \
  -noKeepCG rosubscriber
As the cache manager user, start the replication agent on the read-only subscriber database by calling the ttRepStart built-in procedure.
% ttIsql "DSN=rosubscriber;UID=cacheuser;PWD=timesten"
Command> CALL ttRepStart;
Command> exit
The definition of the oratt.subscriber cached Oracle table is shown in "Global cache groups".
The following is the data in the oratt.subscriber cached Oracle table.
SUBSCRIBERID NAME             MINUTES_BALANCE LAST_CALL_DURATION
------------ ---------------- --------------- ------------------
        1001 Jane Anderson                 75                 15
        1004 Robert Phillips               60                 20
        1005 William Ackerman              40                 10
        1009 Sandy Little                  90                 30
The oratt.subscriber TimesTen cache table in the subscriber_accounts global cache group is initially empty in all five TimesTen databases (cachealone1, cachealone2, cacheactive, cachestandby, rosubscriber):
Command> SELECT * FROM oratt.subscriber;
0 rows found.
Issue the following SELECT statement on the cachealone1 TimesTen database to dynamically load one cache instance from the cached Oracle table into the TimesTen cache table:
Command> SELECT * FROM oratt.subscriber WHERE subscriberid = 1004;
< 1004, Robert Phillips, 60, 20 >
As a result, the cachealone1 standalone database grid member has ownership of the cache instance with subscriber ID 1004. This cache instance does not exist in any of the other grid members.
Next, issue the following SELECT statement on the cachealone2 TimesTen database to dynamically load one cache instance from the cached Oracle table into the TimesTen cache table:
Command> SELECT * FROM oratt.subscriber WHERE subscriberid = 1004;
< 1004, Robert Phillips, 60, 20 >
As a result, the cachealone2 standalone database grid member has taken ownership of the cache instance with subscriber ID 1004 from the cachealone1 grid member. This cache instance no longer exists in cachealone1 and does not exist in any of the other grid members.
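The ownership transfer described above can be sketched as a small simulation. This is an illustrative model only, not TimesTen code: the GridMember and CacheGrid classes and the dynamic_load function are hypothetical names invented for this sketch, which models a dynamic load pulling a cache instance either from the Oracle database (on a grid-wide miss) or from the grid member that currently owns it (transferring ownership).

```python
# Hypothetical sketch of cache-instance ownership in a cache grid.
# Not a TimesTen API; all names here are illustrative.

class GridMember:
    def __init__(self, name):
        self.name = name
        self.cache = {}          # cache instances this member currently holds

class CacheGrid:
    def __init__(self, oracle_rows):
        self.oracle = dict(oracle_rows)  # backing Oracle table: key -> row
        self.owner = {}                  # key -> GridMember that owns the instance

    def dynamic_load(self, member, key):
        """Model of SELECT ... WHERE pk = key with dynamic load semantics."""
        if self.owner.get(key) is member:
            row = member.cache[key]              # already owned locally
        elif key in self.owner:
            # Another member owns the instance: ownership moves to `member`
            # and the instance is removed from the previous owner.
            row = self.owner[key].cache.pop(key)
        else:
            row = self.oracle[key]               # grid-wide miss: load from Oracle
        member.cache[key] = row
        self.owner[key] = member
        return row

alone1 = GridMember("cachealone1")
alone2 = GridMember("cachealone2")
grid = CacheGrid({1004: ("Robert Phillips", 60, 20)})

grid.dynamic_load(alone1, 1004)    # cachealone1 now owns subscriber 1004
grid.dynamic_load(alone2, 1004)    # ownership transfers to cachealone2
print(1004 in alone1.cache)        # False: instance no longer in cachealone1
print(1004 in alone2.cache)        # True
```

The sketch mirrors the narrative: after the second dynamic load, the instance exists only in cachealone2, and the grid's ownership record points there.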
Next, issue the following INSERT statement on the cacheactive TimesTen database to insert a new cache instance into the TimesTen cache table:
Command> INSERT INTO oratt.subscriber VALUES (1012, 'Charles Hill', 80, 16);
As a result, the cacheactive active database grid node has ownership of the cache instance with subscriber ID 1012. The cache instance is replicated to the cachestandby standby database and the rosubscriber read-only subscriber database. The cache instance does not exist in any of the other grid members. The insert operation is also automatically propagated to the oratt.subscriber cached Oracle table.
A standby database or a read-only subscriber database cannot directly take ownership of a cache instance. Because these databases are read-only, dynamic and manual load operations are prohibited, including SELECT statements that result in a dynamic load.
No data sharing occurs with cache tables in local cache groups among the grid members. Each grid member can have a different number of local cache groups. If two grid members have a local cache group with the same definition, the data in the cache table within one grid member can overlap with the data in the cache table within the other grid member. There is no concept of cache instance ownership for cache tables in local cache groups.
If you want to access data on all the nodes of a cache grid, perform a global query. For example, consider this statement:
SELECT MAX(salary) FROM employees;
When global query processing is not enabled, the statement returns the maximum salary for the rows that exist on the local node. When global query processing is enabled, it returns the maximum salary across all employee records in the cache grid.
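The local-versus-global distinction can be sketched as follows. This is an illustrative simulation, not TimesTen code: the grid_nodes mapping and the two functions are hypothetical, and the sketch assumes (as described above) that each grid member owns a disjoint set of rows and that a global query merges results from every attached member.

```python
# Illustrative sketch: local query vs. global query over a cache grid.
# Each grid member owns a disjoint set of (name, salary) rows.

grid_nodes = {
    "alone1":   [("Jane", 75000), ("Sandy", 90000)],
    "alone2":   [("Robert", 60000)],
    "cacheact": [("William", 40000)],
}

def local_max_salary(node):
    # GlobalProcessing off: only rows on the local node are considered.
    return max(salary for _, salary in grid_nodes[node])

def global_max_salary():
    # GlobalProcessing on: each member computes its local maximum and the
    # originating node merges the partial results across the grid.
    return max(local_max_salary(node) for node in grid_nodes)

print(local_max_salary("alone2"))  # 60000: the local answer on alone2
print(global_max_salary())         # 90000: the answer across the whole grid
```

Run on node alone2, the local query sees only Robert's salary, while the global query finds the grid-wide maximum held by alone1.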
A global query can reference a cache table or a noncache table in all attached grid members. The referenced tables can be any combination of local tables, cache tables, views, materialized views and table synonyms. The tables must have the same definition for columns affected by the global query.
Enable global query processing by setting an optimizer flag. Before executing a global query, turn autocommit off and call the ttOptSetFlag built-in procedure to set the GlobalProcessing optimizer flag to 1:
autocommit 0;
CALL ttOptSetFlag('GlobalProcessing', 1);
You can perform global queries with local joins by using the GlobalLocalJoin optimizer flag instead of the GlobalProcessing optimizer flag. See "Performing global queries with local joins".
Global queries that are enabled by the GlobalProcessing optimizer flag have these restrictions:
The query must reference exactly one table.
The query cannot include a self join, a derived table or subqueries.
The query cannot reference a global temporary table.
The query cannot be performed on the standby database of an active standby grid member.
ROWNUM and GROUP BY clauses cannot be used in the same query.
The query cannot be used with GROUPING SETS, CUBE, ROLLUP, GROUPING, GROUPING_ID, or GROUP_ID.
The query cannot include the WITH clause.
The query cannot include analytic SQL functions.
The PassThrough connection attribute must be set to 0.
You can execute a global query with a local join. This means that the SELECT statement is global (selects across grid members), but the join result is local (the join resides on the local node). You may find it useful to join fact and dimension tables, to join tables that are of similar size and whose data is distributed based on the join key, or to join tables of a global cache group based on a primary key or foreign key relationship. Use the GlobalLocalJoin optimizer flag to enable a global query with local join.
Global queries with local joins can join cache tables, global cache tables, noncache tables with the same definition, views and materialized views. Global queries with local joins can include sequences.
A global query executed in serializable isolation belongs to the global transaction of the SELECT statement. A global query executed in read committed isolation is executed in its own transaction on the remote nodes.
These operations in a global query are executed locally in each grid member:
Joins
Derived tables
Views
GROUP BY, HAVING, ORDER BY and DISTINCT clauses in a subquery
These operations in the main query of a global query are executed globally:
GROUP BY clause and aggregation
ORDER BY clause
DISTINCT clause
HAVING clause. This clause cannot contain a join.
Synonyms are resolved on the node where the query originates.
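One way to picture a globally executed GROUP BY is as a two-phase aggregation: each member groups its own rows, and the originating node merges the partial aggregates. The sketch below is illustrative only, assuming a SUM aggregate and hypothetical node data; it is not the actual TimesTen execution plan.

```python
# Illustrative two-phase evaluation of a global GROUP BY with SUM.
# node_rows maps each grid member to its locally owned (dept, amount) rows.

from collections import defaultdict

node_rows = {
    "alone1": [("sales", 10), ("hr", 5)],
    "alone2": [("sales", 7), ("eng", 3)],
}

def local_group_sum(rows):
    # Phase 1: each member computes partial sums over its own rows.
    partial = defaultdict(int)
    for dept, amount in rows:
        partial[dept] += amount
    return partial

def global_group_sum():
    # Phase 2: the originating node merges the per-member partial sums.
    merged = defaultdict(int)
    for rows in node_rows.values():
        for dept, subtotal in local_group_sum(rows).items():
            merged[dept] += subtotal
    return dict(merged)

print(global_group_sum())
```

Merging partial sums gives the same result as grouping all rows on one node, which is why the GROUP BY and aggregation in the main query can be executed globally.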
Before executing a global query with local join, turn autocommit off and call the ttOptSetFlag built-in procedure to set the GlobalLocalJoin optimizer flag to 1:
autocommit off;
CALL ttOptSetFlag('GlobalLocalJoin', 1);
Global queries with local joins have these restrictions:
The query cannot include the ROWNUM expression.
The query cannot include a set operator.
The query cannot include the WITH clause.
The query cannot be used with GROUPING SETS, CUBE, ROLLUP, GROUPING, GROUPING_ID, or GROUP_ID.
The query cannot include analytic SQL functions.
The PassThrough connection attribute must be set to 0.
The query cannot be performed on the standby database of an active standby grid member.
You may wish to execute a global query without changing the location of the data. You can use SQL functions to determine which grid node contains the information and then execute a query for the information from that node.
Use these SQL functions in a global query to obtain information about the location of data in the cache grid:
TTGRIDMEMBERID() - Returns the node ID of the node on which the query is executed.
TTGRIDNODENAME() - Returns the name of the node on which the query is executed.
TTGRIDUSERASSIGNEDNAME() - Returns the user-assigned name of the node on which the query is executed. The user assigns the name when the ttGridAttach built-in procedure is called. If you are using Oracle Clusterware, you do not call ttGridAttach directly, and the user-assigned name is generated by TimesTen.
These functions can be used in a SELECT statement and in these clauses of a SELECT statement:
WHERE clause
GROUP BY clause
ORDER BY clause
Figure 6-1 shows a cache grid whose members have the user-assigned names alone1 and alone2, plus an active standby pair on nodes cacheact and cachestand. Queries do not retrieve data from the standby database. The standby database has the same data as the active database.
Figure 6-1 Location of data in a cache grid
The following example shows a global query that retrieves employee_id, the user-assigned node name, and the member ID from the employees table on each grid member.
autocommit off;
CALL ttOptSetFlag('GlobalProcessing', 1);
SELECT employee_id, TTGRIDUSERASSIGNEDNAME(), TTGRIDMEMBERID() FROM employees;
COMMIT;
< 100, alone1, 1>
< 101, alone2, 2>
< 102, cacheact, 3>
< 103, alone1, 1>
< 104, cacheact, 3>
...
The rows that are returned show which grid node and member own each row of the cache instance. Subsequent queries can access the appropriate node without changing the ownership of the data. For example, execute this query on grid member cacheact, including TTGRIDUSERASSIGNEDNAME() in the query to verify that cacheact is the grid member where the query is executed:
SELECT employee_id, last_name, hire_date, TTGRIDUSERASSIGNEDNAME()
FROM employees WHERE employee_id=104;
< 104, Ernst, 1991-05-21 00:00:00, cacheact >
For more information about TTGRIDMEMBERID(), TTGRIDNODENAME() and TTGRIDUSERASSIGNEDNAME(), see "Cache grid functions" in Oracle TimesTen In-Memory Database SQL Reference.
If a database that contains a global cache group is attached to a cache grid, a subsequent database can attach to the same grid and become a grid member only if it contains a global cache group with the same definition as the global cache group in the database that is attached to the grid. The subsequent database cannot attach to the same grid if it contains more or fewer global cache groups than the database that is attached to the grid. Each database can contain a different number of local cache groups with non-matching definitions between the databases.
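The attach-time matching rule above can be sketched as a simple check. This is a hypothetical model, not a TimesTen API: the attached_member_groups dictionary and the can_attach function are invented for illustration, representing each global cache group by its name, base table, and column list, with local cache groups deliberately outside the comparison.

```python
# Hypothetical sketch of the rule that a joining database must have global
# cache groups identical to those already attached to the grid.

# Global cache groups in a database already attached to the grid:
# name -> (cached Oracle table, column names)
attached_member_groups = {
    "subscriber_accounts": ("oratt.subscriber",
                            ("subscriberid", "name",
                             "minutes_balance", "last_call_duration")),
}

def can_attach(candidate_groups):
    # The candidate must have the same global cache group names with
    # identical definitions: no extras, none missing. Local cache groups
    # are not part of this check and may differ freely between members.
    return candidate_groups == attached_member_groups

matching = dict(attached_member_groups)
extra = dict(attached_member_groups,
             reporting_accounts=("oratt.reports", ("report_id",)))

print(can_attach(matching))  # True: identical global cache groups
print(can_attach(extra))     # False: more global cache groups than the grid
```

A candidate with an extra global cache group fails the check, just as a database with more (or fewer) global cache groups than the attached member cannot join the grid.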
Before you can create a new dynamic AWT global cache group in a TimesTen database that is attached to a cache grid, stop the replication agent on the database. Then restart the replication agent after creating the global cache group. The new global cache group cannot be manually or dynamically loaded, and its cache tables cannot be updated, until the cache group has been created with the same definition in all the grid members. In the standalone databases and the active database, create the new global cache group manually. For the standby database and the read-only subscriber databases, use the ttDestroy utility to drop the databases and a ttRepAdmin -duplicate utility command to re-create the databases so that they contain the new global cache group.